KAIST NEWS
Laboratory for Brain and Machine Intelligence
Deep Learning Helps Explore the Structural and Strategic Bases of Autism
Psychiatrists typically diagnose autism spectrum disorder (ASD) by observing a person’s behavior and by leaning on the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), widely considered the “bible” of mental health diagnosis. However, there are substantial differences among individuals on the spectrum, and a great deal remains unknown about the causes of autism, or even what autism is. As a result, an accurate diagnosis of ASD and a prognosis for patients can be extremely difficult. But what if artificial intelligence (AI) could help?

Deep learning, a type of AI, deploys artificial neural networks inspired by the human brain to recognize patterns in a way that is akin to, and in some cases can surpass, human ability. The technique, or rather suite of techniques, has enjoyed remarkable success in recent years in fields as diverse as voice recognition, translation, autonomous vehicles, and drug discovery. A group of researchers from KAIST, in collaboration with the Yonsei University College of Medicine, has applied these deep learning techniques to autism diagnosis. Their findings were published on August 14 in the journal IEEE Access.

Magnetic resonance imaging (MRI) scans of the brains of people known to have autism have long been used by researchers and clinicians to try to identify structures of the brain associated with ASD. These efforts have had considerable success, linking the condition to abnormal grey and white matter volume and to irregularities in cerebral cortex activation and connectivity. Such findings have subsequently been used in studies attempting more consistent diagnoses of patients than psychiatrists achieve through observation during counseling sessions.
While such studies have reported high levels of diagnostic accuracy, the number of participants has been small, often under 50, and diagnostic performance drops markedly when the methods are applied to larger sample sizes or to datasets that include people from a wide variety of populations and locations.

“There must be something about what defines autism that human researchers and clinicians have been overlooking,” said Keun-Ah Cheon, one of the two corresponding authors and a professor in the Department of Child and Adolescent Psychiatry at Severance Hospital of the Yonsei University College of Medicine. “And humans poring over thousands of MRI scans won’t be able to pick up on what we’ve been missing,” she continued. “But we thought AI might be able to.”

So the team applied five different categories of deep learning models to an open-source dataset of more than 1,000 MRI scans from the Autism Brain Imaging Data Exchange (ABIDE) initiative, which has collected brain imaging data from laboratories around the world, and to a smaller but higher-resolution dataset of 84 MRI images from the Child Psychiatric Clinic at Severance Hospital, Yonsei University College of Medicine. In both cases, the researchers used both structural MRIs (examining the anatomy of the brain) and functional MRIs (examining brain activity in different regions).

The models allowed the team to explore the structural bases of ASD brain region by brain region, focusing in particular on structures below the cerebral cortex, including the basal ganglia, which are involved in motor function (movement) as well as learning and memory. Crucially, these specific types of deep learning models also offered possible explanations of how the AI had arrived at its findings.

“Understanding the way that the AI has classified these brain structures and dynamics is extremely important,” said Sang Wan Lee, the other corresponding author and an associate professor at KAIST.
“It’s no good if a doctor can tell a patient that the computer says they have autism, but cannot say why the computer knows that.”

The deep learning models were also able to describe how much a particular aspect contributed to ASD, an analysis that can assist psychiatric physicians during diagnosis in identifying the severity of the autism. “Doctors should be able to use this to offer a personalized diagnosis for patients, including a prognosis of how the condition could develop,” Lee said.

“Artificial intelligence is not going to put psychiatrists out of a job,” he explained. “But using AI as a tool should enable doctors to better understand and diagnose complex disorders than they could on their own.”

- Profile
Professor Sang Wan Lee
Department of Bio and Brain Engineering
Laboratory for Brain and Machine Intelligence
https://aibrain.kaist.ac.kr/
KAIST
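The two ideas at the heart of the article, classifying subjects from brain-derived features and then reporting how much each feature contributed, can be illustrated with a deliberately minimal sketch. The study itself used five categories of deep neural networks on full MRI volumes; the logistic model, region names, and synthetic features below are purely hypothetical stand-ins chosen to keep the illustration short.

```python
import math

# Hypothetical sketch: classify subjects as ASD (1) vs. control (0) from
# a few per-region structural features (e.g. grey-matter volume scores).
# The region names and the logistic model are illustrative assumptions,
# not the study's actual architecture or features.
REGIONS = ["basal_ganglia", "cerebellum", "frontal_cortex"]

def train_logistic(samples, labels, lr=0.1, epochs=500):
    """Fit weights and bias by stochastic gradient descent on cross-entropy."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of cross-entropy w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a subject with features x belongs to the ASD class."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def contributions(w, x):
    """Crude per-region attribution: weight times feature value, so a doctor
    can see which region drove the classification for this subject."""
    return dict(zip(REGIONS, (wi * xi for wi, xi in zip(w, x))))

if __name__ == "__main__":
    # Synthetic, linearly separable toy data: the first feature is elevated
    # for the positive class.
    samples = [[0.0, 0.1, -0.1], [2.0, 0.0, 0.1]] * 10
    labels = [0, 1] * 10
    w, b = train_logistic(samples, labels)
    print("p(ASD | elevated basal ganglia feature):", predict(w, b, [2.0, 0.0, 0.0]))
    print("per-region contributions:", contributions(w, [2.0, 0.0, 0.0]))
```

The attribution step is the point of the sketch: explainability here comes from inspecting which inputs pushed the decision, which is the spirit (though not the mechanism) of the explainable models described in the article.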
2020.09.23
New Insights into How the Human Brain Solves Complex Decision-Making Problems
A new study on meta reinforcement learning algorithms helps us understand how the human brain learns to adapt to complexity and uncertainty when learning and making decisions. A research team led by Professor Sang Wan Lee at KAIST, jointly with John O’Doherty at Caltech, succeeded in discovering both a computational and a neural mechanism for human meta reinforcement learning, opening up the possibility of porting key elements of human intelligence into artificial intelligence algorithms. The study provides a glimpse into how computational models might ultimately be used to reverse engineer human reinforcement learning.

This work was published on December 16, 2019 in the journal Nature Communications under the title “Task complexity interacts with state-space uncertainty in the arbitration between model-based and model-free learning.”

Human reinforcement learning is an inherently complex and dynamic process, involving goal setting, strategy choice, action selection, strategy modification, and cognitive resource allocation. It is a very challenging problem for humans to solve, owing to the rapidly changing and multifaceted environment in which humans have to operate. To make matters worse, humans often need to make important decisions rapidly, even before getting the opportunity to collect much information, unlike the case when using deep learning methods to model learning and decision-making in artificial intelligence applications.

To solve this problem, the research team used a technique called ‘reinforcement learning theory-based experiment design’ to optimize the three variables of the two-stage Markov decision task: goal, task complexity, and task uncertainty. This experimental design technique allowed the team not only to control confounding factors, but also to create a situation similar to that which occurs in actual human problem solving.
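A two-stage Markov decision task of the kind described above can be sketched in a few lines. The state names, transition probability, and reward probabilities below are illustrative assumptions, not the actual task parameters: the key idea is that a stage-1 choice leads only probabilistically to a stage-2 state, and that transition probability is the knob controlling state-space uncertainty.

```python
import random

class TwoStageTask:
    """Minimal sketch of a two-stage Markov decision task (parameters hypothetical)."""

    def __init__(self, transition_prob=0.7, reward_probs=(0.8, 0.2)):
        # transition_prob controls state-space uncertainty: how reliably a
        # stage-1 action leads to its "common" stage-2 state.
        self.p = transition_prob
        self.reward_probs = reward_probs

    def step1(self, action):
        """Stage 1: choose action 0 or 1; returns the stage-2 state reached.
        Action i commonly leads to state i, rarely to the other state."""
        common, rare = action, 1 - action
        return common if random.random() < self.p else rare

    def step2(self, state):
        """Stage 2: reward is delivered probabilistically depending on state."""
        return 1 if random.random() < self.reward_probs[state] else 0

if __name__ == "__main__":
    random.seed(0)
    task = TwoStageTask()
    # Always choosing action 0 should average roughly 0.7*0.8 + 0.3*0.2 = 0.62.
    rewards = [task.step2(task.step1(0)) for _ in range(1000)]
    print("mean reward for always choosing action 0:", sum(rewards) / len(rewards))
```

Because the mapping from action to outcome is stochastic, a learner must decide whether to plan over the transition structure (model-based control) or simply repeat rewarded actions (model-free control), which is exactly the arbitration the study probes.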
Secondly, the team used a technique called ‘model-based neuroimaging analysis.’ Based on the acquired behavioral and fMRI data, more than 100 different types of meta reinforcement learning algorithms were pitted against each other to find a computational model that could explain both the behavioral and the neural data.

Thirdly, for more rigorous verification, the team applied an analytical method called ‘parameter recovery analysis,’ which involves high-precision behavioral profiling of both human subjects and computational models. In this way, the team was able to accurately identify a computational model of meta reinforcement learning, ensuring not only that the model’s apparent behavior is similar to that of humans, but also that the model solves the problem in the same way humans do.

The team found that people tended to increase planning-based reinforcement learning (called model-based control) in response to increasing task complexity. However, they resorted to a simpler, more resource-efficient strategy called model-free control when both uncertainty and task complexity were high. This suggests that task uncertainty and task complexity interact during the meta control of reinforcement learning.

Computational fMRI analyses revealed that task complexity interacts with neural representations of the reliability of the learning strategies in the inferior prefrontal cortex. These findings significantly advance our understanding of the computations implemented in the inferior prefrontal cortex during meta reinforcement learning, and provide insight into the more general question of how the brain resolves uncertainty and complexity in a dynamically changing environment. Identifying the key computational variables that drive prefrontal meta reinforcement learning can also inform understanding of how this process might be vulnerable to breakdown in certain psychiatric disorders such as depression and OCD.
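The arbitration finding above, reliability-weighted competition between model-based and model-free control, modulated by task complexity, can be sketched as follows. The functional form, the complexity penalty, and the parameter values are simplified assumptions for illustration, not the paper's fitted model.

```python
import math

def arbitration_weight(rel_mb, rel_mf, complexity, beta=5.0):
    """Probability of relying on model-based (MB) over model-free (MF) control.

    Higher MB reliability favours MB control; high task complexity combined
    with an unreliable environment (low MF reliability) taxes the planner and
    pushes the balance toward cheaper model-free control. The penalty term is
    an illustrative assumption, not the study's fitted arbitration model.
    """
    effective_mb = rel_mb - 0.1 * complexity * (1.0 - rel_mf)
    return 1.0 / (1.0 + math.exp(-beta * (effective_mb - rel_mf)))

def blended_q(q_mb, q_mf, w_mb):
    """Blend per-action value estimates from the two controllers."""
    return [w_mb * a + (1.0 - w_mb) * b for a, b in zip(q_mb, q_mf)]

if __name__ == "__main__":
    # Reliable planner, simple task: model-based control dominates.
    print(arbitration_weight(rel_mb=0.9, rel_mf=0.4, complexity=0))
    # Shaky planner under high complexity: model-free control takes over.
    print(arbitration_weight(rel_mb=0.5, rel_mf=0.4, complexity=5))
```

The qualitative behavior mirrors the reported result: complexity alone can favour planning, but complexity combined with high uncertainty tips the arbitration toward the model-free strategy.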
Furthermore, a computational understanding of how this process can sometimes lead to increased model-free control can provide insight into why task performance might break down under conditions of high cognitive load.

Professor Lee said, “This study will be of enormous interest to researchers in both the artificial intelligence and human/computer interaction fields, since it holds significant potential for applying core insights into how human intelligence works to AI algorithms.”

This work was funded by the National Institute on Drug Abuse, the National Research Foundation of Korea, the Ministry of Science and ICT, and the Samsung Research Funding Center of Samsung Electronics.

Figure 1 (modified from the figures of the original paper, doi:10.1038/s41467-019-13632-1). Computations implemented in the inferior prefrontal cortex during meta reinforcement learning. (A) Computational model of human prefrontal meta reinforcement learning (left) and the brain areas whose neural activity patterns are explained by the latent variables of the model. (B) Examples of behavioral profiles: choice bias for different goal types (left) and choice optimality for task complexity and uncertainty (right). (C) Parameter recovery analysis, comparing the effect of task uncertainty (left) and task complexity (right) on choice optimality.

- Profile
Professor Sang Wan Lee
sangwan@kaist.ac.kr
Department of Bio and Brain Engineering
Director, KAIST Center for Neuroscience-inspired AI
KAIST Institute for Artificial Intelligence (http://aibrain.kaist.ac.kr)
KAIST Institute for Health, Science, and Technology
KAIST (https://www.kaist.ac.kr)
2020.01.31