KAIST NEWS
AI
Decoding Brain Signals to Control a Robotic Arm
Advanced brain-machine interface system successfully interprets arm movement directions from neural signals in the brain

Researchers have developed a mind-reading system for decoding neural signals from the brain during arm movement. The method, described in the journal Applied Soft Computing, can be used by a person to control a robotic arm through a brain-machine interface (BMI).

A BMI is a device that translates nerve signals into commands to control a machine, such as a computer or a robotic limb. There are two main techniques for monitoring neural signals in BMIs: electroencephalography (EEG) and electrocorticography (ECoG). EEG records signals from electrodes placed on the surface of the scalp and is widely employed because it is non-invasive, relatively cheap, safe, and easy to use. However, EEG has low spatial resolution and picks up irrelevant neural signals, which makes it difficult to interpret a person's intentions from it. ECoG, on the other hand, is an invasive method that involves placing electrodes directly on the surface of the cerebral cortex beneath the scalp. Compared with EEG, ECoG can monitor neural signals with much higher spatial resolution and less background noise. However, this technique has several drawbacks.

"The ECoG is primarily used to find potential sources of epileptic seizures, meaning the electrodes are placed in different locations for different patients and may not be in the optimal regions of the brain for detecting sensory and movement signals," explained Professor Jaeseung Jeong, a brain scientist at KAIST. "This inconsistency makes it difficult to decode brain signals to predict movements."

To overcome these problems, Professor Jeong's team developed a new method for decoding ECoG neural signals during arm movement. The system is based on a machine-learning model for analysing and predicting neural signals called an 'echo-state network' and a mathematical probability model, the Gaussian distribution.

In the study, the researchers recorded ECoG signals from four individuals with epilepsy while they performed a reach-and-grasp task. Because the ECoG electrodes were placed according to the potential sources of each patient's epileptic seizures, only 22% to 44% of the electrodes were located in the regions of the brain responsible for controlling movement.

During the movement task, the participants were given visual cues, either by placing a real tennis ball in front of them or via a virtual reality headset showing a clip of a human arm reaching forward in first-person view. They were asked to reach forward, grasp an object, then return their hand and release the object, while wearing motion sensors on their wrists and fingers. In a second task, they were instructed to imagine reaching forward without moving their arms.

The researchers monitored the signals from the ECoG electrodes during real and imagined arm movements and tested whether the new system could predict the direction of the movement from the neural signals. They found that the novel decoder successfully classified arm movements in 24 directions in three-dimensional space, in both the real and virtual tasks, and that the results were at least five times more accurate than chance. They also used a computer simulation to show that the novel ECoG decoder could control the movements of a robotic arm.
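In outline, the decoder combines a fixed, randomly connected 'echo state' reservoir with a Gaussian readout over movement directions. The following is a minimal sketch of that kind of pipeline, not the published implementation; the channel count, reservoir size, and readout details are illustrative assumptions.

```python
# Minimal sketch of an echo-state-network decoder with a Gaussian readout.
# Shapes, channel counts and the 24-direction labelling are assumptions,
# not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

n_channels = 64      # assumed number of ECoG channels
n_reservoir = 300    # reservoir size (assumption)
n_directions = 24    # movement directions to classify

# Fixed random input and recurrent weights; the reservoir itself is not trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_channels))
W = rng.normal(0, 1, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def reservoir_state(signal):
    """Run one trial (time x channels) through the reservoir, return final state."""
    x = np.zeros(n_reservoir)
    for u in signal:
        x = np.tanh(W_in @ u + W @ x)
    return x

def fit_gaussian_readout(states, labels):
    """Fit one Gaussian (class mean, shared diagonal variance) per direction."""
    means = np.array([states[labels == k].mean(axis=0) for k in range(n_directions)])
    var = states.var(axis=0) + 1e-6
    return means, var

def predict(state, means, var):
    """Choose the direction whose Gaussian gives the highest log-likelihood."""
    loglik = -0.5 * np.sum((state - means) ** 2 / var, axis=1)
    return int(np.argmax(loglik))
```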
Overall, the results suggest that the new machine learning-based BMI system successfully used ECoG signals to interpret the direction of the intended movements. The next steps will be to improve the accuracy and efficiency of the decoder. In the future, it could be used in a real-time BMI device to help people with movement or sensory impairments.

This research was supported by the KAIST Global Singularity Research Program of 2021, the Brain Research Program of the National Research Foundation of Korea funded by the Ministry of Science, ICT, and Future Planning, and the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education.

-Publication
Hoon-Hee Kim and Jaeseung Jeong, "An electrocorticographic decoder for arm movement for brain-machine interface using an echo state network and Gaussian readout," Applied Soft Computing, online December 31, 2021 (doi.org/10.1016/j.asoc.2021.108393)

-Profile
Professor Jaeseung Jeong
Department of Bio and Brain Engineering
College of Engineering
KAIST
2022.03.18
Five Projects Ranked in the Top 100 for National R&D Excellence
Five KAIST research projects were selected among the 2021 Top 100 for National R&D Excellence by the Ministry of Science and ICT and the Korea Institute of Science & Technology Evaluation and Planning. The five projects are:

- The development of E. coli that proliferates with only formic acid and carbon dioxide, by Distinguished Professor Sang Yup Lee from the Department of Chemical and Biomolecular Engineering
- An original reverse aging technology that restores an old human skin cell into a younger one, by Professor Kwang-Hyun Cho from the Department of Bio and Brain Engineering
- The development of next-generation high-efficiency perovskite-silicon tandem solar cells, by Professor Byungha Shin from the Department of Materials Science and Engineering
- Research on the effects that ultrafine dust in the atmosphere has on energy consumption, by Professor Jiyong Eom from the School of Business and Technology Management
- Research on a molecular trigger that controls the phase transformation of biomaterials, by Professor Myungchul Kim from the Department of Bio and Brain Engineering

Since the program began in 2006, an evaluation committee composed of experts from industry, universities, and research institutes has made the preliminary selection of the most outstanding research projects based on their significance as scientific and technological developments and their socioeconomic effects. The finalists then went through an open public evaluation. The final 100 studies come from six fields, including 18 from mechanics & materials, 26 from biology & marine sciences, 19 from ICT & electronics, 10 from interdisciplinary research, and nine from natural science and infrastructure.

The selected 100 studies will receive a certificate and an award plaque from the Minister of Science and ICT, as well as additional points in business and institutional evaluations under the relevant regulations, and the selected researchers will be strongly recommended as candidates for national meritorious awards. In addition, to make the 100 selected research projects more accessible to the general public, their main contents will be published in a free e-book, 'The Top 100 for National R&D Excellence of 2021', which will be available from online booksellers.
2022.02.17
AI Light-Field Camera Reads 3D Facial Expressions
Machine-learned, light-field camera reads facial expressions from high-contrast, illumination-invariant 3D facial images

A joint research team led by Professors Ki-Hun Jeong and Doheon Lee from the KAIST Department of Bio and Brain Engineering reported the development of a technique for facial expression detection that merges near-infrared light-field camera techniques with artificial intelligence (AI) technology.

Unlike a conventional camera, a light-field camera contains micro-lens arrays in front of the image sensor, which makes the camera small enough to fit into a smartphone while allowing it to acquire the spatial and directional information of the light with a single shot. The technique has received attention because it can reconstruct images in a variety of ways, including multi-views, refocusing, and 3D image acquisition, giving rise to many potential applications. However, optical crosstalk between the micro-lenses and the shadows cast by external light sources in the environment has kept existing light-field cameras from providing accurate image contrast and 3D reconstruction.

The joint research team applied a vertical-cavity surface-emitting laser (VCSEL) in the near-IR range to stabilize the accuracy of 3D image reconstruction, which previously depended on environmental light. When an external light source shines on a face at 0-, 30-, and 60-degree angles, the light-field camera reduces image reconstruction errors by 54%. Additionally, by inserting a light-absorbing layer for visible and near-IR wavelengths between the micro-lens arrays, the team could minimize optical crosstalk while increasing the image contrast by 2.1 times. Through this technique, the team overcame the limitations of existing light-field cameras and developed a NIR-based light-field camera (NIR-LFC) optimized for the 3D image reconstruction of facial expressions.

Using the NIR-LFC, the team acquired high-quality 3D reconstructions of facial expressions conveying various emotions regardless of the lighting conditions of the surrounding environment. The facial expressions in the acquired 3D images were distinguished through machine learning with an average accuracy of 85%, a statistically significant improvement over using 2D images. Furthermore, by calculating the interdependency of the distance information that varies with facial expression in the 3D images, the team could identify the information a light-field camera utilizes to distinguish human expressions.

Professor Ki-Hun Jeong said, "The sub-miniature light-field camera developed by the research team has the potential to become the new platform to quantitatively analyze the facial expressions and emotions of humans." To highlight the significance of this research, he added, "It could be applied in various fields including mobile healthcare, field diagnosis, social cognition, and human-machine interactions."

This research was published in Advanced Intelligent Systems online on December 16 under the title "Machine-Learned Light-field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images." It was funded by the Ministry of Science and ICT and the Ministry of Trade, Industry and Energy.

-Publication
Sang-In Bae, Sangyeon Lee, Jae-Myeong Kwon, Hyun-Kyung Kim, Kyung-Won Jang, Doheon Lee, and Ki-Hun Jeong, "Machine-learned light-field camera that reads facial expression from high-contrast and illumination invariant 3D facial images," Advanced Intelligent Systems, December 16, 2021 (doi.org/10.1002/aisy.202100182)

-Profile
Professor Ki-Hun Jeong
Biophotonic Laboratory
Department of Bio and Brain Engineering
KAIST

Professor Doheon Lee
Department of Bio and Brain Engineering
KAIST
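The classification step reported above, machine learning applied to distance information extracted from the reconstructed 3D faces, can be sketched roughly as follows. The landmark-based pairwise-distance features and the SVM classifier are assumptions for illustration, not the team's published pipeline.

```python
# Minimal sketch: pairwise 3D distances between facial landmarks as features
# for a standard classifier of facial expressions. Landmark extraction and the
# reported 85%-accuracy model are not published here, so feature choice and
# classifier below are illustrative assumptions.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def distance_features(landmarks_3d):
    """landmarks_3d: (n_landmarks, 3) array from one reconstructed face."""
    return np.array([np.linalg.norm(landmarks_3d[i] - landmarks_3d[j])
                     for i, j in combinations(range(len(landmarks_3d)), 2)])

def evaluate(faces_3d, labels):
    """faces_3d: list of (n_landmarks, 3) arrays; labels: expression per face."""
    X = np.stack([distance_features(f) for f in faces_3d])
    y = np.asarray(labels)
    clf = SVC(kernel="rbf", C=1.0)
    return cross_val_score(clf, X, y, cv=5).mean()
```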
2022.01.21
AI Weather Forecasting Research Center Opens
The Kim Jaechul Graduate School of AI, in collaboration with the National Institute of Meteorological Sciences (NIMS) under the National Meteorological Administration, launched the AI Weather Forecasting Research Center last month. The KAIST AI Weather Forecasting Research Center, headed by Professor Seyoung Yoon, was established with funding from the AlphaWeather Development Research Project of the National Institute of Meteorological Sciences, with KAIST selected as the project facilitator. AlphaWeather is an AI system that utilizes and analyzes approximately 150,000 pieces of weather information per hour to help weather forecasters produce accurate forecasts.

The research center is composed of three research teams with the following goals: (a) develop AI technology for precipitation nowcasting, (b) develop AI technology for accelerating physical process-based numerical models, and (c) develop AI technology for supporting weather forecasters. The teams consist of 15 staff members from NIMS and 61 researchers from the Kim Jaechul Graduate School of AI at KAIST.

The research center is developing an AI algorithm for precipitation nowcasting (with up to six hours of lead time) that uses satellite images, radar reflectivity, and data collected from weather stations. It is also developing an AI algorithm for correcting biases in the prediction results from multiple numerical models. Finally, it is developing AI technology that supports weather forecasters by standardizing and automating repetitive manual processes. After verification, the results will be used by the Korean National Weather Service as the intelligence engine of its next-generation forecasting and special-reporting system from 2026.
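As a rough illustration of the second goal, correcting biases in the outputs of multiple numerical models, one can learn a simple mapping from several model forecasts to station observations. The ridge-regression formulation below is an assumption for illustration, not the center's actual system.

```python
# Minimal sketch of multi-model bias correction: fit a regression from several
# numerical-model forecasts of the same variable to the observed value.
# Model choice, variable names and shapes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def fit_bias_correction(model_forecasts, observations):
    """model_forecasts: (n_samples, n_models) forecasts for the same variable;
    observations: (n_samples,) matching ground-truth measurements."""
    corrector = Ridge(alpha=1.0)
    corrector.fit(model_forecasts, observations)
    return corrector

def corrected_forecast(corrector, new_forecasts):
    """Blend the latest forecasts from all models into one corrected value."""
    return corrector.predict(np.atleast_2d(new_forecasts))
```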
2022.01.10
Face Detection in Untrained Deep Neural Networks
A KAIST team shows that primitive visual selectivity for faces can arise spontaneously in completely untrained deep neural networks

Researchers have found that higher visual cognitive functions can arise spontaneously in untrained neural networks. A KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering has shown that visual selectivity for facial images can arise even in completely untrained deep neural networks. This new finding provides revelatory insights into the mechanisms underlying the development of cognitive functions in both biological and artificial neural networks, and it has significant implications for our understanding of early brain function before sensory experience.

The study, published in Nature Communications on December 16, demonstrates that neuronal activities selective to facial images are observed in randomly initialized deep neural networks in the complete absence of learning, and that these activities show the characteristics of those observed in biological brains.

The ability to identify and recognize faces is a crucial function for social behavior, and this ability is thought to originate from neuronal tuning at the single- or multi-neuron level. Neurons that selectively respond to faces are observed in young animals of various species, which has fueled intense debate over whether face-selective neurons can arise innately in the brain or whether they require visual experience.

Using a model neural network that captures properties of the ventral stream of the visual cortex, the research team found that face selectivity can emerge spontaneously from random feedforward wiring in untrained deep neural networks. The team showed that the character of this innate face selectivity is comparable to that observed in face-selective neurons in the brain, and that this spontaneous neuronal tuning for faces enables the network to perform face-detection tasks. These results imply a possible scenario in which the random feedforward connections that develop in early, untrained networks may be sufficient for initializing primitive visual cognitive functions.

Professor Paik said, "Our findings suggest that innate cognitive functions can emerge spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry, even in the complete absence of learning." He continued, "Our results provide a broad conceptual advance as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence."

This work was supported by the National Research Foundation of Korea (NRF) and by the KAIST singularity research project.

-Publication
Seungdae Baek, Min Song, Jaeson Jang, Gwangsu Kim, and Se-Bum Paik, "Face detection in untrained deep neural networks," Nature Communications 12, 7328, December 16, 2021 (https://doi.org/10.1038/s41467-021-27606-9)

-Profile
Professor Se-Bum Paik
Visual System and Neural Network Laboratory
Program of Brain and Cognitive Engineering
Department of Bio and Brain Engineering
College of Engineering
KAIST
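The core measurement behind this kind of study, testing whether units in a randomly initialized feedforward network respond preferentially to faces, can be sketched as follows. The toy architecture and the selectivity index used here are illustrative assumptions, not the authors' exact model of the ventral stream.

```python
# Minimal sketch: record unit responses of an untrained (randomly initialized)
# CNN to face and non-face images and compute a per-unit face-selectivity index.
# Architecture and index definition are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small random feedforward network standing in for the ventral-stream model.
untrained_net = nn.Sequential(
    nn.Conv2d(3, 16, 7, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

@torch.no_grad()
def face_selectivity_index(face_images, nonface_images):
    """Images: float tensors of shape (n, 3, H, W). Returns one index per unit,
    positive when a unit responds more strongly to faces than to non-faces."""
    r_face = untrained_net(face_images).mean(dim=0)
    r_obj = untrained_net(nonface_images).mean(dim=0)
    return (r_face - r_obj) / (r_face + r_obj + 1e-8)
```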
2021.12.21
Connecting the Dots to Find New Treatments for Breast Cancer
Systems biologists uncover new ways of reprogramming cancer cells to treat drug-resistant cancers

Scientists at KAIST believe they may have found a way to reverse an aggressive, treatment-resistant type of breast cancer into a less dangerous kind that responds well to treatment. The study used mathematical models to untangle the complex genetic and molecular interactions that occur in the two types of breast cancer, but the approach could be extended to find treatments for many others. The findings were published in the journal Cancer Research.

Basal-like tumours are the most aggressive type of breast cancer, with the worst prognosis. Chemotherapy is the only available treatment option, but patients experience high recurrence rates. Luminal-A breast cancer, on the other hand, responds well to drugs that specifically target a receptor on the cancer cells' surfaces called estrogen receptor alpha (ERα).

KAIST systems biologist Kwang-Hyun Cho and colleagues analyzed the complex molecular and genetic interactions of basal-like and luminal-A breast cancers to find out whether there might be a way to switch the former to the latter and give patients a better chance of responding to treatment. To do this, they drew on large amounts of cancer and patient data to understand which genes and molecules are involved in the two types. They then fed this data into a mathematical model that represents genes, proteins, and molecules as dots and the interactions between them as lines. The model can be used to run simulations and see how the interactions change when certain genes are turned on or off.

"There have been a tremendous number of studies trying to find therapeutic targets for treating basal-like breast cancer patients," says Cho. "But clinical trials have failed due to the complex and dynamic nature of cancer. To overcome this issue, we looked at breast cancer cells as a complex network system and implemented a systems biological approach to unravel the underlying mechanisms that would allow us to reprogram basal-like into luminal-A breast cancer cells."

Using this approach, followed by experimental validation on real breast cancer cells, the team found that turning off two key gene regulators, BCL11A and HDAC1/2, switched a basal-like cancer signalling pathway into a different one used by luminal-A cancer cells. The switch reprograms the cancer cells and makes them more responsive to drugs that target ERα receptors. Further tests will be needed to confirm that this also works in animal models and, eventually, humans.

"Our study demonstrates that the systems biological approach can be useful for identifying novel therapeutic targets," says Cho. The researchers are now expanding their breast cancer network model to include all breast cancer subtypes. Their ultimate aim is to identify more drug targets and to understand the mechanisms that could drive drug-resistant cells to become drug-sensitive ones.

This work was supported by the National Research Foundation of Korea, the Ministry of Science and ICT, the Electronics and Telecommunications Research Institute, and the KAIST Grand Challenge 30 Project.

-Publication
Sea R. Choi, Chae Young Hwang, Jonghoon Lee, and Kwang-Hyun Cho, "Network Analysis Identifies Regulators of Basal-like Breast Cancer Reprogramming and Endocrine Therapy Vulnerability," Cancer Research, November 30 (doi:10.1158/0008-5472.CAN-21-0621)

-Profile
Professor Kwang-Hyun Cho
Laboratory for Systems Biology and Bio-Inspired Engineering
Department of Bio and Brain Engineering
KAIST
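The kind of simulation described above, turning nodes of a regulatory network on or off and observing how the network settles, can be illustrated with a minimal Boolean-network sketch. The toy update rules below are assumptions for illustration and not the published breast cancer network model.

```python
# Minimal sketch of a Boolean regulatory-network simulation: nodes are on/off,
# update rules encode interactions, and knockouts force selected nodes off.
# The toy rules below are illustrative assumptions only.
def simulate(rules, state, knockouts=frozenset(), steps=50):
    """rules: {node: function(state_dict) -> bool}; state: initial {node: bool}."""
    state = dict(state)
    for node in knockouts:
        state[node] = False
    for _ in range(steps):
        new_state = {n: (False if n in knockouts else f(state)) for n, f in rules.items()}
        if new_state == state:          # reached a fixed point (attractor)
            break
        state = new_state
    return state

# Toy example: knocking out a self-sustaining repressor switches a marker on.
toy_rules = {
    "BCL11A": lambda s: s["BCL11A"],        # self-sustaining regulator (toy rule)
    "ER_alpha": lambda s: not s["BCL11A"],  # repressed by BCL11A (toy rule)
}
basal_like = simulate(toy_rules, {"BCL11A": True, "ER_alpha": False})
reprogrammed = simulate(toy_rules, {"BCL11A": True, "ER_alpha": False},
                        knockouts={"BCL11A"})
# basal_like leaves ER_alpha off; reprogrammed turns ER_alpha on.
```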
2021.12.07
Industrial Liaison Program to Provide Comprehensive Consultation Services
The ILP's one-stop solutions target all industrial sectors including conglomerates, small and medium-sized enterprises, venture companies, venture capital (VC) firms, and government-affiliated organizations.

The Industrial Liaison Center at KAIST launched the Industrial Liaison Program (ILP) on September 28, an industry-academia cooperation project that provides comprehensive solutions to industry partners. The Industrial Liaison Center will recruit member companies for this service every year, targeting all industrial sectors including conglomerates, small and medium-sized enterprises, venture companies, venture capital (VC) firms, and government-affiliated organizations.

The program plans to build a one-stop support system that systematically shares the excellent resources of KAIST's research teams, R&D achievements, and infrastructure to provide member companies with much-needed services. More than 40 KAIST professors with abundant academia-industry collaboration experience will participate in the program, and experts from various fields will bring different points of view and experiences to jointly provide solutions to ILP member companies.

To lead the joint consultations, KAIST assigned 10 professors from related fields as program directors. The program directors come from four fields: AI/robotics (Professor Alice Oh from the School of Computing, Professor Young Jae Jang from the Department of Industrial & Systems Engineering, and Professor Yong-Hwa Park from the Department of Mechanical Engineering), bio/medicine (Professor Daesoo Kim from the Department of Biological Sciences and Professor YongKeun Park from the Department of Physics), materials/electronics (Professor Sang Ouk Kim from the Department of Materials Science and Engineering and Professors Jun-Bo Yoon and Seonghwan Cho from the School of Electrical Engineering), and environment/energy (Professor Hee-Tak Kim from the Department of Biological Sciences and Professor Hoon Sohn from the Department of Civil and Environmental Engineering).

The transdisciplinary board of consulting professors that will lead technology innovation is composed of 30 professors, including Professor Min-Soo Kim (School of Computing, AI), Professor Chan Hyuk Kim (Department of Biological Sciences, medicine), Professor Hae-Won Park (Department of Mechanical Engineering, robotics), Professor Changho Suh (School of Electrical Engineering, electronics), Professor Haeshin Lee (Department of Chemistry, bio), Professor Il-Doo Kim (Department of Materials Science and Engineering, materials), Professor HyeJin Kim (School of Business and Technology Management), and Professor Byoung Pil Kim (School of Business and Technology Management, technology law).

The head of the Industrial Liaison Center, who is also in charge of the program, Professor Keon Jae Lee, said, "In a science and technology-oriented generation where technological supremacy determines national power, it is indispensable to build a new platform upon which innovative academic-industrial cooperation can be pushed forward in the fields of joint consultation, the development of academic-industrial projects, and the foundation of new industries."

He added, "KAIST professors carry out world-class research in many different fields, and faculty members can come together through the ILP to communicate with representatives from industry to improve their corporations' global competitiveness and further contribute to our nation's interests by cultivating strong small enterprises."
2021.09.30
Hydrogel-Based Flexible Brain-Machine Interface
The interface is easy to insert into the body when dry, but behaves 'stealthily' inside the brain once wet

Professor Seongjun Park's research team and collaborators have revealed a newly developed hydrogel-based flexible brain-machine interface. To study the structure of the brain or to identify and treat neurological diseases, it is crucial to develop an interface that can stimulate the brain and detect its signals in real time. However, existing neural interfaces are mechanically and chemically different from real brain tissue. This causes a foreign-body response and the formation of an insulating layer (glial scar) around the interface, which shortens its lifespan.

To solve this problem, the research team developed a 'brain-mimicking interface' by inserting a custom-made multifunctional fiber bundle into a hydrogel body. The device contains an optical fiber that controls specific nerve cells with light for optogenetic procedures, an electrode bundle to read brain signals, and a microfluidic channel to deliver drugs to the brain.

The interface is easy to insert into the body when dry, because the hydrogel is solid in that state. Once inside the body, however, the hydrogel quickly absorbs bodily fluids and takes on the properties of the surrounding tissue, thereby minimizing the foreign-body response.

The research team applied the device in animal models and showed that it was possible to detect neural signals for up to six months, far beyond what had previously been recorded. It was also possible to conduct long-term optogenetic and behavioral experiments on freely moving mice with a significant reduction in foreign-body responses, such as glial and immunological activation, compared to existing devices.

"This research is significant in that it was the first to utilize a hydrogel as part of a multifunctional neural interface probe, which increased its lifespan dramatically," said Professor Park. "With our discovery, we look forward to advancements in research on neurological disorders like Alzheimer's or Parkinson's disease that require long-term observation."

The research was published in Nature Communications on June 8, 2021 under the title "Adaptive and multifunctional hydrogel hybrid probes for long-term sensing and modulation of neural activity." The study was conducted jointly with an MIT research team composed of Professor Polina Anikeeva, Professor Xuanhe Zhao, and Dr. Hyunwoo Yuk. This research was supported by the National Research Foundation (NRF) grant for emerging research, the Korea Medical Device Development Fund, the KK-JRC Smart Project, the KAIST Global Initiative Program, and the Post-AI Project.

-Publication
Park, S., Yuk, H., Zhao, R. et al., "Adaptive and multifunctional hydrogel hybrid probes for long-term sensing and modulation of neural activity," Nature Communications 12, 3435 (2021). https://doi.org/10.1038/s41467-021-23802-9

-Profile
Professor Seongjun Park
Bio and Neural Interfaces Laboratory
Department of Bio and Brain Engineering
KAIST
2021.07.13
Prof. Sang Wan Lee Selected for 2021 IBM Academic Award
Professor Sang Wan Lee from the Department of Bio and Brain Engineering has been selected as the recipient of the 2021 IBM Global University Program Academic Award. The award recognizes individual faculty members whose emerging science and technology is of significant interest to universities and IBM. Professor Lee, whose research focuses on artificial intelligence and computational neuroscience, won the award for his research proposal titled "A Neuroscience-Inspired Approach for Metacognitive Reinforcement Learning." IBM provides a gift of $40,000 to the recipient's institution in recognition of the selected project, not as a contract for services.

Professor Lee's project aims to exploit the unique characteristics of human reinforcement learning. Specifically, he plans to examine the hypothesis that metacognition, a human's ability to estimate their own level of uncertainty, serves to guide sample-efficient and near-optimal exploration, making it possible to achieve an optimal balance between model-based and model-free reinforcement learning.

He was also a winner of a Google Research Award in 2016 and has been working with DeepMind and University College London on basic research into the decision-making brain science of meta-reinforcement learning in the frontal lobe. "We plan to conduct joint research on brain-based artificial intelligence technology and on modeling frontal-lobe meta-reinforcement learning in collaboration with an international research team including IBM, DeepMind, MIT, and Oxford," Professor Lee said.
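The proposed balance between model-based and model-free reinforcement learning, guided by metacognitive uncertainty estimates, can be sketched as a simple reliability-weighted arbitration. The weighting scheme and parameters below are illustrative assumptions, not Professor Lee's model.

```python
# Minimal sketch of uncertainty-based arbitration between a model-based and a
# model-free controller: the less uncertain controller gets more weight, and
# actions are chosen by softmax over the blended values. All parameters are
# illustrative assumptions.
import numpy as np

def arbitrate(q_mb, q_mf, uncertainty_mb, uncertainty_mf, beta=3.0):
    """q_mb, q_mf: per-action value estimates from the two controllers.
    uncertainty_*: scalar uncertainty (e.g. recent prediction-error variance)."""
    reliability_mb = 1.0 / (uncertainty_mb + 1e-8)
    reliability_mf = 1.0 / (uncertainty_mf + 1e-8)
    w = reliability_mb / (reliability_mb + reliability_mf)   # weight on model-based
    q = w * np.asarray(q_mb) + (1 - w) * np.asarray(q_mf)
    prefs = np.exp(beta * (q - q.max()))                     # softmax action choice
    return prefs / prefs.sum(), w
```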
2021.06.25
Ultrafast, On-Chip PCR Could Speed Up Diagnoses during Pandemics
A rapid point-of-care diagnostic plasmofluidic chip can deliver results in only 8 minutes

Reverse transcription-polymerase chain reaction (RT-PCR) has been the gold standard for diagnosis during the COVID-19 pandemic. However, the PCR portion of the test requires bulky, expensive machines and takes about an hour to complete, making it difficult to quickly diagnose someone at a testing site. Now, researchers at KAIST have developed a plasmofluidic chip that can perform PCR in only about 8 minutes, which could speed up diagnoses during current and future pandemics.

The rapid diagnosis of COVID-19 and other highly contagious viral diseases is important for timely medical care, quarantining, and contact tracing. Currently, RT-PCR uses enzymes to reverse transcribe tiny amounts of viral RNA to DNA and then amplifies the DNA so that it can be detected by a fluorescent probe. It is the most sensitive and reliable diagnostic method. But because the PCR portion of the test requires 30-40 cycles of heating and cooling in special machines, it takes about an hour to perform, and samples must typically be sent away to a lab, meaning that a patient usually has to wait a day or two to receive a diagnosis. Professor Ki-Hun Jeong at the Department of Bio and Brain Engineering and his colleagues wanted to develop a plasmofluidic PCR chip that could quickly heat and cool minuscule volumes of liquid, allowing accurate point-of-care diagnoses in a fraction of the time. The research was reported in ACS Nano on May 19.

The researchers devised a postage-stamp-sized polydimethylsiloxane chip with a microchamber array for the PCR reactions. When a drop of a sample is added to the chip, a vacuum pulls the liquid into the microchambers, which are positioned above glass nanopillars topped with gold nanoislands. Any microbubbles, which could interfere with the PCR reaction, diffuse out through an air-permeable wall. When a white LED is turned on beneath the chip, the gold nanoislands on the nanopillars quickly convert light to heat and then rapidly cool when the light is switched off.

The researchers tested the device on a piece of DNA containing a SARS-CoV-2 gene, accomplishing 40 heating and cooling cycles and fluorescence detection in only 5 minutes, with an additional 3 minutes for sample loading. The amplification efficiency was 91%, whereas a comparable conventional PCR process has an efficiency of 98%. With the reverse transcription step added prior to sample loading, the entire test with the new method could take 10-13 minutes, as opposed to about an hour for typical RT-PCR testing. The new device could provide many opportunities for rapid point-of-care diagnostics during a pandemic, the researchers say.

-Publication
"Ultrafast and Real-Time Nanoplasmonic On-Chip Polymerase Chain Reaction for Rapid and Quantitative Molecular Diagnostics," ACS Nano (https://doi.org/10.1021/acsnano.1c02154)

-Profile
Professor Ki-Hun Jeong
Biophotonics Laboratory
https://biophotonics.kaist.ac.kr/
Department of Bio and Brain Engineering
KAIST
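As a quick back-of-the-envelope check of the reported efficiencies: with per-cycle efficiency e, n thermal cycles amplify the target roughly (1 + e)^n-fold. The short sketch below simply compares the 91% on-chip figure with the 98% conventional figure over 40 cycles.

```python
# Rough comparison of fold-amplification implied by the reported efficiencies.
def amplification(efficiency, cycles=40):
    """Approximate fold-amplification after the given number of PCR cycles."""
    return (1 + efficiency) ** cycles

fold_on_chip = amplification(0.91)       # roughly 1.7e11-fold after 40 cycles
fold_conventional = amplification(0.98)  # roughly 7.4e11-fold after 40 cycles
```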
2021.06.08
What Guides Habitual Seeking Behavior Explained
A new role of the ventral striatum explains habitual seeking behavior

Researchers have been investigating how the brain controls habitual seeking behaviors such as addiction. A recent study by Professor Sue-Hyun Lee from the Department of Bio and Brain Engineering revealed that a long-term value memory maintained in the ventral striatum is a neural basis of our habitual seeking behavior. The research was conducted in collaboration with a team led by Professor Hyoung F. Kim from Seoul National University. Given that addictive behavior is considered habitual, this research provides new insights for developing therapeutic interventions for addiction.

Habitual seeking behavior involves strong stimulus responses, mostly rapid and automatic ones. The ventral striatum has long been thought to be important for value learning and addictive behaviors, but it was unclear whether it processes and retains the long-term memories that guide habitual seeking. Professor Lee's team reported a new role of the human ventral striatum, in which long-term memories of high-valued objects are retained as a single representation and may be used to evaluate visual stimuli automatically to guide habitual behavior. "Our findings propose a role of the ventral striatum as a director that guides habitual behavior with the script of value information written in the past," said Professor Lee.

The research team investigated whether learned values were retained in the ventral striatum while the subjects passively viewed previously learned objects in the absence of any immediate outcome. Neural responses in the ventral striatum during the incidental perception of learned objects were examined using fMRI and single-unit recording. The study found significant value-discrimination responses in the ventral striatum after learning and a retention period of several days. Moreover, the similarity of the neural representations of good objects increased after learning, an outcome positively correlated with the habitual seeking response for good objects.

"These findings suggest that the ventral striatum plays a role in the automatic evaluation of objects, based on the neural representation of positive values retained since learning, to guide habitual seeking behaviors," explained Professor Lee. "We will fully investigate the function of different parts of the entire basal ganglia, including the ventral striatum. We also expect that this understanding may lead to the development of better treatments for mental illnesses related to habitual behaviors or addiction."

This study, supported by the National Research Foundation of Korea, was reported in Nature Communications (https://doi.org/10.1038/s41467-021-22335-5).

-Profile
Professor Sue-Hyun Lee
Memory and Cognition Laboratory
http://memory.kaist.ac.kr/lecture
Department of Bio and Brain Engineering
KAIST
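The similarity result mentioned above, increased similarity between the neural patterns evoked by learned high-value objects, can be quantified roughly as the mean pairwise correlation between response patterns. The sketch below assumes generic response arrays and is not the study's analysis code.

```python
# Minimal sketch of a pattern-similarity measure: average pairwise Pearson
# correlation between the response patterns evoked by different objects.
# Data shapes and the before/after comparison are illustrative assumptions.
import numpy as np

def mean_pattern_similarity(patterns):
    """patterns: (n_objects, n_features) responses to high-value objects.
    Returns the average pairwise correlation between object patterns."""
    corr = np.corrcoef(patterns)
    upper = corr[np.triu_indices_from(corr, k=1)]
    return upper.mean()

# Example usage (hypothetical arrays):
# increase = mean_pattern_similarity(after_learning) - mean_pattern_similarity(before_learning)
```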
2021.06.03
Attachable Skin Monitors that Wick the Sweat Away
- A silicone membrane for wearable devices is more comfortable and breathable thanks to better-sized pores made with the help of citric acid crystals.

A new preparation technique fabricates thin, silicone-based patches that rapidly wick water away from the skin. The technique could reduce the redness and itching caused by wearable biosensors that trap sweat beneath them. It was developed by bioengineering professor Young-Ho Cho and his colleagues at KAIST and reported in the journal Scientific Reports last month.

"Wearable bioelectronics are becoming more attractive for the day-to-day monitoring of biological compounds found in sweat, like hormones or glucose, as well as body temperature, heart rate, and energy expenditure," Professor Cho explained. "But currently available materials can cause skin irritation, so scientists are looking for ways to improve them," he added.

Attachable biosensors often use a silicone-based compound called polydimethylsiloxane (PDMS), as it has a relatively high water vapour transmission rate compared to other materials. Still, this rate is only two-thirds of the skin's water evaporation rate, meaning sweat still gets trapped underneath it. Current fabrication approaches mix PDMS with beads or solutes, such as sugars or salts, and then remove them to leave pores in their place. Another technique uses gas to form pores in the material. Each technique has its disadvantages, from being expensive and complex to leaving pores of different sizes.

A team of researchers led by Professor Cho from the KAIST Department of Bio and Brain Engineering was able to form small, uniform pores by crystallizing citric acid in PDMS and then removing the crystals with ethanol. The approach is significantly cheaper than using beads and leads to 93.2% smaller and 425% more uniformly sized pores compared to using sugar. Importantly, the membrane transmits water vapour 2.2 times faster than human skin. The team tested their membrane on human skin for seven days and found that it caused only minor redness and no itching, whereas a non-porous PDMS membrane did.

Professor Cho said, "Our method could be used to fabricate porous PDMS membranes for skin-attachable devices used for the daily monitoring of physiological signals." He added, "We next plan to modify our membrane so it can be more readily attached to and removed from the skin."

This work was supported by the Ministry of Trade, Industry and Energy (MOTIE) of Korea under the Alchemist Project.

Image description: Smaller, more uniformly sized pores are made in the PDMS membrane by mixing PDMS, toluene, citric acid, and ethanol. Toluene dilutes the PDMS so it can easily mix with the other two constituents. The toluene and ethanol are then evaporated, which causes the citric acid to crystallize within the PDMS. The mixture is placed in a mould, where it solidifies into a thin film. The crystals are then removed using ethanol, leaving pores in their place.
Image credit: Professor Young-Ho Cho, KAIST
Image usage restrictions: News organizations may use or redistribute this image, with proper attribution, as part of news coverage of this paper only.

-Publication
Yoon, S., et al. (2021) "Wearable porous PDMS layer of high moisture permeability for skin trouble reduction," Scientific Reports 11, Article No. 938. Available online at https://doi.org/10.1038/s41598-020-78580-z

-Profile
Professor Young-Ho Cho, Ph.D.
mems@kaist.ac.kr
NanoSentuating Systems Laboratory
https://mems.kaist.ac.kr
Department of Bio and Brain Engineering
Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
2021.02.22