KAIST NEWS
KAIST Introduces ‘Virtual Teaching Assistant’ That Can Answer Even in the Middle of the Night – Successful First Deployment in the Classroom
- Research teams led by Prof. Yoonjae Choi (Kim Jaechul Graduate School of AI) and Prof. Hwajung Hong (Department of Industrial Design) at KAIST developed a Virtual Teaching Assistant (VTA) to support learning and class operations for a course with 477 students.
- The VTA responds 24/7 to students’ questions about theory and practice by referencing lecture slides, coding assignments, and lecture videos.
- The system’s source code has been released to support the development of personalized learning support systems and their application in educational settings.

< Photo 1. (From left) PhD candidate Sunjun Kweon, Master's candidate Sooyohn Nam, PhD candidate Hyunseung Lim, Professor Hwajung Hong, Professor Yoonjae Choi >

“At first, I didn’t have high expectations for the Virtual Teaching Assistant (VTA), but it turned out to be extremely helpful. When I had sudden questions late at night, I could get immediate answers,” said Jiwon Yang, a Ph.D. student at KAIST. “I was also able to ask questions I would have hesitated to bring up with a human TA, which led me to ask even more and ultimately improved my understanding of the course.”

KAIST (President Kwang Hyung Lee) announced on June 5th that a joint research team led by Prof. Yoonjae Choi of the Kim Jaechul Graduate School of AI and Prof. Hwajung Hong of the Department of Industrial Design has developed and deployed a Virtual Teaching Assistant (VTA) that provides personalized feedback to individual students even in large-scale classes. This study marks one of the first large-scale, real-world deployments in Korea: the VTA was introduced in the “Programming for Artificial Intelligence” course at the KAIST Kim Jaechul Graduate School of AI, taken by 477 master’s and Ph.D. students during the Fall 2024 semester, to evaluate its effectiveness and practical applicability in an actual educational setting.
The AI teaching assistant developed in this study is a course-specialized agent, distinct from general-purpose tools like ChatGPT or conventional chatbots. The research team implemented a Retrieval-Augmented Generation (RAG) architecture, which automatically vectorizes a large volume of course materials (lecture slides, coding assignments, and video lectures) and uses them as the basis for answering students’ questions.

< Photo 2. A teaching assistant demonstrating to a student how the Virtual Teaching Assistant works >

When a student asks a question, the system retrieves the most relevant course materials in real time based on the context of the query, and then generates a response. This is not a simple call to a large language model (LLM), but a material-grounded question answering system tailored to the course content, ensuring both high reliability and accuracy in learning support.

Sunjun Kweon, the first author of the study and head teaching assistant for the course, explained, “Previously, TAs were overwhelmed with repetitive and basic questions, such as concepts already covered in class or simple definitions, which made it difficult to focus on more meaningful inquiries.” He added, “After introducing the VTA, students asked fewer repeated questions and focused on more essential ones. As a result, the burden on TAs was significantly reduced, allowing us to concentrate on providing more advanced learning support.” In fact, compared to the previous year’s course, the number of questions that required direct responses from human TAs decreased by approximately 40%.

< Photo 3. A student working with the VTA >

The VTA, which was operated over a 14-week period, was actively used by more than half of the enrolled students, with a total of 3,869 Q&A interactions recorded.
Notably, students without a background in AI or with limited prior knowledge tended to use the VTA more frequently, indicating that the system provided practical support as a learning aid, especially for those who needed it most. The analysis also showed that students asked the VTA about theoretical concepts more often than they asked human TAs. This suggests that the AI teaching assistant created an environment where students felt free to ask questions without fear of judgment or discomfort, encouraging more active engagement in the learning process.

According to surveys conducted before, during, and after the course, students reported increasing trust in the VTA, greater response relevance, and growing comfort with it over time. In particular, students who had previously hesitated to ask human TAs questions reported higher satisfaction when interacting with the AI teaching assistant.

< Figure 1. Internal structure of the AI Teaching Assistant (VTA) applied in this course. It follows a Retrieval-Augmented Generation (RAG) structure that builds a vector database from course materials (PDFs, recorded lectures, coding practice materials, etc.), retrieves relevant documents based on the student's question and conversation history, and then generates responses grounded in them. >

Professor Yoonjae Choi, the lead instructor of the course and principal investigator of the study, stated, “The significance of this research lies in demonstrating that AI technology can provide practical support to both students and instructors. We hope to see this technology expanded to a wider range of courses in the future.” The research team has released the system’s source code on GitHub, enabling other educational institutions and researchers to develop their own customized learning support systems and apply them in real-world classroom settings.

< Figure 2. Initial screen of the AI Teaching Assistant (VTA) introduced in the "Programming for AI" course.
The screen asks for a student ID along with brief guidelines, a mechanism that ensures only registered students can use the system and blocks indiscriminate external access. >

The related paper, titled “A Large-Scale Real-World Evaluation of an LLM-Based Virtual Teaching Assistant,” was accepted on May 9, 2025, to the Industry Track of ACL 2025, one of the most prestigious international conferences in the field of Natural Language Processing (NLP), in recognition of the excellence of the research.

< Figure 3. Example conversation with the AI Teaching Assistant (VTA). When a student enters a class-related question, the system internally retrieves relevant class materials and then generates an answer based on them. In this way, the VTA provides learning support that reflects class content in context. >

This research was conducted with the support of the KAIST Center for Teaching and Learning Innovation, the National Research Foundation of Korea, and the National IT Industry Promotion Agency.
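The retrieval step at the heart of this RAG design can be sketched in a few lines. The snippet below is a minimal illustration, not the released VTA code: it uses a plain TF-IDF bag-of-words index and cosine similarity in place of the neural embedding model and vector database the actual system uses, and the course-material chunks are invented examples.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_vectors(texts):
    """Build a sparse TF-IDF vector for each text."""
    tokenized = [tokenize(t) for t in texts]
    n = len(texts)
    df = Counter(tok for toks in tokenized for tok in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: c * math.log((1 + n) / (1 + df[t])) for t, c in tf.items()})
    return vectors

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k course-material chunks most relevant to the query."""
    vectors = tfidf_vectors(chunks + [query])
    qvec = vectors[-1]
    ranked = sorted(range(len(chunks)),
                    key=lambda i: cosine(qvec, vectors[i]), reverse=True)
    return [chunks[i] for i in ranked[:k]]

# Invented course-material chunks for illustration.
course_chunks = [
    "Lecture 3 covers backpropagation and the chain rule for computing gradients.",
    "Assignment 2: implement a convolutional layer and train it on CIFAR-10.",
    "Lecture 7 introduces attention and the transformer architecture.",
]

context = retrieve("How does backpropagation use the chain rule?", course_chunks)[0]
# The retrieved chunk is placed into the LLM prompt so the answer is
# grounded in course material rather than the model's general knowledge.
prompt = f"Answer using only this course material:\n{context}\nQuestion: ..."
```

The grounding step at the end is what distinguishes this design from a bare LLM call: the model is asked to answer from the retrieved material, which is why the article describes the system as material-grounded rather than a simple chatbot.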
2025.06.05
“For the First Time, We Shared a Meaningful Exchange”: KAIST Develops an AI App That Helps Parents and Minimally Verbal Autistic Children Connect
• KAIST teams up with NAVER AI Lab and the Dodakim Child Development Center to develop ‘AACessTalk’, an AI-driven communication tool bridging the gap between children with autism and their parents
• The project earned the Best Paper Award at ACM CHI 2025, the premier international conference in human-computer interaction
• Families share heartwarming stories of breakthrough communication and newfound understanding

< Photo 1. (From left) Professor Hwajung Hong and doctoral candidate Dasom Choi of the Department of Industrial Design with SoHyun Park and Young-Ho Kim of Naver Cloud AI Lab >

For many families of minimally verbal autistic (MVA) children, communication often feels like an uphill battle. But now, thanks to a new AI-powered app developed by researchers at KAIST in collaboration with NAVER AI Lab and the Dodakim Child Development Center, parents are finally experiencing moments of genuine connection with their children.

On the 16th, the KAIST (President Kwang Hyung Lee) research team, led by Professor Hwajung Hong of the Department of Industrial Design, announced the development of ‘AACessTalk,’ an artificial intelligence (AI)-based communication tool that enables genuine communication between children with autism and their parents. This research was recognized for its human-centered AI approach and received international attention, earning the Best Paper Award at ACM CHI 2025*, an international conference held in Yokohama, Japan.

*ACM CHI (ACM Conference on Human Factors in Computing Systems) 2025: One of the world's most prestigious academic conferences in the field of Human-Computer Interaction (HCI). This year, approximately 1,200 papers were selected out of about 5,000 submissions, with the Best Paper Award given to only the top 1%.
The conference, which drew over 5,000 researchers, was the largest in its history, reflecting the growing interest in ‘Human-AI Interaction.’

The app, AACessTalk, offers personalized vocabulary cards tailored to each child’s interests and context, while guiding parents through conversations with customized prompts. This creates a space where children’s voices can finally be heard, and where parents and children can connect on a deeper level. Traditional augmentative and alternative communication (AAC) tools have relied heavily on fixed card systems that often fail to capture the subtle emotions and shifting interests of children with autism. AACessTalk breaks new ground by integrating AI technology that adapts in real time to the child’s mood and environment.

< Figure. Schematics of the AACessTalk system. It provides personalized vocabulary cards for children with autism and context-based conversation guides for parents, focusing on practical communication. A large ‘Turn Pass Button’ is placed on the child’s side to allow the child to lead the conversation. >

Among its standout features is a large ‘Turn Pass Button’ that gives children control over when to start or end conversations, allowing them to lead with agency. Another feature, the “What about Mom/Dad?” button, encourages children to ask about their parents’ thoughts, fostering mutual engagement in dialogue, something many children had never done before.

One parent shared, “For the first time, we shared a meaningful exchange.” Such stories were common among the 11 families who participated in a two-week pilot study, in which children used the app to take more initiative in conversations and parents discovered new layers of their children’s language abilities. Parents also reported moments of surprise and joy when their children used unexpected words or took the lead in conversations, breaking free from repetitive patterns. “I was amazed when my child used a word I hadn’t heard before.
It helped me understand them in a whole new way,” recalled one caregiver.

Professor Hwajung Hong, who led the research at KAIST’s Department of Industrial Design, emphasized the importance of empowering children to express their own voices. “This study shows that AI can be more than a communication aid—it can be a bridge to genuine connection and understanding within families,” she said. Looking ahead, the team plans to refine and expand human-centered AI technologies that honor neurodiversity, with a focus on bringing practical solutions to socially vulnerable groups and enriching user experiences.

This research is the result of KAIST Department of Industrial Design doctoral student Dasom Choi's internship at NAVER AI Lab.
* Thesis Title: AACessTalk: Fostering Communication between Minimally Verbal Autistic Children and Parents with Contextual Guidance and Card Recommendation
* DOI: 10.1145/3706598.3713792
* Main Author Information: Dasom Choi (KAIST, NAVER AI Lab, First Author), SoHyun Park (NAVER AI Lab), Kyungah Lee (Dodakim Child Development Center), Hwajung Hong (KAIST), and Young-Ho Kim (NAVER AI Lab, Corresponding Author)

This research was supported by the NAVER AI Lab internship program and grants from the National Research Foundation of Korea: the Doctoral Student Research Encouragement Grant (NRF-2024S1A5B5A19043580) and the Mid-Career Researcher Support Program for the Development of a Generative AI-Based Augmentative and Alternative Communication System for Autism Spectrum Disorder (RS-2024-00458557).
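The card-recommendation idea can be illustrated with a toy ranking function. Everything below is a hypothetical sketch: the real AACessTalk uses a large language model for contextual card recommendation, whereas this version simply scores candidate cards by overlap with the child's interest tags and with words already used in the conversation, with made-up weights.

```python
def rank_cards(cards, interests, context_words, k=2):
    """Rank candidate vocabulary cards: overlap with the child's interests
    is weighted more heavily than overlap with the ongoing conversation
    (the 2:1 weighting is an illustrative assumption)."""
    def score(card):
        tags = card["tags"]
        return 2 * len(tags & interests) + len(tags & context_words)
    return sorted(cards, key=score, reverse=True)[:k]

# Invented card inventory and child profile for illustration.
cards = [
    {"word": "train", "tags": {"vehicles", "play"}},
    {"word": "apple", "tags": {"food", "snack"}},
    {"word": "swing", "tags": {"playground", "play"}},
    {"word": "mom",   "tags": {"family"}},
]
interests = {"vehicles", "play"}
context_words = {"play", "outside"}

suggested = rank_cards(cards, interests, context_words)
```

A fixed-card AAC board would always show the same vocabulary; the point of the ranking step is that the shortlist shifts as the child's context changes, which is the behavior the article attributes to the AI-driven design.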
2025.05.19
Decoding Fear: KAIST Identifies an Affective Brain Circuit Crucial for Fear Memory Formation by Non-nociceptive Threat Stimulus
Fear memories can form in the brain following exposure to threatening situations such as natural disasters, accidents, or violence. When these memories become excessive or distorted, they can lead to severe mental health disorders, including post-traumatic stress disorder (PTSD), anxiety disorders, and depression. However, the mechanisms underlying fear memory formation triggered by affective pain rather than direct physical pain had remained largely unexplored, until now.

A KAIST research team has identified, for the first time, a brain circuit specifically responsible for forming fear memories in the absence of physical pain, marking a significant advance in understanding how psychological distress is processed and drives fear memory formation in the brain. This discovery opens the door to the development of targeted treatments for trauma-related conditions by addressing the underlying neural pathways.

< Photo 1. (From left) Professor Jin-Hee Han, Dr. Junho Han, and Ph.D. candidate Boin Suh of the Department of Biological Sciences >

KAIST (President Kwang-Hyung Lee) announced on May 15th that the research team led by Professor Jin-Hee Han in the Department of Biological Sciences has identified the pIC-PBN circuit*, a key neural pathway involved in forming fear memories triggered by psychological threats in the absence of sensory pain. This groundbreaking work was conducted through experiments with mice.

*pIC-PBN circuit: A newly identified descending neural pathway from the posterior insular cortex (pIC) to the parabrachial nucleus (PBN), specialized for transmitting psychological threat information.

Traditionally, the lateral parabrachial nucleus (PBN) has been recognized as a critical part of the ascending pain pathway, receiving pain signals from the spinal cord.
However, this study reveals a previously unknown role for the PBN in processing fear induced by non-painful psychological stimuli, fundamentally changing our understanding of its function in the brain. This work provides the first experimental evidence that ‘emotional distress’ and ‘physical pain’ are processed through different neural circuits to form fear memories, a significant contribution to the field of neuroscience. It clearly demonstrates the existence of a dedicated pathway (pIC-PBN) for transmitting emotional distress.

The study's first author, Dr. Junho Han, shared the personal motivation behind the research: “Our dog, Lego, is afraid of motorcycles. He never actually crashed into one, but ever since a traumatizing event in which a motorbike almost ran into him, just hearing the sound triggers a fearful response. Humans react similarly: even without the personal experience of being involved in an accident, a near-miss or exposure to alarming media can create lasting fear memories, which may eventually lead to PTSD.”

He continued, “Until now, fear memory research has mainly relied on experimental models involving physical pain. However, much real-world human fear arises from psychological threats rather than direct physical harm. Despite this, little was known about the brain circuits that process these psychological threats and drive fear memory formation.”

To investigate this, the research team developed a novel fear conditioning model that uses visual threat stimuli instead of electrical shocks. In this model, mice were exposed to a rapidly expanding visual disk on a ceiling screen, simulating the threat of an approaching predator. This approach allowed the team to demonstrate that fear memories can form in response to a non-nociceptive, psychological threat alone, without physical pain.

< Figure 1.
Artificial activation of the posterior insular cortex (pIC) to lateral parabrachial nucleus (PBN) neural circuit induces anxiety-like behaviors and fear memory formation in mice. >

Using advanced chemogenetic and optogenetic techniques, the team precisely controlled neuronal activity, revealing that the lateral parabrachial nucleus (PBN) is essential for forming fear memories in response to visual threats. They further traced the origin of these signals to the posterior insular cortex (pIC), a region known to process negative emotions and pain, confirming a direct connection between the two areas.

The study also showed that inhibiting the pIC-PBN circuit significantly reduced fear memory formation in response to visual threats, without affecting innate fear responses or physical pain-based learning. Conversely, artificially activating this circuit alone was sufficient to drive fear memory formation, confirming its role as a key pathway for processing psychological threat information.

< Figure 2. Schematic diagram of the brain neural circuits transmitting emotional and physical pain threat signals. Visual threat stimuli do not involve physical pain but can create an anxious state and form fear memory through the affective pain signaling pathway. >

Professor Jin-Hee Han commented, “This study lays an important foundation for understanding how emotional distress-based mental disorders, such as PTSD, panic disorder, and anxiety disorder, develop, and opens new possibilities for targeted treatment approaches.”

The findings, authored by Dr. Junho Han (first author), Ph.D. candidate Boin Suh (second author), and Dr. Jin-Hee Han (corresponding author) of the Department of Biological Sciences, were published online in the international journal Science Advances on May 9, 2025.

※ Paper Title: A top-down insular cortex circuit crucial for non-nociceptive fear learning.
Science Advances (https://doi.org/10.1101/2024.10.14.618356)
※ Author Information: Junho Han (first author), Boin Suh (second author), and Jin-Hee Han (corresponding author)

This research was supported by grants from the National Research Foundation of Korea (NRF-2022M3E5E8081183 and NRF-2017M3C7A1031322).
2025.05.15
KAIST's Pioneering VR Precision Technology & Choreography Tool Receive the Spotlight at CHI 2025
Accurate pointing in virtual spaces is essential for seamless interaction. If pointing is not precise, selecting the desired object becomes difficult, breaking user immersion and reducing overall experience quality. KAIST researchers have developed a technology that offers a vivid, lifelike experience in virtual space, alongside a new tool that assists choreographers throughout the creative process.

KAIST (President Kwang-Hyung Lee) announced on May 13th that a research team led by Professor Sang Ho Yoon of the Graduate School of Culture Technology, in collaboration with Professor Yang Zhang of the University of California, Los Angeles (UCLA), has developed the ‘T2IRay’ input technology and ‘ChoreoCraft,’ a platform that enables choreographers to work more freely and creatively in virtual reality. These technologies received two Honorable Mention awards, given to the top 5% of papers, at CHI 2025*, the premier international conference in the field of human-computer interaction, hosted by the Association for Computing Machinery (ACM) from April 25 to May 1.

< (From left) PhD candidates Jina Kim and Kyungeun Jung, Master's candidate Hyunyoung Han, and Professor Sang Ho Yoon of the KAIST Graduate School of Culture Technology, and Professor Yang Zhang (top) of UCLA >

T2IRay: Enabling Virtual Input with Precision

T2IRay introduces a novel input method that allows precise object pointing in virtual environments by extending traditional thumb-to-index gestures. This approach overcomes previous limitations, such as interruptions or reduced accuracy when the hand's position or orientation changes. The technology uses a local coordinate system based on the relationships between the fingers, ensuring continuous input even as hand positions shift. It accurately captures subtle thumb movements within this coordinate system and integrates natural head movements to allow fluid, intuitive control across a wide range.

< Figure 1.
T2IRay framework utilizing the delicate movements of the thumb and index finger for AR/VR pointing >

Professor Sang Ho Yoon explained, “T2IRay can significantly enhance the user experience in AR/VR by enabling smooth, stable control even when the user's hands are in motion.”

This study, led by first author Jina Kim, was supported by the Excellent New Researcher Support Project of the National Research Foundation of Korea under the Ministry of Science and ICT, as well as the University ICT Research Center (ITRC) Support Project of the Institute of Information and Communications Technology Planning and Evaluation (IITP).

▴ Paper title: T2IRay: Design of Thumb-to-Index Based Indirect Pointing for Continuous and Robust AR/VR Input
▴ Paper link: https://doi.org/10.1145/3706598.3713442
▴ T2IRay demo video: https://youtu.be/ElJlcJbkJPY

ChoreoCraft: Creativity Support through VR for Choreographers

In addition, Professor Yoon's team developed ‘ChoreoCraft,’ a virtual reality tool designed to support choreographers by addressing the unique challenges they face, such as memorizing complex movements, overcoming creative blocks, and managing subjective feedback. ChoreoCraft reduces reliance on memory by allowing choreographers to save and refine movements directly within a VR space, using a motion-capture avatar for real-time interaction. It also enhances creativity by suggesting movements that naturally fit with prior choreography and musical elements. Furthermore, the system provides quantitative feedback by analyzing kinematic factors such as motion stability and engagement, helping choreographers make data-driven creative decisions.

< Figure 2. ChoreoCraft's approaches to encourage the creative process >

Professor Yoon noted, “ChoreoCraft is a tool designed to address the core challenges faced by choreographers, enhancing both creativity and efficiency.
In user tests with professional choreographers, it received high marks for its ability to spark creative ideas and provide valuable quantitative feedback.”

This research was conducted in collaboration with doctoral candidate Kyungeun Jung and master's candidate Hyunyoung Han, alongside the Electronics and Telecommunications Research Institute (ETRI) and One Million Co., Ltd. (CEO Hye-rang Kim), with support from the Cultural and Arts Immersive Service Development Project of the Ministry of Culture, Sports and Tourism.

▴ Paper title: ChoreoCraft: In-situ Crafting of Choreography in Virtual Reality through Creativity Support Tools
▴ Paper link: https://doi.org/10.1145/3706598.3714220
▴ ChoreoCraft demo video: https://youtu.be/Ms1fwiSBjjw

*CHI (Conference on Human Factors in Computing Systems): The premier international conference on human-computer interaction, organized by the ACM, held this year from April 25 to May 1, 2025.
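The hand-position-invariant local coordinate system behind T2IRay can be sketched with basic vector math. This is an illustrative reconstruction, not the published implementation: the particular frame construction, the gain constant, and the hard-coded tracking points are all assumptions made for demonstration.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def thumb_in_index_frame(index_base, index_tip, palm_normal, thumb_tip):
    """Express the thumb tip in a local frame anchored to the index finger,
    so the reading stays the same when the whole hand moves through the world."""
    x = norm(sub(index_tip, index_base))   # axis along the index finger
    z = norm(cross(x, palm_normal))        # sideways axis
    y = cross(z, x)                        # completes the right-handed frame
    d = sub(thumb_tip, index_base)
    return (dot(d, x), dot(d, y), dot(d, z))

def cursor_offset(local_thumb, gain=3.0):
    """Map small thumb displacements in the local frame to a 2D offset added
    to the head-gaze pointing ray (the gain value is made up)."""
    return (gain * local_thumb[1], gain * local_thumb[2])

def translated(p):
    """Move a tracked point to a different world position (hand moved)."""
    return (p[0] + 5.0, p[1] - 2.0, p[2] + 1.0)

# The same hand pose at two different world positions gives the same reading.
pose_a = thumb_in_index_frame((0, 0, 0), (1, 0, 0), (0, 0, 1), (0.3, 0.2, 0.0))
pose_b = thumb_in_index_frame(translated((0, 0, 0)), translated((1, 0, 0)),
                              (0, 0, 1), translated((0.3, 0.2, 0.0)))
```

Because only differences between tracked points enter the frame, global hand translation cancels out, which is the property the article credits for continuous input while the hand moves.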
2025.05.13
KAIST & CMU Unveil Amuse, a Songwriting AI Collaborator to Help Create Music
Wouldn't it be great if music creators had someone to brainstorm with, to help them when they're stuck, and to explore different musical directions together? Researchers at KAIST and Carnegie Mellon University (CMU) have developed AI technology akin to a fellow songwriter who helps create music.

KAIST (President Kwang-Hyung Lee) announced that a research team led by Professor Sung-Ju Lee of the School of Electrical Engineering, in collaboration with CMU, has developed Amuse, an AI-based music creation support system. The research was presented at the ACM Conference on Human Factors in Computing Systems (CHI), one of the world's top conferences in human-computer interaction, held in Yokohama, Japan from April 26 to May 1. It received the Best Paper Award, given to only the top 1% of all submissions.

< (From left) Professor Chris Donahue of Carnegie Mellon University, Ph.D. student Yewon Kim and Professor Sung-Ju Lee of the School of Electrical Engineering >

Amuse is an AI-based system that converts various forms of inspiration, such as text, images, and audio, into harmonic structures (chord progressions) to support composition. For example, if a user inputs a phrase, image, or sound clip such as “memories of a warm summer beach,” Amuse automatically generates and suggests chord progressions that match the inspiration. Unlike existing generative AI, Amuse is distinctive in that it respects the user's creative flow and naturally invites creative exploration through an interactive method that allows flexible integration and modification of AI suggestions.

The core of the Amuse system is a generation method that blends two approaches: a large language model creates chord progressions based on the user's prompt and inspiration, while another AI model, trained on real music data, filters out awkward or unnatural results using rejection sampling.

< Figure 1. Amuse system configuration.
After extracting music keywords from the user's input, a large language model generates a chord progression, which is refined through rejection sampling (left). Chord extraction from audio input is also possible (right). The bottom is an example visualizing the structure of a generated chord progression. >

The research team conducted a user study with practicing musicians and found that Amuse has high potential as a creative companion, a form of Co-Creative AI in which people and AI collaborate, rather than a generative AI that simply assembles a song. The paper, co-authored by Ph.D. student Yewon Kim and Professor Sung-Ju Lee of the KAIST School of Electrical Engineering and Professor Chris Donahue of Carnegie Mellon University, demonstrates the potential of creative AI system design for both academia and industry.

※ Paper title: Amuse: Human-AI Collaborative Songwriting with Multimodal Inspirations
※ DOI: https://doi.org/10.1145/3706598.3713818
※ Research demo video: https://youtu.be/udilkRSnftI?si=FNXccC9EjxHOCrm1
※ Research homepage: https://nmsl.kaist.ac.kr/projects/amuse/

Professor Sung-Ju Lee said, “Recent generative AI technology has raised concerns about directly imitating copyrighted content, thereby violating the rights of creators, or generating results one-way regardless of the creator's intention. Aware of this trend, our team paid attention to what creators actually need and focused on designing an AI system centered on the creator.” He continued, “Amuse is an attempt to explore the possibility of collaboration with AI while the creator maintains the initiative, and we expect it to be a starting point for a more creator-friendly direction in the development of music creation tools and generative AI systems.”

This research was conducted with the support of the National Research Foundation of Korea with funding from the government (Ministry of Science and ICT) (RS-2024-00337007).
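The generate-then-filter loop described above is classic rejection sampling and can be sketched in a few lines. In this toy version a seeded random "generator" stands in for the LLM and a hand-written diatonic-chord check stands in for the model trained on real music data; both stand-ins, the chord pool, and the acceptance threshold are illustrative assumptions, not Amuse's actual components.

```python
import random

KEY_OF_C = {"C", "Dm", "Em", "F", "G", "Am"}  # diatonic triads of C major

def mock_generator(rng):
    """Stand-in for the LLM: proposes a random 4-chord progression,
    sometimes containing out-of-key chords."""
    pool = ["C", "Dm", "Em", "F", "G", "Am", "B", "F#"]
    return [rng.choice(pool) for _ in range(4)]

def critic_score(progression):
    """Stand-in for the music-trained filter model: fraction of chords
    that are diatonic to C major."""
    return sum(chord in KEY_OF_C for chord in progression) / len(progression)

def rejection_sample(rng, threshold=1.0, max_tries=1000):
    """Keep sampling until the critic accepts a candidate progression."""
    for _ in range(max_tries):
        progression = mock_generator(rng)
        if critic_score(progression) >= threshold:
            return progression
    return None  # no acceptable candidate within the budget

progression = rejection_sample(random.Random(0))
```

The design choice mirrors the article's description: the generator is free to propose anything, and musical plausibility is enforced afterwards by discarding candidates the second model scores poorly, rather than by constraining the generator itself.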
2025.05.07
KAIST Provides a Comprehensive Resource on Microbial Cell Factories for Sustainable Chemical Production
In silico analysis of five industrial microorganisms identifies optimal strains and metabolic engineering strategies for producing 235 valuable chemicals

Climate change and the depletion of fossil fuels have raised the global need for sustainable chemical production. In response to these environmental challenges, microbial cell factories are gaining attention as eco-friendly platforms for producing chemicals from renewable resources, and metabolic engineering technologies to enhance these cell factories are becoming crucial tools for maximizing production efficiency. However, difficulties in selecting suitable microbial strains and optimizing complex metabolic pathways continue to pose significant obstacles to practical industrial application.

KAIST (President Kwang-Hyung Lee) announced on March 27th that Distinguished Professor Sang Yup Lee's research team in the Department of Chemical and Biomolecular Engineering comprehensively evaluated the production capabilities of various industrial microbial cell factories using in silico simulations and, based on these findings, identified the most suitable microbial strains for producing specific chemicals as well as optimal metabolic engineering strategies.

Previously, researchers attempted to determine the best strains and the most efficient metabolic engineering strategies among numerous microbial candidates through extensive biological experiments and meticulous verification, an approach that required substantial time and cost. Recently, the introduction of genome-scale metabolic models (GEMs), which reconstruct the metabolic network of an organism from its entire genome sequence, has enabled systematic analysis of metabolic fluxes via computer simulations. This development offers a way to overcome the limitations of conventional experimental approaches, revolutionizing both strain selection and metabolic pathway design.
Accordingly, Professor Lee's team evaluated the production capabilities of five representative industrial microorganisms (Escherichia coli, Saccharomyces cerevisiae, Bacillus subtilis, Corynebacterium glutamicum, and Pseudomonas putida) for 235 bio-based chemicals. Using GEMs, the researchers calculated both the maximum theoretical yields and the maximum achievable yields under industrial conditions for each chemical, thereby establishing criteria to identify the most suitable strain for each target compound.

< Figure 1. Outline of the strategy for improving microbial cell factories using a genome-scale metabolic model (GEM) >

The team specifically proposed strategies such as introducing heterologous enzyme reactions derived from other organisms and exchanging the cofactors used by the microbes to expand metabolic pathways. These strategies were shown to increase yields beyond the innate metabolic capacities of the microorganisms, resulting in higher production of industrially important chemicals such as mevalonic acid, propanol, fatty acids, and isoprenoids.

Moreover, by analyzing metabolic fluxes in silico, the researchers suggested strategies for improving microbial strains to maximize the production of various chemicals. They quantitatively identified the relationships between specific enzyme reactions and target chemical production, as well as between enzymes and metabolites, determining which enzyme reactions should be up- or down-regulated. Through this, the team presented strategies to achieve not only high theoretical yields but also maximal actual production capacities.

< Figure 2. Comparison of production routes and maximum yields of useful chemicals using representative industrial microorganisms >

Dr.
Gi Bae Kim, the first author of the paper from the KAIST BioProcess Engineering Research Center, explained, “By introducing metabolic pathways derived from other organisms and exchanging cofactors, it is possible to design new microbial cell factories that surpass existing limitations. The strategies presented in this study will play a pivotal role in making microbial production processes more economical and efficient.”

Distinguished Professor Sang Yup Lee added, “This research serves as a key resource in the field of systems metabolic engineering, reducing the difficulties of strain selection and pathway design, and enabling more efficient development of microbial cell factories. We expect it to contribute greatly to the future development of technologies for producing various eco-friendly chemicals, such as biofuels, bioplastics, and functional food materials.”

This research was supported by the Development of Platform Technologies of Microbial Cell Factories for the Next-Generation Biorefineries project and the Development of Advanced Synthetic Biology Source Technologies for Leading the Biomanufacturing Industry project (Project Leader: Distinguished Professor Sang Yup Lee, KAIST) of the National Research Foundation of Korea, funded by the Korean Ministry of Science and ICT.
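For a single lumped pathway, the kind of maximum theoretical yield these GEM simulations report reduces to a stoichiometry calculation: moles of product per mole of substrate, converted to a mass basis. The numbers below (2 mol product per mol glucose and an ethanol-like molecular weight) are illustrative assumptions, not values from the study, which computed yields over full genome-scale networks.

```python
# Molecular weights in g/mol; the "product" entry is an ethanol-like example.
MW = {"glucose": 180.16, "product": 46.07}

def max_theoretical_yield(mol_product_per_mol_substrate,
                          substrate="glucose", product="product"):
    """Grams of product per gram of substrate for a lumped pathway
    with a known overall stoichiometry."""
    return mol_product_per_mol_substrate * MW[product] / MW[substrate]

yield_g_per_g = max_theoretical_yield(2.0)  # roughly 0.51 g/g
```

The value of a genome-scale model is that this overall stoichiometry is not assumed but derived, by optimizing flux through thousands of reactions under mass-balance constraints, which is also what lets the study compare strains and spot beneficial heterologous reactions.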
2025.03.27
KAIST Develops Eco-Friendly, Nylon-Like Plastic Using Microorganisms
Poly(ester amide) is a next-generation material that combines the advantages of PET (polyester) and nylon (polyamide), two widely used plastics. However, it could previously only be produced from fossil fuels, which posed environmental concerns. Using microorganisms, KAIST researchers have successfully developed a new bio-based plastic to replace conventional plastic. KAIST (President Kwang Hyung Lee) announced on the 20th of March that a research team led by Distinguished Professor Sang Yup Lee from the Department of Chemical and Biomolecular Engineering has developed microbial strains through systems metabolic engineering to produce various eco-friendly, bio-based poly(ester amide)s. The team collaborated with researchers from the Korea Research Institute of Chemical Technology (KRICT, President Young-Kook Lee) to analyze and confirm the properties of the resulting plastic. Professor Sang Yup Lee’s research team designed new metabolic pathways that do not naturally exist in microorganisms, and developed a platform microbial strain capable of producing nine different types of poly(ester amide)s, including poly(3-hydroxybutyrate-ran-3-aminopropionate) and poly(3-hydroxybutyrate-ran-4-aminobutyrate). Using glucose derived from abundant biomass sources such as waste wood and weeds, the team successfully produced poly(ester amide)s in an eco-friendly manner. The researchers also confirmed the potential for industrial-scale production by demonstrating high production efficiency (54.57 g/L) using fed-batch fermentation of the engineered strain. In collaboration with researchers Haemin Jeong and Jihoon Shin from KRICT, the KAIST team analyzed the properties of the bio-based plastic and found that it exhibited characteristics similar to high-density polyethylene (HDPE). This means the new plastic is not only eco-friendly but also strong and durable enough to replace conventional plastics. 
The engineered strains and strategies developed in this study are expected to be useful not only for producing various poly(ester amide)s but also for constructing metabolic pathways for the biosynthesis of other types of polymers. Professor Sang Yup Lee stated, “This study is the first to demonstrate the possibility of producing poly(ester amide)s (plastics) through a renewable bio-based chemical process rather than relying on the petroleum-based chemical industry. We plan to further enhance the production yield and efficiency through continued research.” The study was published online on March 17 in the international journal Nature Chemical Biology.
·Title: Biosynthesis of poly(ester amide)s in engineered Escherichia coli
·DOI: 10.1038/s41589-025-01842-2
·Authors: A total of seven authors including Tong Un Chae (KAIST, first author), So Young Choi (KAIST, second author), Da-Hee Ahn (KAIST, third author), Woo Dae Jang (KAIST, fourth author), Haemin Jeong (KRICT, fifth author), Jihoon Shin (KRICT, sixth author), and Sang Yup Lee (KAIST, corresponding author).
This research was supported by the Ministry of Science and ICT (MSIT) under the Eco-Friendly Chemical Technology Development Project as part of the "Next-Generation Biorefinery Technology Development to Lead the Bio-Chemical Industry" initiative (project led by Distinguished Professor Sang Yup Lee at KAIST).
2025.03.24
A Way for Smartwatches to Detect Depression Risks Devised by KAIST and U of Michigan Researchers
- An international joint research team of KAIST and the University of Michigan developed a digital biomarker for predicting symptoms of depression based on data collected by smartwatches - It has the potential to be used as a medical technology to replace the economically burdensome fMRI measurement test - It is expected to expand the scope of digital health data analysis The COVID-19 pandemic also brought about a pandemic of mental illness. Approximately one billion people worldwide suffer from various psychiatric conditions. Korea is one of the more serious cases, with approximately 1.8 million patients exhibiting depression and anxiety disorders, and the total number of patients with clinical mental diseases has increased by 37% in five years to approximately 4.65 million. A joint research team from Korea and the US has developed a technology that uses biometric data collected through wearable devices to predict tomorrow's mood and, further, to predict the possibility of developing symptoms of depression. < Figure 1. Schematic diagram of the research results. Based on the biometric data collected by a smartwatch, a mathematical algorithm that solves the inverse problem to estimate the brain's circadian phase and sleep stages has been developed. This algorithm can estimate the degrees of circadian disruption, and these estimates can be used as the digital biomarkers to predict depression risks. > KAIST (President Kwang Hyung Lee) announced on the 15th of January that the research team under Professor Dae Wook Kim from the Department of Brain and Cognitive Sciences and the team under Professor Daniel B. Forger from the Department of Mathematics at the University of Michigan in the United States have developed a technology to predict symptoms of depression such as sleep disorders, depressed mood, loss of appetite, overeating, and decreased concentration in shift workers from the activity and heart rate data collected from smartwatches. 
According to the WHO, a promising new treatment direction for mental illness focuses on the sleep and circadian timekeeping system located in the hypothalamus of the brain, which directly affects impulsivity, emotional responses, decision-making, and overall mood. However, in order to measure endogenous circadian rhythms and sleep states, blood or saliva must be drawn every 30 minutes throughout the night to measure changes in the concentration of the melatonin hormone in the body, and polysomnography (PSG) must be performed. As such tests require hospitalization and most psychiatric patients only visit for outpatient treatment, there has been no significant progress in developing treatment methods that take these two factors into account. In addition, the cost of the PSG test, approximately $1,000, leaves mental health treatment that considers sleep and circadian rhythms out of reach for the socially disadvantaged. The solution to these problems is to employ wearable devices, which allow biometric data such as heart rate, body temperature, and activity level to be collected easily and in real time without spatial constraints. However, current wearable devices have the limitation of providing only indirect information on the biomarkers required by medical staff, such as the phase of the circadian clock. The joint research team developed a filtering technology that accurately estimates the daily-changing phase of the circadian clock from heart rate and activity time-series data collected from a smartwatch. This is an implementation of a digital twin that precisely describes the circadian rhythm in the brain, and it can be used to estimate circadian rhythm disruption. < Figure 2. The suprachiasmatic nucleus located in the hypothalamus of the brain is the central biological clock that regulates the 24-hour physiological rhythm and plays a key role in maintaining the body’s circadian rhythm. 
If the phase of this biological clock is disrupted, it affects various parts of the brain, which can cause psychiatric conditions such as depression. > The possibility of using the digital twin of this circadian clock to predict the symptoms of depression was verified through collaboration with the research team of Professor Srijan Sen of the Michigan Neuroscience Institute and Professor Amy Bohnert of the Department of Psychiatry of the University of Michigan. The collaborative research team conducted a large-scale prospective cohort study involving approximately 800 shift workers and showed that the circadian rhythm disruption digital biomarker estimated through the technology can predict tomorrow's mood as well as six symptoms, including sleep problems, appetite changes, decreased concentration, and suicidal thoughts, which are representative symptoms of depression. < Figure 3. The circadian rhythm of hormones such as melatonin regulates various physiological functions and behaviors such as heart rate and activity level. These physiological and behavioral signals can be measured in daily life through wearable devices. In order to estimate the body’s circadian rhythm inversely based on the measured biometric signals, a mathematical algorithm is needed. This algorithm plays a key role in accurately identifying the characteristics of circadian rhythms by extracting hidden physiological patterns from biosignals. > Professor Dae Wook Kim said, "It is very meaningful to be able to conduct research that provides a clue for ways to apply wearable biometric data using mathematics that have not previously been utilized for actual disease management." He added, "We expect that this research will be able to present continuous and non-invasive mental health monitoring technology. This is expected to present a new paradigm for mental health care. 
By resolving some of the major problems socially disadvantaged people may face in current treatment practices, they may be able to take more active steps when experiencing symptoms of depression, such as seeking counsel before things get out of hand." < Figure 4. A mathematical algorithm was devised to circumvent the problems of estimating the phase of the brain's biological clock and sleep stages inversely from the biodata collected by a smartwatch. This algorithm can estimate the degree of daily circadian rhythm disruption, and this estimate can be used as a digital biomarker to predict depression symptoms. > The results of this study, in which Professor Dae Wook Kim of the Department of Brain and Cognitive Sciences at KAIST participated as the joint first author and corresponding author, were published in the online version of the international academic journal npj Digital Medicine on December 5, 2024. (Paper title: The real-world association between digital markers of circadian disruption and mental health risks) DOI: 10.1038/s41746-024-01348-6 This study was conducted with the support of KAIST's Research Support Program for New Faculty Members, the US National Science Foundation, the US National Institutes of Health, and the US Army Research Institute MURI Program.
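The core idea of recovering circadian phase from noisy wearable data can be illustrated with a much simpler stand-in for the team's filtering algorithm: fit a 24-hour sinusoid to heart-rate samples by least squares and read off the phase of its peak. Everything below (sampling scheme, noise level, the synthetic data) is illustrative; the actual method is a dynamic filtering / digital-twin model, not a static fit.

```python
import numpy as np

def estimate_phase(t_hours, hr):
    """Return the estimated circadian phase (hour of heart-rate peak)."""
    w = 2 * np.pi / 24.0
    # Linear model: hr ~ m + a*cos(w t) + b*sin(w t)
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    m, a, b = np.linalg.lstsq(X, hr, rcond=None)[0]
    # The fitted sinusoid peaks where cos(w t - phi) = 1, phi = atan2(b, a)
    return (np.arctan2(b, a) / w) % 24.0

# Synthetic example: heart rate peaking around 17:00, plus sensor noise
rng = np.random.default_rng(0)
t = np.arange(0, 72, 0.5)                      # 3 days, 30-min sampling
hr = 70 + 8 * np.cos(2 * np.pi / 24 * (t - 17)) + rng.normal(0, 2, t.size)
print(f"estimated peak phase: {estimate_phase(t, hr):.1f} h")  # close to 17.0
```

A day-to-day shift in this estimated phase, relative to a person's sleep schedule, is the kind of quantity the study's digital biomarkers of circadian disruption capture.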
2025.01.20
KAIST Professor Uichin Lee Receives Distinguished Paper Award from ACM
< Photo. Professor Uichin Lee (left) receiving the award > KAIST (President Kwang Hyung Lee) announced on the 25th of October that Professor Uichin Lee’s research team from the School of Computing received the Distinguished Paper Award at the International Joint Conference on Pervasive and Ubiquitous Computing and International Symposium on Wearable Computing (UbiComp / ISWC) hosted by the Association for Computing Machinery (ACM) in Melbourne, Australia on October 8. The ACM Ubiquitous Computing Conference is the most prestigious international conference at which leading universities and global companies present the latest research results on ubiquitous computing and wearable technologies in the field of human-computer interaction (HCI). The main conference program is composed of invited papers published in the Proceedings of the ACM (PACM) on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), which covers the latest research in the field of ubiquitous and wearable computing. The Distinguished Paper Award Selection Committee selected eight papers from among the 205 papers published in Vol. 7 of the ACM Proceedings (PACM IMWUT) that made outstanding and exemplary contributions to the research community. The committee consists of 16 prominent experts, current and former members of the journal's editorial board, who made the selection after a rigorous month-long review of all the papers. < Figure 1. BeActive mobile app to promote physical activity to form active lifestyle habits > The research that won the Distinguished Paper Award was conducted by Dr. 
Junyoung Park, a graduate of the KAIST Graduate School of Data Science, as the first author, and was titled “Understanding Disengagement in Just-in-Time Mobile Health Interventions.” Professor Uichin Lee’s research team explored user engagement with just-in-time mobile health interventions, which actively deliver interventions at opportune moments by utilizing sensor data collected through health management apps, based on the premise that such apps must actually remain in use to be effective. < Figure 2. Traditional user-requested digital behavior change intervention (DBCI) delivery (Pull) vs. Automatic transmission (Push) for Just-in-Time (JIT) mobile DBCI using smartphone sensing technologies > The research team conducted a systematic analysis of user disengagement, that is, the decline in user engagement, in digital behavior change interventions. They developed the BeActive system, an app that promotes physical activity to help form active lifestyle habits, and systematically analyzed the effects of users’ self-control ability and boredom-proneness on compliance with behavioral interventions over time. The results of an 8-week field trial revealed that even when just-in-time interventions are provided according to the user’s situation, a decline in participation cannot be avoided. However, for users with high self-control and low boredom-proneness, compliance with the just-in-time interventions delivered through the app was significantly higher than that of users in other groups. In particular, users with high boredom-proneness easily tired of the repeated push interventions, and their compliance decreased more quickly than that of the other groups. < Figure 3. Just-in-time Mobile Health Intervention: a demonstrative case of the BeActive system: When a user is identified to be sitting for more than 50 mins, an automatic push notification is sent to recommend a short active break to complete for reward points. 
> Professor Uichin Lee explained, “As the first study on user engagement in digital therapeutics and wellness services utilizing mobile just-in-time health interventions, this research provides a foundation for exploring ways to empower user engagement.” He further added, “By leveraging large language models (LLMs) and comprehensive context-aware technologies, it will be possible to develop user-centered AI technologies that can significantly boost engagement." < Figure 4. A conceptual illustration of user engagement in digital health apps. Engagement in digital health apps consists of (1) engagement in using digital health apps and (2) engagement in behavioral interventions provided by digital health apps, i.e., compliance with behavioral interventions. Repeated adherences to behavioral interventions recommended by digital health apps can help achieve the distal health goals. > This study was conducted with the support of the 2021 Biomedical Technology Development Program and the 2022 Basic Research and Development Program of the National Research Foundation of Korea funded by the Ministry of Science and ICT. < Figure 5. A conceptual illustration of user disengagement and engagement of digital behavior change intervention (DBCI) apps. In general, user engagement of digital health intervention apps consists of two components: engagement in digital health apps and engagement in behavioral interventions recommended by such apps (known as behavioral compliance or intervention adherence). The distinctive stages of user can be divided into adoption, abandonment, and attrition. > < Figure 6. Trends of changes in frequency of app usage and adherence to behavioral intervention over 8 weeks, ● SC: Self-Control Ability (High-SC: user group with high self-control, Low-SC: user group with low self-control) ● BD: Boredom-Proneness (High-BD: user group with high boredom-proneness, Low-BD: user group with low boredom-proneness). 
App usage frequencies declined over time, but the adherence rates of participants with High-SC and Low-BD remained significantly higher than those of the other groups. >
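The just-in-time trigger described for BeActive (a push prompt after roughly 50 minutes of continuous sitting) can be sketched as a small state machine. The class name, threshold handling, and reset behavior below are illustrative assumptions, not the actual implementation:

```python
from dataclasses import dataclass

# Illustrative sedentary threshold, matching the ~50-minute rule described
# for the BeActive system.
SEDENTARY_LIMIT_MIN = 50

@dataclass
class SedentaryMonitor:
    """Tracks continuous sitting time from minute-by-minute activity states."""
    sitting_minutes: int = 0

    def update(self, is_sitting: bool) -> bool:
        """Feed one minute of activity state; return True when a just-in-time
        intervention (push notification) should fire."""
        if not is_sitting:
            self.sitting_minutes = 0      # any movement resets the counter
            return False
        self.sitting_minutes += 1
        if self.sitting_minutes >= SEDENTARY_LIMIT_MIN:
            self.sitting_minutes = 0      # reset after prompting
            return True
        return False

# Two hours of uninterrupted sitting -> prompts fire twice.
monitor = SedentaryMonitor()
fired = [m for m in range(120) if monitor.update(True)]
print(fired)  # prompts at minutes 49 and 99 (0-indexed)
```

The study's point is precisely that such context-triggered pushes, however well timed, still lose compliance over weeks, especially among boredom-prone users.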
2024.10.25
KAIST begins full-scale cooperation with Taiwan’s Formosa Group
< (From left) Senior Vice President for Planning and Budget Kyung-Soo Kim, and Professor Minee Choi of the Department of Brain and Cognitive Sciences of KAIST along with Chairman of Formosa Group Sandy Wang and KAIST President Kwang-Hyung Lee, and Dean Daesoo Kim of KAIST College of Life Science and Bioengineering > KAIST is pursuing cooperation in the fields of advanced biotechnology and eco-friendly energy with Formosa Plastics Group, one of Taiwan's three largest companies. To this end, Chairman Sandy Wang, a member of Formosa Group's standing committee and leader of the group's bio and eco-friendly energy sector, visited KAIST on the 13th of this month. This was the first official visit to KAIST by an owner of Formosa Group. Cooperation between the two institutions began last March when our university signed a memorandum of understanding on comprehensive exchange and cooperation with Ming Chi University of Science and Technology (明志科技大學), Chang Gung University (長庚大學), and Chang Gung Memorial Hospital (長庚記念醫院), three of the many institutions established and supported by Formosa Group. Building on this, Chairman Sandy Wang, who visited our university to promote further exchanges and cooperation, gave a special lecture for the school leadership on ‘the education of children and the corporate social contribution and practice of her father, Chairman Yung-Ching Wang,’ as a part of the Monthly Lecture on KAIST’s Leadership Innovation Day. She then visited KAIST's research and engineering facilities related to Taiwan's future industries, such as advanced biotechnology and eco-friendly energy, and discussed global industry-academic cooperation plans. Going forward, the two organizations plan to appoint adjunct professors and promote practical global cooperation, including joint student guidance and research cooperation. 
We plan to pursue effective mid- to long-term cooperation, such as conducting battery application research with the KAIST Next-Generation ESS Research Center and opening a graduate program specializing in stem cell and gene editing technology in connection with Chang Gung University and Chang Gung Memorial Hospital. The newly established cooperative relationship will also promote Formosa Group’s investment in and cooperation with KAIST’s outstanding venture companies related to bio and eco-friendly energy, laying the foundation for innovative industrial cooperation between Taiwan and Korea. President Kwang-Hyung Lee said, “The Formosa Group has a global network, so we regard it as a key partner that will position KAIST’s bio and engineering technology on the global stage.” He also said, “With Chairman Sandy Wang’s visit, Taiwan is emerging as a global economic powerhouse,” and added, “We expect to continue our close cooperative relationship with the company.” Formosa Group is a company founded by the late Chairman Yung-Ching Wang, the father of Chairman Sandy Wang. As the world’s No. 1 producer of PVC plastic, it leads the core industries of Taiwan’s economy, including semiconductors, steel, heavy industry, bio, and batteries. Chairman Yung-Ching Wang earned the respect of the Taiwanese people by setting an example of returning his wealth to society under the belief that the companies and assets he built ‘belonged to the people.’ Chang Gung University, Chang Gung Memorial Hospital, and Ming Chi University of Technology, which are pursuing cooperation with our university, were also established as part of the social contribution promoted by Chairman Yung-Ching Wang and receive financial support from Formosa Group.
2024.05.09
KAIST Research Team Develops Sweat-Resistant Wearable Robot Sensor
New electromyography (EMG) sensor technology that allows the long-term stable control of wearable robots and is not affected by the wearer’s sweat and dead skin has gained attention recently. Wearable robots are devices used across a variety of rehabilitation treatments for the elderly and patients recovering from stroke or trauma. A joint research team led by Professor Jae-Woong Jung from the KAIST School of Electrical Engineering (EE) and Professor Jung Kim from the KAIST Department of Mechanical Engineering (ME) announced on January 23rd that they have successfully developed a stretchable and adhesive microneedle sensor that can electrically sense physiological signals at a high level without being affected by the state of the user’s skin. For wearable robots to recognize the intentions behind human movement for their use in rehabilitation treatment, they require a wearable electrophysiological sensor that gives precise EMG measurements. However, existing sensors often show deteriorating signal quality over time and are greatly affected by the user’s skin conditions. Furthermore, the sensor’s higher mechanical hardness causes noise since the contact surface is unable to keep up with the deformation of the skin. These shortcomings limit the reliable, long-term control of wearable robots. < Figure 1. Design and working concept of the Stretchable microNeedle Adhesive Patch (SNAP). (A) Schematic illustration showing the overall system configuration and application of SNAP. (B) Exploded view schematic diagram of a SNAP, consisting of stretchable serpentine interconnects, Au-coated Si microneedle, and ECA made of Ag flakes–silicone composite. (C) Optical images showing high mechanical compliance of SNAP. 
> However, the recently developed technology is expected to allow long-term and high-quality EMG measurements as it uses a stretchable and adhesive conducting substrate integrated with microneedle arrays that can easily penetrate the stratum corneum without causing discomfort. Through its excellent performance, the sensor is anticipated to be able to stably control wearable robots over a long period of time regardless of the wearer’s changing skin conditions and without the need for a preparation step that removes sweat and dead cells from the surface of their skin. The research team created a stretchable and adhesive microneedle sensor by integrating microneedles into a soft silicon polymer substrate. The hard microneedles penetrate through the stratum corneum, which has high electrical resistance. As a result, the sensor can effectively lower contact resistance with the skin and obtain high-quality electrophysiological signals regardless of contamination. At the same time, the soft and adhesive conducting substrate can adapt to the skin’s surface that stretches with the wearer’s movement, providing a comfortable fit and minimizing noise caused by movement. < Figure 2. Demonstration of the wireless Stretchable microNeedle Adhesive Patch (SNAP) system as an Human-machine interfaces (HMI) for closed-loop control of an exoskeleton robot. (A) Illustration depicting the system architecture and control strategy of an exoskeleton robot. (B) The hardware configuration of the pneumatic back support exoskeleton system. (C) Comparison of root mean square (RMS) of electromyography (EMG) with and without robotic assistance of pretreated skin and non-pretreated skin. > To verify the usability of the new patch, the research team conducted a motion assistance experiment using a wearable robot. They attached the microneedle patch on a user’s leg, where it could sense the electrical signals generated by the muscle. 
The sensor then sent the detected intention to a wearable robot, allowing the robot to help the wearer lift a heavy object more easily. Professor Jae-Woong Jung, who led the research, said, “The developed stretchable and adhesive microneedle sensor can stably detect EMG signals without being affected by the state of a user’s skin. Through this, we will be able to control wearable robots with higher precision and stability, which will help the rehabilitation of patients who use robots.” The results of this research, written by co-first authors Heesoo Kim and Juhyun Lee, who are both Ph.D. candidates in the KAIST School of EE, were published in Science Advances on January 17th under the title “Skin-preparation-free, stretchable microneedle adhesive patches for reliable electrophysiological sensing and exoskeleton robot control”. This research was supported by the Bio-signal Sensor Integrated Technology Development Project of the National Research Foundation of Korea, the Electronic Medicinal Technology Development Project, and the Step 4 BK21 Project.
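Intent detection from an EMG channel of the kind described above is commonly done by thresholding a sliding root-mean-square (RMS) envelope, which is also the quantity compared in Figure 2. A minimal sketch on synthetic data; the window size, threshold, and noise levels are all illustrative choices, not taken from the paper:

```python
import numpy as np

def rms_envelope(emg, window=200):
    """RMS of each non-overlapping window of EMG samples."""
    n = len(emg) // window
    chunks = emg[: n * window].reshape(n, window)
    return np.sqrt((chunks ** 2).mean(axis=1))

# Synthetic EMG: quiet baseline, a contraction burst, then quiet again.
rng = np.random.default_rng(1)
rest = rng.normal(0, 0.05, 1000)      # baseline sensor noise
burst = rng.normal(0, 0.50, 1000)     # muscle contraction
signal = np.concatenate([rest, burst, rest])

env = rms_envelope(signal)
active = env > 0.2                    # threshold set above baseline RMS
print(active.astype(int))             # 0s at rest, 1s during the burst
```

In a closed-loop exoskeleton controller, the `active` flag (or a graded version of the envelope) would be what drives the assistance command; the point of the microneedle patch is that this envelope stays reliable without skin preparation.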
2024.01.30
KAIST Professor Jiyun Lee becomes the first Korean to receive the Thurlow Award from the American Institute of Navigation
< Distinguished Professor Jiyun Lee from the KAIST Department of Aerospace Engineering > KAIST (President Kwang-Hyung Lee) announced on January 27th that Distinguished Professor Jiyun Lee from the KAIST Department of Aerospace Engineering had won the Colonel Thomas L. Thurlow Award from the American Institute of Navigation (ION) for her achievements in the field of satellite navigation. The American Institute of Navigation (ION) announced Distinguished Professor Lee as the winner of the Thurlow Award at its annual awards ceremony held in conjunction with its international conference in Long Beach, California on January 25th. This is the first time a person of Korean descent has received the award. The Thurlow Award was established in 1945 to honor Colonel Thomas L. Thurlow, who made significant contributions to the development of navigation equipment and the training of navigators. This award aims to recognize an individual who has made an outstanding contribution to the development of navigation and it is awarded to one person each year. Past recipients include MIT professor Charles Stark Draper, who is well-known as the father of inertial navigation and who developed the guidance computer for the Apollo moon landing project. Distinguished Professor Jiyun Lee was recognized for her significant contributions to technological advancements that ensure the safety of satellite-based navigation systems for aviation. In particular, she was recognized as a world authority in the field of navigation integrity architecture design, which is essential for ensuring the stability of intelligent transportation systems and autonomous unmanned systems. Distinguished Professor Lee made a groundbreaking contribution to help ensure the safety of satellite-based navigation systems from ionospheric disturbances, including those affected by sudden changes in external factors such as the solar and space environment. 
She has achieved numerous scientific discoveries in the field of ionospheric research, while developing new ionospheric threat modeling methods, ionospheric anomaly monitoring and mitigation techniques, and integrity and availability assessment techniques for next-generation augmented navigation systems. She also contributed to the international standardization of technology through the International Civil Aviation Organization (ICAO). Distinguished Professor Lee and her research group have pioneered innovative navigation technologies for the safe and autonomous operation of unmanned aerial vehicles (UAVs) and urban air mobility (UAM). She was the first to propose and develop a low-cost Global Navigation Satellite System (GNSS) augmentation architecture for UAVs with a near-field network operation concept that ensures high integrity, and a networked ground station-based augmented navigation system for UAM. She also contributed to integrity design techniques, including failure monitoring and integrity risk assessment for multi-sensor integrated navigation systems. < Professor Jiyun Lee upon receiving the Thurlow Award > Bradford Parkinson, professor emeritus at Stanford University and winner of the 1986 Thurlow Award, who is known as the father of GPS, congratulated Distinguished Professor Lee upon hearing that she was receiving the Thurlow Award and commented that her innovative research has addressed many important topics in the field of navigation and her solutions are highly innovative and highly regarded. Distinguished Professor Lee said, “I am very honored and delighted to receive this award with its deep history and tradition in the field of navigation.” She added, “I will strive to help develop the future mobility industry by securing safe and sustainable navigation technology.”
2024.01.26