KAIST
NEWS
'Team Atlanta', Including KAIST Professor Insu Yun's Research Team, Wins the DARPA AI Cyber Challenge in the US with a 5.5 Billion KRW Prize
<Photo 1. Group photo of Team Atlanta>

Team Atlanta, led by Professor Insu Yun of the Department of Electrical and Electronic Engineering at KAIST and Tae-soo Kim, an executive at Samsung Research, together with researchers from POSTECH and Georgia Tech, won the final championship at the AI Cyber Challenge (AIxCC) hosted by the Defense Advanced Research Projects Agency (DARPA). The final was held at DEF CON 33, the world's largest hacking conference, in Las Vegas on August 8 (local time). With this achievement, the team won a prize of $4 million (approximately 5.5 billion KRW), demonstrating the excellence of its AI-based autonomous cyber defense technology on the global stage.

<Photo 2. Championship commemoration: on the far left and right are tournament officials. From second left, Professor Tae-soo Kim (Samsung Research / Georgia Tech), Researcher Hyeong-seok Han (Samsung Research America), and Professor Insu Yun (KAIST)>

The AI Cyber Challenge is a two-year global competition co-hosted by DARPA and the Advanced Research Projects Agency for Health (ARPA-H). It challenges contestants to automatically analyze, detect, and fix software vulnerabilities using AI-based Cyber Reasoning Systems (CRS). The total prize pool is $29.5 million, with $4 million going to the winning team. In the final, Team Atlanta scored a total of 392.76 points, more than 170 points ahead of the second-place team, Trail of Bits, securing a dominant victory. The CRS developed by Team Atlanta automatically detected various types of vulnerabilities and patched a significant number of them in real time. Across the seven finalist teams, an average of 77% of the 70 intentionally injected vulnerabilities were detected, and 61% of those were patched. The teams also found 18 previously unknown vulnerabilities in real software, proving the potential of AI security technology.
All CRS technologies, including those of the winning team, will be released as open source and are expected to be used to strengthen the security of core infrastructure such as hospitals, water, and power systems.

<Photo 3. Final scoreboard: an overwhelming victory by more than 170 points>

Professor Insu Yun of KAIST, a member of Team Atlanta, stated, "I am very happy to have achieved such a great result. This is a remarkable achievement that shows Korea's cyber security research has reached the highest level in the world, and it was meaningful to show the capabilities of Korean researchers on the world stage. I will continue to conduct research to protect the digital safety of the nation and global society through the fusion of AI and security technology." KAIST President Kwang-hyung Lee stated, "This victory is another example that proves KAIST is a world-leading institution in the field of future cyber security and AI convergence. We will continue to provide full support to our researchers so they can compete and produce results on the world stage."

<Photo 4. Results announcement>
2025.08.10
Prof. Seungbum Koo’s Team Receives Clinical Biomechanics Award at the 30th International Society of Biomechanics Conference
<(From left) Ph.D. candidate Jeongseok Oh from KAIST, Dr. Seungwoo Yoon from KAIST, Prof. Joon-Ho Wang from Samsung Medical Center, Prof. Seungbum Koo from KAIST>

Professor Seungbum Koo's research team received the Clinical Biomechanics Award at the 30th International Society of Biomechanics (ISB) Conference, held in July 2025 in Stockholm, Sweden. The plenary lecture was delivered by first author and Ph.D. candidate Jeongseok Oh. This research was conducted in collaboration with Professor Joon-Ho Wang's team at Samsung Medical Center.

"Residual Translational and Rotational Kinematics After Combined ACL and Anterolateral Ligament Reconstruction During Walking" by Jeongseok Oh, Seungwoo Yoon, Joon-Ho Wang, and Seungbum Koo

The study analyzed gait-related knee joint motion using high-speed biplane X-ray imaging and three-dimensional kinematic reconstruction in 10 healthy individuals and 10 patients who underwent ACL reconstruction with ALL augmentation. The patient group showed excessive anterior translation and internal rotation, suggesting incomplete restoration of normal joint kinematics post-surgery. These findings provide mechanistic insight into the early onset of knee osteoarthritis often reported in this population.

The ISB conference, held biennially for over 60 years, is the largest international biomechanics meeting. This year, it hosted 1,600 researchers from 46 countries and featured over 1,400 presentations. The Clinical Biomechanics Award is given to one outstanding study selected from five top-rated abstracts invited for full manuscript review. The winning paper is published in Clinical Biomechanics, and the award includes a monetary prize and a plenary lecture opportunity. From 2019 to 2023, Koo's and Wang's teams developed a system, with support from the Samsung Future Technology Development Program, to track knee motion in real time during treadmill walking using high-speed biplane X-rays and custom three-dimensional reconstruction software.
This system, along with proprietary software that precisely reconstructs the three-dimensional motion of joints, was approved for clinical trials by the Ministry of Food and Drug Safety and installed at Samsung Medical Center. It is being used to quantitatively analyze abnormal joint motion patterns in patients with knee ligament injuries and those who have undergone knee surgery. Additionally, Jeongseok Oh was named one of five finalists for the David Winter Young Investigator Award, presenting his work during the award session. This award recognizes promising young researchers in biomechanics worldwide.
2025.08.10
KAIST Develops ‘Real-Time Programmable Robotic Sheet’ That Can Grasp and Walk on Its Own
<(From left) Prof. Inkyu Park from KAIST, Prof. Yongrok Jeong from Kyungpook National University, Dr. Hyunkyu Park from KAIST, and Prof. Jung Kim from KAIST>

Folding structures are widely used in robot design as an intuitive and efficient shape-morphing mechanism, with applications explored in space and aerospace robots, soft robots, and foldable grippers (hands). However, existing folding mechanisms have fixed hinges and folding directions, requiring redesign and reconstruction every time the environment or task changes. A Korean research team has now developed a "field-programmable robotic folding sheet" that can be programmed in real time according to its surroundings, significantly enhancing robots' shape-morphing capabilities and opening new possibilities in robotics.

KAIST (President Kwang Hyung Lee) announced on the 6th that Professors Jung Kim and Inkyu Park of the Department of Mechanical Engineering have developed the foundational technology for a "field-programmable robotic folding sheet" that enables real-time shape programming. This technology is a successful application of the "field-programmability" concept to foldable structures. It proposes an integrated material technology and programming methodology that can instantly reflect user commands, such as where to fold, in which direction, and by how much, onto the material's shape in real time.

The robotic sheet consists of a thin, flexible polymer substrate embedded with a micro metal resistor network. These metal resistors serve simultaneously as heaters and temperature sensors, allowing the system to sense and control its folding state without any external devices. Furthermore, using software that combines genetic algorithms and deep neural networks, the user can input desired folding locations, directions, and intensities. The sheet then autonomously repeats heating and cooling cycles to create the precise desired shape.
In particular, closed-loop control of the temperature distribution enhances real-time folding precision and compensates for environmental changes. It also improves the traditionally slow response time of heat-based folding technologies. The ability to program shapes in real time enables a wide variety of robotic functions to be implemented on the fly, without the need for complex hardware redesign. In fact, the research team demonstrated an adaptive robotic hand (gripper) that can change its grasping strategy to suit various object shapes using a single material. They also placed the same robotic sheet on the ground to allow it to walk or crawl, showcasing bioinspired locomotion strategies. This presents potential for expanding into environmentally adaptive autonomous robots that can alter their form in response to surroundings. Professor Jung Kim stated, “This study brings us a step closer to realizing ‘morphological intelligence,’ a concept where shape itself embodies intelligence and enables smart motion. In the future, we plan to evolve this into a next-generation physical AI platform with applications in disaster-response robots, customized medical assistive devices, and space exploration tools—by improving materials and structures for greater load support and faster cooling, and expanding to electrode-free, fully integrated designs of various forms and sizes.” This research, co-led by Dr. Hyunkyu Park (currently at Samsung Advanced Institute of Technology, Samsung Electronics) and Professor Yongrok Jeong (currently at Kyungpook National University), was published in the August 2025 online edition of the international journal Nature Communications. ※ Paper title: Field-programmable robotic folding sheet ※ DOI: 10.1038/s41467-025-61838-3 This research was supported by the National Research Foundation of Korea (Ministry of Science and ICT). 
(RS-2021-NR059641, 2021R1A2C3008742) Video file: https://drive.google.com/file/d/18R0oW7SJVYH-gd1Er_S-9Myar8dm8Fzp/view?usp=sharing
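The closed-loop idea described above, with each resistor acting as both heater and temperature sensor for its hinge, can be pictured as a simple feedback loop. The sketch below is purely illustrative: the function names, the proportional gain, and the crude thermal model are assumptions for demonstration, not the team's actual controller.

```python
def heater_step(temp_c, setpoint_c, gain=0.5):
    """One control update for a single hinge: the resistor's own temperature
    reading feeds back into its heating duty cycle (clamped to [0, 1])."""
    return min(1.0, max(0.0, gain * (setpoint_c - temp_c)))

def simulate(setpoint_c, steps=200, ambient_c=25.0):
    """Crude thermal model: heating proportional to duty cycle, cooling
    proportional to the difference from ambient temperature."""
    temp = ambient_c
    for _ in range(steps):
        duty = heater_step(temp, setpoint_c)
        temp += 2.0 * duty - 0.02 * (temp - ambient_c)
    return temp
```

With these made-up constants the hinge settles just below an 80 °C setpoint, since a purely proportional controller leaves a small steady-state error; the actual control law and thermal parameters of the published device are not reproduced here.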
2025.08.06
Is 24-hour health monitoring possible with ambient light energy?
<(From left) Ph.D. candidate Youngmin Sim, Ph.D. candidate Do Yun Park, Dr. Chanho Park, Professor Kyeongha Kwon>

Miniaturization and weight reduction of medical wearable devices for continuous health monitoring, such as heart rate, blood oxygen saturation, and sweat component analysis, remain major challenges. In particular, optical sensors consume a significant amount of power for LED operation and wireless transmission, requiring heavy and bulky batteries. To overcome these limitations, KAIST researchers have developed a next-generation wearable platform that enables 24-hour continuous measurement by using ambient light as an energy source and optimizing power management according to the power environment. KAIST (President Kwang Hyung Lee) announced on the 30th that Professor Kyeongha Kwon's team from the School of Electrical Engineering, in collaboration with Dr. Chanho Park's team at Northwestern University in the U.S., has developed an adaptive wireless wearable platform that reduces battery load by utilizing ambient light. To address the battery issue of medical wearable devices, the team developed an innovative platform that utilizes ambient natural light as an energy source. This platform integrates three complementary light energy technologies.

<Figure 1. The wireless wearable platform minimizes the energy required for light sources through i) a photometric system that directly utilizes ambient light passing through windows for measurements, ii) a photovoltaic system that receives power from high-efficiency photovoltaic cells and wireless power receiver coils, and iii) a photoluminescent system that stores light using photoluminescent materials and emits it in dark conditions to support the two aforementioned systems. In-sensor computing minimizes power consumption by wirelessly transmitting only essential data.
The adaptive power management system efficiently manages power by automatically selecting the optimal mode among 11 different power modes through a power selector based on the power supply level from the photovoltaic system and battery charge status.> The first core technology, the Photometric Method, is a technique that adaptively adjusts LED brightness depending on the intensity of the ambient light source. By combining ambient natural light with LED light to maintain a constant total illumination level, it automatically dims the LED when natural light is strong and brightens it when natural light is weak. Whereas conventional sensors had to keep the LED on at a fixed brightness regardless of the environment, this technology optimizes LED power in real time according to the surrounding environment. Experimental results showed that it reduced power consumption by as much as 86.22% under sufficient lighting conditions. The second is the Photovoltaic Method using high-efficiency multijunction solar cells. This goes beyond simple solar power generation to convert light in both indoor and outdoor environments into electricity. In particular, the adaptive power management system automatically switches among 11 different power configurations based on ambient conditions and battery status to achieve optimal energy efficiency. The third innovative technology is the Photoluminescent Method. By mixing strontium aluminate microparticles* into the sensor’s silicone encapsulation structure, light from the surroundings is absorbed and stored during the day and slowly released in the dark. As a result, after being exposed to 500W/m² of sunlight for 10 minutes, continuous measurement is possible for 2.5 minutes even in complete darkness. *Strontium aluminate microparticles: A photoluminescent material used in glow-in-the-dark paint or safety signs, which absorbs light and emits it in the dark for an extended time. 
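The photometric method above reduces to a simple feedback rule: drive the LED only hard enough to make up the gap between ambient light and the target illumination. A minimal sketch follows; the function name, the assumption that full drive contributes roughly `target_lux` at the sensor, and all numbers are illustrative, not taken from the paper.

```python
def led_drive(target_lux, ambient_lux, max_drive=1.0):
    """Keep total illumination (ambient + LED) near target_lux by driving
    the LED only enough to cover the shortfall. Assumes, for illustration,
    that full drive adds about target_lux of light at the sensor."""
    shortfall = max(0.0, target_lux - ambient_lux)
    return min(max_drive, shortfall / target_lux)
```

Under strong ambient light the drive, and hence LED power, falls toward zero, which is the regime where the authors report savings of up to 86.22%.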
These three technologies work complementarily: in bright conditions the first and second methods are active, and in dark conditions the third provides additional support, enabling 24-hour continuous operation. The research team applied this platform to various medical sensors to verify its practicality. The photoplethysmography sensor monitors heart rate and blood oxygen saturation in real time, allowing early detection of cardiovascular diseases. The blue light dosimeter accurately measures blue light, which causes skin aging and damage, and provides personalized skin protection guidance. The sweat analysis sensor uses microfluidic technology to simultaneously analyze salt, glucose, and pH in sweat, enabling real-time detection of dehydration and electrolyte imbalances. Additionally, introducing in-sensor data computing significantly reduced wireless communication power consumption. Previously, all raw data had to be transmitted externally, but now only the necessary results are calculated and transmitted within the sensor, reducing data transmission requirements from 400 B/s to 4 B/s, a 100-fold decrease. To validate performance, the researchers tested the device on healthy adult subjects in four different environments: bright indoor lighting, dim lighting, infrared lighting, and complete darkness. The results showed measurement accuracy equivalent to that of commercial medical devices in all conditions. A mouse model experiment confirmed accurate blood oxygen saturation measurement in hypoxic conditions.

<Figure 2. The multimodal device applying the energy harvesting and power management platform consists of i) a photoplethysmography (PPG) sensor, ii) a blue light dosimeter, iii) a photoluminescent microfluidic channel for sweat analysis and biomarker sensors (chloride ion, glucose, and pH), and iv) a temperature sensor. The device was implemented on a flexible printed circuit board (fPCB) to enable attachment to the skin.
A silicone substrate with a window that allows ambient light and measurement light to pass through, along with a photoluminescent encapsulation layer, encapsulates the PPG, blue light dosimeter, and temperature sensors, while the photoluminescent microfluidic channel is attached below the photoluminescent encapsulation layer to collect sweat>

Professor Kyeongha Kwon of KAIST, who led the research, stated, "This technology will enable 24-hour continuous health monitoring, shifting the medical paradigm from treatment-centered to prevention-centered," adding that "cost savings through early diagnosis as well as strengthened technological competitiveness in the next-generation wearable healthcare market are anticipated." This research was published on July 1 in the international journal Nature Communications, with Do Yun Park, a doctoral student in the AI Semiconductor Graduate Program, as co-first author. ※ Paper title: Adaptive Electronics for Photovoltaic, Photoluminescent and Photometric Methods in Power Harvesting for Wireless and Wearable Sensors ※ DOI: https://doi.org/10.1038/s41467-025-60911-1 ※ URL: https://www.nature.com/articles/s41467-025-60911-1 This research was supported by the National Research Foundation of Korea (Outstanding Young Researcher Program and Regional Innovation Leading Research Center Project), the Ministry of Science and ICT and Institute of Information & Communications Technology Planning & Evaluation (IITP) AI Semiconductor Graduate Program, and the BK FOUR Program (Connected AI Education & Research Program for Industry and Society Innovation, KAIST EE).
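The in-sensor computing step, transmitting computed results instead of raw waveforms, is what cuts the radio traffic from 400 B/s to 4 B/s. A toy illustration of the idea, using naive peak counting for heart rate; the function and signal below are invented for illustration and are not the device's firmware:

```python
def summarize_ppg(samples, fs_hz):
    """Instead of streaming every raw PPG sample, count waveform peaks in a
    window and transmit only a heart-rate estimate in beats per minute."""
    peaks = sum(
        1
        for i in range(1, len(samples) - 1)
        if samples[i - 1] < samples[i] >= samples[i + 1]
    )
    seconds = len(samples) / fs_hz
    return round(60.0 * peaks / seconds)
```

Sending one small integer per window instead of the full sample stream is the kind of reduction behind the reported 100-fold drop in transmitted data.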
2025.07.30
Immune Signals Directly Modulate Brain's Emotional Circuits: Unraveling the Mechanism Behind Anxiety-Inducing Behaviors
A research team from KAIST's Department of Brain and Cognitive Sciences, led by Professor Jeong-Tae Kwon, has collaborated with MIT and Harvard Medical School on a groundbreaking discovery. For the first time globally, their joint research has revealed that cytokines released during immune responses directly influence the brain's emotional circuits to regulate anxiety behavior. The study provided experimental evidence for a bidirectional regulatory mechanism: the inflammatory cytokines IL-17A and IL-17C act on specific neurons in the amygdala, a region known for emotional regulation, increasing their excitability and consequently inducing anxiety. Conversely, the anti-inflammatory cytokine IL-10 was found to suppress excitability in these very same neurons, thereby contributing to anxiety alleviation. In a mouse model, the research team observed that while skin inflammation was mitigated by immunotherapy (an IL-17RA antibody), anxiety levels paradoxically rose. This was attributed to elevated circulating IL-17 family cytokines leading to the overactivation of amygdala neurons. Key finding: the inflammatory cytokines IL-17A/17C promote anxiety by acting on excitable amygdala neurons (via IL-17RA/RE receptors), whereas the anti-inflammatory cytokine IL-10 alleviates anxiety by suppressing their excitability through IL-10RA receptors on the same neurons. This research marks the first demonstration that immune responses, such as infections or inflammation, directly impact emotional regulation at the level of brain circuits, extending beyond simple physical reactions. It is a profoundly significant achievement, as it proposes a crucial biological mechanism that links immunity, emotion, and behavior through the same neurons in the brain.
The findings of this research were published in the esteemed international journal Cell on April 17th of this year. Paper Information: Title: Inflammatory and anti-inflammatory cytokines bidirectionally modulate amygdala circuits regulating anxiety Journal: Cell (Vol. 188, 2190–2220), April 17, 2025 DOI: https://doi.org/10.1016/j.cell.2025.03.005 Corresponding Authors: Professor Gloria Choi (MIT), Professor Jun R. Huh (Harvard Medical School)
2025.07.24
Approaches to Human-Robot Interaction Using Biosignals
<(From left) Dr. Hwa-young Jeong, Professor Kyung-seo Park, Dr. Yoon-tae Jeong, Dr. Ji-hoon Seo, Professor Min-kyu Je, Professor Jung Kim>

A joint research team led by Professor Jung Kim of the KAIST Department of Mechanical Engineering and Professor Min-kyu Je of the Department of Electrical and Electronic Engineering recently published a review paper on the latest trends and advancements in intuitive human-robot interaction (HRI) using bio-potential and bio-impedance in the internationally renowned academic journal Nature Reviews Electrical Engineering. The paper is the result of a collaborative effort by Dr. Kyung-seo Park (DGIST, co-first author), Dr. Hwa-young Jeong (EPFL, co-first author), Dr. Yoon-tae Jeong (IMEC), and Dr. Ji-hoon Seo (UCSD), all doctoral graduates of the two laboratories. Nature Reviews Electrical Engineering is a specialized review journal in the fields of electrical, electronic, and artificial intelligence technology, newly launched by Nature Publishing Group last year. It is known to invite world-renowned scholars in the field through strict selection criteria. The team's paper, titled "Using bio-potential and bio-impedance for intuitive human-robot interaction," was published on July 18, 2025. (DOI: https://doi.org/10.1038/s44287-025-00191-5) The review explains how biosignals can be used to quickly and accurately detect movement intentions and introduces advancements in movement prediction technology based on neural signals and muscle activity. It also focuses on the crucial role of integrated circuits (ICs) in maximizing low-noise performance and energy efficiency in biosignal sensing, covering the latest development trends in low-noise, low-power designs for accurately measuring bio-potential and impedance signals. The review emphasizes the importance of hybrid and multi-modal sensing approaches, presenting the possibility of building robust, intuitive, and scalable HRI systems.
The research team stressed that collaboration between sensor and IC design fields is essential for the practical application of biosignal-based HRI systems and stated that interdisciplinary collaboration will play a significant role in the development of next-generation HRI technology. Dr. Hwa-young Jeong, a co-first author of the paper, presented the potential of bio-potential and impedance signals to make human-robot interaction more intuitive and efficient, predicting that it will make significant contributions to the development of HRI technologies such as rehabilitation robots and robotic prostheses using biosignals in the future. This research was supported by several research projects, including the Human Plus Project of the National Research Foundation of Korea.
2025.07.24
KAIST Ushers in Era of Predicting ‘Optimal Alloys’ Using AI, Without High-Temperature Experiments
<Picture 1. (From left) Prof. Seungbum Hong, Ph.D. candidate Youngwoo Choi>

Steel alloys used in automobiles and machinery parts are typically manufactured through a melting process at high temperatures. The phenomenon in which the components remain unchanged during melting is called "congruent melting." KAIST researchers have now addressed this process, traditionally verifiable only through high-temperature experiments, using artificial intelligence (AI). This study draws attention as it proposes a new direction for future alloy development by predicting in advance how well alloy components will mix during melting, a long-standing challenge in the field.

KAIST (President Kwang Hyung Lee) announced on the 14th of July that Professor Seungbum Hong's research team from the Department of Materials Science and Engineering, in international collaboration with Professor Chris Wolverton's group at Northwestern University, has developed a high-accuracy machine learning model that predicts whether alloy components will remain stable during melting. This was achieved using formation energy data derived from Density Functional Theory (DFT)* calculations. *Density Functional Theory (DFT): A computational quantum mechanical method used to investigate the electronic structure of many-body systems, especially atoms, molecules, and solids, based on electron density. The research team combined formation energy values obtained via DFT with experimental melting reaction data to train a machine learning model on 4,536 binary compounds. Among the various machine learning algorithms tested, the XGBoost-based classification model demonstrated the highest accuracy in predicting whether alloys would mix well, achieving a prediction accuracy of approximately 82.5%. The team also applied the Shapley value method* to analyze the key features of the model.
One major finding was that sharp changes in the slope of the formation energy curve (referred to as “convex hull sharpness”) were the most significant factor. A steep slope indicates a composition with energetically favorable (i.e., stable) formation. *Shapley value: An explainability method in AI used to determine how much each feature contributed to a prediction. The most notable significance of this study is that it predicts alloy melting behavior without performing high-temperature experiments. This is especially useful for materials such as high-entropy alloys or ultra-heat-resistant alloys, which are difficult to handle experimentally. The approach could also be extended to the design of complex multi-component alloy systems in the future. Furthermore, the physical indicators identified by the AI model showed high consistency with actual experimental results on how well alloys mix and remain stable. This suggests that the model could be broadly applied to the development of various metal materials and the prediction of structural stability. Professor Seungbum Hong of KAIST stated, “This research demonstrates how data-driven predictive materials development is possible by integrating computational methods, experimental data, and machine learning—departing from the traditional experience-based alloy design.” He added, “In the future, by incorporating state-of-the-art AI techniques such as generative models and reinforcement learning, we could enter an era where completely new alloys are designed automatically.” <Model performance and feature importance analysis for predicting melting congruency. (a) SHAP summary plot showing the impact of individual features on model predictions. (b) Confusion matrix illustrating the model’s classification performance. (c) Receiver operating characteristic (ROC) curve with an AUC (area under the curve) score of 0.87, indicating a strong classification performance.> Ph.D. 
candidate Youngwoo Choi, from the Department of Materials Science and Engineering at KAIST, participated as the first author. The study was published in the May issue of APL Machine Learning, a prestigious journal in the field of machine learning published by the American Institute of Physics, and was selected as a “Featured Article.” ※ Paper title: Machine learning-based melting congruency prediction of binary compounds using density functional theory-calculated formation energy ※ DOI: 10.1063/5.0247514 This research was supported by the Ministry of Science and ICT and the National Research Foundation of Korea.
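The "convex hull sharpness" feature the model relied on can be computed directly from formation energies. Below is a self-contained, purely illustrative sketch in plain Python: the function name and toy numbers are invented, and the actual study fed many such descriptors into an XGBoost classifier rather than using the feature alone.

```python
def hull_sharpness(points, target_x):
    """Slope change of the lower convex hull of formation energy vs.
    composition at target_x. points is a list of (composition, energy)
    pairs including the end members at x=0 and x=1. Returns the right
    slope minus the left slope at target_x; a larger value means a
    sharper, more energetically stable cusp (associated in the study
    with congruent melting). Returns 0.0 if target_x is not a hull vertex."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # lower hull via Andrew's monotone chain: pop points that lie
        # above the line to the incoming point
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    for i, (x, y) in enumerate(hull):
        if x == target_x and 0 < i < len(hull) - 1:
            xl, yl = hull[i - 1]
            xr, yr = hull[i + 1]
            return (yr - y) / (xr - x) - (y - yl) / (x - xl)
    return 0.0
```

For example, with end members at zero energy and a deep compound at x = 0.5, the cusp at 0.5 has a large sharpness, while a composition lying above the hull scores 0.0.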
2025.07.14
Professor Jung-woo Choi's Team Comes in First at the World's Top Acoustic AI Challenge
<Photo 1. (From left) Ph.D. candidate Young-hoo Kwon, M.S. candidate Do-hwan Kim, Professor Jung-woo Choi, Dr. Dong-heon Lee>

"Acoustic separation and classification technology" is a next-generation artificial intelligence (AI) core technology that enables the early detection of abnormal sounds in areas such as drones, fault detection in factory pipelines, and border surveillance systems, and allows spatial audio to be separated and edited by sound source when producing AR/VR content. On the 11th of July, a research team led by Professor Jung-woo Choi of KAIST's Department of Electrical and Electronic Engineering won first place in the "Spatial Semantic Segmentation of Sound Scenes" task of the DCASE2025 Challenge, the world's most prestigious acoustic detection and analysis competition. This year's challenge featured 86 teams competing across six tasks; the KAIST team achieved the best performance in its first-ever participation in Task 4. Professor Choi's team consisted of Dr. Dong-heon Lee, Ph.D. candidate Young-hoo Kwon, and M.S. candidate Do-hwan Kim. Task 4, "Spatial Semantic Segmentation of Sound Scenes," is a highly demanding task requiring the analysis of spatial information in multi-channel audio signals with overlapping sound sources. The goal was to separate individual sounds and classify them into 18 predefined categories. The research team plans to present their technology at the DCASE workshop in Barcelona this October.

<Figure 1. Example of an acoustic scene with multiple mixed sounds>

Early this year, Dr. Dong-heon Lee developed a state-of-the-art sound source separation AI that combines Transformer and Mamba architectures.
During the competition, centered on researcher Young-hoo Kwon, the team completed a "chain-of-inference architecture" AI model that performs sound source separation and classification again, using the waveforms and types of the initially separated sound sources as clues. This AI model is inspired by the human auditory scene analysis mechanism, which isolates individual sounds by focusing on incomplete clues such as sound type, rhythm, or direction when listening to complex sounds. Through this, the team was the only participant to achieve double-digit performance (11 dB) in Class-Aware Signal-to-Distortion Ratio Improvement (CA-SDRi)*, the measure used to rank how well the AI separated and classified sounds, proving their technical excellence. *Class-Aware Signal-to-Distortion Ratio Improvement (CA-SDRi): Measures how much more clearly (with less distortion) the desired sound is separated and classified compared to the original audio, in dB (decibels). A higher number indicates more accurate and cleaner sound separation. Prof. Jung-woo Choi remarked, "The research team has showcased world-leading acoustic separation AI models for the past three years, and I am delighted that these results have been officially recognized." He added, "I am proud of every member of the research team for winning first place through focused research, despite the significant increase in difficulty and having only a few weeks for development."

<Figure 2. Time-frequency patterns of sound sources separated from a mixed source>

The IEEE DCASE Challenge 2025 was held online, with submissions accepted from April 1 to June 15 and results announced on June 30. Since its launch in 2013, the DCASE Challenge has served as the IEEE Signal Processing Society's premier global platform for showcasing cutting-edge AI models in acoustic signal processing.
This research was supported by the Mid-Career Researcher Support Project and STEAM Research Project of the National Research Foundation of Korea, funded by the Ministry of Education, Science and Technology, as well as support from the Future Defense Research Center, funded by the Defense Acquisition Program Administration and the Agency for Defense Development.
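CA-SDRi builds on the classical signal-to-distortion ratio. The snippet below shows only the generic SDR quantity underlying the metric, not the exact class-aware DCASE scoring code:

```python
import math

def sdr_db(reference, estimate):
    """Signal-to-distortion ratio in decibels:
    10 * log10(||s||^2 / ||s - s_hat||^2), where s is the clean reference
    source and s_hat is the separated estimate."""
    sig = sum(s * s for s in reference)
    err = sum((s - e) ** 2 for s, e in zip(reference, estimate))
    return 10.0 * math.log10(sig / err)
```

Because the scale is logarithmic, each additional 10 dB corresponds to a tenfold reduction in residual distortion energy, which is why an 11 dB CA-SDRi represents a large margin over single-digit scores.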
2025.07.13
KAIST Kicks Off the Expansion of its Creative Learning Building, a 50th Anniversary Donation Landmark
KAIST announced on July 10th that it held a groundbreaking ceremony on July 9th for the expansion of its Creative Learning Building. The project, a donation-funded landmark celebrating the university's 50th anniversary, has now officially begun construction.

<(From left) President Kwang Hyung Lee, Former President Sung-Chul Shin>

The groundbreaking ceremony was attended by key donors, including KAIST President Kwang Hyung Lee, former President Sung-Chul Shin, and Alumni Association President Yoon-Tae Lee, as well as parents and faculty members. The Creative Learning Building serves as a primary space where KAIST undergraduate and graduate students attend lectures, functioning as a central hub for a variety of classes and talks. It also houses student support departments, including the Student Affairs Office, establishing itself as a student-centric complex that integrates educational, counseling, and welfare functions. This expansion is more than an increase in educational facilities; it is being developed as a "donation landmark" embodying KAIST's identity and future vision. Designed with a focus on creative convergence education, the project aims to create a new educational hub that organically combines education, exchange, and welfare functions. The fundraising campaign drew over 230 participants, including KAIST alumnus Byung-gyu Chang, Chairman of Krafton; former Alumni Association President Ki-chul Cha; Dr. Kun-mo Chung, former Minister of Science and Technology; as well as faculty members, parents, and current students, who together donated 6.5 billion KRW. The total cost of the expansion is 9 billion KRW, encompassing a gross floor area of 3,222.92㎡ across five above-ground floors, with completion targeted for September 2026.
2025.07.10
KAIST Presents a Breakthrough in Overcoming Drug Resistance in Cancer – Hope for Treating Intractable Diseases like Diabetes
<(From left) Prof. Hyun Uk Kim, Ph.D. candidate Hae Deok Jung, Ph.D. candidate JinA Lim, and Prof. Yoosik Kim from the Department of Chemical and Biomolecular Engineering>

One of the biggest obstacles in cancer treatment is drug resistance in cancer cells. Conventional efforts have focused on identifying new drug targets to eliminate these resistant cells, but such approaches can often lead to even stronger resistance. Now, researchers at KAIST have developed a computational framework to predict key metabolic genes that can re-sensitize resistant cancer cells to treatment. This technique holds promise not only for a variety of cancer therapies but also for treating metabolic diseases such as diabetes.

On the 7th of July, KAIST (President Kwang Hyung Lee) announced that a research team led by Professors Hyun Uk Kim and Yoosik Kim from the Department of Chemical and Biomolecular Engineering had developed a computational framework that predicts metabolic gene targets to re-sensitize drug-resistant breast cancer cells. This was achieved using a metabolic network model capable of simulating human metabolism. Focusing on metabolic alterations, a key characteristic of the formation of drug resistance, the researchers developed a metabolism-based approach to identify gene targets that could enhance drug responsiveness by regulating the metabolism of drug-resistant breast cancer cells.

<Computational framework that can identify metabolic gene targets to revert the metabolic state of the drug-resistant cells to that of the drug-sensitive parental cells>

The team first constructed cell-specific metabolic network models by integrating proteomic data obtained from two drug-resistant MCF7 breast cancer cell lines: one resistant to doxorubicin and the other to paclitaxel. They then performed gene knockout simulations* on all of the metabolic genes and analyzed the results.
*Gene knockout simulation: a computational method to predict changes in a biological network by virtually removing specific genes.

As a result, they discovered that suppressing certain genes could make previously resistant cancer cells responsive to anticancer drugs again. Specifically, they identified GOT1 as a target in doxorubicin-resistant cells, GPI in paclitaxel-resistant cells, and SLC1A5 as a common target for both drugs. The predictions were experimentally validated by suppressing the proteins encoded by these genes, which re-sensitized the drug-resistant cancer cells. Consistent re-sensitization effects were also observed when the same proteins were inhibited in other types of breast cancer cells that had developed resistance to the same drugs.

Professor Yoosik Kim remarked, "Cellular metabolism plays a crucial role in various intractable diseases, including infectious and degenerative conditions. This new technology, which predicts metabolic regulation switches, can serve as a foundational tool not only for treating drug-resistant breast cancer but also for a wide range of diseases that currently lack effective therapies."

Professor Hyun Uk Kim, who led the study, emphasized, "The significance of this research lies in our ability to accurately predict key metabolic genes that can make resistant cancer cells responsive to treatment again, using only computer simulations and minimal experimental data. This framework can be widely applied to discover new therapeutic targets in various cancers and metabolic diseases."

The study, in which Ph.D. candidates JinA Lim and Hae Deok Jung from KAIST participated as co-first authors, was published online on June 25 in Proceedings of the National Academy of Sciences (PNAS), a leading multidisciplinary journal covering top-tier research in the life sciences, physics, engineering, and social sciences.
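The idea behind a gene knockout simulation can be illustrated with a toy model. The sketch below is purely illustrative: the gene and metabolite names are hypothetical, not from the study, and a real genome-scale model would use flux balance analysis rather than simple reachability.

```python
# Toy gene-knockout simulation on a hypothetical mini metabolic network.
# Each reaction requires a gene and a set of substrate metabolites,
# and yields one product metabolite.

REACTIONS = [
    # (gene, substrates, product) -- all names are illustrative
    ("G1", {"glucose"}, "pyruvate"),
    ("G2", {"pyruvate"}, "acetyl_coa"),
    ("G3", {"glucose"}, "acetyl_coa"),   # redundant route bypassing G2
    ("G4", {"acetyl_coa"}, "atp"),
]

def producible(nutrients, knocked_out=()):
    """Return the metabolites reachable from the nutrients after
    removing every reaction whose gene is knocked out."""
    pool = set(nutrients)
    changed = True
    while changed:
        changed = False
        for gene, substrates, product in REACTIONS:
            if gene in knocked_out or product in pool:
                continue
            if substrates <= pool:   # all substrates available
                pool.add(product)
                changed = True
    return pool

# Knocking out G2 alone does not block ATP (G3 bypasses it),
# but knocking out G2 and G3 together does.
print("atp" in producible({"glucose"}, knocked_out={"G2"}))        # True
print("atp" in producible({"glucose"}, knocked_out={"G2", "G3"}))  # False
```

Running every single-gene (or gene-pair) knockout in this fashion and comparing the resulting metabolic states is, in spirit, what the team's framework does at genome scale across thousands of reactions.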
※ Title: Genome-scale knockout simulation and clustering analysis of drug-resistant breast cancer cells reveal drug sensitization targets
※ DOI: https://doi.org/10.1073/pnas.2425384122
※ Authors: JinA Lim (KAIST, co-first author), Hae Deok Jung (KAIST, co-first author), Han Suk Ryu (Seoul National University Hospital, corresponding author), Yoosik Kim (KAIST, corresponding author), Hyun Uk Kim (KAIST, corresponding author), and five others.

This research was supported by the Ministry of Science and ICT through the National Research Foundation of Korea, and by the Electronics and Telecommunications Research Institute (ETRI).
2025.07.08
KAIST Presents Game-Changing Technology for Intractable Brain Disease Treatment Using Micro OLEDs
<(From left) Professor Kyung Cheol Choi, Professor Hyunjoo J. Lee, and Dr. Somin Lee from the School of Electrical Engineering>

Optogenetics is a technique that controls neural activity by stimulating neurons expressing light-sensitive proteins with specific wavelengths of light. It has opened new possibilities for identifying the causes of brain disorders and developing treatments for intractable neurological diseases. Because this technology requires precise stimulation inside the brain with minimal damage to soft brain tissue, it must be integrated into a neural probe, a medical device implanted in the brain. KAIST researchers have now proposed a new paradigm for neural probes by integrating micro OLEDs into thin, flexible, implantable medical devices.

KAIST (President Kwang Hyung Lee) announced on the 6th of July that Professor Kyung Cheol Choi and Professor Hyunjoo J. Lee from the School of Electrical Engineering have jointly succeeded in developing an optogenetic neural probe integrated with flexible micro OLEDs.

Optical fibers have been used for decades in optogenetic research to deliver light to deep brain regions from external light sources. Recently, research has focused on flexible optical fibers and ultra-miniaturized neural probes that integrate their own light sources for single-neuron stimulation. The research team focused on micro OLEDs because their high spatial resolution and flexibility allow precise light delivery to small groups of neurons. This enables detailed brain-circuit analysis while minimizing side effects and avoiding restrictions on animal movement. Micro OLEDs also offer precise control of light wavelengths and support multi-site stimulation, making them well suited to studying complex brain functions. However, their electrical properties degrade easily in the presence of moisture or water, which has limited their use in implantable bioelectronics.
Furthermore, optimizing the high-resolution integration process on thin, flexible probes remained a challenge. To address this, the team enhanced the operational reliability of the OLEDs in moist, oxygen-rich environments and minimized tissue damage during implantation. They patterned an ultrathin, flexible encapsulation layer* composed of aluminum oxide and parylene-C (Al₂O₃/parylene-C) at widths of 260–600 micrometers (μm) to maintain biocompatibility.

*Encapsulation layer: a barrier that completely blocks oxygen and water molecules from the external environment, ensuring the longevity and reliability of the device.

When integrating the high-resolution micro OLEDs, the researchers also used parylene-C, the same biocompatible material as the encapsulation layer, to maintain flexibility and safety. To eliminate electrical interference between adjacent OLED pixels and spatially separate them, they introduced a pixel define layer (PDL), enabling the independent operation of eight micro OLEDs. They also precisely controlled the residual stress and thickness of the device's multilayer film structure, preserving its flexibility in biological environments. This optimization allowed the probe to be inserted without bending and without external shuttles or needles, minimizing mechanical stress during implantation.
2025.07.07
KAIST Researcher Se Jin Park Develops 'SpeechSSM,' Opening Up Possibilities for a 24-Hour AI Voice Assistant
<(From left) Prof. Yong Man Ro and Ph.D. candidate Se Jin Park>

Se Jin Park, a researcher on Professor Yong Man Ro's team at KAIST, has announced 'SpeechSSM,' a spoken language model capable of generating long-duration speech that sounds natural and remains consistent. An efficient processing technique based on linear sequence modeling overcomes the limitations of existing spoken language models, enabling high-quality speech generation without time constraints. The model is expected to be widely used in podcasts, audiobooks, and voice assistants thanks to its ability to generate natural, human-like long-duration speech.

Spoken Language Models (SLMs) have recently been spotlighted as a next-generation technology that surpasses the limitations of text-based language models by learning human speech without text, allowing them to understand and generate both linguistic and non-linguistic information. However, existing models have shown significant limitations in generating the long-duration content required for podcasts, audiobooks, and voice assistants. Now, a KAIST researcher has overcome these limitations by developing SpeechSSM, which enables consistent and natural speech generation without time constraints.

KAIST (President Kwang Hyung Lee) announced on the 3rd of July that Ph.D. candidate Se Jin Park from Professor Yong Man Ro's research team in the School of Electrical Engineering has developed 'SpeechSSM,' a spoken language model capable of generating long-duration speech. The research will be presented as an oral paper at ICML (International Conference on Machine Learning) 2025, one of the top machine learning conferences, a distinction given to roughly 1% of all submitted papers. This not only attests to outstanding research ability but also demonstrates KAIST's world-leading AI research capabilities once again.
A major advantage of SLMs is their ability to process speech directly, without intermediate text conversion, leveraging the unique acoustic characteristics of human speakers and allowing high-quality speech to be generated rapidly even in large-scale models. However, existing models struggled to maintain semantic and speaker consistency over long-duration speech, because capturing very detailed information by breaking speech into fine fragments increases speech-token resolution and memory consumption.

To solve this problem, Se Jin Park developed SpeechSSM, a spoken language model built on a hybrid state-space model designed to efficiently process and generate long speech sequences. The model employs a hybrid structure that alternates 'attention layers,' which focus on recent information, with 'recurrent layers,' which retain the overall narrative flow (long-term context). This allows the story to flow smoothly without losing coherence even during long generation. Moreover, memory usage and computational load do not increase sharply with input length, enabling stable, efficient training and the generation of long-duration speech.

SpeechSSM handles unbounded speech sequences by dividing speech data into short, fixed-length units (windows), processing each unit independently, and then combining them into long speech. In the generation phase, it uses a non-autoregressive audio synthesis model (SoundStorm), which rapidly generates many parts at once instead of producing one character or one word at a time, enabling the fast generation of high-quality speech.

While existing models were typically evaluated on short speech of about 10 seconds, Se Jin Park created new speech-generation evaluation tasks based on the team's self-built benchmark dataset, 'LibriSpeech-Long,' which supports the generation of up to 16 minutes of speech.
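The window-plus-recurrent-state scheme described above can be sketched in miniature. The toy generator below is an illustrative assumption, not the actual SpeechSSM architecture: a fixed-size recurrent state stands in for the recurrent layers, a small sliding window stands in for local attention, and the point is simply that per-step cost and memory stay constant no matter how long the output grows.

```python
# Toy sketch of constant-memory long-sequence generation:
# a recurrent state carries long-range context while each step
# only "attends" to a short local window (all rules are illustrative).

WINDOW = 4  # tokens visible to the local-attention stand-in

def step(state, token):
    """Recurrent-layer stand-in: fold a token into a fixed-size state."""
    return (state * 31 + token) % 1_000_003  # state size never grows

def generate(seed_tokens, n_tokens):
    """Generate n_tokens; memory use is independent of n_tokens."""
    state = 0
    for t in seed_tokens:                  # absorb the prompt
        state = step(state, t)
    out, window = [], list(seed_tokens)[-WINDOW:]
    for _ in range(n_tokens):
        # next token depends on the local window plus the global state
        nxt = (state + sum(window)) % 100
        out.append(nxt)
        state = step(state, nxt)
        window = (window + [nxt])[-WINDOW:]  # slide the window
    return out

# The same prompt yields the same prefix whether we generate 5 tokens
# or 5000: length does not destabilize the process.
print(generate([1, 2, 3], 5) == generate([1, 2, 3], 5000)[:5])  # True
```

In the real model the state update is a learned state-space recurrence and the window is processed by attention, but the structural payoff is the same: unbounded output length with bounded per-step cost.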
In place of PPL (perplexity), an existing speech-model evaluation metric that mainly indicates grammatical correctness, she proposed new metrics such as 'SC-L (semantic coherence over time),' which assesses how coherent the content remains over time, and 'N-MOS-T (naturalness mean opinion score over time),' which evaluates naturalness over time, enabling more effective and precise evaluation.

These new evaluations confirmed that speech generated by SpeechSSM consistently featured the specific individuals mentioned in the initial prompt, and that new characters and events unfolded naturally and in a contextually consistent way despite the long generation span. This contrasts sharply with existing models, which tended to drift off topic and exhibit repetition during long-duration generation.

Ph.D. candidate Se Jin Park explained, "Existing spoken language models had limitations in long-duration generation, so our goal was to develop a spoken language model capable of generating long-duration speech for actual human use." She added, "This research achievement is expected to greatly contribute to various types of voice content creation and to voice AI fields such as voice assistants, by maintaining consistent content over long contexts and responding more efficiently and quickly in real time than existing methods."

This research, with Se Jin Park as first author, was conducted in collaboration with Google DeepMind and is scheduled to be presented as an oral presentation at ICML (International Conference on Machine Learning) 2025 on July 16th.

Paper Title: Long-Form Speech Generation with Spoken Language Models
DOI: 10.48550/arXiv.2412.18603

Ph.D. candidate Se Jin Park has demonstrated outstanding research capabilities as a member of Professor Yong Man Ro's MLLM (multimodal large language model) research team through her work integrating vision, speech, and language.
Her achievements include a spotlight paper presentation at CVPR (Computer Vision and Pattern Recognition) 2024 and an Outstanding Paper Award at ACL (Association for Computational Linguistics) 2024. For more information, see the publication and accompanying demo: SpeechSSM Publications.
2025.07.04