KAIST
NEWS
Engineering
'Team Atlanta', Joined by KAIST Professor Insu Yun's Research Team, Wins the DARPA AI Cyber Challenge and a 5.5 Billion KRW Prize
<Photo 1. Group Photo of Team Atlanta>

Team Atlanta, led by Professor Insu Yun of the School of Electrical Engineering at KAIST and Tae-soo Kim, an executive at Samsung Research, together with researchers from POSTECH and Georgia Tech, won the final championship at the AI Cyber Challenge (AIxCC) hosted by the Defense Advanced Research Projects Agency (DARPA). The final was held at DEF CON 33, the world's largest hacking conference, in Las Vegas on August 8 (local time). With this achievement, the team won a prize of $4 million (approximately 5.5 billion KRW), demonstrating the excellence of its AI-based autonomous cyber defense technology on the global stage.

<Photo 2. Championship Commemoration: At the far left and far right are tournament officials. From the second person on the left: Professor Tae-soo Kim (Samsung Research / Georgia Tech), Researcher Hyeong-seok Han (Samsung Research America), and Professor Insu Yun (KAIST)>

The AI Cyber Challenge is a two-year global competition co-hosted by DARPA and the Advanced Research Projects Agency for Health (ARPA-H). It challenges contestants to automatically analyze, detect, and fix software vulnerabilities using AI-based Cyber Reasoning Systems (CRS). The total prize money for the competition is $29.5 million, with the winning team receiving $4 million.

In the final, Team Atlanta scored a total of 392.76 points, more than 170 points ahead of the second-place team, Trail of Bits, securing a dominant victory. The CRS developed by Team Atlanta automatically detected various types of vulnerabilities and patched a significant number of them in real time. Across the seven finalist teams, 77% of the 70 intentionally injected vulnerabilities were found on average, and 61% of those were patched. The teams also found 18 additional unknown vulnerabilities in real software, proving the potential of AI security technology. All CRS technologies, including those of the winning team, will be released as open source and are expected to be used to strengthen the security of core infrastructure such as hospitals, water, and power systems.

<Photo 3. Final Scoreboard: An overwhelming victory by more than 170 points>

Professor Insu Yun of KAIST, a member of Team Atlanta, stated, "I am very happy to have achieved such a great result. This is a remarkable achievement that shows Korea's cyber security research has reached the highest level in the world, and it was meaningful to show the capabilities of Korean researchers on the world stage. I will continue to conduct research to protect the digital safety of the nation and global society through the fusion of AI and security technology."

KAIST President Kwang-hyung Lee stated, "This victory is another example that proves KAIST is a world-leading institution in the field of future cyber security and AI convergence. We will continue to provide full support to our researchers so they can compete and produce results on the world stage."

<Photo 4. Results Announcement>
2025.08.10
Prof. Seungbum Koo’s Team Receives Clinical Biomechanics Award at the 30th International Society of Biomechanics Conference
<(From left) Ph.D. candidate Jeongseok Oh from KAIST, Dr. Seungwoo Yoon from KAIST, Prof. Joon-Ho Wang from Samsung Medical Center, Prof. Seungbum Koo from KAIST>

Professor Seungbum Koo's research team received the Clinical Biomechanics Award at the 30th International Society of Biomechanics (ISB) Conference, held in July 2025 in Stockholm, Sweden. The plenary lecture was delivered by first author and Ph.D. candidate Jeongseok Oh. This research was conducted in collaboration with Professor Joon-Ho Wang's team at Samsung Medical Center.

Award-winning paper: "Residual Translational and Rotational Kinematics After Combined ACL and Anterolateral Ligament Reconstruction During Walking" (Jeongseok Oh, Seungwoo Yoon, Joon-Ho Wang, Seungbum Koo)

The study analyzed gait-related knee joint motion using high-speed biplane X-ray imaging and three-dimensional kinematic reconstruction in 10 healthy individuals and 10 patients who underwent ACL reconstruction with ALL augmentation. The patient group showed excessive anterior translation and internal rotation, suggesting incomplete restoration of normal joint kinematics post-surgery. These findings provide mechanistic insight into the early onset of knee osteoarthritis often reported in this population.

The ISB conference, held biennially for over 60 years, is the largest international biomechanics meeting. This year, it hosted 1,600 researchers from 46 countries and featured over 1,400 presentations. The Clinical Biomechanics Award is given to one outstanding study selected from five top-rated abstracts invited for full manuscript review. The winning paper is published in Clinical Biomechanics, and the award includes a monetary prize and a plenary lecture opportunity.

From 2019 to 2023, Koo's and Wang's teams developed a system, with support from the Samsung Future Technology Development Program, to track knee motion in real time during treadmill walking using high-speed biplane X-rays and custom three-dimensional reconstruction software. This system, along with proprietary software that precisely reconstructs the three-dimensional motion of joints, was approved for clinical trials by the Ministry of Food and Drug Safety and installed at Samsung Medical Center. It is being used to quantitatively analyze abnormal joint motion patterns in patients with knee ligament injuries and those who have undergone knee surgery.

Additionally, Jeongseok Oh was named one of five finalists for the David Winter Young Investigator Award and presented his work during the award session. This award recognizes promising young researchers in biomechanics worldwide.
2025.08.10
Unlocking New Potential for Natural Gas–Based Bioplastic Production
<(From left) Professor Jaewook Myung from KAIST, Sunho Park from KAIST, Dr. Chungheon Shin from Stanford University, Prof. Craig S. Criddle from Stanford University>

KAIST announced that a research team led by Professor Jaewook Myung from the Department of Civil and Environmental Engineering, in collaboration with Stanford University, has identified how ethane (C2H6), a major constituent of natural gas, affects the core metabolic pathways of the obligate methanotroph Methylosinus trichosporium OB3b.

Methane (CH4), a greenhouse gas with roughly 25 times the global warming potential of carbon dioxide, is rarely emitted alone into the environment. It is typically released in mixtures with other gases; in the case of natural gas, ethane can comprise up to 15% of the total composition. Methanotrophs are aerobic bacteria that can utilize methane as their sole source of carbon and energy. Obligate methanotrophs, in particular, strictly utilize only C1 compounds such as methane or methanol. Until now, little was known about how these organisms respond to C2 compounds like ethane, which they cannot use for growth.

<Figure 1. Conceptual overview of obligate methanotroph metabolism and PHB biosynthesis under mixed-substrate conditions of methane and ethane>

This study reveals that although ethane cannot serve as a growth substrate, its presence significantly affects key metabolic functions in M. trichosporium OB3b, including methane oxidation, cell proliferation, and the intracellular synthesis of polyhydroxybutyrate (PHB), a biodegradable polymer. Under varying methane and oxygen conditions, the team observed that ethane addition consistently resulted in three metabolic effects: reduced cell growth, lower methane consumption, and increased PHB accumulation. These effects intensified with rising ethane concentrations. Notably, ethane oxidation occurred only when methane was present, confirming that it is co-oxidized via particulate methane monooxygenase (pMMO), the key enzyme responsible for methane oxidation.

<Figure 2. Effects of increasing ethane concentrations on methane and ethane consumption, cell growth, and PHB production in Methylosinus trichosporium OB3b>

Further analysis showed that acetate, an intermediate formed during ethane oxidation, played a pivotal role in this response. Higher acetate levels inhibited growth but enhanced PHB production, suggesting that ethane-derived acetate drives contrasting carbon assimilation patterns depending on nutrient conditions: a nutrient-balanced growth phase and a nutrient-imbalanced PHB accumulation phase. In addition, when external reducing power was supplemented (via methanol or formate), ethane consumption was enhanced significantly, while methane oxidation remained largely unaffected. This finding suggests that ethane, despite not supporting growth, actively competes for intracellular resources such as reducing equivalents. It offers new insights into substrate prioritization and resource allocation in methanotrophs under mixed-substrate conditions.

Interestingly, while methane uptake declined in the presence of ethane, the expression of pmoA, the gene encoding pMMO, remained unchanged. This suggests that ethane's impact occurs beyond the transcriptional level, likely via post-transcriptional or enzymatic regulation.

<Figure 3. Mechanistic analysis of ethane-induced metabolic changes in obligate methanotrophs: acetate-driven carbon assimilation change (blue box), intracellular reducing power depletion (red box), and quantitative analysis of pmoA expression (green box)>

"This is the first study to systematically investigate how obligate methanotrophs respond to complex gas mixtures involving ethane," said Professor Jaewook Myung. "Our findings show that even non-growth substrates can meaningfully influence microbial metabolism and biopolymer synthesis, opening new possibilities for methane-based biotechnologies and bioplastic production."

The study was supported by the National Research Foundation of Korea, the Ministry of Land, Infrastructure and Transport, and the Ministry of Oceans and Fisheries. The results were published in Applied and Environmental Microbiology, a journal of the American Society for Microbiology.
2025.08.07
KAIST Develops ‘Real-Time Programmable Robotic Sheet’ That Can Grasp and Walk on Its Own
<(From left) Prof. Inkyu Park from KAIST, Prof. Yongrok Jeong from Kyungpook National University, Dr. Hyunkyu Park from KAIST, and Prof. Jung Kim from KAIST>

Folding structures are widely used in robot design as an intuitive and efficient shape-morphing mechanism, with applications explored in space and aerospace robots, soft robots, and foldable grippers (hands). However, existing folding mechanisms have fixed hinges and folding directions, requiring redesign and reconstruction every time the environment or task changes. A Korean research team has now developed a "field-programmable robotic folding sheet" that can be programmed in real time according to its surroundings, significantly enhancing robots' shape-morphing capabilities and opening new possibilities in robotics.

KAIST (President Kwang Hyung Lee) announced on the 6th that Professors Jung Kim and Inkyu Park of the Department of Mechanical Engineering have developed the foundational technology for a "field-programmable robotic folding sheet" that enables real-time shape programming. This technology is a successful application of the "field-programmability" concept to foldable structures. It proposes an integrated material technology and programming methodology that can instantly reflect user commands, such as where to fold, in which direction, and by how much, onto the material's shape in real time.

The robotic sheet consists of a thin, flexible polymer substrate embedded with a micro metal resistor network. These metal resistors simultaneously serve as heaters and temperature sensors, allowing the system to sense and control its folding state without any external devices. Furthermore, using software that combines genetic algorithms and deep neural networks, the user can input desired folding locations, directions, and intensities, and the sheet then autonomously repeats heating and cooling cycles to create the precise desired shape. In particular, closed-loop control of the temperature distribution enhances real-time folding precision and compensates for environmental changes. It also improves the traditionally slow response time of heat-based folding technologies.

The ability to program shapes in real time enables a wide variety of robotic functions to be implemented on the fly, without the need for complex hardware redesign. The research team demonstrated an adaptive robotic hand (gripper) that can change its grasping strategy to suit various object shapes using a single material. They also placed the same robotic sheet on the ground to allow it to walk or crawl, showcasing bioinspired locomotion strategies. This presents potential for expanding into environmentally adaptive autonomous robots that can alter their form in response to their surroundings.

Professor Jung Kim stated, "This study brings us a step closer to realizing 'morphological intelligence,' a concept where shape itself embodies intelligence and enables smart motion. In the future, we plan to evolve this into a next-generation physical AI platform with applications in disaster-response robots, customized medical assistive devices, and space exploration tools, by improving materials and structures for greater load support and faster cooling, and expanding to electrode-free, fully integrated designs of various forms and sizes."

This research, co-led by Dr. Hyunkyu Park (currently at Samsung Advanced Institute of Technology, Samsung Electronics) and Professor Yongrok Jeong (currently at Kyungpook National University), was published in the August 2025 online edition of the international journal Nature Communications.

※ Paper title: Field-programmable robotic folding sheet
※ DOI: 10.1038/s41467-025-61838-3

This research was supported by the National Research Foundation of Korea (Ministry of Science and ICT). (RS-2021-NR059641, 2021R1A2C3008742)

Video file: https://drive.google.com/file/d/18R0oW7SJVYH-gd1Er_S-9Myar8dm8Fzp/view?usp=sharing
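As a rough illustration of the closed-loop temperature control idea described above, the Python sketch below drives a single hinge toward a target fold temperature, using the embedded resistor both as heater and as sensor. It is a minimal sketch under assumed models: the linear resistance-temperature relation, the PI gains, and the first-order thermal dynamics are invented for illustration and are not the authors' implementation.

```python
# Minimal sketch of closed-loop hinge-temperature control, assuming a linear
# resistance-temperature relation and a crude first-order thermal model.
# NOT the authors' controller; names, gains, and constants are hypothetical.

def estimate_temperature(resistance_ohm, r0=100.0, alpha=0.004):
    """Back out hinge temperature (deg C) from the embedded metal resistor,
    which doubles as heater and sensor (assumed linear TCR model)."""
    return 25.0 + (resistance_ohm / r0 - 1.0) / alpha

def pi_heater_duty(target_c, measured_c, integral, kp=0.8, ki=0.05, dt=0.1):
    """Simple PI control law; returns heater duty (0..1) and the updated integral."""
    error = target_c - measured_c
    integral += error * dt
    duty = max(0.0, min(1.0, kp * error + ki * integral))
    return duty, integral

# Toy simulation of one hinge: heat to the fold set-point, then let it cool.
temp_c, integral = 25.0, 0.0
for step in range(200):
    target_c = 70.0 if step < 120 else 25.0                   # fold, then release
    resistance = 100.0 * (1.0 + 0.004 * (temp_c - 25.0))       # what the resistor would read
    duty, integral = pi_heater_duty(target_c, estimate_temperature(resistance), integral)
    temp_c += (30.0 * duty - 0.2 * (temp_c - 25.0)) * 0.1      # crude heating/cooling dynamics
print(f"final hinge temperature: {temp_c:.1f} C")
```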
2025.08.06
Material Innovation Realized with Robotic Arms and AI, Without Human Researchers
<(From left) M.S. candidate Dongwoo Kim from KAIST, Ph.D. candidate Hyun-Gi Lee from KAIST, Intern Yeham Kang from KAIST, M.S. candidate Seongjae Bae from KAIST, Professor Dong-Hwa Seo from KAIST; (top right, from left) Senior Researcher Inchul Park from POSCO Holdings, Senior Researcher Jung Woo Park from POSCO Holdings>

A joint research team from industry and academia in Korea has successfully developed an autonomous lab that uses AI and automation to create new cathode materials for secondary batteries. The system operates without human intervention, drastically reducing researcher labor and cutting the material discovery period by 93%.

* Autonomous lab: A platform that autonomously designs, conducts, and analyzes experiments to find the optimal material.

KAIST (President Kwang Hyung Lee) announced on the 3rd of August that the research team led by Professor Dong-Hwa Seo of the Department of Materials Science and Engineering, in collaboration with the LIB Materials Research Center in the Energy Materials R&D Laboratories at POSCO Holdings' POSCO N.EX.T Hub (Director Ki Soo Kim), built the lab to explore cathode materials using AI and automation technology.

Developing secondary battery cathode materials is a labor-intensive and time-consuming process for skilled researchers. It involves extensive exploration of various compositions and experimental variables through weighing, transporting, mixing, sintering*, and analyzing samples.

* Sintering: A process in which powder particles are heated to form a single solid mass through thermal activation.

The research team's autonomous lab combines an automated system with an AI model. The system handles all experimental steps (weighing, mixing, pelletizing, sintering, and analysis) without human interference. The AI model then interprets the data, learns from it, and selects the best candidates for the next experiment.

<Figure 1. Outline of the Cathode Material Autonomous Exploration Laboratory>

To increase efficiency, the team designed the automation system with separate modules for each process, managed by a central robotic arm. This modular approach reduces the system's reliance on the robotic arm. The team also significantly improved the synthesis speed by using a new high-speed sintering method, which is 50 times faster than the conventional low-speed method. This allows the autonomous lab to acquire 12 times more material data than traditional, researcher-led experiments.

<Figure 2. Synthesis of Cathode Material Using a High-Speed Sintering Device>

The vast amount of data collected is automatically interpreted by the AI model to extract information such as synthesized phases and impurity ratios. This data is systematically stored to create a high-quality database, which then serves as training data for an optimization AI model. The result is a closed-loop experimental system that recommends the next cathode composition and synthesis conditions to the automated system.

* Closed-loop experimental system: A system that independently performs all experimental processes without researcher intervention.

Operating this intelligent automation system 24 hours a day can secure more than 12 times the experimental data and shorten material discovery time by 93%. For a project requiring 500 experiments, the system can complete the work in about 6 days, whereas a traditional researcher-led approach would take 84 days.

During development, the POSCO Holdings team managed the overall project planning, reviewed the platform design, and co-developed parts of the module design and the AI-based experimental model. The KAIST team, led by Professor Dong-Hwa Seo, was responsible for the actual system implementation and operation, including platform design, module fabrication, algorithm creation, and system verification and improvement.

Professor Dong-Hwa Seo of KAIST stated that this system is a solution to the decrease in research personnel caused by Korea's low birth rate, and that he expects it to enhance global competitiveness by accelerating secondary battery material development through the acquisition of high-quality data.

<Figure 3. Exterior View (Side) of the Cathode Material Autonomous Exploration Laboratory>

POSCO N.EX.T Hub plans to apply an upgraded version of this autonomous lab to its own research facilities after 2026 to dramatically speed up next-generation secondary battery material development. It is planning further developments to enhance the system's stability and scalability, and hopes this industry-academia collaboration will serve as a model for using innovative technology in real-world R&D.

<Figure 4. Exterior View (Front) of the Cathode Material Autonomous Exploration Laboratory>

The research was spearheaded by Ph.D. student Hyun-Gi Lee, along with master's students Seongjae Bae and Dongwoo Kim from Professor Dong-Hwa Seo's lab at KAIST. Senior Researchers Jung Woo Park and Inchul Park from the LIB Materials Research Center of POSCO N.EX.T Hub's Energy Materials R&D Laboratories (Director Jeongjin Hong) also participated.
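To make the closed-loop cycle (synthesize, analyze, learn, propose the next recipe) concrete, the toy Python sketch below runs a miniature version of such a loop. Everything in it is a hypothetical placeholder: the composition variable, the stand-in synthesis/analysis function, and the exploit-or-explore heuristic are invented for illustration and are not the KAIST/POSCO software.

```python
# Toy sketch of a closed experimental loop. The robotic modules and phase analysis
# are replaced by a synthetic impurity function so the loop runs end to end;
# every name and number is a hypothetical placeholder.

import random

def synthesize_and_analyze(x_ni, sinter_temp_c):
    """Stand-in for automated weighing/mixing/pelletizing/sintering plus phase
    analysis; returns an impurity fraction (lower is better). Purely synthetic."""
    return abs(x_ni - 0.8) + abs(sinter_temp_c - 900) / 2000 + random.gauss(0.0, 0.01)

candidates = [(round(x * 0.05, 2), t)                  # (composition, sintering temperature)
              for x in range(10, 20) for t in range(800, 1001, 50)]
database = []                                          # accumulated experimental records

for cycle in range(12):                                # autonomous day-and-night cycles
    if database and random.random() > 0.3:
        best = min(database, key=lambda rec: rec[2])   # lowest-impurity recipe so far
        pick = min(candidates,                         # exploit: stay near the current best
                   key=lambda c: abs(c[0] - best[0]) + abs(c[1] - best[1]) / 100)
    else:
        pick = random.choice(candidates)               # explore a random untried recipe
    candidates.remove(pick)
    impurity = synthesize_and_analyze(*pick)
    database.append((*pick, impurity))                 # the database feeds the next proposal

print("best recipe found:", min(database, key=lambda rec: rec[2]))
```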
2025.08.06
Is 24-hour health monitoring possible with ambient light energy?
<(From left) Ph.D. candidate Youngmin Sim, Ph.D. candidate Do Yun Park, Dr. Chanho Park, Professor Kyeongha Kwon>

Miniaturization and weight reduction of medical wearable devices for continuous health monitoring, such as heart rate, blood oxygen saturation, and sweat component analysis, remain major challenges. In particular, optical sensors consume a significant amount of power for LED operation and wireless transmission, requiring heavy and bulky batteries. To overcome these limitations, KAIST researchers have developed a next-generation wearable platform that enables 24-hour continuous measurement by using ambient light as an energy source and optimizing power management according to the power environment.

KAIST (President Kwang Hyung Lee) announced on the 30th that Professor Kyeongha Kwon's team from the School of Electrical Engineering, in collaboration with Dr. Chanho Park's team at Northwestern University in the U.S., has developed an adaptive wireless wearable platform that reduces battery load by utilizing ambient light. To address the battery issue of medical wearable devices, the research team developed a platform that uses ambient natural light as an energy source, integrating three complementary light-energy technologies.

<Figure 1. The wireless wearable platform minimizes the energy required for light sources through i) a photometric system that directly utilizes ambient light passing through windows for measurements, ii) a photovoltaic system that receives power from high-efficiency photovoltaic cells and wireless power receiver coils, and iii) a photoluminescent system that stores light using photoluminescent materials and emits light in dark conditions to support the two aforementioned systems. In-sensor computing minimizes power consumption by wirelessly transmitting only essential data. The adaptive power management system automatically selects the optimal mode among 11 different power modes through a power selector, based on the power supply level from the photovoltaic system and the battery charge status.>

The first core technology, the photometric method, adaptively adjusts LED brightness depending on the intensity of ambient light. By combining ambient natural light with LED light to maintain a constant total illumination level, it automatically dims the LED when natural light is strong and brightens it when natural light is weak. Whereas conventional sensors had to keep the LED on at a fixed brightness regardless of the environment, this technology optimizes LED power in real time according to the surroundings. Experimental results showed that it reduced power consumption by as much as 86.22% under sufficient lighting conditions.

The second is the photovoltaic method using high-efficiency multijunction solar cells. This goes beyond simple solar power generation to convert light in both indoor and outdoor environments into electricity. In particular, the adaptive power management system automatically switches among 11 different power configurations based on ambient conditions and battery status to achieve optimal energy efficiency.

The third technology is the photoluminescent method. By mixing strontium aluminate microparticles* into the sensor's silicone encapsulation structure, light from the surroundings is absorbed and stored during the day and slowly released in the dark. As a result, after exposure to 500 W/m² of sunlight for 10 minutes, continuous measurement is possible for 2.5 minutes even in complete darkness.

* Strontium aluminate microparticles: A photoluminescent material used in glow-in-the-dark paint and safety signs, which absorbs light and emits it in the dark for an extended time.

These three technologies work complementarily: in bright conditions the first and second methods are active, and in dark conditions the third method provides additional support, enabling 24-hour continuous operation.

The research team applied this platform to various medical sensors to verify its practicality. The photoplethysmography sensor monitors heart rate and blood oxygen saturation in real time, allowing early detection of cardiovascular diseases. The blue light dosimeter accurately measures blue light, which causes skin aging and damage, and provides personalized skin protection guidance. The sweat analysis sensor uses microfluidic technology to simultaneously analyze salt, glucose, and pH in sweat, enabling real-time detection of dehydration and electrolyte imbalances.

Additionally, introducing in-sensor data computing significantly reduced wireless communication power consumption. Previously, all raw data had to be transmitted externally; now only the necessary results are computed within the sensor and transmitted, reducing the data rate from 400 B/s to 4 B/s, a 100-fold decrease.

To validate performance, the researchers tested the device on healthy adult subjects in four different environments: bright indoor lighting, dim lighting, infrared lighting, and complete darkness. The results showed measurement accuracy equivalent to that of commercial medical devices in all conditions. A mouse model experiment confirmed accurate blood oxygen saturation measurement in hypoxic conditions.

<Figure 2. The multimodal device applying the energy harvesting and power management platform consists of i) a photoplethysmography (PPG) sensor, ii) a blue light dosimeter, iii) a photoluminescent microfluidic channel for sweat analysis and biomarker sensors (chloride ion, glucose, and pH), and iv) a temperature sensor. The device was implemented on a flexible printed circuit board (fPCB) to enable attachment to the skin. A silicone substrate with a window that allows ambient light and measurement light to pass through, along with a photoluminescent encapsulation layer, encapsulates the PPG, blue light dosimeter, and temperature sensors, while the photoluminescent microfluidic channel is attached below the photoluminescent encapsulation layer to collect sweat.>

Professor Kyeongha Kwon of KAIST, who led the research, stated, "This technology will enable 24-hour continuous health monitoring, shifting the medical paradigm from treatment-centered to prevention-centered," adding that "cost savings through early diagnosis as well as strengthened technological competitiveness in the next-generation wearable healthcare market are anticipated."

This research was published on July 1 in the international journal Nature Communications, with Do Yun Park, a doctoral student in the AI Semiconductor Graduate Program, as co-first author.
※ Paper title: Adaptive Electronics for Photovoltaic, Photoluminescent and Photometric Methods in Power Harvesting for Wireless and Wearable Sensors ※ DOI: https://doi.org/10.1038/s41467-025-60911-1 ※ URL: https://www.nature.com/articles/s41467-025-60911-1 This research was supported by the National Research Foundation of Korea (Outstanding Young Researcher Program and Regional Innovation Leading Research Center Project), the Ministry of Science and ICT and Institute of Information & Communications Technology Planning & Evaluation (IITP) AI Semiconductor Graduate Program, and the BK FOUR Program (Connected AI Education & Research Program for Industry and Society Innovation, KAIST EE).
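To make the photometric idea concrete, the small Python sketch below computes how hard to drive the measurement LED so that ambient light plus LED light stays at a constant target level at the photodiode. The lux values and the conversion factor are hypothetical assumptions for illustration, not numbers from the paper.

```python
# Small sketch of the photometric idea only: keep (ambient + LED) light constant,
# dimming the LED as ambient light rises. Lux numbers and scale factor are assumed.

def led_duty_for_target(ambient_lux, target_lux=2000.0, lux_per_full_duty=2500.0):
    """Return the LED drive duty (0..1) needed to top ambient light up to the target."""
    deficit = max(0.0, target_lux - ambient_lux)
    return min(1.0, deficit / lux_per_full_duty)

for ambient in (50, 500, 1500, 2200):                  # dark room ... bright daylight
    print(f"ambient {ambient:4d} lux -> LED duty {led_duty_for_target(ambient):.2f}")
```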
2025.07.30
Vulnerability Found: One Packet Can Paralyze Smartphones
<(From left) Professor Yongdae Kim, Ph.D. candidate Tuan Dinh Hoang, Ph.D. candidate Taekkyung Oh from KAIST, Professor CheolJun Park from Kyung Hee University, and Professor Insu Yun from KAIST>

Smartphones must stay connected to mobile networks at all times to function properly. The core component that enables this constant connectivity is the communication modem (baseband) inside the device. KAIST researchers, using their self-developed testing framework 'LLFuzz (Lower Layer Fuzz),' have discovered security vulnerabilities in the lower layers of smartphone communication modems and demonstrated the necessity of standardizing mobile communication modem security testing.

* Standardization: In mobile communication, conformance testing, which verifies normal operation in normal situations, has been standardized. However, standards for handling abnormal packets have not yet been established, hence the need for standardized security testing.

Professor Yongdae Kim's team from the School of Electrical Engineering at KAIST, in a joint research effort with Professor CheolJun Park's team from Kyung Hee University, announced on the 25th of July that they have discovered critical security vulnerabilities in the lower layers of smartphone communication modems. These vulnerabilities can incapacitate smartphone communication with just a single manipulated wireless packet (a data transmission unit in a network). They are extremely severe because they can potentially lead to remote code execution (RCE).

The research team utilized their self-developed 'LLFuzz' analysis framework to analyze the lower-layer state transitions and error handling logic of the modem and detect security vulnerabilities. LLFuzz precisely extracts vulnerabilities caused by implementation errors by comparing 3GPP* standard-based state machines against actual device responses.

* 3GPP: An international collaborative organization that creates global mobile communication standards.

The research team conducted experiments on 15 commercial smartphones from global manufacturers, including Apple, Samsung Electronics, Google, and Xiaomi, and discovered a total of 11 vulnerabilities. Seven of these were assigned official CVE (Common Vulnerabilities and Exposures) numbers, and manufacturers applied security patches for them. The remaining four have not yet been publicly disclosed.

While previous security research primarily focused on higher layers of mobile communication, such as NAS (Non-Access Stratum) and RRC (Radio Resource Control), the research team concentrated on analyzing the error handling logic of the lower layers, which manufacturers have often neglected. These vulnerabilities occurred in the lower layers of the communication modem (RLC, MAC, PDCP, PHY*), and because encryption and authentication are not applied at these layers, operational errors could be induced simply by injecting external signals.

* RLC, MAC, PDCP, PHY: Lower layers of LTE/5G communication, responsible for wireless resource allocation, error control, encryption, and physical layer transmission.

The research team released a demo video showing that when a manipulated wireless packet (malformed MAC packet), generated on an experimental laptop, was injected into commercial smartphones via a software-defined radio (SDR) device, the smartphone's communication modem (baseband) immediately crashed.

※ Experiment video: https://drive.google.com/file/d/1NOwZdu_Hf4ScG7LkwgEkHLa_nSV4FPb_/view?usp=drive_link

The video shows data being transmitted normally at 23 MB per second on the fast.com page; immediately after the manipulated packet is injected, the transmission stops and the mobile communication signal disappears. This intuitively demonstrates that a single wireless packet can cripple a commercial device's communication modem.

The vulnerabilities were found in the modem chip, a core component of smartphones responsible for calls, texts, and data communication.

Qualcomm: affects over 90 chipsets, including CVE-2025-21477 and CVE-2024-23385.
MediaTek: affects over 80 chipsets, including CVE-2024-20076, CVE-2024-20077, and CVE-2025-20659.
Samsung: CVE-2025-26780 (targets the latest chipsets such as Exynos 2400 and 5400).
Apple: CVE-2024-27870 (shares the same vulnerability as the Qualcomm CVE).

The affected modem chips are found not only in premium smartphones but also in low-end smartphones, tablets, smartwatches, and IoT devices, so their broad diffusion creates widespread potential for user harm. Furthermore, the research team experimentally tested 5G lower-layer vulnerabilities and found two in just two weeks. Considering that 5G vulnerability checks have not been widely conducted, many more vulnerabilities may exist in the lower layers of baseband chips.

Professor Yongdae Kim explained, "The lower layers of smartphone communication modems are not subject to encryption or authentication, creating a structural risk where devices can accept arbitrary signals from external sources." He added, "This research demonstrates the necessity of standardizing mobile communication modem security testing for smartphones and other IoT devices."

The research team is continuing additional analysis of the 5G lower layers using LLFuzz and is also developing tools for testing LTE and 5G upper layers, while pursuing collaborations for future tool disclosure. The team's stance is that "as technological complexity increases, systematic security inspection systems must evolve in parallel."

First author Tuan Dinh Hoang, a Ph.D. student in the School of Electrical Engineering, will present the research results in August at USENIX Security 2025, one of the world's most prestigious conferences in cybersecurity.

※ Paper title: LLFuzz: An Over-the-Air Dynamic Testing Framework for Cellular Baseband Lower Layers (Tuan Dinh Hoang and Taekkyung Oh, KAIST; CheolJun Park, Kyung Hee Univ.; Insu Yun and Yongdae Kim, KAIST)
※ USENIX paper page: https://www.usenix.org/conference/usenixsecurity25/presentation/hoang (not yet public); lab homepage paper: https://syssec.kaist.ac.kr/pub/2025/LLFuzz_Tuan.pdf
※ Open-source repository: https://github.com/SysSec-KAIST/LLFuzz (to be released)

This research was conducted with support from the Institute of Information & Communications Technology Planning & Evaluation (IITP), funded by the Ministry of Science and ICT.
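For readers unfamiliar with this style of testing, the Python sketch below illustrates the general idea of lower-layer mutation fuzzing: corrupt one byte of a well-formed packet, transmit it, and watch whether the modem stops responding. It is a self-contained simulated toy, not the LLFuzz code; the packet bytes, the SDR transmit call, and the liveness check are all hypothetical stand-ins.

```python
# Simulated toy of lower-layer mutation fuzzing. NOT the LLFuzz implementation;
# the packet bytes, SDR transmission, and liveness probe are placeholders.

import random

BASE_MAC_PDU = bytes.fromhex("3b21000000")             # hypothetical well-formed MAC PDU

def mutate(pdu: bytes) -> bytes:
    """Flip one byte of an otherwise valid PDU to create a malformed packet."""
    i = random.randrange(len(pdu))
    return pdu[:i] + bytes([pdu[i] ^ random.randrange(1, 256)]) + pdu[i + 1:]

def transmit_over_sdr(pdu: bytes) -> None:
    pass                                               # stands in for downlink injection via SDR

def device_still_attached() -> bool:
    return random.random() > 0.02                      # stands in for a real liveness probe

crash_candidates = []
for _ in range(100):
    pdu = mutate(BASE_MAC_PDU)
    transmit_over_sdr(pdu)
    if not device_still_attached():                    # modem stopped responding: keep the input
        crash_candidates.append(pdu.hex())

print("inputs coinciding with a (simulated) crash:", crash_candidates)
```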
2025.07.25
KAIST reveals for the first time the mechanism by which alcohol triggers liver inflammation
<(From left) Dr. Keungmo Yang, Professor Won-Il Jeong, Ph.D. candidate Kyurae Kim>

Excessive alcohol consumption causes alcoholic liver disease, and about 20% of these cases progress to alcohol-associated steatohepatitis (ASH), which can lead to liver cirrhosis and liver failure. Early diagnosis and treatment are therefore extremely important. A KAIST research team has identified a new molecular mechanism in which alcohol-damaged liver cells increase reactive oxygen species (ROS), leading to cell death and inflammatory responses. In addition, the team discovered that Kupffer cells, immune cells residing in the liver, act as a dual-function regulator that can either promote or suppress inflammation through interactions with liver cells.

KAIST (President Kwang-Hyung Lee) announced on the 17th that a research team led by Professor Won-Il Jeong from the Graduate School of Medical Science and Engineering, in collaboration with Professor Won Kim's team at Seoul National University Boramae Medical Center, has uncovered the molecular pathway of liver damage and inflammation caused by alcohol consumption. This finding offers new clues for the diagnosis and treatment of alcohol-associated liver disease (ALD).

Professor Won-Il Jeong's research team found that during chronic alcohol intake, expression of the vesicular glutamate transporter VGLUT3 increases, leading to glutamate accumulation in hepatocytes. Subsequent binge drinking causes rapid changes in intracellular calcium levels, which then trigger glutamate* secretion. The secreted glutamate stimulates the glutamate receptor mGluR5 on liver-resident macrophages (Kupffer cells), which induces ROS production and activates a pathological pathway resulting in hepatocyte death and inflammation.

* Glutamate: A type of amino acid involved in intercellular signaling, protein synthesis, and energy metabolism in various tissues including the brain and liver. In excess, it can cause overexcitation and death of nerve cells.

<Figure 1. Glutamate accumulation in perivenous hepatocytes through vesicular glutamate transporter 3 after 2-week EtOH intake and its release by binge drinking>

A particularly groundbreaking aspect of this study is the finding that damaged hepatocytes and Kupffer cells can form a "pseudosynapse," a structure similar to a synapse that was previously thought to occur only in the brain, enabling them to exchange signals. This is the first time such a phenomenon has been identified in the liver. The pseudosynapse forms when hepatocytes expand (ballooning) due to alcohol and become physically attached to Kupffer cells. Simply put, the damaged hepatocytes don't just die; they send distress signals to nearby immune cells, prompting a response.

This discovery proposes a new paradigm: even in peripheral organs, direct structural contact between cells can allow signal transmission. It also shows that damaged hepatocytes can actively stimulate macrophages and induce regeneration through cell death, revealing the liver's autonomous recovery function.

The team also confirmed in animal models that genetic or pharmacological inhibition of VGLUT3, mGluR5, or the ROS-producing enzyme NOX2 reduces alcohol-induced liver damage. By analyzing blood and liver tissue samples, they further confirmed that the same mechanism observed in animal models is present in human patients with ALD.

Professor Won-Il Jeong of KAIST said, "These findings may serve as new molecular targets for early diagnosis and treatment of ASH in the future."

This study was jointly led by Dr. Keungmo Yang (now at Yeouido St. Mary's Hospital) and Kyurae Kim, a doctoral candidate at KAIST, who served as co-first authors. It was conducted in collaboration with Professor Won Kim's team at Seoul National University Boramae Medical Center and was published in the journal Nature Communications on July 1.

※ Article title: Binge drinking triggers VGLUT3-mediated glutamate secretion and subsequent hepatic inflammation by activating mGluR5/NOX2 in Kupffer cells
※ DOI: https://doi.org/10.1038/s41467-025-60820-3

This study was supported by the Ministry of Science and ICT through the National Research Foundation of Korea's Global Leader Program, Mid-Career Researcher Program, and Bio & Medical Technology Development Program.
2025.07.17
KAIST Successfully Implements 3D Brain-Mimicking Platform with 6x Higher Precision
<(From left) Dr. Dongjo Yoon, Professor Je-Kyun Park from the Department of Bio and Brain Engineering, (upper right) Professor Yoonkey Nam, Dr. Soo Jee Kim>

Existing three-dimensional (3D) neuronal culture technology has limitations in brain research due to the difficulty of precisely replicating the brain's complex multilayered structure and the lack of a platform that can simultaneously analyze both structure and function. A KAIST research team has successfully developed an integrated platform that can implement brain-like layered neuronal structures using 3D printing technology and precisely measure neuronal activity within them.

KAIST (President Kwang Hyung Lee) announced on the 16th of July that a joint research team led by Professors Je-Kyun Park and Yoonkey Nam from the Department of Bio and Brain Engineering has developed an integrated platform capable of fabricating high-resolution 3D multilayer neuronal networks using low-viscosity natural hydrogels with mechanical properties similar to brain tissue, and simultaneously analyzing their structural and functional connectivity.

Conventional bioprinting technology uses high-viscosity bioinks for structural stability, but this limits neuronal proliferation and neurite growth. Conversely, neuron-friendly low-viscosity hydrogels are difficult to pattern precisely, leading to a fundamental trade-off between structural stability and biological function. The research team completed a sophisticated and stable brain-mimicking platform by combining three key technologies that enable the precise creation of brain structure with dilute gels, accurate alignment between layers, and simultaneous observation of neuronal activity.

The three core technologies are: ▲ 'Capillary Pinning Effect' technology, which enables the dilute gel (hydrogel) to adhere firmly to a stainless steel mesh (micromesh) to prevent it from flowing, thereby reproducing brain structures with six times greater precision (resolution of 500 μm or less) than conventional methods; ▲ the '3D Printing Aligner,' a cylindrical design that ensures the printed layers are precisely stacked without misalignment, guaranteeing the accurate assembly of multilayer structures and stable integration with microelectrode chips; and ▲ 'Dual-mode Analysis System' technology, which simultaneously measures electrical signals from below and observes cell activity with light (calcium imaging) from above, allowing the functional operation of interlayer connections to be verified through multiple methods at once.

<Figure 1. Platform integrating brain-structure-mimicking neural network model construction and functional measurement technology>

The research team successfully implemented a three-layered mini-brain structure using 3D printing with a fibrin hydrogel, which has elastic properties similar to those of the brain, and experimentally verified actual neural cells transmitting and receiving signals within it. Cortical neurons were placed in the upper and lower layers, while the middle layer was left empty but designed to allow neurons to penetrate and connect through it. Electrical signals were measured from the lower layer using a microsensor (electrode chip), and cell activity was observed from the upper layer using light (calcium imaging). The results showed that when electrical stimulation was applied, neural cells in both the upper and lower layers responded simultaneously. When a synapse-blocking agent (synaptic blocker) was introduced, the response decreased, proving that the neural cells were genuinely connected and transmitting signals.

Professor Je-Kyun Park of KAIST explained, "This research is a joint development achievement of an integrated platform that can simultaneously reproduce the complex multilayered structure and function of brain tissue. Compared to existing technologies, where signal measurement was impossible for more than 14 days, this platform maintains a stable microelectrode chip interface for over 27 days, allowing real-time analysis of structure-function relationships. It can be utilized in various brain research fields such as neurological disease modeling, brain function research, neurotoxicity assessment, and neuroprotective drug screening in the future."

The research, in which Dr. Soo Jee Kim and Dr. Dongjo Yoon from KAIST's Department of Bio and Brain Engineering participated as co-first authors, was published online in the international journal Biosensors and Bioelectronics on June 11, 2025.

※ Paper: Hybrid biofabrication of multilayered 3D neuronal networks with structural and functional interlayer connectivity
※ DOI: https://doi.org/10.1016/j.bios.2025.117688
2025.07.16
KAIST Develops Robots That React to Danger Like Humans
<(From left) Ph.D. candidate See-On Park, Professor Jongwon Lee, and Professor Shinhyun Choi>

Amid the joint advancement of artificial intelligence and robotics, developing technologies that enable robots to perceive and respond to their surroundings as efficiently as humans has become a crucial task. In this context, Korean researchers are gaining attention for implementing an artificial sensory nervous system that mimics the sensory nervous system of living organisms without the need for separate complex software or circuitry. This technology is expected to be applied in fields such as ultra-small robots and robotic prosthetics, where intelligent and energy-efficient responses to external stimuli are essential.

KAIST (President Kwang Hyung Lee) announced on July 15th that a joint research team led by Endowed Chair Professor Shinhyun Choi of the School of Electrical Engineering at KAIST and Professor Jongwon Lee of the Department of Semiconductor Convergence at Chungnam National University (President Jung Kyum Kim) has developed a next-generation neuromorphic semiconductor-based artificial sensory nervous system. The system mimics the functions of a living organism's sensory nervous system and enables a new type of robotic system that can respond efficiently to external stimuli.

In nature, animals, including humans, ignore safe or familiar stimuli and selectively react sensitively to important or dangerous ones. This selective response prevents unnecessary energy consumption while maintaining rapid awareness of critical signals. For instance, the sound of an air conditioner or the feel of clothing against the skin soon becomes familiar and is disregarded, but if someone calls your name or a sharp object touches your skin, rapid focus and response follow. These behaviors are regulated by the 'habituation' and 'sensitization' functions of the sensory nervous system.

Attempts have consistently been made to apply these sensory nervous system functions to create robots that respond to external environments as efficiently as humans. However, implementing complex neural characteristics such as habituation and sensitization in robots has faced difficulties in miniaturization and energy efficiency because separate software or complex circuitry was required. In particular, there have been attempts to utilize memristors, a type of neuromorphic semiconductor. A memristor is a next-generation electrical device that has been widely used as an artificial synapse because it can store analog values in the form of device resistance. However, existing memristors allowed only simple monotonic changes in conductivity, limiting their ability to mimic the complex characteristics of the nervous system.

To overcome these limitations, the research team developed a new memristor capable of reproducing complex neural response patterns such as habituation and sensitization within a single device. By introducing additional layers inside the memristor that alter conductivity in opposite directions, the device can more realistically emulate the dynamic synaptic behaviors of a real nervous system, for example decreasing its response to repeated safe stimuli but quickly regaining sensitivity when a danger signal is detected.

<New memristor mimicking functions of the sensory nervous system such as habituation/sensitization>

Using this new memristor, the research team built an artificial sensory nervous system capable of recognizing touch and pain, and applied it to a robotic hand to test its performance. When safe tactile stimuli were repeatedly applied, the robot hand, which initially reacted sensitively to unfamiliar tactile stimuli, gradually showed habituation by ignoring the stimuli. Later, when stimuli were applied together with an electric shock, it recognized this as a danger signal and showed sensitization by reacting sensitively again. This experimentally proved that robots can respond to stimuli efficiently, as humans do, without separate complex software or processors, demonstrating the feasibility of energy-efficient neuro-inspired robots.

<Robot arm with memristor-based artificial sensory nervous system>

See-On Park, a researcher at KAIST, stated, "By mimicking the human sensory nervous system with next-generation semiconductors, we have opened up the possibility of implementing a new concept of robots that are smarter and more energy-efficient in responding to external environments." He added, "This technology is expected to be utilized in various convergence fields of next-generation semiconductors and robotics, such as ultra-small robots, military robots, and medical robots like robotic prosthetics."

This research was published online on July 1st in the international journal Nature Communications, with Ph.D. candidate See-On Park as the first author.

Paper title: Experimental demonstration of third-order memristor-based artificial sensory nervous system for neuro-inspired robotics
DOI: https://doi.org/10.1038/s41467-025-60818-x

This research was supported by the National Research Foundation of Korea's Next-Generation Intelligent Semiconductor Technology Development Project, the Mid-Career Researcher Program, the PIM Artificial Intelligence Semiconductor Core Technology Development Project, the Excellent New Researcher Program, and the National NanoFab Center's (NNFC) Nano Convergence Technology Division Nano-Medical Device Project.
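As a purely numerical illustration of the habituation and sensitization behaviors the memristor reproduces in hardware, the short Python sketch below weakens a response weight under repeated safe stimuli and restores it when a danger signal arrives. The update rule and constants are illustrative assumptions, not the device model reported in the paper.

```python
# Toy model of habituation and sensitization. Update rule and constants are
# illustrative assumptions, not the memristor device physics from the paper.

def update_response(weight, danger):
    if danger:
        return min(1.0, weight + 0.5)   # sensitization: a danger signal restores sensitivity
    return max(0.1, weight * 0.8)       # habituation: repeated safe stimuli fade the response

weight = 1.0
events = ["touch"] * 6 + ["touch+shock"] + ["touch"] * 3
for event in events:
    weight = update_response(weight, danger="shock" in event)
    print(f"{event:12s} -> response strength {weight:.2f}")
```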
2025.07.16
A KAIST Team Engineers a Microbial Platform for Efficient Lutein Production
<(From left) Ph.D. candidate Hyunmin Eun, Distinguished Professor Sang Yup Lee, Dr. Cindy Pricilia Surya Prabowo>

The application of systems metabolic engineering strategies, along with the construction of an electron channeling system, has enabled the first gram-per-liter scale production of lutein from Corynebacterium glutamicum, providing a viable alternative to plant-derived lutein production.

A research group at KAIST has successfully engineered a microbial strain capable of producing lutein at industrially relevant levels. The team, led by Distinguished Professor Sang Yup Lee from the Department of Chemical and Biomolecular Engineering, developed a novel C. glutamicum strain using systems metabolic engineering strategies to overcome the limitations of previous microbial lutein production efforts. This research is expected to benefit the efficient production of other industrially important natural products used in food, pharmaceuticals, and cosmetics.

Lutein is a xanthophyll carotenoid found in egg yolk, fruits, and vegetables, known for its role in protecting the eyes from oxidative stress and reducing the risk of macular degeneration and cataracts. Currently, commercial lutein is predominantly extracted from marigold flowers; however, this approach has several drawbacks, including long cultivation times, high labor costs, and inefficient extraction yields, making it economically unfeasible for large-scale production. These challenges have driven the demand for alternative production methods.

To address these issues, KAIST researchers, including Ph.D. candidate Hyunmin Eun, Dr. Cindy Pricilia Surya Prabowo, and Distinguished Professor Sang Yup Lee, applied systems metabolic engineering strategies to engineer C. glutamicum, a GRAS (Generally Recognized As Safe) microorganism widely used in industrial fermentation. Unlike Escherichia coli, which was previously explored for microbial lutein production, C. glutamicum lacks endotoxins, making it a safer and more viable option for food and pharmaceutical applications.

The team's work, entitled "Gram-per-litre scale production of lutein by engineered Corynebacterium," was published in Nature Synthesis on July 4, 2025. This research details the high-level production of lutein using glucose as a renewable carbon source via systems metabolic engineering. The team focused on eliminating metabolic bottlenecks that previously limited microbial lutein synthesis. By employing enzyme scaffold-based electron channeling strategies, the researchers improved metabolic flux towards lutein biosynthesis while minimizing unwanted byproducts.

<Lutein production metabolic pathway engineering>

To enhance productivity, bottleneck enzymes within the metabolic pathway were identified and optimized. The electron-requiring cytochrome P450 enzymes were found to play a major role in limiting lutein biosynthesis. To overcome this limitation, an electron channeling strategy was implemented, in which engineered cytochrome P450 enzymes and their reductase partners were spatially organized on synthetic scaffolds, allowing more efficient electron transfer and significantly increasing lutein production.

The engineered C. glutamicum strain was further optimized in fed-batch fermentation, achieving a record-breaking 1.78 g/L of lutein within 54 hours, with a content of 19.51 mg/gDCW and a productivity of 32.88 mg/L/h, the highest lutein production performance reported in any host to date. This milestone demonstrates the feasibility of replacing plant-based lutein extraction with microbial fermentation technology.

"We can anticipate that this microbial cell factory-based mass production of lutein will be able to replace the current plant extraction-based process," said Ph.D. candidate Hyunmin Eun. He emphasized that the integrated metabolic engineering strategies developed in this study could be broadly applied to the efficient production of other valuable natural products used in pharmaceuticals and nutraceuticals.

<Schematic diagram of the microbial-based lutein production platform>

"As maintaining good health in an aging society becomes increasingly important, we expect that the technology and strategies developed here will play pivotal roles in producing other medically and nutritionally significant natural products," added Distinguished Professor Sang Yup Lee.

This work was supported by the Development of Next-generation Biorefinery Platform Technologies for Leading Bio-based Chemicals Industry project (2022M3J5A1056072) and the Development of Platform Technologies of Microbial Cell Factories for the Next-Generation Biorefineries project (2022M3J5A1056117) of the National Research Foundation, funded by the Korean Ministry of Science and ICT.

Source: Hyunmin Eun (first author), Cindy Pricilia Surya Prabowo (co-first author), and Sang Yup Lee (corresponding author). "Gram-per-litre scale production of lutein by engineered Corynebacterium." Nature Synthesis (published online).

For further information: Sang Yup Lee, Distinguished Professor of Chemical and Biomolecular Engineering, KAIST (leesy@kaist.ac.kr, Tel: +82-42-350-3930)
2025.07.14
KAIST Ushers in Era of Predicting ‘Optimal Alloys’ Using AI, Without High-Temperature Experiments
<Picture 1. (From left) Prof. Seungbum Hong, Ph.D. candidate Youngwoo Choi>

Steel alloys used in automobiles and machinery parts are typically manufactured through a melting process at high temperatures. The phenomenon in which an alloy melts without its composition changing is called "congruent melting." KAIST researchers have now addressed this behavior, traditionally assessable only through high-temperature experiments, using artificial intelligence (AI). The study draws attention as it proposes a new direction for future alloy development by predicting in advance how well alloy components will mix during melting, a long-standing challenge in the field.

KAIST (President Kwang Hyung Lee) announced on the 14th of July that Professor Seungbum Hong's research team from the Department of Materials Science and Engineering, in international collaboration with Professor Chris Wolverton's group at Northwestern University, has developed a high-accuracy machine learning model that predicts whether alloy components will remain stable during melting. This was achieved using formation energy data derived from Density Functional Theory (DFT)* calculations.

* Density Functional Theory (DFT): A computational quantum mechanical method used to investigate the electronic structure of many-body systems, especially atoms, molecules, and solids, based on electron density.

The research team combined formation energy values obtained via DFT with experimental melting reaction data to train a machine learning model on 4,536 binary compounds. Among the machine learning algorithms tested, an XGBoost-based classification model demonstrated the highest accuracy in predicting whether alloys would mix well, reaching a prediction accuracy of approximately 82.5%.

The team also applied the Shapley value method* to analyze the key features of the model. One major finding was that sharp changes in the slope of the formation energy curve (referred to as "convex hull sharpness") were the most significant factor: a steep slope indicates a composition with energetically favorable (i.e., stable) formation.

* Shapley value: An explainability method in AI used to determine how much each feature contributed to a prediction.

The most notable significance of this study is that it predicts alloy melting behavior without high-temperature experiments. This is especially useful for materials such as high-entropy alloys or ultra-heat-resistant alloys, which are difficult to handle experimentally, and the approach could be extended to the design of complex multi-component alloy systems in the future. Furthermore, the physical indicators identified by the AI model showed high consistency with actual experimental results on how well alloys mix and remain stable, suggesting that the model could be broadly applied to the development of various metal materials and the prediction of structural stability.

Professor Seungbum Hong of KAIST stated, "This research demonstrates how data-driven predictive materials development is possible by integrating computational methods, experimental data, and machine learning, departing from traditional experience-based alloy design." He added, "In the future, by incorporating state-of-the-art AI techniques such as generative models and reinforcement learning, we could enter an era where completely new alloys are designed automatically."

<Model performance and feature importance analysis for predicting melting congruency. (a) SHAP summary plot showing the impact of individual features on model predictions. (b) Confusion matrix illustrating the model's classification performance. (c) Receiver operating characteristic (ROC) curve with an AUC (area under the curve) score of 0.87, indicating strong classification performance.>

Ph.D. candidate Youngwoo Choi from the Department of Materials Science and Engineering at KAIST participated as the first author. The study was published in the May issue of APL Machine Learning, a journal in the field of machine learning published by the American Institute of Physics, and was selected as a "Featured Article."

※ Paper title: Machine learning-based melting congruency prediction of binary compounds using density functional theory-calculated formation energy
※ DOI: 10.1063/5.0247514

This research was supported by the Ministry of Science and ICT and the National Research Foundation of Korea.
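The workflow described above (a gradient-boosted classifier trained on DFT-derived descriptors, followed by Shapley-value feature attribution) can be sketched in a few lines of Python. The snippet below uses synthetic placeholder data rather than the authors' 4,536-compound dataset, so the feature names, labels, and hyperparameters are assumptions for illustration only.

```python
# Schematic of the workflow: train an XGBoost classifier on DFT-derived descriptors,
# then inspect feature attributions with SHAP. Synthetic placeholder data only.

import numpy as np
from xgboost import XGBClassifier
import shap

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),     # "convex hull sharpness" (placeholder descriptor)
    rng.uniform(-1.0, 0.0, n),    # formation energy in eV/atom (placeholder descriptor)
    rng.uniform(0.0, 0.5, n),     # electronegativity difference (placeholder descriptor)
])
# Synthetic label: congruent melting made more likely by a sharper convex hull.
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0.5).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X, y)
print("training accuracy:", model.score(X, y))

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)                # per-sample feature attributions
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```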
2025.07.14