KAIST NEWS
KAIST Succeeds in Real-Time Carbon Dioxide Monitoring Without Batteries or External Power
< (From left) Master's Student Gyurim Jang, Professor Kyeongha Kwon > KAIST (President Kwang Hyung Lee) announced on June 9th that a research team led by Professor Kyeongha Kwon from the School of Electrical Engineering, in a joint study with Professor Hanjun Ryu's team at Chung-Ang University, has developed a self-powered wireless carbon dioxide (CO2) monitoring system. This innovative system harvests fine vibrational energy from its surroundings to periodically measure CO2 concentrations. This breakthrough addresses a critical need in environmental monitoring: accurately understanding "how much" CO2 is being emitted to combat climate change and global warming. While CO2 monitoring technology is key to this, existing systems largely rely on batteries or wired power supplies, imposing limitations on installation and maintenance. The KAIST team tackled this by creating a self-powered wireless system that operates without external power. The core of this new system is an "Inertia-driven Triboelectric Nanogenerator (TENG)" that converts vibrations (with amplitudes ranging from 20 to 4,000 ㎛ and frequencies from 0 to 300 Hz) generated by industrial equipment or pipelines into electricity. This enables periodic CO2 concentration measurements and wireless transmission without the need for batteries. < Figure 1. Concept and configuration of self-powered wireless CO2 monitoring system using fine vibration harvesting (a) System block diagram (b) Photo of fabricated system prototype > The research team successfully amplified fine vibrations and induced resonance by combining spring-attached 4-stack TENGs. They achieved stable power production of 0.5 mW under conditions of 13 Hz and 0.56 g acceleration. The generated power was then used to operate a CO2 sensor and a Bluetooth Low Energy (BLE) system-on-a-chip (SoC). Professor Kyeongha Kwon emphasized, "For efficient environmental monitoring, a system that can operate continuously without power limitations is essential." She explained, "In this research, we implemented a self-powered system that can periodically measure and wirelessly transmit CO2 concentrations based on the energy generated from an inertia-driven TENG." She added, "This technology can serve as a foundational technology for future self-powered environmental monitoring platforms integrating various sensors." < Figure 2. TENG energy harvesting-based wireless CO2 sensing system operation results (c) Experimental setup (d) Measured CO2 concentration results powered by TENG and conventional DC power source > This research was published on June 1st in the internationally renowned academic journal Nano Energy (IF 16.8). Gyurim Jang, a master's student at KAIST, and Daniel Manaye Tiruneh, a master's student at Chung-Ang University, are the co-first authors of the paper. ※ Paper Title: Highly compact inertia-driven triboelectric nanogenerator for self-powered wireless CO2 monitoring via fine-vibration harvesting ※ DOI: 10.1016/j.nanoen.2025.110872 This research was supported by the Saudi Aramco-KAIST CO2 Management Center.
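To put the reported numbers in perspective, the back-of-the-envelope sketch below estimates how often such a harvester-powered node could take and transmit a reading. Only the 0.5 mW harvested-power figure comes from the article; the per-event energy costs of the CO2 sensor and the BLE transmission are illustrative assumptions, and conversion and storage losses are ignored.

```python
# Rough duty-cycle estimate for a vibration-harvester-powered CO2 node.
# The 0.5 mW harvested power (at 13 Hz, 0.56 g) is reported in the article;
# the per-event energy costs below are illustrative assumptions only.
HARVESTED_POWER_W = 0.5e-3       # reported average harvested power

SENSOR_READING_J = 30e-3         # assumed energy for one low-power CO2 measurement
BLE_TRANSMISSION_J = 1e-3        # assumed energy for one short BLE advertisement
CYCLE_ENERGY_J = SENSOR_READING_J + BLE_TRANSMISSION_J

# Time needed to buffer enough energy for one measure-and-transmit cycle,
# ignoring power-management and storage losses.
seconds_per_cycle = CYCLE_ENERGY_J / HARVESTED_POWER_W
print(f"One CO2 reading + transmission roughly every {seconds_per_cycle:.0f} s "
      f"(~{seconds_per_cycle / 60:.1f} min)")
```

Under these assumptions the node could report roughly once a minute, which is consistent with the periodic, battery-free operation described in the article.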
2025.06.09
KAIST Professor Jee-Hwan Ryu Receives Global IEEE Robotics Journal Best Paper Award
- Professor Jee-Hwan Ryu of Civil and Environmental Engineering receives the Best Paper Award from the Institute of Electrical and Electronics Engineers (IEEE) Robotics Journal, officially presented at ICRA, a world-renowned robotics conference.
- This is the highest level of international recognition, awarded to only the top 5 papers out of approximately 1,500 published in 2024.
- Securing a new working channel technology for soft growing robots expands the practicality and application possibilities in the field of soft robotics.
< Professor Jee-Hwan Ryu (left), Nam Gyun Kim, Ph.D. Candidate (right) from the KAIST Department of Civil and Environmental Engineering and KAIST Robotics Program > KAIST (President Kwang-Hyung Lee) announced on the 6th that Professor Jee-Hwan Ryu from the Department of Civil and Environmental Engineering received the 2024 Best Paper Award from Robotics and Automation Letters (RA-L), a premier IEEE journal, at the '2025 IEEE International Conference on Robotics and Automation (ICRA)' held in Atlanta, USA, on May 22nd. This Best Paper Award is a prestigious honor presented to only the top 5 of approximately 1,500 papers published in 2024, making it one of the most competitive and authoritative recognitions in the field. The award-winning paper by Professor Ryu proposes a novel working channel securing mechanism that significantly expands the practicality and application possibilities of 'Soft Growing Robots,' which are based on soft materials that move or perform tasks through a growing motion similar to plant roots. < IEEE Robotics Journal Award Ceremony > Existing soft growing robots move by inflating or contracting their bodies through increasing or decreasing internal pressure, which can lead to blockages in their internal passages. In contrast, the newly developed soft growing robot achieves a growing function while maintaining the internal passage pressure equal to the external atmospheric pressure, thereby successfully securing an internal passage while retaining the robot's flexible and soft characteristics. This structure allows various materials or tools to be freely delivered through the internal passage (working channel) within the robot and offers the advantage of performing multi-purpose tasks by flexibly replacing equipment according to the working environment. The research team fabricated a prototype to prove the effectiveness of this technology and verified its performance through various experiments. Specifically, in the slide plate experiment, they confirmed whether materials or equipment could pass through the robot's internal channel without obstruction, and in the pipe pulling experiment, they verified whether a long pipe-shaped tool could be pulled through the internal channel. < Figure 1. Overall hardware structure of the proposed soft growing robot (left) and a cross-sectional view of the inflatable structure (right) > Experimental results demonstrated that the internal channel remained stable even while the robot was growing, serving as a key basis for supporting the technology's practicality and scalability. Professor Jee-Hwan Ryu stated, "This award is very meaningful as it signifies the global recognition of Korea's robotics technology and academic achievements. In particular, it represents a technical advance that can greatly expand the practicality and application fields of soft growing robots. This achievement was possible thanks to the dedication and collaboration of the research team, and I will continue to contribute to the development of robotics technology through innovative research." < Figure 2. Material supplying mechanism of the Soft Growing Robot > This research was co-authored by Dongoh Seo, Ph.D. Candidate in Civil and Environmental Engineering, and Nam Gyun Kim, Ph.D. Candidate in Robotics. It was published in IEEE Robotics and Automation Letters on September 1, 2024. (Paper Title: Inflatable-Structure-Based Working-Channel Securing Mechanism for Soft Growing Robots, DOI: 10.1109/LRA.2024.3426322) This project was supported by both the National Research Foundation of Korea's Future Promising Convergence Technology Pioneer Research Project and its Mid-career Researcher Project.
2025.06.09
KAIST Introduces ‘Virtual Teaching Assistant’ That Can Answer Even in the Middle of the Night – Successful First Deployment in Classroom
- Research teams led by Prof. Yoonjae Choi (Kim Jaechul Graduate School of AI) and Prof. Hwajung Hong (Department of Industrial Design) at KAIST developed a Virtual Teaching Assistant (VTA) to support learning and class operations for a course with 477 students.
- The VTA responds 24/7 to students’ questions related to theory and practice by referencing lecture slides, coding assignments, and lecture videos.
- The system’s source code has been released to support future development of personalized learning support systems and their application in educational settings.
< Photo 1. (From left) PhD candidate Sunjun Kweon, Master's candidate Sooyohn Nam, PhD candidate Hyunseung Lim, Professor Hwajung Hong, Professor Yoonjae Choi > “At first, I didn’t have high expectations for the Virtual Teaching Assistant (VTA), but it turned out to be extremely helpful—especially when I had sudden questions late at night, I could get immediate answers,” said Jiwon Yang, a Ph.D. student at KAIST. “I was also able to ask questions I would’ve hesitated to bring up with a human TA, which led me to ask even more and ultimately improved my understanding of the course.” KAIST (President Kwang Hyung Lee) announced on June 5th that a joint research team led by Prof. Yoonjae Choi of the Kim Jaechul Graduate School of AI and Prof. Hwajung Hong of the Department of Industrial Design has successfully developed and deployed a Virtual Teaching Assistant (VTA) that provides personalized feedback to individual students even in large-scale classes. This study marks one of the first large-scale, real-world deployments in Korea: the VTA was introduced in the “Programming for Artificial Intelligence” course at the KAIST Kim Jaechul Graduate School of AI, taken by 477 master’s and Ph.D. students during the Fall 2024 semester, to evaluate its effectiveness and practical applicability in an actual educational setting. The AI teaching assistant developed in this study is a course-specialized agent, distinct from general-purpose tools like ChatGPT or conventional chatbots. The research team implemented a Retrieval-Augmented Generation (RAG) architecture, which automatically vectorizes a large volume of course materials—including lecture slides, coding assignments, and video lectures—and uses them as the basis for answering students’ questions. < Photo 2. Teaching Assistant demonstrating to the student how the Virtual Teaching Assistant works > When a student asks a question, the system searches for the most relevant course materials in real time based on the context of the query, and then generates a response. This process is not merely a simple call to a large language model (LLM), but rather a material-grounded question answering system tailored to the course content—ensuring both high reliability and accuracy in learning support. Sunjun Kweon, the first author of the study and head teaching assistant for the course, explained, “Previously, TAs were overwhelmed with repetitive and basic questions—such as concepts already covered in class or simple definitions—which made it difficult to focus on more meaningful inquiries.” He added, “After introducing the VTA, students began to reduce repeated questions and focus on more essential ones. As a result, the burden on TAs was significantly reduced, allowing us to concentrate on providing more advanced learning support.” In fact, compared to the previous year’s course, the number of questions that required direct responses from human TAs decreased by approximately 40%. < Photo 3. A student working with VTA. > The VTA, which was operated over a 14-week period, was actively used by more than half of the enrolled students, with a total of 3,869 Q&A interactions recorded. Notably, students without a background in AI or with limited prior knowledge tended to use the VTA more frequently, indicating that the system provided practical support as a learning aid, especially for those who needed it most. The analysis also showed that students tended to ask the VTA more frequently about theoretical concepts than they did with human TAs. This suggests that the AI teaching assistant created an environment where students felt free to ask questions without fear of judgment or discomfort, thereby encouraging more active engagement in the learning process. According to surveys conducted before, during, and after the course, students reported increased trust, response relevance, and comfort with the VTA over time. In particular, students who had previously hesitated to ask human TAs questions showed higher levels of satisfaction when interacting with the AI teaching assistant. < Figure 1. Internal structure of the AI Teaching Assistant (VTA) applied in this course. It follows a Retrieval-Augmented Generation (RAG) structure that builds a vector database from course materials (PDFs, recorded lectures, coding practice materials, etc.), searches for relevant documents based on student questions and conversation history, and then generates responses based on them. > Professor Yoonjae Choi, the lead instructor of the course and principal investigator of the study, stated, “The significance of this research lies in demonstrating that AI technology can provide practical support to both students and instructors. We hope to see this technology expanded to a wider range of courses in the future.” The research team has released the system’s source code on GitHub, enabling other educational institutions and researchers to develop their own customized learning support systems and apply them in real-world classroom settings. < Figure 2. Initial screen of the AI Teaching Assistant (VTA) introduced in the "Programming for AI" course. It asks for the student's ID along with simple guidelines, a mechanism that ensures only registered students can use the system, blocking indiscriminate external access and limiting use to enrolled students. > The related paper, titled “A Large-Scale Real-World Evaluation of an LLM-Based Virtual Teaching Assistant,” was accepted on May 9, 2025, to the Industry Track of ACL 2025, one of the most prestigious international conferences in the field of Natural Language Processing (NLP), recognizing the excellence of the research. < Figure 3. Example conversation with the AI Teaching Assistant (VTA). When a student inputs a class-related question, the system internally searches for relevant class materials and then generates an answer based on them. In this way, VTA provides learning support by reflecting class content in context. > This research was conducted with the support of the KAIST Center for Teaching and Learning Innovation, the National Research Foundation of Korea, and the National IT Industry Promotion Agency.
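The team's released code is on GitHub; the snippet below is not that system, but a minimal sketch of the retrieval-augmented generation pattern the article describes: course-material chunks are indexed, the chunks most relevant to a question are retrieved, and a grounded prompt is assembled for a language model. The example chunks are made up, and TF-IDF stands in for the real vector database and embedding model.

```python
# Minimal sketch of the RAG retrieval step behind a course-specialized virtual TA.
# Illustration of the pattern described above, not the released KAIST system:
# the course chunks are invented, TF-IDF replaces a learned embedding model,
# and the final LLM call is left as a comment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_chunks = [
    "Lecture 3: backpropagation computes gradients layer by layer via the chain rule.",
    "Assignment 2: implement a two-layer MLP and train it on MNIST.",
    "Lecture 7: attention weights are softmax-normalized similarity scores.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k course chunks most similar to the question."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(course_chunks + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [course_chunks[i] for i in scores.argsort()[::-1][:k]]

question = "How does backpropagation work?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer the student's question using ONLY the course material below.\n"
    f"Course material:\n{context}\n\nQuestion: {question}\nAnswer:"
)
print(prompt)  # in a deployed system, this grounded prompt would be sent to an LLM
```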
2025.06.05
KAIST Research Team Develops Electronic Ink for Room-Temperature Printing of High-Resolution, Variable-Stiffness Electronics
A team of researchers from KAIST and Seoul National University has developed a groundbreaking electronic ink that enables room-temperature printing of variable-stiffness circuits capable of switching between rigid and soft modes. This advancement marks a significant leap toward next-generation wearable, implantable, and robotic devices. < Photo 1. (From left) Professor Jae-Woong Jeong and PhD candidate Simok Lee of the School of Electrical Engineering, (in separate bubbles, from left) Professor Gun-Hee Lee of Pusan National University, Professor Seongjun Park of Seoul National University, Professor Steve Park of the Department of Materials Science and Engineering> Variable-stiffness electronics are at the forefront of adaptive technology, offering the ability for a single device to transition between rigid and soft modes depending on its use case. Gallium, a metal known for its high rigidity contrast between solid and liquid states, is a promising candidate for such applications. However, its use has been hindered by challenges including high surface tension, low viscosity, and undesirable phase transitions during manufacturing. On June 4th, a research team led by Professor Jae-Woong Jeong from the School of Electrical Engineering at KAIST, Professor Seongjun Park from the Digital Healthcare Major at Seoul National University, and Professor Steve Park from the Department of Materials Science and Engineering at KAIST introduced a novel liquid metal electronic ink. This ink allows for micro-scale circuit printing – thinner than a human hair – at room temperature, with the ability to reversibly switch between rigid and soft modes depending on temperature. The new ink combines printable viscosity with excellent electrical conductivity, enabling the creation of complex, high-resolution multilayer circuits comparable to commercial printed circuit boards (PCBs). These circuits can dynamically change stiffness in response to temperature, presenting new opportunities for multifunctional electronics, medical technologies, and robotics. Conventional electronics typically have fixed form factors – either rigid for durability or soft for wearability. Rigid devices like smartphones and laptops offer robust performance but are uncomfortable when worn, while soft electronics are more comfortable but lack precise handling. As demand grows for devices that can adapt their stiffness to context, variable-stiffness electronics are becoming increasingly important. < Figure 1. Fabrication process of stable, high-viscosity electronic ink by dispersing micro-sized gallium particles in a polymer matrix (left). High-resolution large-area circuit printing process through pH-controlled chemical sintering (right). > To address this challenge, the researchers focused on gallium, which melts just below body temperature. Solid gallium is quite stiff, while its liquid form is fluid and soft. Despite its potential, gallium’s use in electronic printing has been limited by its high surface tension and instability when melted. To overcome these issues, the team developed a pH-controlled liquid metal ink printing process. By dispersing micro-sized gallium particles into a hydrophilic polyurethane matrix using a neutral solvent (dimethyl sulfoxide, or DMSO), they created a stable, high-viscosity ink suitable for precision printing. During post-print heating, the DMSO decomposes to form an acidic environment, which removes the oxide layer on the gallium particles. 
This triggers the particles to coalesce into electrically conductive networks with tunable mechanical properties. The resulting printed circuits exhibit fine feature sizes (~50 μm), high conductivity (2.27 × 10⁶ S/m), and a stiffness modulation ratio of up to 1,465 – allowing the material to shift from plastic-like rigidity to rubber-like softness. Furthermore, the ink is compatible with conventional printing techniques such as screen printing and dip coating, supporting large-area and 3D device fabrication. < Figure 2. Key features of the electronic ink. (i) High-resolution printing and multilayer integration capability. (ii) Batch fabrication capability through large-area screen printing. (iii) Complex three-dimensional structure printing capability through dip coating. (iv) Excellent electrical conductivity and stiffness control capability.> The team demonstrated this technology by developing a multi-functional device that operates as a rigid portable electronic under normal conditions but transforms into a soft wearable healthcare device when attached to the body. They also created a neural probe that remains stiff during surgical insertion for accurate positioning but softens once inside brain tissue to reduce inflammation – highlighting its potential for biomedical implants. < Figure 3. Variable stiffness wearable electronics with high-resolution circuits and multilayer structure comparable to commercial printed circuit boards (PCBs). Functions as a rigid portable electronic device at room temperature, then transforms into a wearable healthcare device by softening at body temperature upon skin contact.> “The core achievement of this research lies in overcoming the longstanding challenges of liquid metal printing through our innovative technology,” said Professor Jeong. “By controlling the ink’s acidity, we were able to electrically and mechanically connect printed gallium particles, enabling the room-temperature fabrication of high-resolution, large-area circuits with tunable stiffness. This opens up new possibilities for future personal electronics, medical devices, and robotics.” < Figure 4. Body-temperature softening neural probe implemented by coating electronic ink on an optical waveguide structure. (Left) Remains rigid during surgery for precise manipulation and brain insertion, then softens after implantation to minimize mechanical stress on the brain and greatly enhance biocompatibility. (Right) > This research was published in Science Advances under the title, “Phase-Change Metal Ink with pH-Controlled Chemical Sintering for Versatile and Scalable Fabrication of Variable Stiffness Electronics.” The work was supported by the National Research Foundation of Korea, the Boston-Korea Project, and the BK21 FOUR Program.
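As a sense of scale for the reported electrical figures, the short calculation below estimates the resistance of a single printed trace from R = L / (σA). The conductivity and the ~50 μm line width are taken from the article; the film thickness and trace length are assumed purely for illustration.

```python
# Resistance of a printed trace, R = L / (sigma * A).
sigma = 2.27e6      # S/m, conductivity reported in the article
width = 50e-6       # m, ~50 um printed feature size reported in the article
thickness = 10e-6   # m, assumed film thickness (illustrative)
length = 10e-3      # m, assumed trace length (illustrative)

resistance = length / (sigma * width * thickness)
print(f"A 10 mm long, 50 um x 10 um trace: ~{resistance:.1f} ohm")  # about 8.8 ohm
```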
2025.06.04
RAIBO Runs over Walls with Feline Agility... Ready for Effortless Search over Mountainous and Rough Terrains
< Photo 1. Research Team Photo (Professor Jemin Hwangbo, second from right in the front row) > KAIST's quadrupedal robot, RAIBO, can now move at high speed across discontinuous and complex terrains such as stairs, gaps, walls, and debris. It has demonstrated its ability to run on vertical walls, leap over 1.3-meter-wide gaps, sprint at approximately 14.4 km/h over stepping stones, and move quickly and nimbly on terrain combining 30° slopes, stairs, and stepping stones. RAIBO is expected to be deployed soon for practical missions such as disaster site exploration and mountain searches. Professor Jemin Hwangbo's research team in the Department of Mechanical Engineering at KAIST announced on June 3rd that they have developed a quadrupedal robot navigation framework capable of high-speed locomotion at 14.4 km/h (4 m/s) even on discontinuous and complex terrains such as walls, stairs, and stepping stones. The research team developed a quadrupedal navigation system that enables the robot to reach its target destination quickly and safely in complex and discontinuous terrain. To achieve this, they approached the problem by breaking it down into two stages: first, developing a planner for planning foothold positions, and second, developing a tracker to accurately follow the planned foothold positions. First, the planner module quickly searches for physically feasible foothold positions using a sampling-based optimization method with neural network-based heuristics and verifies the optimal path through simulation rollouts. While existing methods considered various factors such as contact timing and robot posture in addition to foothold positions, this research significantly reduced computational complexity by setting only foothold positions as the search space. Furthermore, inspired by the way cats walk, the introduction of a structure in which the hind feet step on the same spots as the front feet reduced the computational complexity even further. < Figure 1. High-speed navigation across various discontinuous terrains > Second, the tracker module is trained to accurately step on the planned foothold positions. The tracker is trained through reinforcement learning, and during this process a generative model called the 'map generator' provides the distribution of target terrains at an appropriate difficulty. This generative model is trained simultaneously and adversarially with the tracker, allowing the tracker to progressively adapt to more challenging terrain. Subsequently, a sampling-based planner was designed to generate feasible foothold plans that reflect the characteristics and performance of the trained tracker. This hierarchical structure showed superior performance in both planning speed and stability compared to existing techniques, and experiments proved its high-speed locomotion capabilities across various obstacles and discontinuous terrains, as well as its general applicability to unseen terrains. Professor Jemin Hwangbo stated, "We approached the problem of high-speed navigation in discontinuous terrain, which previously required a significantly large amount of computation, from the simple perspective of how to select the foothold positions. Inspired by the paw placement of cats, allowing the hind feet to step where the front feet stepped drastically reduced the computation. We expect this to significantly expand the range of discontinuous terrain that walking robots can overcome and enable them to traverse it at high speeds, contributing to the robot's ability to perform practical missions such as disaster site exploration and mountain searches." This research achievement was published in the May 2025 issue of the international journal Science Robotics. (Paper Title: High-speed control and navigation for quadrupedal robots on complex and discrete terrain, https://www.science.org/doi/10.1126/scirobotics.ads6192) YouTube Link: https://youtu.be/EZbM594T3c4?si=kfxLF2XnVUvYVIyk
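The sketch below illustrates, on a toy 1-D row of stepping stones, the sample-and-evaluate idea behind a planner of this kind: candidate foothold sequences are sampled, each is scored with a simple rollout-style cost, and the best plan is kept, with the hind feet reusing the front-foot footholds as in the cat-inspired gait. The stone layout, stride limit, and cost terms are invented for illustration and are not the paper's planner.

```python
# Toy sampling-based foothold planner: sample candidate foothold sequences,
# score each with a rollout-style cost, keep the best. Stones, stride limit,
# and cost terms are illustrative only.
import random

stones = [0.3, 0.4, 0.9, 1.3, 1.6, 2.2, 2.3, 3.0]   # candidate footholds (m)
MAX_STRIDE = 0.8                                     # assumed maximum stride (m)
GOAL = 3.0                                           # target position (m)

def rollout_cost(plan: list[float]) -> float:
    """Penalize infeasible strides, uneven steps, and distance left to the goal."""
    cost, pos = 0.0, 0.0
    for foothold in plan:
        stride = foothold - pos
        if stride <= 0 or stride > MAX_STRIDE:
            cost += 100.0          # infeasible step
        cost += stride ** 2        # prefer moderate, even strides
        pos = foothold
    return cost + 10.0 * abs(GOAL - pos)

def sample_plan(n_steps: int = 5) -> list[float]:
    """Sample a monotonically advancing sequence of footholds."""
    return sorted(random.sample(stones, n_steps))

best_plan = min((sample_plan() for _ in range(2000)), key=rollout_cost)
print("front-foot plan:", best_plan)   # hind feet would reuse these footholds
```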
2025.06.04
Professor Hyun Myung's Team Wins First Place in a Challenge at IEEE ICRA
< Photo 1. (From left) Daebeom Kim (Team Leader, Ph.D. student), Seungjae Lee (Ph.D. student), Seoyeon Jang (Ph.D. student), Jei Kong (Master's student), Professor Hyun Myung > A team from the Urban Robotics Lab, led by Professor Hyun Myung of the KAIST School of Electrical Engineering, achieved a remarkable first-place overall victory in the Nothing Stands Still Challenge (NSS Challenge) 2025, held at the 2025 IEEE International Conference on Robotics and Automation (ICRA), the world's most prestigious robotics conference, from May 19 to 23 in Atlanta, USA. The NSS Challenge was co-hosted by HILTI, a global construction company based in Liechtenstein, and Stanford University's Gradient Spaces Group. It is an expanded version of the HILTI SLAM (Simultaneous Localization and Mapping)* Challenge, which has been held since 2021, and is considered one of the most prominent challenges at the 2025 IEEE ICRA. *SLAM: Refers to Simultaneous Localization and Mapping, a technology where robots, drones, autonomous vehicles, etc., determine their own position and simultaneously create a map of their surroundings. < Photo 2. A scene from the oral presentation on the winning team's technology (Speakers: Seungjae Lee and Seoyeon Jang, Ph.D. candidates of the KAIST School of Electrical Engineering) > This challenge primarily evaluates how accurately and robustly LiDAR scan data, collected at various times, can be registered in situations with frequent structural changes, such as construction and industrial environments. In particular, it is regarded as a highly technical competition because it deals with multi-session localization and mapping (multi-session SLAM) technology that responds to structural changes occurring over multiple timeframes, rather than just single-point registration accuracy. The Urban Robotics Lab team secured first place overall, surpassing National Taiwan University (3rd place) and Northwestern Polytechnical University of China (2nd place) by a significant margin, with their unique localization and mapping technology that solves the problem of registering LiDAR data collected across multiple times and spaces. The winning team will be awarded a prize of $4,000. < Figure 1. Example of Multiway-Registration for Registering Multiple Scans > The Urban Robotics Lab team independently developed a multiway-registration framework that can robustly register multiple scans even without prior connection information. This framework consists of an algorithm for summarizing feature points within scans and finding correspondences (CubicFeat), an algorithm for performing global registration based on the found correspondences (Quatro), and an algorithm for refining results based on change detection (Chamelion). This combination of technologies ensures stable registration performance based on fixed structures, even in highly dynamic industrial environments. < Figure 2. Example of Change Detection Using the Chamelion Algorithm > LiDAR scan registration is a core component of SLAM in various autonomous systems such as autonomous vehicles, autonomous robots, walking robots, and flying vehicles. Professor Hyun Myung of the School of Electrical Engineering stated, "This award-winning technology is evaluated as a case that simultaneously proves both academic value and industrial applicability by maximizing the performance of precisely estimating the relative positions between different scans, even in complex environments. I am grateful to the students who challenged themselves and never gave up, even when many teams dropped out because of the high level of difficulty." < Figure 3. Competition Result Board; a Lower RMSE (Root Mean Squared Error) Indicates a Higher Score (Unit: meters) > The Urban Robotics Lab team first participated in the SLAM Challenge in 2022, winning second place among academic teams, and in 2023, they secured first place overall in the LiDAR category and first place among academic teams in the vision category.
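The team's CubicFeat/Quatro/Chamelion pipeline itself is not reproduced here; the sketch below shows only the textbook closed-form rigid-alignment step (Kabsch/SVD) that scan-registration pipelines of this kind build on, applied to synthetic corresponding points.

```python
# Closed-form rigid alignment (Kabsch/SVD): given matched 3-D points from two
# scans, find rotation R and translation t minimizing ||R @ src + t - dst||.
# A generic textbook building block of scan registration, not the team's code.
import numpy as np

def kabsch(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) corresponding points. Returns R (3, 3) and t (3,)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: recover a known 30-degree yaw and a translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
yaw = np.deg2rad(30.0)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = kabsch(src, dst)
print(np.allclose(R_est, R_true), np.round(t_est, 3))   # True [ 1. -2.  0.5]
```

In a full multiway-registration setting, pairwise estimates like this are combined and refined jointly across all scans rather than chained one pair at a time.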
2025.05.30
KAIST-UIUC researchers develop a treatment platform to disable the ‘biofilm’ shield of superbugs
< (From left) Ph.D. Candidate Joo Hun Lee (co-author), Professor Hyunjoon Kong (co-corresponding author) and Postdoctoral Researcher Yujin Ahn (co-first author) from the Department of Chemical and Biomolecular Engineering of the University of Illinois at Urbana-Champaign and Ju Yeon Chung (co-first author) from the Integrated Master's and Doctoral Program, and Professor Hyun Jung Chung (co-corresponding author) from the Department of Biological Sciences of KAIST > A major cause of hospital-acquired infections, the superbug methicillin-resistant Staphylococcus aureus (MRSA) not only exhibits strong resistance to existing antibiotics but also forms a dense biofilm that blocks the effects of external treatments. To meet this challenge, KAIST researchers, in collaboration with an international team, successfully developed a platform that utilizes microbubbles to deliver gene-targeted nanoparticles capable of breaking down the biofilms, offering an innovative solution for treating infections resistant to conventional antibiotics. KAIST (President Kwang Hyung Lee) announced on May 29 that a research team led by Professor Hyun Jung Chung from the Department of Biological Sciences, in collaboration with Professor Hyunjoon Kong's team at the University of Illinois, has developed a microbubble-based nano-gene delivery platform (BTN MB) that precisely delivers gene suppressors into bacteria to effectively remove biofilms formed by MRSA. The research team first designed short DNA oligonucleotides that simultaneously suppress three major MRSA genes related to biofilm formation (icaA), cell division (ftsZ), and antibiotic resistance (mecA), and engineered nanoparticles (BTN) to effectively deliver them into the bacteria. < Figure 1. Effective biofilm treatment using biofilm-targeting nanoparticles controlled by microbubbler system. Schematic illustration of BTN delivery with microbubbles (MB), enabling effective permeation of ASOs targeting bacterial genes within biofilms infecting skin wounds. Gene silencing of targets involved in biofilm formation, bacterial proliferation, and antibiotic resistance leads to effective biofilm removal and antibacterial efficacy in vivo. > In addition, microbubbles (MB) were used to increase the permeability of the microbial membrane, specifically the biofilm formed by MRSA. By combining these two technologies, the team implemented a dual-strike strategy that fundamentally blocks bacterial growth and prevents resistance acquisition. This treatment system operates in two stages. First, the MBs induce pressure changes within the bacterial biofilm, allowing the BTNs to penetrate. Then, the BTNs slip through the gaps in the biofilm and enter the bacteria, delivering the gene suppressors precisely. This leads to gene regulation within MRSA, simultaneously blocking biofilm regeneration, cell proliferation, and antibiotic resistance expression. In experiments conducted in a porcine skin model and a mouse wound model infected with MRSA biofilm, the BTN MB treatment group showed a significant reduction in biofilm thickness, as well as remarkable decreases in bacterial count and inflammatory responses. < Figure 2. (a) Schematic illustration on the evaluation of treatment efficacy of BTN-MB gene therapy. (b) Reduction in MRSA biofilm mass via simultaneous inhibition of multiple genes. (c, d) Antibacterial efficacy of BTN-MB over time in a porcine skin infection biofilm model.
(e) Schematic of the experimental setup to verify antibacterial efficacy in a mouse skin wound infection model. (f) Wound healing effects in mice. (g) Antibacterial effects at the wound site. (h) Histological analysis results. > These results are difficult to achieve with conventional antibiotic monotherapy and demonstrate the potential for treating a wide range of resistant bacterial infections. Professor Hyun Jung Chung of KAIST, who led the research, stated, “This study presents a new therapeutic solution that combines nanotechnology, gene suppression, and physical delivery strategies to address superbug infections that existing antibiotics cannot resolve. We will continue our research with the aim of expanding its application to systemic infections and various other infectious diseases.” < (From left) Ju Yeon Chung from the Integrated Master's and Doctoral Program, and Professor Hyun Jung Chung from the Department of Biological Sciences > The study was co-first authored by Ju Yeon Chung, a graduate student in the Department of Biological Sciences at KAIST, and Dr. Yujin Ahn from the University of Illinois. The study was published online on May 19 in the journal, Advanced Functional Materials. ※ Paper Title: Microbubble-Controlled Delivery of Biofilm-Targeting Nanoparticles to Treat MRSA Infection ※ DOI: https://doi.org/10.1002/adfm.202508291 This study was supported by the National Research Foundation and the Ministry of Health and Welfare, Republic of Korea; and the National Science Foundation and National Institutes of Health, USA.
2025.05.29
KAIST Develops Virtual Staining Technology for 3D Histopathology
Moving beyond traditional methods of observing thinly sliced and stained cancer tissues, a collaborative international research team led by KAIST has successfully developed a groundbreaking technology. This innovation uses advanced optical techniques combined with an artificial intelligence-based deep learning algorithm to create realistic, virtually stained 3D images of cancer tissue without the need for serial sectioning or staining. This breakthrough is anticipated to pave the way for next-generation non-invasive pathological diagnosis. < Photo 1. (From left) Juyeon Park (Ph.D. Candidate, Department of Physics), Professor YongKeun Park (Department of Physics) (Top left) Professor Su-Jin Shin (Gangnam Severance Hospital), Professor Tae Hyun Hwang (Vanderbilt University School of Medicine) > KAIST (President Kwang Hyung Lee) announced on the 26th that a research team led by Professor YongKeun Park of the Department of Physics, in collaboration with Professor Su-Jin Shin's team at Yonsei University Gangnam Severance Hospital, Professor Tae Hyun Hwang's team at Mayo Clinic, and Tomocube's AI research team, has developed an innovative technology capable of vividly displaying the 3D structure of cancer tissues without separate staining. For over 200 years, conventional pathology has relied on observing cancer tissues under a microscope, a method that only shows specific cross-sections of the 3D cancer tissue. This has limited the ability to understand the three-dimensional connections and spatial arrangements between cells. To overcome this, the research team utilized holotomography (HT), an advanced optical technology, to measure the 3D refractive index information of tissues. They then integrated an AI-based deep learning algorithm to successfully generate virtual H&E* images. *H&E (Hematoxylin & Eosin): The most widely used staining method for observing pathological tissues. Hematoxylin stains cell nuclei blue, and eosin stains cytoplasm pink. The research team quantitatively demonstrated that the images generated by this technology are highly similar to actual stained tissue images. Furthermore, the technology exhibited consistent performance across various organs and tissues, proving its versatility and reliability as a next-generation pathological analysis tool. < Figure 1. Comparison of conventional 3D tissue pathology procedure and the 3D virtual H&E staining technology proposed in this study. The traditional method requires preparing and staining dozens of tissue slides, while the proposed technology can reduce the number of slides by up to 10 times and quickly generate H&E images without the staining process. > Moreover, by validating the feasibility of this technology through joint research with hospitals and research institutions in Korea and the United States using Tomocube's holotomography equipment, the team demonstrated its potential for full-scale adoption in real-world pathological research settings. Professor YongKeun Park stated, "This research marks a major advancement by transitioning pathological analysis from conventional 2D methods to comprehensive 3D imaging. It will greatly enhance biomedical research and clinical diagnostics, particularly in understanding cancer tumor boundaries and the intricate spatial arrangements of cells within tumor microenvironments." < Figure 2. Results of AI-based 3D virtual H&E staining and quantitative analysis of pathological tissue. The virtually stained images enabled 3D reconstruction of key pathological features such as cell nuclei and glandular lumens. Based on this, various quantitative indicators, including cell nuclear distribution, volume, and surface area, could be extracted. > This research, with Juyeon Park, a student in the Integrated Master's and Ph.D. Program at KAIST, as the first author, was published online in the prestigious journal Nature Communications on May 22. (Paper Title: Revealing 3D microanatomical structures of unlabeled thick cancer tissues using holotomography and virtual H&E staining, https://doi.org/10.1038/s41467-025-59820-0) This study was supported by the Leader Researcher Program of the National Research Foundation of Korea, the Global Industry Technology Cooperation Center Project of the Korea Institute for Advancement of Technology, and the Korea Health Industry Development Institute.
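The block below sketches only the data flow the article describes: a 3D refractive-index volume is translated slice by slice into virtual H&E-like RGB slices. The `virtual_stain_model` function is a placeholder colormap standing in for the trained deep network, and the input volume is random synthetic data; none of this reproduces the paper's model.

```python
# Data-flow sketch: translate a 3-D refractive-index (RI) volume slice by slice
# into virtual H&E-like RGB slices. The "model" here is only a placeholder
# colormap standing in for a trained image-to-image network.
import numpy as np

def virtual_stain_model(ri_slice: np.ndarray) -> np.ndarray:
    """Placeholder for a trained RI -> H&E translation network (illustrative only)."""
    x = ri_slice - ri_slice.min()
    x = x / (x.max() + 1e-8)                 # normalize the slice to [0, 1]
    r = 0.95 - 0.35 * x                      # eosin-like pink background
    g = 0.70 - 0.55 * x
    b = 0.85 - 0.20 * x                      # hematoxylin-like purple where RI is high
    return np.stack([r, g, b], axis=-1)

ri_volume = np.random.normal(1.37, 0.02, size=(64, 256, 256))   # synthetic (z, y, x) RI data
virtual_he = np.stack([virtual_stain_model(s) for s in ri_volume])
print(virtual_he.shape)   # (64, 256, 256, 3): a virtually "stained" 3-D stack
```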
2025.05.26
KAIST’s Next-Generation Small Satellite-2 Completes a Two-Year Mission – the Successful Observation of Arctic and Forest Changes
KAIST (President Kwang-Hyung Lee) announced on the 25th of May that the Next-Generation Small Satellite-2 developed by the Satellite Technology Research Center (SaTReC, Director Jaeheung Han) and launched aboard the third Nuri rocket from the Naro Space Center at 18:24 on May 25, 2023, has successfully completed its two-year core mission of verifying homegrown Synthetic Aperture Radar (SAR) technology and conducting all-weather Earth observations. The SAR system onboard the satellite was designed, manufactured, and tested domestically for the first time by KAIST’s Satellite Research Center. As of May 25, 2025, it has successfully completed its two-year in-orbit technology demonstration mission. Particularly noteworthy is the fact that the SAR system was mounted on the 100 kg-class Next-Generation Small Satellite-2, marking a major step forward in the miniaturization and weight reduction of spaceborne radar systems and strengthening Korea’s competitiveness in satellite technology. < Figure 1. Conceptual diagram of Earth observation by the Next-Generation Small Satellite No. 2's synthetic aperture radar > The developed SAR is an active sensor that uses electromagnetic waves, allowing all-weather image acquisition regardless of time of day or weather conditions. This makes it especially useful for monitoring regions like the Korean Peninsula, which frequently experiences rain and cloud cover, as it can observe even in cloudy and rainy conditions or darkness. Since its launch, the satellite has carried out three to four image acquisitions per day on average, undergoing functionality checks and technology verifications. To date, it has completed over 1,200 Earth observations and the SAR continues to perform stably, supporting ongoing observation tasks even beyond its designated mission lifespan. < Photo 1. Researchers of the Next-Generation Small Satellite No. 2 at SatRec, taken at the KAIST ground station. (From left) Sung-Og Park, Jung-soo Lee, Hongyoung Park, TaeSeong Jang (Next-Generation Small Satellite No. 2 Project Manager), Seyeon Kim, Mi Young Park, Yongmin Kim, DongGuk Kim > Although still in the domestic technology verification stage, KAIST’s Satellite Research Center has been collaborating with the Korea Polar Research Institute (Director Hyoung Chul Shin) and the Korea National Park Research Institute (Director Jin Tae Kim) since March 2024 to prioritize imaging of areas of interest related to Arctic ice changes and forest ecosystem monitoring. KAIST’s Satellite Research Center is conducting repeated observations of Arctic sea ice, and the Remote Sensing and Cryosphere Information Center of the Korea Polar Research Institute is analyzing the results using time-series data to precisely track changes in sea ice area and structure due to climate change. < Photo 2. Radar Images from Observations on July 24, 2024 - Around the Atchafalaya River in Louisiana, USA. The Wax Lake Delta is seen growing like a leaf. > Recently, the Korea Polar Research Institute (KOPRI), by integrating observation data from the Next-Generation Small Satellite No. 2 and the European Space Agency's (ESA) Sentinel-1, detected a significant increase of 15 km² in the area of an ice lake behind Canada's Milne Ice Shelf (a massive, floating layer of ice where glaciers flow from land into the sea) between 2021 and 2025. This has exacerbated structural instability and is analyzed as an important sign indicating the acceleration of Arctic climate change. 
Hyuncheol Kim, Director of the Remote Sensing and Cryosphere Information Center at the Korea Polar Research Institute, stated, “This research clearly demonstrates how vulnerable Arctic ice shelves are to climate change. We will continue to monitor and analyze Arctic environmental changes using the SAR aboard the Next-Generation Small Satellite-2 and promote international collaboration.” He added, “We also plan to present these findings at international academic conferences and expand educational and outreach efforts to raise public awareness about changes in the Arctic environment.” < Photo 3. Sinduri Coastal Dune, Taean Coastal National Park, Taean-gun, Chungcheongnam-do > In collaboration with the Climate Change Research Center of the National Park Research Institute, SAR imagery from the satellite is also being used to study phenological shifts due to climate change, the dieback of conifers in high-altitude zones, and landslide monitoring in forest ecosystems. Researchers are also analyzing the spatial distribution of carbon storage in forest areas using satellite data, comparing it with field measurements to improve accuracy. Because SAR is unaffected by light and weather conditions, it can observe through fire and smoke during wildfires, making it an exceptionally effective tool for the regular monitoring of large protected areas. It is expected to play an important role in shaping future forest conservation policies. In addition, KAIST’s Satellite Research Center is working on a system to convert the satellite’s technology demonstration data into standardized imagery products, with budget support from the Korea Aerospace Administration (Administrator Youngbin Yoon), making the data more accessible to research institutions and boosting the usability of the satellite’s observations. < Photo 4. Jang Bogo Station, Antarctica > Jaeheung Han, Director of the Satellite Research Center, said, “The significance of the Next-Generation Small Satellite-2 lies not only in the success of domestic development, but also in its direct contribution to real-world environmental analysis and national research efforts. We will continue to focus on expanding the application of SAR data from the satellite.” KAIST President Kwang-Hyung Lee remarked, “This satellite is a product of KAIST’s advanced space technology and the innovation capacity of its researchers. Its success signals KAIST’s potential to lead in future space technology talent development and R&D, and we will continue to accelerate efforts in this direction.” < Photo 5. Confirmation of changes in the expanded area of the Milne Ice Shelf lake using observation data from Next-Generation Small Satellite No. 2 and Sentinel-1 >
2025.05.25
KAIST and Mainz Researchers Unveil 3D Magnon Control, Charting a New Course for Neuromorphic and Quantum Technologies
< Professor Se Kwon Kim of the Department of Physics (left), Dr. Zarzuela of the University of Mainz, Germany (right) > What if the magnon Hall effect, which processes information using magnons (spin waves) capable of current-free information transfer with magnets, could overcome its current limitation of being possible only on a 2D plane? If magnons could be utilized in 3D space, they would enable flexible design, including 3D circuits, and be applicable in various fields such as next-generation neuromorphic (brain-mimicking) computing structures, similar to human brain information processing. KAIST and an international joint research team have, for the first time in the world, predicted a 3D magnon Hall effect, demonstrating that magnons can move freely and complexly in 3D space, transcending the conventional concept of magnons. KAIST (President Kwang Hyung Lee) announced on May 22nd that Professor Se Kwon Kim of the Department of Physics, in collaboration with Dr. Ricardo Zarzuela of the University of Mainz, Germany, has revealed that the interaction between magnons (spin waves) and solitons (spin vortices) within complex magnetic structures (topologically textured frustrated magnets) is not simple, but complex in a way that enables novel functionalities. Magnons (spin waves), which can transmit information like electron movement, are garnering attention as a next-generation information processing technology that transmits information without using current, thus generating no heat. Until now, magnon research has focused on simple magnets where spins are neatly aligned in one direction, and the mathematics describing this was a relatively simple 'Abelian gauge theory.' The research team demonstrated, for the first time in the world, that in complex spin structures like frustrated magnets, magnons interact and become entangled in complex ways from various directions. They applied an advanced mathematical framework, 'non-Abelian gauge theory,' to describe this movement, which is a groundbreaking achievement. This research presents the possibility of future applications in low-power logic devices using magnons and topology-based quantum information processing technologies, indicating a potential paradigm shift in future information technology. In conventional linear magnetic materials, the value representing the magnetic state (order parameter) is given as a vector. In magnonics research based on this, it has been interpreted that a U(1) Abelian gauge field is induced when magnons move in soliton structures like skyrmions. This means that the interaction between solitons and magnons has a structure similar to quantum electrodynamics (QED), which has successfully explained various experimental results such as the magnon Hall effect in 2D magnets. < Figure. Schematic diagram of non-Abelian magnon quantum chromodynamics describing the dynamics of three types of magnons discovered for the first time in this study.> However, through this research, the team theoretically revealed that in frustrated magnets, the order parameter must be expressed not as a simple vector but as a quaternion. As a result, the gauge field experienced by magnons resembles an SU(3) non-Abelian gauge field, rather than a simple U(1) Abelian gauge field. This implies that within frustrated magnets, there are not one or two types of magnons seen in conventional magnets, but three distinct types of magnons, each interacting and intricately entangled with solitons. 
This structure is highly significant as it resembles quantum chromodynamics (QCD), which describes the strong interaction between quarks mediated by gluons, rather than quantum electrodynamics (QED), which describes electromagnetic forces. Professor Se Kwon Kim stated, "This research presents a powerful theoretical framework to explain the dynamics of magnons occurring within the complex order of frustrated magnets," adding, "By pioneering non-Abelian magnonics, it will be a conceptual turning point that can influence quantum magnetism research as a whole." The research results, with Dr. Ricardo Zarzuela of the University of Mainz, Germany, as the first author, were published in the world-renowned physics journal Physical Review Letters on May 6th. ※ Paper title: "Non-Abelian Gauge Theory for Magnons in Topologically Textured Frustrated Magnets," Phys. Rev. Lett. 134, 186701 (2025), DOI: https://doi.org/10.1103/PhysRevLett.134.186701 This research was supported by the Brain Pool Plus program of the National Research Foundation of Korea.
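As a purely schematic illustration of the contrast drawn above (the paper's explicit gauge fields are not reproduced here), the Abelian case couples a single magnon mode to a U(1) vector potential, while the non-Abelian case couples a three-component magnon multiplet to a matrix-valued gauge field that mixes the modes:

```latex
% Conventional (collinear) magnets: one magnon mode, U(1) Abelian coupling
\[
  i\hbar\,\partial_t \psi =
  \Big[\tfrac{1}{2m}\big(-i\hbar\nabla - \mathbf{a}\big)^{2} + V\Big]\,\psi
\]
% Frustrated magnets (this work): three magnon modes, SU(3)-like non-Abelian coupling,
% with a matrix-valued gauge field that mixes the components of the multiplet
\[
  i\hbar\,\partial_t \Psi =
  \Big[\tfrac{1}{2m}\big(-i\hbar\nabla - \mathbf{A}\big)^{2} + V\Big]\,\Psi,
  \qquad
  \Psi = (\psi_{1}, \psi_{2}, \psi_{3})^{\mathsf{T}},
  \quad
  \mathbf{A} = \mathbf{A}^{a} T^{a}
\]
```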
2025.05.22
“For the First Time, We Shared a Meaningful Exchange”: KAIST Develops an AI App That Helps Parents and Minimally Verbal Autistic Children Connect
• KAIST teams up with NAVER AI Lab and the Dodakim Child Development Center to develop ‘AACessTalk’, an AI-driven communication tool bridging the gap between children with autism and their parents
• The project earned the prestigious Best Paper Award at ACM CHI 2025, the premier international conference in Human-Computer Interaction
• Families share heartwarming stories of breakthrough communication and newfound understanding.
< Photo 1. (From left) Professor Hwajung Hong and Doctoral candidate Dasom Choi of the Department of Industrial Design with SoHyun Park and Young-Ho Kim of Naver Cloud AI Lab > For many families of minimally verbal autistic (MVA) children, communication often feels like an uphill battle. But now, thanks to a new AI-powered app developed by researchers at KAIST in collaboration with NAVER AI Lab and the Dodakim Child Development Center, parents are finally experiencing moments of genuine connection with their children. On the 16th, the KAIST (President Kwang Hyung Lee) research team, led by Professor Hwajung Hong of the Department of Industrial Design, announced the development of ‘AACessTalk,’ an artificial intelligence (AI)-based communication tool that enables genuine communication between children with autism and their parents. This research was recognized for its human-centered AI approach and received international attention, earning the Best Paper Award at the ACM CHI 2025*, an international conference held in Yokohama, Japan. *ACM CHI (ACM Conference on Human Factors in Computing Systems) 2025: One of the world's most prestigious academic conferences in the field of Human-Computer Interaction (HCI). This year, approximately 1,200 papers were selected out of about 5,000 submissions, with the Best Paper Award given to only the top 1%. The conference, which drew over 5,000 researchers, was the largest in its history, reflecting the growing interest in ‘Human-AI Interaction.’ Called AACessTalk, the app offers personalized vocabulary cards tailored to each child’s interests and context, while guiding parents through conversations with customized prompts. This creates a space where children’s voices can finally be heard—and where parents and children can connect on a deeper level. Traditional augmentative and alternative communication (AAC) tools have relied heavily on fixed card systems that often fail to capture the subtle emotions and shifting interests of children with autism. AACessTalk breaks new ground by integrating AI technology that adapts in real time to the child’s mood and environment. < Figure. Schematics of the AACessTalk system. It provides personalized vocabulary cards for children with autism and context-based conversation guides for parents to focus on practical communication. A large ‘Turn Pass Button’ is placed on the child’s side to allow the child to lead the conversation. > Among its standout features is a large ‘Turn Pass Button’ that gives children control over when to start or end conversations—allowing them to lead with agency. Another feature, the “What about Mom/Dad?” button, encourages children to ask about their parents’ thoughts, fostering mutual engagement in dialogue, something many children had never done before. One parent shared, “For the first time, we shared a meaningful exchange.” Such stories were common among the 11 families who participated in a two-week pilot study, where children used the app to take more initiative in conversations and parents discovered new layers of their children’s language abilities. Parents also reported moments of surprise and joy when their children used unexpected words or took the lead in conversations, breaking free from repetitive patterns. “I was amazed when my child used a word I hadn’t heard before. It helped me understand them in a whole new way,” recalled one caregiver. Professor Hwajung Hong, who led the research at KAIST’s Department of Industrial Design, emphasized the importance of empowering children to express their own voices. “This study shows that AI can be more than a communication aid—it can be a bridge to genuine connection and understanding within families,” she said. Looking ahead, the team plans to refine and expand human-centered AI technologies that honor neurodiversity, with a focus on bringing practical solutions to socially vulnerable groups and enriching user experiences. This research is the result of KAIST Department of Industrial Design doctoral student Dasom Choi's internship at NAVER AI Lab. * Thesis Title: AACessTalk: Fostering Communication between Minimally Verbal Autistic Children and Parents with Contextual Guidance and Card Recommendation * DOI: 10.1145/3706598.3713792 * Main Author Information: Dasom Choi (KAIST, NAVER AI Lab, First Author), SoHyun Park (NAVER AI Lab), Kyungah Lee (Dodakim Child Development Center), Hwajung Hong (KAIST), and Young-Ho Kim (NAVER AI Lab, Corresponding Author) This research was supported by the NAVER AI Lab internship program and grants from the National Research Foundation of Korea: the Doctoral Student Research Encouragement Grant (NRF-2024S1A5B5A19043580) and the Mid-Career Researcher Support Program for the Development of a Generative AI-Based Augmentative and Alternative Communication System for Autism Spectrum Disorder (RS-2024-00458557).
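As a toy illustration of the context-aware card recommendation idea described above (this is not the AACessTalk implementation, which relies on a large language model and the child's dialogue context; the cards, topics, interest weights, and scoring rule here are invented), the recommendation step can be thought of as ranking a card inventory by its overlap with the current conversational context plus the child's interest profile:

```python
# Toy context-aware vocabulary-card ranking. NOT the AACessTalk implementation:
# the cards, topics, interest weights, and scoring rule are invented to
# illustrate the idea of tailoring cards to context and interests.
CARDS = {
    "bus":     {"outing", "vehicles"},
    "cookie":  {"snack", "kitchen"},
    "swing":   {"playground", "outing"},
    "blanket": {"bedtime", "home"},
}
CHILD_INTERESTS = {"vehicles": 2.0, "playground": 1.5}   # assumed interest profile

def rank_cards(context_topics: set[str], k: int = 3) -> list[str]:
    """Rank cards by topic overlap with the current context plus interest weights."""
    def score(card: str) -> float:
        topics = CARDS[card]
        overlap = len(topics & context_topics)
        interest = sum(CHILD_INTERESTS.get(t, 0.0) for t in topics)
        return 2.0 * overlap + interest
    return sorted(CARDS, key=score, reverse=True)[:k]

print(rank_cards({"outing"}))   # e.g. ['bus', 'swing', 'cookie']
```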
2025.05.19
Decoding Fear: KAIST Identifies an Affective Brain Circuit Crucial for Fear Memory Formation by Non-nociceptive Threat Stimulus
Fear memories can form in the brain following exposure to threatening situations such as natural disasters, accidents, or violence. When these memories become excessive or distorted, they can lead to severe mental health disorders, including post-traumatic stress disorder (PTSD), anxiety disorders, and depression. However, the mechanisms underlying fear memory formation triggered by affective pain rather than direct physical pain have remained largely unexplored – until now. A KAIST research team has identified, for the first time, a brain circuit specifically responsible for forming fear memories in the absence of physical pain, marking a significant advance in understanding how psychological distress is processed and drives fear memory formation in the brain. This discovery opens the door to the development of targeted treatments for trauma-related conditions by addressing the underlying neural pathways. < Photo 1. (from left) Professor Jin-Hee Han, Dr. Junho Han and Ph.D. Candidate Boin Suh of the Department of Biological Sciences > KAIST (President Kwang-Hyung Lee) announced on May 15th that the research team led by Professor Jin-Hee Han in the Department of Biological Sciences has identified the pIC-PBN circuit*, a key neural pathway involved in forming fear memories triggered by psychological threats in the absence of sensory pain. This groundbreaking work was conducted through experiments with mice. *pIC–PBN circuit: A newly identified descending neural pathway from the posterior insular cortex (pIC) to the parabrachial nucleus (PBN), specialized for transmitting psychological threat information. Traditionally, the lateral parabrachial nucleus (PBN) has been recognized as a critical part of the ascending pain pathway, receiving pain signals from the spinal cord. However, this study reveals a previously unknown role for the PBN in processing fear induced by non-painful psychological stimuli, fundamentally changing our understanding of its function in the brain. This work is considered the first experimental evidence that 'emotional distress' and 'physical pain' are processed through different neural circuits to form fear memories, making it a significant contribution to the field of neuroscience. It clearly demonstrates the existence of a dedicated pathway (pIC-PBN) for transmitting emotional distress. The study's first author, Dr. Junho Han, shared the personal motivation behind this research: “Our dog, Lego, is afraid of motorcycles. He never actually crashed into one, but ever since a traumatizing event in which a motorbike almost ran into him, just hearing the sound now triggers a fearful response. Humans react similarly – even if you have never been involved in an accident yourself, a near-miss or exposure to alarming media can create lasting fear memories, which may eventually lead to PTSD.” He continued, “Until now, fear memory research has mainly relied on experimental models involving physical pain. However, many real-world human fears arise from psychological threats rather than from direct physical harm. Despite this, little was known about the brain circuits responsible for processing these psychological threats that can drive fear memory formation.” To investigate this, the research team developed a novel fear conditioning model that utilizes visual threat stimuli instead of electrical shocks. In this model, mice were exposed to a rapidly expanding visual disk on a ceiling screen, simulating the threat of an approaching predator. This approach allowed the team to demonstrate that fear memories can form in response to a non-nociceptive, psychological threat alone, without the need for physical pain. < Figure 1. Artificial activation of the posterior insular cortex (pIC) to lateral parabrachial nucleus (PBN) neural circuit induces anxiety-like behaviors and fear memory formation in mice. > Using advanced chemogenetic and optogenetic techniques, the team precisely controlled neuronal activity, revealing that the lateral parabrachial nucleus (PBN) is essential for forming fear memories in response to visual threats. They further traced the origin of these signals to the posterior insular cortex (pIC), a region known to process negative emotions and pain, confirming a direct connection between the two areas. The study also showed that inhibiting the pIC–PBN circuit significantly reduced fear memory formation in response to visual threats, without affecting innate fear responses or physical pain-based learning. Conversely, artificially activating this circuit alone was sufficient to drive fear memory formation, confirming its role as a key pathway for processing psychological threat information. < Figure 2. Schematic diagram of brain neural circuits transmitting emotional & physical pain threat signals. Visual threat stimuli do not involve physical pain but can create an anxious state and form fear memory through the affective pain signaling pathway. > Professor Jin-Hee Han commented, “This study lays an important foundation for understanding how emotional distress-based mental disorders, such as PTSD, panic disorder, and anxiety disorder, develop, and opens new possibilities for targeted treatment approaches.” The findings, authored by Dr. Junho Han (first author), Ph.D. candidate Boin Suh (second author), and Dr. Jin-Hee Han (corresponding author) of the Department of Biological Sciences, were published online in the international journal Science Advances on May 9, 2025. ※ Paper Title: A top-down insular cortex circuit crucial for non-nociceptive fear learning. Science Advances (https://doi.org/10.1101/2024.10.14.618356) ※ Author Information: Junho Han (first author), Boin Suh (second author), and Jin-Hee Han (corresponding author) This research was supported by grants from the National Research Foundation of Korea (NRF-2022M3E5E8081183 and NRF-2017M3C7A1031322).
2025.05.15