Artificial Intelligence
KAIST Uses AI to Discover Optimal New Material for Removing Radioactive Iodine Contamination
<(From the right) Professor Ho Jin Ryu of the Department of Nuclear and Quantum Engineering, Dr. Sujeong Lee, a graduate of the KAIST Department of Materials Science and Engineering, and Dr. Juhwan Noh of KRICT's Digital Chemistry Research Center>

Managing radioactive waste is one of the core challenges in the use of nuclear energy. In particular, radioactive iodine poses serious environmental and health risks due to its long half-life (15.7 million years in the case of I-129), high mobility, and toxicity to living organisms. A Korean research team has successfully used artificial intelligence to discover a new material that can remove iodine for nuclear environmental remediation. The team plans to push forward with commercialization through various industry-academia collaborations, from iodine-adsorbing powders to contaminated water treatment filters.

KAIST (President Kwang Hyung Lee) announced on the 2nd of July that Professor Ho Jin Ryu's research team from the Department of Nuclear and Quantum Engineering, in collaboration with Dr. Juhwan Noh of the Digital Chemistry Research Center at the Korea Research Institute of Chemical Technology (KRICT, President Young Kook Lee), which operates under the National Research Council of Science & Technology (NST, Chairman Youngsik Kim), developed a technique that uses AI to discover new materials that effectively remove radioactive iodine contaminants.

Recent studies show that radioactive iodine primarily exists in aqueous environments in the form of iodate (IO₃⁻). However, existing silver-based adsorbents bind iodate only weakly, making them inefficient, so it is imperative to develop new adsorbent materials that can remove iodate effectively.

Professor Ho Jin Ryu's team used a machine learning-based experimental strategy to identify optimal iodate adsorbents among compounds called Layered Double Hydroxides (LDHs), which can contain various metal elements. The multi-metal LDH developed in this study, Cu₃(CrFeAl), based on copper, chromium, iron, and aluminum, showed exceptional adsorption performance, removing over 90% of iodate. This achievement was made possible by efficiently exploring a vast compositional space with AI-driven active learning, which would be difficult to search through conventional trial-and-error experiments.

<Picture 2. Concept of the AI-Based Technology Developed for Exploring New Materials for Radioactive Contamination Removal>

The research team focused on the fact that LDHs, like high-entropy materials, can incorporate a wide range of metal compositions and possess structures favorable for anion adsorption. However, due to the overwhelming number of possible metal combinations in multi-metal LDHs, identifying the optimal composition through traditional experimental methods has been nearly impossible. To overcome this, the team employed machine learning. Starting with experimental data from 24 binary and 96 ternary LDH compositions, they expanded their search to include quaternary and quinary candidates. As a result, they discovered the optimal material for iodate removal while testing only 16% of the total candidate materials.
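The article describes this search strategy only at a high level. As a rough illustration of how an AI-driven active-learning loop over candidate compositions can work, the sketch below pairs a surrogate model with a simple uncertainty-aware acquisition score; the random-forest surrogate, the scoring rule, and the measure_adsorption() placeholder standing in for wet-lab synthesis and testing are all illustrative assumptions, not the team's actual pipeline.

```python
# Minimal active-learning sketch (illustrative only): a surrogate model proposes
# which candidate LDH compositions to synthesize and test next.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def measure_adsorption(composition):
    """Placeholder for a wet-lab iodate-adsorption measurement of one composition."""
    raise NotImplementedError  # replaced by real experiments in practice

def active_learning(candidates, initial_idx, n_rounds=10, batch=8):
    """candidates: (N, d) array encoding the metal fractions of each LDH candidate."""
    labeled = {int(i): measure_adsorption(candidates[i]) for i in initial_idx}
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    for _ in range(n_rounds):
        X = candidates[list(labeled)]
        y = np.array(list(labeled.values()))
        model.fit(X, y)
        # Mean prediction plus tree-to-tree spread gives a simple UCB-style score
        per_tree = np.stack([t.predict(candidates) for t in model.estimators_])
        score = per_tree.mean(axis=0) + per_tree.std(axis=0)
        score[list(labeled)] = -np.inf                 # never re-test measured compositions
        for i in np.argsort(score)[-batch:]:           # next batch to synthesize and test
            labeled[int(i)] = measure_adsorption(candidates[int(i)])
    return max(labeled, key=labeled.get)               # best composition found so far
```

In each round the surrogate is refit to every measured composition and only the highest-scoring untested candidates go to the lab, which is how a large compositional space can be narrowed down while physically testing only a small fraction of it.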
Professor Ho Jin Ryu stated, “This study shows the potential of using artificial intelligence to efficiently identify radioactive decontamination materials from a vast pool of new material candidates, which is expected to accelerate research for developing new materials for nuclear environmental cleanup.” The research team has filed a domestic patent application for the developed powder technology and is currently proceeding with an international patent application. They plan to enhance the material’s performance under various conditions and pursue commercialization through industry-academia cooperation in the development of filters for treating contaminated water. Dr. Sujeong Lee, a graduate of the KAIST Department of Materials Science and Engineering, and Dr. Juhwan Noh of KRICT’s Digital Chemistry Research Center, participated as the co-first authors of the study. The results were published online on May 26 in the internationally renowned environmental publication Journal of Hazardous Materials. ※ Paper title: Discovery of multi-metal-layered double hydroxides for decontamination of iodate by machine learning-assisted experiments ※ DOI: https://doi.org/10.1016/j.jhazmat.2025.138735 This research was supported by the Nuclear Energy Research Infrastructure Program and the Nano-Materials Technology Development Program funded by the Ministry of Science and ICT and the National Research Foundation of Korea.
2025.07.03
KAIST to Lead the Way in Nurturing Talent and Driving S&T Innovation for a G3 AI Powerhouse
* Focusing on nurturing talent and dedicating itself to R&D to become a G3 AI powerhouse (Top 3 AI Nations).
* Leading the realization of an "AI-driven Basic Society for All" and developing technologies that leverage AI to overcome the crisis in Korea's manufacturing sector.
* Over the past 50 years, South Korea has risen from the ashes to become a scientific and technological powerhouse, with KAIST at its core contributing to the development of scientific and technological talent, innovative technology, national industrial growth, and the creation of a startup innovation ecosystem.

As public interest in AI and science and technology has significantly grown with the inauguration of the new government, KAIST (President Kwang Hyung Lee) announced on June 24th its plan to transform into an "AI-centric, Value-Creating Science and Technology University" that leads national innovation based on science and technology and spearheads solutions to global challenges. At a time when South Korea is undergoing a major transition to a technology-driven society, KAIST, drawing on its half-century of experience as a "starter kit" for national development, is preparing to leap beyond being a mere educational and research institution to become a global innovation hub that creates new social value.

In particular, KAIST has presented a vision for realizing an "AI-driven Basic Society" in which all citizens can use AI without being left behind, enabling South Korea to ascend to the top three AI nations (G3). To achieve this, through the "National AI Research Hub" project (headed by Kee Eung Kim), led by KAIST on behalf of South Korea, the institution is dedicated to enhancing industrial competitiveness and effectively solving social problems with AI technology.

< KAIST President Kwang Hyung Lee >

KAIST's research achievements in the AI field are garnering international attention. In the top three machine learning conferences (ICML, NeurIPS, ICLR), KAIST ranked 5th globally and 1st in Asia over the past five years (2020-2024). During the same period, based on the number of papers published in top conferences in machine learning, natural language processing, and computer vision (ICML, NeurIPS, ICLR, ACL, EMNLP, NAACL, CVPR, ICCV, ECCV), KAIST ranked 5th globally and 4th in Asia. Furthermore, KAIST has consistently demonstrated unparalleled research capabilities, ranking 1st globally in the average number of papers accepted at ISSCC (International Solid-State Circuits Conference), the world's most prestigious academic conference on semiconductor integrated circuits, for 19 years (2006-2024).

KAIST is continuously expanding its research into core AI technologies, including hyper-scale AI models (a Korean LLM), neuromorphic semiconductors, and low-power AI processors, as well as various application areas such as autonomous driving, urban air mobility (UAM), precision medicine, and explainable AI (XAI).

In the manufacturing sector, KAIST's AI technologies are also driving on-site innovation. Professor Young Jae Jang's team has enhanced productivity in advanced manufacturing fields like semiconductors and displays through digital twins that use manufacturing site data and AI-based prediction technology. Professor Song Min Kim's team developed ultra-low-power wireless tag technology capable of tracking locations with sub-centimeter precision, accelerating the implementation of smart factories.
Technologies such as industrial process optimization and equipment failure prediction developed by INEEJI Co., Ltd., founded by Professor Jaesik Choi, are being rapidly applied in real industrial settings, yielding results. INEEJI was designated as a national strategic technology in the 'Explainable AI (XAI)' field by the government in March. < Researchers performing data analysis for AI research > Practical applications are also emerging in the robotics sector, which is closely linked to AI. Professor Jemin Hwangbo's team from the Department of Mechanical Engineering garnered attention by newly developing RAIBO 2, a quadrupedal robot usable in high-risk environments such as disaster relief and rough terrain exploration. Professor Kyoung Chul Kong's team and Angel Robotics Co., Ltd. developed the WalkOn Suit exoskeleton robot, significantly improving the quality of life for individuals with complete lower body paralysis or walking disabilities. Additionally, remarkable research is ongoing in future core technology areas such as AI semiconductors, quantum cryptography communication, ultra-small satellites, hydrogen fuel cells, next-generation batteries, and biomimetic sensors. Notably, space exploration technology based on small satellites, asteroid exploration projects, energy harvesting, and high-speed charging technologies are gaining attention. Particularly in advanced bio and life sciences, KAIST is collaborating with Germany's Merck company on various research initiatives, including synthetic biology and mRNA. KAIST is also contributing to the construction of a 430 billion won Merck Bio-Center in Daejeon, thereby stimulating the local economy and creating jobs. Based on these cutting-edge research capabilities, KAIST continues to expand its influence not only within the industry but also on the global stage. It has established strategic partnerships with leading universities worldwide, including MIT, Stanford University, and New York University (NYU). Notably, KAIST and NYU have established a joint campus in New York to strengthen human exchange and collaborative research. Active industry-academia collaborations with global companies such as Google, Intel, and TSMC are also ongoing, playing a pivotal role in future technology development and the creation of an innovation ecosystem. These activities also lead to a strong startup ecosystem that drives South Korean industries. The flow of startups, which began with companies like Qnix Computer, Nexon, and Naver, has expanded to a total of 1,914 companies to date. Their cumulative assets amount to 94 trillion won, with sales reaching 36 trillion won and employing approximately 60,000 people. Over 90% of these are technology-based startups originating from faculty and student labs, demonstrating a model that makes a tangible economic contribution based on science and technology. < Students at work > Having consistently generated diverse achievements, KAIST has already produced approximately 80,000 "KAISTians" who have created innovation through challenge and failure, and is currently recruiting new talent to continue driving innovation that transforms South Korea and the world. President Kwang Hyung Lee emphasized, "KAIST will establish itself as a global leader in science and technology, designing the future of South Korea and humanity and creating tangible value." He added, "We will focus on talent nurturing and research and development to realize the new government's national agenda of becoming a G3 AI powerhouse." 
He further stated, "KAIST's vision for the AI field, in which it places particular emphasis, is to strive for a society where everyone can freely utilize AI. We will contribute to significantly boosting productivity by recovering manufacturing competitiveness through AI and actively disseminating physical AI, AI robots, and AI mobility technologies to industrial sites."
2025.06.24
KAIST Researchers Unveil an AI that Generates "Unexpectedly Original" Designs
< Photo 1. Professor Jaesik Choi, KAIST Kim Jaechul Graduate School of AI > Recently, text-based image generation models can automatically create high-resolution, high-quality images solely from natural language descriptions. However, when a typical example like the Stable Diffusion model is given the text "creative," its ability to generate truly creative images remains limited. KAIST researchers have developed a technology that can enhance the creativity of text-based image generation models such as Stable Diffusion without additional training, allowing AI to draw creative chair designs that are far from ordinary. Professor Jaesik Choi's research team at KAIST Kim Jaechul Graduate School of AI, in collaboration with NAVER AI Lab, developed this technology to enhance the creative generation of AI generative models without the need for additional training. < Photo 2. Gayoung Lee, Researcher at NAVER AI Lab; Dahee Kwon, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Jiyeon Han, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Junho Kim, Researcher at NAVER AI Lab > Professor Choi's research team developed a technology to enhance creative generation by amplifying the internal feature maps of text-based image generation models. They also discovered that shallow blocks within the model play a crucial role in creative generation. They confirmed that amplifying values in the high-frequency region after converting feature maps to the frequency domain can lead to noise or fragmented color patterns. Accordingly, the research team demonstrated that amplifying the low-frequency region of shallow blocks can effectively enhance creative generation. Considering originality and usefulness as two key elements defining creativity, the research team proposed an algorithm that automatically selects the optimal amplification value for each block within the generative model. Through the developed algorithm, appropriate amplification of the internal feature maps of a pre-trained Stable Diffusion model was able to enhance creative generation without additional classification data or training. < Figure 1. Overview of the methodology researched by the development team. After converting the internal feature map of a pre-trained generative model into the frequency domain through Fast Fourier Transform, the low-frequency region of the feature map is amplified, then re-transformed into the feature space via Inverse Fast Fourier Transform to generate an image. > The research team quantitatively proved, using various metrics, that their developed algorithm can generate images that are more novel than those from existing models, without significantly compromising utility. In particular, they confirmed an increase in image diversity by mitigating the mode collapse problem that occurs in the SDXL-Turbo model, which was developed to significantly improve the image generation speed of the Stable Diffusion XL (SDXL) model. Furthermore, user studies showed that human evaluation also confirmed a significant improvement in novelty relative to utility compared to existing methods. Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST and co-first authors of the paper, stated, "This is the first methodology to enhance the creative generation of generative models without new training or fine-tuning. We have shown that the latent creativity within trained AI generative models can be enhanced through feature map manipulation." 
They added, "This research makes it easy to generate creative images using only text from existing trained models. It is expected to provide new inspiration in various fields, such as creative product design, and contribute to the practical and useful application of AI models in the creative ecosystem."

< Figure 2. Application examples of the methodology researched by the development team. Various Stable Diffusion models generate novel images compared to existing generations while maintaining the meaning of the generated object. >

This research, co-authored by Jiyeon Han and Dahee Kwon, Ph.D. candidates at the KAIST Kim Jaechul Graduate School of AI, was presented on June 16 at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), a premier international academic conference.
* Paper Title: Enhancing Creative Generation on Stable Diffusion-based Models
* DOI: https://doi.org/10.48550/arXiv.2503.23538

This research was supported by the KAIST-NAVER Ultra-creative AI Research Center, the Innovation Growth Engine Project Explainable AI, the AI Research Hub Project, and research on flexible evolving AI technology development in line with increasingly strengthened ethical policies, all funded by the Ministry of Science and ICT through the Institute for Information & Communications Technology Promotion. It also received support from the KAIST AI Graduate School Program and was carried out at the KAIST Future Defense AI Specialized Research Center with support from the Defense Acquisition Program Administration and the Agency for Defense Development.
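For readers who want a concrete picture of the frequency-domain operation described above (convert a feature map to the frequency domain, boost its low-frequency band, and transform back), here is a minimal PyTorch sketch; the (B, C, H, W) shape, the cutoff radius, and the gain are illustrative assumptions, whereas the actual method selects the amplification value for each block automatically.

```python
# Minimal sketch: amplify the low-frequency band of a feature map via FFT / inverse FFT.
import torch

def amplify_low_freq(feat: torch.Tensor, radius: float = 0.25, gain: float = 1.5) -> torch.Tensor:
    """feat: (B, C, H, W) feature map from a shallow block of the generator."""
    _, _, H, W = feat.shape
    spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))       # centered 2D spectrum
    # Circular low-pass mask; radius is a fraction of the normalized frequency range
    fy = torch.linspace(-0.5, 0.5, H, device=feat.device).view(H, 1)
    fx = torch.linspace(-0.5, 0.5, W, device=feat.device).view(1, W)
    low = ((fy ** 2 + fx ** 2).sqrt() <= radius).to(feat.dtype)
    spec = spec * (1.0 + (gain - 1.0) * low)                            # boost only the low band
    return torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
```

Amplifying only this low-frequency band is what avoids the noise and fragmented color patterns the team observed when the high-frequency region is boosted instead.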
2025.06.20
Simultaneous Analysis of 21 Chemical Reactions... AI to Transform New Drug Development
< Photo 1. (From left) Professor Hyunwoo Kim and students Donghun Kim and Gyeongseon Choi in the Integrated M.S./Ph.D. program of the Department of Chemistry > Thalidomide, a drug once used to alleviate morning sickness in pregnant women, exhibits distinct properties due to its optical isomers* in the body: one isomer has a sedative effect, while the other causes severe side effects like birth defects. As this example illustrates, precise organic synthesis techniques, which selectively synthesize only the desired optical isomer, are crucial in new drug development. Overcoming the traditional methods that struggled with simultaneously analyzing multiple reactants, our research team has developed the world's first technology to precisely analyze 21 types of reactants simultaneously. This breakthrough is expected to make a significant contribution to new drug development utilizing AI and robots. *Optical Isomers: A pair of molecules with the same chemical formula that are mirror images of each other and cannot be superimposed due to their asymmetric structure. This is analogous to a left and right hand, which are similar in form but cannot be perfectly overlaid. KAIST's Professor Hyunwoo Kim's research team in the Department of Chemistry announced on the 16th that they have developed an innovative optical isomer analysis technology suitable for the era of AI-driven autonomous synthesis*. This research is the world's first technology to precisely analyze asymmetric catalytic reactions involving multiple reactants simultaneously using high-resolution fluorine nuclear magnetic resonance spectroscopy (19F NMR). It is expected to make groundbreaking contributions to various fields, including new drug development and catalyst optimization. *AI-driven Autonomous Synthesis: An advanced technology that automates and optimizes chemical substance synthesis processes using artificial intelligence (AI). It is gaining attention as a core element for realizing automated and intelligent research environments in future laboratories. AI predicts and adjusts experimental conditions, interprets results, and designs subsequent experiments independently, minimizing human intervention in repetitive experiments and significantly increasing research efficiency and innovativeness. Currently, while autonomous synthesis systems can automate everything from reaction design to execution, reaction analysis still relies on individual processing using traditional equipment. This leads to slower speeds and bottlenecks, making it unsuitable for high-speed repetitive experiments. Furthermore, multi-substrate simultaneous screening techniques proposed in the 1990s garnered attention as a strategy to maximize reaction analysis efficiency. However, limitations of existing chromatography-based analysis methods restricted the number of applicable substrates. In asymmetric synthesis reactions, which selectively synthesize only the desired optical isomer, simultaneously analyzing more than 10 types of substrates was nearly impossible. < Figure 1. Conventional organic reaction evaluation methods follow a process of deriving optimal reaction conditions using a single substrate, then expanding the substrate scope one by one under those conditions, leaving potential reaction areas unexplored. To overcome this, high-throughput screening is introduced to broadly explore catalyst reactivity for various substrates. 
When combined with multi-substrate screening, this approach allows for a much broader and more systematic understanding of reaction scope and trends. >

To overcome these limitations, the research team developed a 19F NMR-based multi-substrate simultaneous screening technology. This method involves performing asymmetric catalytic reactions with multiple reactants in a single reaction vessel, introducing a fluorine functional group into the products, and then applying their self-developed chiral cobalt reagent to clearly quantify all optical isomers using 19F NMR. Utilizing the excellent resolution and sensitivity of 19F NMR, the research team successfully performed asymmetric synthesis reactions of 21 substrates simultaneously in a single reaction vessel and quantitatively measured the product yield and optical isomer ratio without any separate purification steps.

Professor Hyunwoo Kim stated, "While anyone can perform asymmetric synthesis reactions with multiple substrates in one reactor, accurately analyzing all the products has been a challenging problem to solve until now. We expect that achieving world-class multi-substrate screening analysis technology will greatly contribute to enhancing the analytical capabilities of AI-driven autonomous synthesis platforms."

< Figure 2. A method for analyzing multi-substrate asymmetric catalytic reactions, where different substrates react simultaneously in a single reactor, has been implemented using fluorine nuclear magnetic resonance. By utilizing the characteristics of fluorine nuclear magnetic resonance, which has a clean background signal and a wide chemical shift range, the reactivity of each substrate can be quantitatively analyzed. The optical activity of all reactants can also be measured simultaneously using a cobalt metal complex. >

He further added, "This research provides a technology that can rapidly verify the efficiency and selectivity of asymmetric catalytic reactions essential for new drug development, and it is expected to be utilized as a core analytical tool for AI-driven autonomous research."

< Figure 3. In a multi-substrate reductive amination reaction using a total of 21 substrates, the yield and optical activity of the reactants under each catalyst system were measured simultaneously with the fluorine nuclear magnetic resonance-based analysis platform. The yield of each reactant is indicated by color saturation, and the optical activity by numbers. >

Donghun Kim (first author, Integrated M.S./Ph.D. program) and Gyeongseon Choi (second author, Integrated M.S./Ph.D. program) from the KAIST Department of Chemistry participated in this research. The study was published online in the Journal of the American Chemical Society on May 27, 2025.
※ Paper Title: One-pot Multisubstrate Screening for Asymmetric Catalysis Enabled by 19F NMR-based Simultaneous Chiral Analysis
※ DOI: 10.1021/jacs.5c03446

This research was supported by the National Research Foundation of Korea's Mid-Career Researcher Program, the Asymmetric Catalytic Reaction Design Center, and the KAIST KC30 Project.

< Figure 4. Conceptual diagram of performing multi-substrate screening reactions and utilizing fluorine nuclear magnetic resonance spectroscopy. >
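As a small worked example of the kind of quantity extracted from such spectra (the numbers are hypothetical, not from the paper): once the 19F signals of a substrate's two enantiomers and an internal standard are resolved, yield and enantiomeric excess follow directly from peak areas.

```python
# Illustrative only: yield and enantiomeric excess (ee) from resolved 19F NMR peak areas.
def analyze_substrate(area_R, area_S, area_std, mol_std, mol_loaded):
    product_mol = (area_R + area_S) / area_std * mol_std        # relative to the internal standard
    yield_pct = 100 * product_mol / mol_loaded
    ee_pct = 100 * abs(area_R - area_S) / (area_R + area_S)     # enantiomeric excess
    return yield_pct, ee_pct

# Hypothetical integrals for one of the 21 substrates
print(analyze_substrate(area_R=0.85, area_S=0.10, area_std=1.00, mol_std=0.10, mol_loaded=0.10))
# -> (95.0, ~78.9): 95% yield, roughly 79% ee
```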
2025.06.16
KAIST Introduces ‘Virtual Teaching Assistant’ That Can Answer Even in the Middle of the Night – Successful First Deployment in the Classroom
- Research teams led by Prof. Yoonjae Choi (Kim Jaechul Graduate School of AI) and Prof. Hwajung Hong (Department of Industrial Design) at KAIST developed a Virtual Teaching Assistant (VTA) to support learning and class operations for a course with 477 students.
- The VTA responds 24/7 to students’ questions related to theory and practice by referencing lecture slides, coding assignments, and lecture videos.
- The system’s source code has been released to support future development of personalized learning support systems and their application in educational settings.

< Photo 1. (From left) PhD candidate Sunjun Kweon, Master's candidate Sooyohn Nam, PhD candidate Hyunseung Lim, Professor Hwajung Hong, Professor Yoonjae Choi >

“At first, I didn’t have high expectations for the Virtual Teaching Assistant (VTA), but it turned out to be extremely helpful—especially when I had sudden questions late at night, I could get immediate answers,” said Jiwon Yang, a Ph.D. student at KAIST. “I was also able to ask questions I would’ve hesitated to bring up with a human TA, which led me to ask even more and ultimately improved my understanding of the course.”

KAIST (President Kwang Hyung Lee) announced on June 5th that a joint research team led by Prof. Yoonjae Choi of the Kim Jaechul Graduate School of AI and Prof. Hwajung Hong of the Department of Industrial Design has successfully developed and deployed a Virtual Teaching Assistant (VTA) that provides personalized feedback to individual students even in large-scale classes. This study marks one of the first large-scale, real-world deployments in Korea: the VTA was introduced in the “Programming for Artificial Intelligence” course at the KAIST Kim Jaechul Graduate School of AI, taken by 477 master’s and Ph.D. students during the Fall 2024 semester, to evaluate its effectiveness and practical applicability in an actual educational setting.

The AI teaching assistant developed in this study is a course-specialized agent, distinct from general-purpose tools like ChatGPT or conventional chatbots. The research team implemented a Retrieval-Augmented Generation (RAG) architecture, which automatically vectorizes a large volume of course materials—including lecture slides, coding assignments, and video lectures—and uses them as the basis for answering students’ questions.

< Photo 2. Teaching assistant demonstrating to a student how the Virtual Teaching Assistant works >

When a student asks a question, the system searches for the most relevant course materials in real time based on the context of the query, and then generates a response. This process is not merely a simple call to a large language model (LLM), but rather a material-grounded question answering system tailored to the course content—ensuring both high reliability and accuracy in learning support.

Sunjun Kweon, the first author of the study and head teaching assistant for the course, explained, “Previously, TAs were overwhelmed with repetitive and basic questions—such as concepts already covered in class or simple definitions—which made it difficult to focus on more meaningful inquiries.” He added, “After introducing the VTA, students began to reduce repeated questions and focus on more essential ones. As a result, the burden on TAs was significantly reduced, allowing us to concentrate on providing more advanced learning support.” In fact, compared to the previous year’s course, the number of questions that required direct responses from human TAs decreased by approximately 40%.

< Photo 3.
A student working with VTA. > The VTA, which was operated over a 14-week period, was actively used by more than half of the enrolled students, with a total of 3,869 Q&A interactions recorded. Notably, students without a background in AI or with limited prior knowledge tended to use the VTA more frequently, indicating that the system provided practical support as a learning aid, especially for those who needed it most. The analysis also showed that students tended to ask the VTA more frequently about theoretical concepts than they did with human TAs. This suggests that the AI teaching assistant created an environment where students felt free to ask questions without fear of judgment or discomfort, thereby encouraging more active engagement in the learning process. According to surveys conducted before, during, and after the course, students reported increased trust, response relevance, and comfort with the VTA over time. In particular, students who had previously hesitated to ask human TAs questions showed higher levels of satisfaction when interacting with the AI teaching assistant. < Figure 1. Internal structure of the AI Teaching Assistant (VTA) applied in this course. It follows a Retrieval-Augmented Generation (RAG) structure that builds a vector database from course materials (PDFs, recorded lectures, coding practice materials, etc.), searches for relevant documents based on student questions and conversation history, and then generates responses based on them. > Professor Yoonjae Choi, the lead instructor of the course and principal investigator of the study, stated, “The significance of this research lies in demonstrating that AI technology can provide practical support to both students and instructors. We hope to see this technology expanded to a wider range of courses in the future.” The research team has released the system’s source code on GitHub, enabling other educational institutions and researchers to develop their own customized learning support systems and apply them in real-world classroom settings. < Figure 2. Initial screen of the AI Teaching Assistant (VTA) introduced in the "Programming for AI" course. It asks for student ID input along with simple guidelines, a mechanism to ensure that only registered students can use it, blocking indiscriminate external access and ensuring limited use based on students. > The related paper, titled “A Large-Scale Real-World Evaluation of an LLM-Based Virtual Teaching Assistant,” was accepted on May 9, 2025, to the Industry Track of ACL 2025, one of the most prestigious international conferences in the field of Natural Language Processing (NLP), recognizing the excellence of the research. < Figure 3. Example conversation with the AI Teaching Assistant (VTA). When a student inputs a class-related question, the system internally searches for relevant class materials and then generates an answer based on them. In this way, VTA provides learning support by reflecting class content in context. > This research was conducted with the support of the KAIST Center for Teaching and Learning Innovation, the National Research Foundation of Korea, and the National IT Industry Promotion Agency.
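The released source code implements the full system; as a minimal sketch of the retrieval-augmented generation flow described in the article, the snippet below embeds course-material chunks, retrieves the passages most similar to a question, and grounds the answer prompt in them. The embed() and generate() placeholders and the prompt wording are assumptions, not the VTA's actual implementation.

```python
# Minimal RAG sketch: retrieve the most relevant course-material chunks, then answer from them.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError  # stand-in for a sentence-embedding model

def generate(prompt: str) -> str:
    raise NotImplementedError  # stand-in for a large-language-model call

def build_index(chunks):
    """chunks: text passages from lecture slides, coding assignments, and video transcripts."""
    vecs = np.stack([embed(c) for c in chunks])
    return chunks, vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def answer(question, history, index, top_k=4):
    chunks, vecs = index
    q = embed(" ".join(history + [question]))
    q = q / np.linalg.norm(q)
    best = np.argsort(vecs @ q)[-top_k:]                       # most similar course material
    context = "\n\n".join(chunks[i] for i in best)
    prompt = ("Answer the student's question using only the course material below.\n"
              f"Material:\n{context}\n\nQuestion: {question}")
    return generate(prompt)
```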
2025.06.05
RAIBO Runs over Walls with Feline Agility... Ready for Effortless Search over Mountainous and Rough Terrains
< Photo 1. Research Team Photo (Professor Jemin Hwangbo, second from right in the front row) >

KAIST's quadrupedal robot, RAIBO, can now move at high speed across discontinuous and complex terrains such as stairs, gaps, walls, and debris. It has demonstrated its ability to run on vertical walls, leap over 1.3-meter-wide gaps, sprint at approximately 14.4 km/h over stepping stones, and move quickly and nimbly on terrain combining 30° slopes, stairs, and stepping stones. RAIBO is expected to be deployed soon for practical missions such as disaster site exploration and mountain searches.

Professor Jemin Hwangbo's research team in the Department of Mechanical Engineering at our university announced on June 3rd that they have developed a quadrupedal robot navigation framework capable of high-speed locomotion at 14.4 km/h (4 m/s) even on discontinuous and complex terrains such as walls, stairs, and stepping stones.

The research team developed a quadrupedal navigation system that enables the robot to reach its target destination quickly and safely in complex and discontinuous terrain. To achieve this, they broke the problem down into two stages: first, a planner that plans foothold positions, and second, a tracker that accurately follows the planned footholds.

First, the planner module quickly searches for physically feasible foothold positions using a sampling-based optimization method with neural network-based heuristics and verifies the optimal path through simulation rollouts. While existing methods considered various factors such as contact timing and robot posture in addition to foothold positions, this research significantly reduced computational complexity by setting only foothold positions as the search space. Furthermore, inspired by the way cats walk, the introduction of a structure in which the hind feet step on the same spots as the front feet reduced computational complexity even further.

< Figure 1. High-speed navigation across various discontinuous terrains >

Second, the tracker module is trained to step accurately on the planned footholds, with tracking training conducted against a generative model that supplies environments of appropriate difficulty. The tracker is trained through reinforcement learning to step precisely on the planned spots, and during this process a generative model called the 'map generator' provides the target distribution. This generative model is trained simultaneously and adversarially with the tracker, so that the tracker progressively adapts to more challenging terrain. Subsequently, a sampling-based planner was designed to generate feasible foothold plans that reflect the characteristics and performance of the trained tracker. This hierarchical structure showed superior performance in both planning speed and stability compared to existing techniques, and experiments demonstrated its high-speed locomotion capabilities across various obstacles and discontinuous terrains, as well as its general applicability to unseen terrain.

Professor Jemin Hwangbo stated, "We approached the problem of high-speed navigation in discontinuous terrain, which previously required a very large amount of computation, from the simple perspective of how to select the foothold positions. Inspired by the placement of a cat's paws, allowing the hind feet to step where the front feet stepped drastically reduced computation.
We expect this to significantly expand the range of discontinuous terrain that walking robots can overcome and enable them to traverse it at high speed, contributing to the robots' ability to perform practical missions such as disaster site exploration and mountain searches."

This research achievement was published in the May 2025 issue of the international journal Science Robotics.
Paper Title: High-speed control and navigation for quadrupedal robots on complex and discrete terrain (https://www.science.org/doi/10.1126/scirobotics.ads6192)
YouTube Link: https://youtu.be/EZbM594T3c4?si=kfxLF2XnVUvYVIyk
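As a highly simplified illustration of the sampling-based foothold planning described above, the sketch below samples candidate foothold sequences, scores them with a heuristic, keeps only sequences that survive a feasibility check, and lets the hind feet reuse the front-foot positions; the 2D footholds, the scoring function, and the feasibility test are stand-ins for the learned heuristic and physics rollouts used in the actual system.

```python
# Toy sampling-based foothold planner (stand-in heuristics; the real system plans in 3D
# with a learned scoring network and simulation rollouts of the trained tracker policy).
import numpy as np

def heuristic_score(footholds, goal):
    return -np.linalg.norm(footholds[-1] - goal)        # prefer ending close to the goal

def rollout_feasible(footholds):
    strides = np.linalg.norm(np.diff(footholds, axis=0), axis=1)
    return np.all(strides < 0.6)                         # reject overly long strides

def plan_footholds(start, goal, n_samples=256, horizon=6, step=0.4):
    best, best_score = None, -np.inf
    for _ in range(n_samples):
        # Sample one sequence of front-foot placements; the hind feet reuse these
        # positions, which is what collapses the search space in the described approach.
        noise = np.random.uniform(-step, step, size=(horizon, 2))
        footholds = start + np.cumsum(noise + (goal - start) / horizon, axis=0)
        s = heuristic_score(footholds, goal)
        if s > best_score and rollout_feasible(footholds):
            best, best_score = footholds, s
    return best

plan = plan_footholds(start=np.array([0.0, 0.0]), goal=np.array([2.0, 0.0]))
```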
2025.06.04
KAIST Develops Virtual Staining Technology for 3D Histopathology
Moving beyond traditional methods of observing thinly sliced and stained cancer tissues, a collaborative international research team led by KAIST has successfully developed a groundbreaking technology. This innovation uses advanced optical techniques combined with an artificial intelligence-based deep learning algorithm to create realistic, virtually stained 3D images of cancer tissue without the need for serial sectioning nor staining. This breakthrough is anticipated to pave the way for next-generation non-invasive pathological diagnosis. < Photo 1. (From left) Juyeon Park (Ph.D. Candidate, Department of Physics), Professor YongKeun Park (Department of Physics) (Top left) Professor Su-Jin Shin (Gangnam Severance Hospital), Professor Tae Hyun Hwang (Vanderbilt University School of Medicine) > KAIST (President Kwang Hyung Lee) announced on the 26th that a research team led by Professor YongKeun Park of the Department of Physics, in collaboration with Professor Su-Jin Shin's team at Yonsei University Gangnam Severance Hospital, Professor Tae Hyun Hwang's team at Mayo Clinic, and Tomocube's AI research team, has developed an innovative technology capable of vividly displaying the 3D structure of cancer tissues without separate staining. For over 200 years, conventional pathology has relied on observing cancer tissues under a microscope, a method that only shows specific cross-sections of the 3D cancer tissue. This has limited the ability to understand the three-dimensional connections and spatial arrangements between cells. To overcome this, the research team utilized holotomography (HT), an advanced optical technology, to measure the 3D refractive index information of tissues. They then integrated an AI-based deep learning algorithm to successfully generate virtual H&E* images.* H&E (Hematoxylin & Eosin): The most widely used staining method for observing pathological tissues. Hematoxylin stains cell nuclei blue, and eosin stains cytoplasm pink. The research team quantitatively demonstrated that the images generated by this technology are highly similar to actual stained tissue images. Furthermore, the technology exhibited consistent performance across various organs and tissues, proving its versatility and reliability as a next-generation pathological analysis tool. < Figure 1. Comparison of conventional 3D tissue pathology procedure and the 3D virtual H&E staining technology proposed in this study. The traditional method requires preparing and staining dozens of tissue slides, while the proposed technology can reduce the number of slides by up to 10 times and quickly generate H&E images without the staining process. > Moreover, by validating the feasibility of this technology through joint research with hospitals and research institutions in Korea and the United States, utilizing Tomocube's holotomography equipment, the team demonstrated its potential for full-scale adoption in real-world pathological research settings. Professor YongKeun Park stated, "This research marks a major advancement by transitioning pathological analysis from conventional 2D methods to comprehensive 3D imaging. It will greatly enhance biomedical research and clinical diagnostics, particularly in understanding cancer tumor boundaries and the intricate spatial arrangements of cells within tumor microenvironments." < Figure 2. Results of AI-based 3D virtual H&E staining and quantitative analysis of pathological tissue. 
The virtually stained images enabled 3D reconstruction of key pathological features such as cell nuclei and glandular lumens. Based on this, various quantitative indicators, including cell nuclear distribution, volume, and surface area, could be extracted. >

This research, with Juyeon Park, a student of the Integrated Master’s and Ph.D. Program at KAIST, as the first author, was published online in the prestigious journal Nature Communications on May 22.
(Paper title: Revealing 3D microanatomical structures of unlabeled thick cancer tissues using holotomography and virtual H&E staining, https://doi.org/10.1038/s41467-025-59820-0)

This study was supported by the Leader Researcher Program of the National Research Foundation of Korea, the Global Industry Technology Cooperation Center Project of the Korea Institute for Advancement of Technology, and the Korea Health Industry Development Institute.
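As a rough sketch of how a trained virtual-staining network might be applied to a holotomography volume (slice-by-slice image-to-image inference from refractive index to virtual H&E color), see below; the model interface, per-slice normalization, and output format are assumptions, not the authors' pipeline.

```python
# Illustrative slice-by-slice inference: refractive-index volume -> virtual H&E RGB volume.
import numpy as np
import torch

def virtual_stain(ri_volume: np.ndarray, model: torch.nn.Module) -> np.ndarray:
    """ri_volume: (Z, H, W) refractive-index stack; returns a (Z, H, W, 3) virtual H&E volume."""
    model.eval()
    slices = []
    with torch.no_grad():
        for z in range(ri_volume.shape[0]):
            x = torch.from_numpy(ri_volume[z]).float()[None, None]   # (1, 1, H, W)
            x = (x - x.mean()) / (x.std() + 1e-6)                    # simple per-slice normalization
            rgb = model(x).clamp(0, 1)[0].permute(1, 2, 0).numpy()   # assumes a (1, 3, H, W) output
            slices.append(rgb)
    return np.stack(slices)
```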
2025.05.26
“For the First Time, We Shared a Meaningful Exchange”: KAIST Develops an AI App to Help Parents and Minimally Verbal Autistic Children Connect
• KAIST teamed up with NAVER AI Lab and the Dodakim Child Development Center to develop ‘AACessTalk’, an AI-driven communication tool bridging the gap between children with autism and their parents.
• The project earned the prestigious Best Paper Award at ACM CHI 2025, the premier international conference in human-computer interaction.
• Families share heartwarming stories of breakthrough communication and newfound understanding.

< Photo 1. (From left) Professor Hwajung Hong and doctoral candidate Dasom Choi of the Department of Industrial Design with SoHyun Park and Young-Ho Kim of NAVER AI Lab >

For many families of minimally verbal autistic (MVA) children, communication often feels like an uphill battle. But now, thanks to a new AI-powered app developed by researchers at KAIST in collaboration with NAVER AI Lab and the Dodakim Child Development Center, parents are finally experiencing moments of genuine connection with their children.

On the 16th, the KAIST (President Kwang Hyung Lee) research team, led by Professor Hwajung Hong of the Department of Industrial Design, announced the development of ‘AACessTalk’, an artificial intelligence (AI)-based communication tool that enables genuine communication between children with autism and their parents. This research was recognized for its human-centered AI approach and received international attention, earning the Best Paper Award at ACM CHI 2025*, an international conference held in Yokohama, Japan.
*ACM CHI (ACM Conference on Human Factors in Computing Systems) 2025: One of the world's most prestigious academic conferences in the field of Human-Computer Interaction (HCI). This year, approximately 1,200 papers were selected out of about 5,000 submissions, with the Best Paper Award given to only the top 1%. The conference, which drew over 5,000 researchers, was the largest in its history, reflecting the growing interest in ‘Human-AI Interaction.’

AACessTalk offers personalized vocabulary cards tailored to each child’s interests and context, while guiding parents through conversations with customized prompts. This creates a space where children’s voices can finally be heard—and where parents and children can connect on a deeper level. Traditional augmentative and alternative communication (AAC) tools have relied heavily on fixed card systems that often fail to capture the subtle emotions and shifting interests of children with autism. AACessTalk breaks new ground by integrating AI technology that adapts in real time to the child’s mood and environment.

< Figure. Schematics of the AACessTalk system. It provides personalized vocabulary cards for children with autism and context-based conversation guides for parents to focus on practical communication. A large ‘Turn Pass Button’ is placed on the child’s side to allow the child to lead the conversation. >

Among its standout features is a large ‘Turn Pass Button’ that gives children control over when to start or end conversations—allowing them to lead with agency. Another feature, the “What about Mom/Dad?” button, encourages children to ask about their parents’ thoughts, fostering mutual engagement in dialogue, something many children had never done before. One parent shared, “For the first time, we shared a meaningful exchange.” Such stories were common among the 11 families who participated in a two-week pilot study, where children used the app to take more initiative in conversations and parents discovered new layers of their children’s language abilities.
Parents also reported moments of surprise and joy when their children used unexpected words or took the lead in conversations, breaking free from repetitive patterns. “I was amazed when my child used a word I hadn’t heard before. It helped me understand them in a whole new way,” recalled one caregiver.

Professor Hwajung Hong, who led the research at KAIST’s Department of Industrial Design, emphasized the importance of empowering children to express their own voices. “This study shows that AI can be more than a communication aid—it can be a bridge to genuine connection and understanding within families,” she said. Looking ahead, the team plans to refine and expand human-centered AI technologies that honor neurodiversity, with a focus on bringing practical solutions to socially vulnerable groups and enriching user experiences.

This research is the result of KAIST Department of Industrial Design doctoral student Dasom Choi's internship at NAVER AI Lab.
* Thesis Title: AACessTalk: Fostering Communication between Minimally Verbal Autistic Children and Parents with Contextual Guidance and Card Recommendation
* DOI: 10.1145/3706598.3713792
* Main Author Information: Dasom Choi (KAIST, NAVER AI Lab, First Author), SoHyun Park (NAVER AI Lab), Kyungah Lee (Dodakim Child Development Center), Hwajung Hong (KAIST), and Young-Ho Kim (NAVER AI Lab, Corresponding Author)

This research was supported by the NAVER AI Lab internship program and grants from the National Research Foundation of Korea: the Doctoral Student Research Encouragement Grant (NRF-2024S1A5B5A19043580) and the Mid-Career Researcher Support Program for the Development of a Generative AI-Based Augmentative and Alternative Communication System for Autism Spectrum Disorder (RS-2024-00458557).
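As an illustrative sketch of how context-aware vocabulary cards can be requested from a large language model, in the spirit of the card recommendation described above, see below; the prompt wording, the input fields, and the call_llm() placeholder are assumptions, not AACessTalk's actual implementation.

```python
# Illustrative only: ask an LLM for the next vocabulary cards given the child's interests
# and the conversation so far.
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for any chat-completion-style LLM call

def recommend_cards(child_interests, dialogue_so_far, n_cards=4):
    prompt = (
        "You help a minimally verbal autistic child reply to a parent.\n"
        f"Child's interests: {', '.join(child_interests)}\n"
        f"Conversation so far: {dialogue_so_far}\n"
        f"Suggest {n_cards} single-word vocabulary cards the child could tap next, "
        "as a JSON list of strings."
    )
    return json.loads(call_llm(prompt))

# e.g. recommend_cards(["trains", "snacks"], "Parent: What do you want to do after lunch?")
```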
2025.05.19
2025 National Strategic Technology Innovation Forum Held - Seeking ROK-U.S. Cooperation
The Future Institute for National Strategic Technology and Policy (FINST&P) at KAIST will host the 'National Strategic Technology* Innovation Forum for 1st half of 2025' on Thursday, May 22, at the Chung Kunmo Conference Hall in the Academic and Culture Building (E9) at the KAIST Main Campus in Daejeon. * National Strategic Technologies: Technologies recognized for their strategic importance in terms of diplomacy and security, with significant impact on the national economy and related industries, and serving as the foundation for future innovation, including the creation of new technologies and industries. Currently, 12 major technologies such as AI, advanced bio, quantum, and semiconductors, and 50 detailed key technologies are being selected and supported (「Special Act on Fostering National Strategic Technologies」). This forum will examine the policy direction for fostering national strategic technologies in South Korea amidst rapidly changing international dynamics, such as escalating conflict between the United States and China and increasing global security uncertainties. Furthermore, it will discuss ways to strengthen technology innovation between South Korea and the United States to secure scientific and technological sovereignty and future growth engines. The forum will feature: △An opening address by KAIST President Kwang-Hyung Lee △Congratulatory remarks by Minister Sang-im Yoo of the Ministry of Science and ICT △A keynote speech by Robert D. Atkinson, President of the Information Technology and Innovation Foundation (ITIF) of the U.S. Subsequently, △Part 1, ‘ROK-U.S. Science and Technology Cooperation,’ will share the latest global trends in national strategic technologies and discuss ROK-U.S. science and technology cooperation under the U.S.-China technology hegemony structure. Following this, △Part 2, ‘ROK-U.S. Cooperation in Key Detailed Technology Fields,’ will analyze R&D trends and current issues focusing on major national strategic technologies, and derive action-oriented policy tasks that South Korea can pursue based on ROK-U.S. cooperation. < National Strategic Technology Innovation Forum Poster > Each session of Part 1 and Part 2 will consist of presentations by domestic and international experts, followed by a comprehensive discussion and Q&A with the audience, promising more in-depth discussions. Robert D. Atkinson, President of the U.S. Information Technology and Innovation Foundation (ITIF), in his keynote speech ‘The Trump 2.0 Era: South Korea's New Growth Strategy,’ suggests that South Korea should shift from its existing export-oriented growth to a new growth strategy based on broad technological innovation, and promote technological innovation by improving "shadow regulations" imposed by social practices. The first presenter in Part 1, Stephen Ezell, Vice President for Global Innovation Policy at ITIF, emphasizes in ‘U.S.-China Conflict: South Korea's Response and Global Implications’ that South Korea must overcome the crisis by improving overall national productivity and fostering a competitive service industry. Following this, Kyungjin Song, Country Representative of The Asia Foundation Korea Office, suggests in ‘Strengthening ROK-U.S. Strategic Technology Partnership Cooperation’ that as global technological hegemony competition changes the diplomatic and security landscape, ROK-U.S. 
cooperation should advance towards an institutional and sustainable cooperation foundation through a multi-layered partnership structure involving both countries' parliaments, industries, academia, and civil society. Jaemin Jung, Dean of the College of Humanities and Social Sciences at KAIST, in ‘The Value of Humanities, Social Sciences, and Arts in the Age of Artificial Intelligence,’ explains the role and importance of the KAIST College of Humanities and Social Sciences in connecting technological innovation with human-centered values, as responsible technological development of artificial intelligence (AI) is difficult without insights into humans, society, and culture, presenting examples through AI joint research projects conducted with MIT. As the first presenter in Part 2, Yong-hee Kim, Director of the Future Institute for National Strategic Technology and Policy (FINST&P) at KAIST, in ‘ROK-U.S. Cooperation for Truly Sustainable Next-Generation Nuclear Power,’ states that many countries or companies are pursuing nuclear power for carbon neutrality and energy security. He suggests that to achieve sustainable nuclear power, three major issues—safety, spent fuel, and uranium resources—need to be resolved, and the molten salt fast reactor (MSFR), an advanced reactor, can be an effective solution.*Molten Salt Fast Reactor (MSFR): A type of Generation IV nuclear reactor that uses molten salt as nuclear fuel and coolant in a fast neutron reactor. Byung Hee Hong, Professor at Seoul National University's Department of Chemistry, predicts in ‘Innovation in Strategic Industries Led by Graphene Mass Production Technology’ that graphene is a ‘dream new material’ that will overcome the limitations of existing technologies. If South Korea succeeds in mass-producing graphene, it will bring tremendous innovation across key industries such as AI semiconductors and sensors, quantum computing, and biomedical. Finally, Hoi-Jun Yoo, Distinguished Professor at the KAIST Graduate School of Artificial Intelligence Semiconductor, in ‘The Present and Future of AI Semiconductors,’ explains that with the full-scale utilization of large-scale AI like ChatGPT, semiconductor design is tending to reorganize from a computation-centric to a memory-centric approach. He then presents the direction and feasibility of mid-to-long-term strategies for the competitive development of Korean AI semiconductors. KAIST President Kwang-Hyung Lee stated the purpose of the event, saying, "As national strategic technology is a core agenda directly linked to our nation's future growth, KAIST will continue to provide a platform for science and technology and policy to communicate, together with domestic and international industry-academia-research institutions." This event is co-hosted with the U.S. think tank Information Technology and Innovation Foundation (ITIF), which has played a leading role in science and technology innovation policy, with the sponsorship of the Ministry of Science and ICT.
2025.05.16
KAIST & CMU Unveil Amuse, a Songwriting AI Collaborator to Help Create Music
Wouldn't it be great if music creators had someone to brainstorm with, help them when they're stuck, and explore different musical directions together? Researchers at KAIST and Carnegie Mellon University (CMU) have developed AI technology that acts like a fellow songwriter who helps create music.

KAIST (President Kwang-Hyung Lee) announced that a research team led by Professor Sung-Ju Lee of the School of Electrical Engineering, in collaboration with CMU, has developed Amuse, an AI-based music creation support system. The research was presented at the ACM Conference on Human Factors in Computing Systems (CHI), one of the world’s top conferences in human-computer interaction, held in Yokohama, Japan from April 26 to May 1. It received the Best Paper Award, given to only the top 1% of all submissions.

< (From left) Professor Chris Donahue of Carnegie Mellon University, Ph.D. student Yewon Kim and Professor Sung-Ju Lee of the School of Electrical Engineering >

Amuse, the system developed by Professor Sung-Ju Lee’s research team, converts various forms of inspiration such as text, images, and audio into harmonic structures (chord progressions) to support composition. For example, if a user inputs a phrase, image, or sound clip such as “memories of a warm summer beach”, Amuse automatically generates and suggests chord progressions that match the inspiration. Unlike existing generative AI, Amuse is differentiated in that it respects the user's creative flow and naturally encourages creative exploration through an interactive method that allows flexible integration and modification of AI suggestions.

The core technology of the Amuse system is a generation method that blends two approaches: a large language model creates chords based on the user's prompt and inspiration, while another AI model, trained on real music data, filters out awkward or unnatural results through rejection sampling.

< Figure 1. Amuse system configuration. After extracting music keywords from the user's input, a chord progression is generated with a large language model and refined through rejection sampling (left). Chord extraction from audio input is also possible (right). The bottom is an example visualizing the structure of a generated chord progression. >

The research team conducted a user study with actual musicians and found that Amuse has high potential as a creative companion, or co-creative AI, a concept in which people and AI collaborate, rather than having a generative AI simply put together a song. The paper, co-authored by Ph.D. student Yewon Kim and Professor Sung-Ju Lee of the KAIST School of Electrical Engineering and Professor Chris Donahue of Carnegie Mellon University, demonstrated the potential of creative AI system design to both academia and industry.

※ Paper title: Amuse: Human-AI Collaborative Songwriting with Multimodal Inspirations
DOI: https://doi.org/10.1145/3706598.3713818
※ Research demo video: https://youtu.be/udilkRSnftI?si=FNXccC9EjxHOCrm1
※ Research homepage: https://nmsl.kaist.ac.kr/projects/amuse/

Professor Sung-Ju Lee said, “Recent generative AI technology has raised concerns in that it directly imitates copyrighted content, thereby violating the copyright of the creator, or generating results one-way regardless of the creator’s intention.
Accordingly, the research team was aware of this trend, paid attention to what the creator actually needs, and focused on designing an AI system centered on the creator.” He continued, “Amuse is an attempt to explore the possibility of collaboration with AI while maintaining the initiative of the creator, and is expected to be a starting point for suggesting a more creator-friendly direction in the development of music creation tools and generative AI systems in the future.” This research was conducted with the support of the National Research Foundation of Korea with funding from the government (Ministry of Science and ICT). (RS-2024-00337007)
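As a minimal sketch of the generate-then-filter idea described above (a large language model proposes chord progressions and a model trained on real music rejects unnatural ones), see below; both model calls and the acceptance threshold are placeholders, not Amuse's actual implementation.

```python
# Rejection-sampling sketch: propose chords with an LLM, keep only progressions the
# music-trained scorer considers natural.
def llm_generate_chords(inspiration: str) -> list[str]:
    raise NotImplementedError  # e.g. "memories of a warm summer beach" -> ["Cmaj7", "Am7", ...]

def music_model_score(progression: list[str]) -> float:
    raise NotImplementedError  # likelihood under a model trained on real music data

def suggest_progression(inspiration: str, threshold: float = 0.5, max_tries: int = 20):
    for _ in range(max_tries):
        progression = llm_generate_chords(inspiration)        # propose from the LLM
        if music_model_score(progression) >= threshold:       # accept only natural-sounding ones
            return progression
    return None  # fall back to asking the user to refine the inspiration
```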
2025.05.07
KAIST Sends Music and Bio-Signals of Professor Kwon Ji-yong, a.k.a. G-Dragon, into Space to Pulsate through the Universe and Resonate among the Stars
KAIST (President Kwang-Hyung Lee) announced on the 10th of April that it successfully promoted the world’s first ‘Space Sound Source Transmission Project’ based on media art at the KAIST Space Research Institute on April 9th through collaboration between Professor Jinjoon Lee of the Graduate School of Culture Technology, a world-renowned media artist, and the global K-Pop artist, G-Dragon. This project was proposed as part of the ‘AI Entertech Research Center’ being promoted by KAIST and Galaxy Corporation. It is a project to transmit the message and sound of G-Dragon (real name, Kwon Ji-yong), a singer/song writer affiliated with Galaxy Corporation and a visiting professor in the Department of Mechanical Engineering at KAIST, to space for the first time in the world. This is a convergence project that combines science, technology, art, and popular music, and is a new form of ‘space culture content’ experiment that connects KAIST’s cutting-edge space technology, Professor Jinjoon Lee’s media art work, and G-Dragon’s voice and sound source containing his latest digital single, "HOME SWEET HOME". < Photo 1. Professor Jinjoon Lee's Open Your Eyes Project "Iris"'s imagery projected on the 13m space antenna at the Space Research Institute > This collaboration was planned with the theme of ‘emotional signals that expand the inner universe of humans to the outer universe.’ The image of G-Dragon’s iris was augmented through AI as a window into soul symbolizing his uniqueness and identity, and the new song “Home Sweet Home” was combined as an audio message containing the vibration of that emotion. This was actually transmitted into space using a next-generation small satellite developed by KAIST Space Research Institute, completing a symbolic performance in which an individual’s inner universe is transmitted to outer space. Professor Jinjoon Lee’s cinematic media art work “Iris” was unveiled at the site. This work was screened in the world’s first projection mapping method* on KAIST Space Research Institute’s 13m space antenna. This video was created using generative artificial intelligence (AI) technology based on the image of G-Dragon's iris, and combined with sound using the data of the sounds of Emile Bell rings – the bell that holds a thousand years of history, it presented an emotional art experience that transcends time and space. *Projection Mapping: A technology that projects light and images onto actual structures to create visual changes, and is a method of expression that artistically reinterprets space. This work is one of the major research achievements of KAIST TX Lab and Professor Lee based on new media technology based on biometric data such as iris, heartbeat, and brain waves. Professor Jinjoon Lee said, "The iris is a symbol that reflects inner emotions and identity, so much so that it is called the 'mirror of the soul,' and this work sought to express 'the infinite universe seen from the inside of humanity' through G-Dragon's gaze." < Photo 2. (From left) Professor Jinjoon Lee of the Graduate School of Culture Technology and G-Dragon (Visiting Professor Kwon Ji-yong of the Department of Mechanical Engineering) > He continued, "The universe is a realm of technology as well as a stage for imagination and emotion, and I look forward to an encounter with the unknown through a new attempt to speak of art in the language of science including AI and imagine science in the form of art." 
“G-Dragon’s voice and music have now begun their journey to space,” said Yong-ho Choi, Galaxy Corporation’s Chief Happiness Officer (CHO). “This project is an act of leaving music as a legacy for humanity, while also carrying the important meaning of attempting to communicate with space.” He added, “This is a pioneering step to introduce human culture to space, and it will remain a monumental performance that opens a new chapter in the history of music, comparable to the Beatles.”
Galaxy Corporation is leading the future entertainment technology industry through its collaboration with KAIST and was recently the only entertainment technology company invited to a private meeting with Microsoft CEO Satya Nadella. It is promoting the globalization of AI entertainment technology and has been praised as a “pioneer of imagination” for new forms of AI entertainment content, including AI content recreating the deceased.
< Photo 3. Photo of G-Dragon's Home Sweet Home being sent into space via Professor Jinjoon Lee's Space Sound Source Transmission Project >
Through this project, the KAIST Space Research Institute presented new possibilities for utilizing satellite technology and demonstrated a model for how science can connect with society in a more accessible way. KAIST President Kwang-Hyung Lee said, “KAIST is a place that always supports new imaginations and challenges,” and added, “We will continue to pursue creative research that no one has thought of before, like this project that combines science, technology, and art.”
Meanwhile, Galaxy Corporation, the agency of Professor Kwon Ji-yong (G-Dragon), is an AI entertainment company that presents a new paradigm based on the convergence of IP, media, technology, and entertainment.
2025.04.10
KAIST, Galaxy Corporation Hold Signboard Ceremony for ‘AI Entertech Research Center’
KAIST (President Kwang-Hyung Lee) announced that it held a signboard ceremony on the 9th for the establishment of the ‘AI Entertech Research Center’ with the artificial intelligence entertech company Galaxy Corporation (CEO Yong-ho Choi) at the KAIST main campus.
< (Galaxy Corporation, from center to the left) CEO Yongho Choi, Director Hyunjung Kim and related persons / (KAIST, from center to the right) Professor SeungSeob Lee of the Department of Mechanical Engineering, Provost and Executive Vice President Gyun Min Lee, Dean Jung Kim of the Department of Mechanical Engineering and Professor Yong Jin Yoon of the same department >
The collaboration is part of KAIST’s art convergence research strategy and an extension of its efforts to lead future K-Culture through the development of creative cultural content based on science and technology. Beyond simple technological development, KAIST has been continuously implementing a ‘Tech-Art’ convergence model that expands the horizon of the content industry through the fusion of emotional technology and cultural imagination. Previously, KAIST established the ‘Sumi Jo Performing Arts Research Center’ in collaboration with world-renowned soprano and visiting professor Sumi Jo, leading convergence research between art and engineering in areas such as AI-based interactive performance technology and immersive content. The establishment of the ‘AI Entertech Research Center’ is seen as a new challenge for the technological expansion of the K-content industry.
The role of singer G-Dragon (real name Kwon Ji-yong), an artist affiliated with Galaxy Corporation and a visiting professor in the Department of Mechanical Engineering at KAIST, was also a major factor. Since being appointed to KAIST last year, Professor Kwon has been actively supporting the establishment of the research center and commissioning KAIST research projects through his agency to develop ‘AI Entertech,’ a field that fuses entertainment with cutting-edge technology.
The AI Entertech Research Center is scheduled to officially launch in the third quarter of this year, and this signboard ceremony was held to coincide with Professor Kwon Ji-yong’s visit to KAIST. Galaxy Corporation, which recently held a private meeting with Microsoft CEO Satya Nadella as the only entertech company present, is actively promoting the globalization of AI entertech. Having established a cooperative relationship with KAIST last year, it plans to pursue, through the research center, a convergence of entertainment and technology that transcends time and space.
Professor Kwon Ji-yong will attend the ‘Innovate Korea 2025’ event co-hosted by KAIST, Herald Media Group, and the National Research Council of Science and Technology, held at the KAIST Lyu Keun-Chul Sports Complex on the afternoon of the same day, and will give a special talk on ‘The Future of AI Entertech.’ In addition to Professor Kwon, Professor SeungSeob Lee of the Department of Mechanical Engineering at KAIST, Professor Sang-gyun Kim of Kyunghee University, and CEO Yong-ho Choi of Galaxy Corporation will also participate in the talk show.
The two organizations signed an MOU last year to jointly research science and technology for the global spread of K-pop, and the establishment of this research center is the first tangible result of that agreement. Once the research center is fully operational, various projects will be promoted, such as the development of an AI-based entertech platform and joint research on global content technology.
< A photo of Professor Kwon Ji-yong (right) at a talk show with KAIST President Kwang-Hyung Lee (left) the previous year >
Yong-ho Choi, Galaxy Corporation CHO (Chief Happiness Officer), said, “This collaboration is the starting point for providing a completely new entertainment experience to fans around the world by grafting KAIST’s AI and cutting-edge technologies onto the fandom platform,” and added, “The convergence of AI and entertech is not just technological advancement; it is a driving force for innovation that enriches human life.” KAIST President Kwang-Hyung Lee said, “I am confident that KAIST’s scientific and technological capabilities, combined with Professor Kwon Ji-yong’s global sensibility, will lead the technological evolution of K-culture,” and added, “I hope that KAIST’s spirit of challenge and research DNA will create a new wave in the entertech market.”
Meanwhile, Galaxy Corporation, the agency of Professor Kwon Ji-yong (G-Dragon), is an AI entertainment technology company that presents a new paradigm based on the convergence of IP, media, technology, and entertainment. (End)
2025.04.09