KAIST
NEWS
Artificial Intelligence
'Fingerprint' Machine Learning Technique Identifies Different Bacteria in Seconds
A synergistic combination of surface-enhanced Raman spectroscopy and deep learning serves as an effective platform for separation-free detection of bacteria in arbitrary media

Bacterial identification can take hours and often longer, precious time when diagnosing infections and selecting appropriate treatments. There may be a quicker, more accurate process, according to researchers at KAIST. By teaching a deep learning algorithm to identify the “fingerprint” spectra of the molecular components of various bacteria, the researchers could classify various bacteria in different media with accuracies of up to 98%. Their results were made available online on Jan. 18 in Biosensors and Bioelectronics, ahead of publication in the journal’s April issue.

Bacteria-induced illnesses, those caused by direct bacterial infection or by exposure to bacterial toxins, can induce painful symptoms and even lead to death, so the rapid detection of bacteria is crucial to prevent the intake of contaminated foods and to diagnose infections from clinical samples, such as urine.

“By using surface-enhanced Raman spectroscopy (SERS) analysis boosted with a newly proposed deep learning model, we demonstrated a markedly simple, fast, and effective route to classify the signals of two common bacteria and their resident media without any separation procedures,” said Professor Sungho Jo from the School of Computing.

Raman spectroscopy sends light through a sample to see how it scatters. The results reveal structural information about the sample — the spectral fingerprint — allowing researchers to identify its molecules. The surface-enhanced version places sample cells on noble metal nanostructures that help amplify the sample’s signals. However, it is challenging to obtain consistent and clear spectra of bacteria due to numerous overlapping peak sources, such as proteins in cell walls.
“Moreover, strong signals of surrounding media are also enhanced to overwhelm target signals, requiring time-consuming and tedious bacterial separation steps,” said Professor Yeon Sik Jung from the Department of Materials Science and Engineering. To parse through the noisy signals, the researchers implemented an artificial intelligence method called deep learning that can hierarchically extract certain features of the spectral information to classify data. They specifically designed their model, named the dual-branch wide-kernel network (DualWKNet), to efficiently learn the correlation between spectral features. Such an ability is critical for analyzing one-dimensional spectral data, according to Professor Jo. “Despite having interfering signals or noise from the media, which make the general shapes of different bacterial spectra and their residing media signals look similar, high classification accuracies of bacterial types and their media were achieved,” Professor Jo said, explaining that DualWKNet allowed the team to identify key peaks in each class that were almost indiscernible in individual spectra, enhancing the classification accuracies. “Ultimately, with the use of DualWKNet replacing the bacteria and media separation steps, our method dramatically reduces analysis time.” The researchers plan to use their platform to study more bacteria and media types, using the information to build a training data library of various bacterial types in additional media to reduce the collection and detection times for new samples. “We developed a meaningful universal platform for rapid bacterial detection with the collaboration between SERS and deep learning,” Professor Jo said. 
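The dual-branch, wide-kernel idea described above can be illustrated with a toy sketch. This is a hypothetical illustration, not the authors' DualWKNet: two parallel 1-D convolution branches with different kernel widths pick up sharp peaks and broad baseline structure in a spectrum, and their pooled outputs are fused into a feature vector a classifier could consume. The kernels, their weights, and the toy spectrum are all made-up values.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (correlation) over a spectrum."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def global_avg_pool(features):
    return sum(features) / len(features)

def dual_branch_features(spectrum, narrow_kernel, wide_kernel):
    """Fuse pooled features from a narrow- and a wide-kernel branch."""
    narrow = conv1d(spectrum, narrow_kernel)
    wide = conv1d(spectrum, wide_kernel)
    return [global_avg_pool(narrow), global_avg_pool(wide)]

# Toy spectrum: a sharp peak sitting on a broad, flat baseline.
spectrum = [0.1] * 20 + [1.0, 3.0, 1.0] + [0.1] * 20
narrow_kernel = [0.25, 0.5, 0.25]   # responds to sharp peaks
wide_kernel = [1.0 / 9.0] * 9       # averages broad background
feats = dual_branch_features(spectrum, narrow_kernel, wide_kernel)
print(feats)
```

In a real network the two branches would of course be learned, many-channel convolution layers; the point here is only that different kernel widths expose different spectral correlations for the classifier to fuse.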
“We hope to extend the use of our deep learning-based SERS analysis platform to detect numerous types of bacteria in additional media that are important for food or clinical analysis, such as blood.” The National R&D Program, through a National Research Foundation of Korea grant funded by the Ministry of Science and ICT, supported this research.

-Publication
Eojin Rho, Minjoon Kim, Seunghee H. Cho, Bongjae Choi, Hyungjoon Park, Hanhwi Jang, Yeon Sik Jung, Sungho Jo, “Separation-free bacterial identification in arbitrary media via deep neural network-based SERS analysis,” Biosensors and Bioelectronics, online January 18, 2022 (doi.org/10.1016/j.bios.2022.113991)

-Profile
Professor Yeon Sik Jung
Department of Materials Science and Engineering
KAIST

Professor Sungho Jo
School of Computing
KAIST
2022.03.04
View 20332
SM CEP Soo-Man Lee to Teach at the KAIST School of Computing
The Founder and Chief Executive Producer of SM Entertainment Soo-Man Lee was appointed as a distinguished visiting professor in the KAIST School of Computing. His three-year term starts on March 1.

KAIST and SM Entertainment signed an MOU on joint research on the metaverse last year, and Lee’s appointment is an extension of their mutual collaborations in fields where technologies converge; it is expected to encourage innovative advancements in engineering technology and the entertainment industry.

Lee, who completed a graduate program in computer science at California State University, Northridge, will give special leadership lectures for both undergraduate and graduate students, and will participate in metaverse-related research as a consultant. In particular, Professor Lee will participate in joint research with the tentatively named Metaverse Institute affiliated with the KAIST Institute for Artificial Intelligence. The institute will help SM Entertainment stay ahead of the global metaverse market by using the avatars of celebrities, and lend itself to raising the already strong brand power of the K-pop leader.

Professor Lee said, “I am grateful that KAIST, the very cradle of Korea’s science and technology, has given me the opportunity to meet its students as a visiting professor. We will lead the metaverse world, in which Korea is emerging as a market leader, with the excellent content and technology unique to our country, and work together to lead the future global entertainment market.”

President Kwang-Hyung Lee said, “The ability to expand our limitless creativity in the metaverse is indispensable for us as we adapt to this new era.
We hope that the vision and creative insights of Executive Producer Lee, which have allowed him to look ahead into the future of the entertainment content market, will have a positive and fresh impact on the members of KAIST.”

The global influence and reputation of Executive Producer Lee have been well established through his various awards. He was the first Korean to be listed on the Variety500 for five consecutive years from 2017 to 2021. He was also the first Korean awardee of the Asia Game Changer Awards in 2016, the first cultural figure to receive the YoungSan Diplomacy Award in 2017, and the only Korean to be listed on the 2020 Billboard Impact List, and he has received the K-pop Contribution Award at the 10th Gaon Chart Music Awards. He recently introduced Play2Create (P2C), a new interactive and creative culture in which re-creation can be enjoyed like a game using IP, and is leading the establishment of the P2C ecosystem.
2022.03.03
View 5322
AI Light-Field Camera Reads 3D Facial Expressions
Machine-learned, light-field camera reads facial expressions from high-contrast illumination invariant 3D facial images

A joint research team led by Professors Ki-Hun Jeong and Doheon Lee from the KAIST Department of Bio and Brain Engineering reported the development of a technique for facial expression detection by merging near-infrared light-field camera techniques with artificial intelligence (AI) technology.

Unlike a conventional camera, the light-field camera contains micro-lens arrays in front of the image sensor, which makes the camera small enough to fit into a smartphone, while allowing it to acquire the spatial and directional information of the light with a single shot. The technique has received attention as it can reconstruct images in a variety of ways including multi-views, refocusing, and 3D image acquisition, giving rise to many potential applications. However, the optical crosstalk between shadows caused by external light sources in the environment and the micro-lens has limited existing light-field cameras from providing accurate image contrast and 3D reconstruction.

The joint research team applied a vertical-cavity surface-emitting laser (VCSEL) in the near-IR range to stabilize the accuracy of 3D image reconstruction, which previously depended on environmental light. When an external light source was shone on a face at 0-, 30-, and 60-degree angles, the light-field camera reduced image reconstruction errors by 54%. Additionally, by inserting a light-absorbing layer for visible and near-IR wavelengths between the micro-lens arrays, the team could minimize optical crosstalk while increasing the image contrast by 2.1 times. Through this technique, the team could overcome the limitations of existing light-field cameras and was able to develop their NIR-based light-field camera (NIR-LFC), optimized for the 3D image reconstruction of facial expressions.
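The refocusing capability that light-field cameras provide can be sketched with the classic shift-and-add algorithm for light-field data. This is a generic textbook technique, not the team's NIR-LFC pipeline; the toy 1-D light field, the disparity value, and the `refocus` helper are all illustrative assumptions. Each sub-aperture view sees a point displaced in proportion to its angular position, so shifting each view by that disparity and averaging brings the corresponding depth plane into focus.

```python
def refocus(views, disparity):
    """views: dict {u: 1-D image row}; shift each view by u*disparity
    and average, focusing on the plane at that disparity."""
    width = len(next(iter(views.values())))
    out = [0.0] * width
    for u, row in views.items():
        shift = int(round(u * disparity))
        for x in range(width):
            src = x + shift
            if 0 <= src < width:
                out[x] += row[src]
    return [v / len(views) for v in out]

# Toy 1-D light field: a point at x=10 observed from 3 angles,
# displaced by 2 pixels per unit of angular coordinate u.
views = {}
for u in (-1, 0, 1):
    row = [0.0] * 21
    row[10 + 2 * u] = 1.0
    views[u] = row

focused = refocus(views, 2.0)    # shifts match the true disparity
defocused = refocus(views, 0.0)  # no shift: the point is smeared
assert max(focused) > max(defocused)
```

With the correct disparity all three views align and the point recovers its full intensity; with the wrong one it is spread across three positions, which is what "refocusing after the shot" means in practice.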
Using the NIR-LFC, the team acquired high-quality 3D reconstruction images of facial expressions expressing various emotions regardless of the lighting conditions of the surrounding environment. The facial expressions in the acquired 3D images were distinguished through machine learning with an average of 85% accuracy – a statistically significant figure compared to when 2D images were used. Furthermore, by calculating the interdependency of distance information that varies with facial expression in 3D images, the team could identify the information a light-field camera utilizes to distinguish human expressions.

Professor Ki-Hun Jeong said, “The sub-miniature light-field camera developed by the research team has the potential to become the new platform to quantitatively analyze the facial expressions and emotions of humans.” To highlight the significance of this research, he added, “It could be applied in various fields including mobile healthcare, field diagnosis, social cognition, and human-machine interactions.”

This research was published in Advanced Intelligent Systems online on December 16, under the title, “Machine-Learned Light-field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images.” This research was funded by the Ministry of Science and ICT and the Ministry of Trade, Industry and Energy.

-Publication
“Machine-learned light-field camera that reads facial expression from high-contrast and illumination invariant 3D facial images,” Sang-In Bae, Sangyeon Lee, Jae-Myeong Kwon, Hyun-Kyung Kim, Kyung-Won Jang, Doheon Lee, Ki-Hun Jeong, Advanced Intelligent Systems, December 16, 2021 (doi.org/10.1002/aisy.202100182)

-Profile
Professor Ki-Hun Jeong
Biophotonic Laboratory
Department of Bio and Brain Engineering
KAIST

Professor Doheon Lee
Department of Bio and Brain Engineering
KAIST
2022.01.21
View 10721
KAIST ISPI Releases Report on the Global AI Innovation Landscape
Providing key insights for building a successful AI ecosystem

The KAIST Innovation Strategy and Policy Institute (ISPI) has launched a report on the global innovation landscape of artificial intelligence in collaboration with Clarivate Plc. The report shows that AI has become a key technology and that cross-industry learning is an important AI innovation. It also stresses that the quality of innovation, not volume, is a critical success factor in technological competitiveness.

Key findings of the report include:

• Neural networks and machine learning have been unrivaled in terms of scale and growth (more than 46%), and most other AI technologies show a growth rate of more than 20%.

• Although Mainland China has shown the highest growth rate in terms of AI inventions, the influence of Chinese AI is relatively low. In contrast, the United States holds a leading position in AI-related inventions in terms of both quantity and influence.

• The U.S. and Canada have built an industry-oriented AI technology development ecosystem through organic cooperation with both academia and the government. Mainland China and South Korea, by contrast, have a government-driven AI technology development ecosystem with relatively low qualitative outputs from the sector.

• The U.S., the U.K., and Canada have a relatively high proportion of inventions in robotics and autonomous control, whereas in Mainland China and South Korea, machine learning and neural networks are making progress. Each country/region produces high-quality inventions in their predominant AI fields, while the U.S. has produced high-impact inventions in almost all AI fields.

“The driving forces in building a sustainable AI innovation ecosystem are important national strategies. A country’s future AI capabilities will be determined by how quickly and robustly it develops its own AI ecosystem and how well it transforms the existing industry with AI technologies.
Countries that build a successful AI ecosystem have the potential to accelerate growth while absorbing the AI capabilities of other countries. AI talents are already moving to countries with excellent AI ecosystems,” said Director of the ISPI Wonjoon Kim. “AI, together with other high-tech IT technologies including big data and the Internet of Things are accelerating the digital transformation by leading an intelligent hyper-connected society and enabling the convergence of technology and business. With the rapid growth of AI innovation, AI applications are also expanding in various ways across industries and in our lives,” added Justin Kim, Special Advisor at the ISPI and a co-author of the report.
2021.12.21
View 6996
Brain-Inspired Highly Scalable Neuromorphic Hardware Presented
Neurons and synapses based on a single transistor can dramatically reduce the hardware cost and accelerate the commercialization of neuromorphic hardware

KAIST researchers fabricated brain-inspired, highly scalable neuromorphic hardware by co-integrating single-transistor neurons and synapses. Using standard silicon complementary metal-oxide-semiconductor (CMOS) technology, the neuromorphic hardware is expected to reduce chip cost and simplify fabrication procedures.

The research team led by Yang-Kyu Choi and Sung-Yool Choi produced neurons and synapses based on a single transistor for highly scalable neuromorphic hardware and showed the ability to recognize text and face images. This research was featured in Science Advances on August 4.

Neuromorphic hardware has attracted a great deal of attention because it can perform artificial intelligence functions while consuming ultra-low power of less than 20 watts by mimicking the human brain. To make neuromorphic hardware work, a neuron that generates a spike when integrating a certain signal, and a synapse remembering the connection between two neurons are necessary, just like in the biological brain. However, since neurons and synapses constructed on digital or analog circuits occupy a large space, there is a limit in terms of hardware efficiency and costs. Since the human brain consists of about 10^11 neurons and 10^14 synapses, it is necessary to improve the hardware cost in order to apply it to mobile and IoT devices.

To solve the problem, the research team mimicked the behavior of biological neurons and synapses with a single transistor, and co-integrated them onto an 8-inch wafer. The manufactured neuromorphic transistors have the same structure as the transistors for memory and logic that are currently mass-produced. In addition, the neuromorphic transistors proved for the first time that they can be implemented with a ‘Janus structure’ that functions as both neuron and synapse, just like coins have heads and tails.
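The spiking behavior described above, a neuron that integrates incoming signal and fires once a threshold is crossed, can be sketched as a minimal leaky integrate-and-fire model. The leak rate, threshold, and input values below are hypothetical numbers chosen for illustration, not measurements from the single-transistor devices.

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Return a list of 0/1 spikes for a stream of input currents."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = leak * v + i        # leaky integration of the input
        if v >= threshold:
            spikes.append(1)    # threshold crossed: emit a spike
            v = 0.0             # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A weak constant input must accumulate over several steps to spike.
spikes = lif_neuron([0.3] * 10)
print(spikes)  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Pairing such a neuron with a synapse that stores a weight between two neurons gives the two primitives the article says are necessary; the KAIST result is that both primitives fit in one standard CMOS transistor rather than a multi-device circuit.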
Professor Yang-Kyu Choi said that this work can dramatically reduce the hardware cost by replacing the neurons and synapses that were based on complex digital and analog circuits with a single transistor. "We have demonstrated that neurons and synapses can be implemented using a single transistor," said Joon-Kyu Han, the first author. "By co-integrating single-transistor neurons and synapses on the same wafer using a standard CMOS process, the hardware cost of the neuromorphic hardware has been improved, which will accelerate the commercialization of neuromorphic hardware,” Han added.

This research was supported by the National Research Foundation (NRF) and the IC Design Education Center (IDEC).

-Publication
Joon-Kyu Han, Sung-Yool Choi, Yang-Kyu Choi, et al., “Cointegration of single-transistor neurons and synapses by nanoscale CMOS fabrication for highly scalable neuromorphic hardware,” Science Advances (DOI: 10.1126/sciadv.abg8836)

-Profile
Professor Yang-Kyu Choi
Nano-Oriented Bio-Electronics Lab
https://sites.google.com/view/nobelab/
School of Electrical Engineering
KAIST

Professor Sung-Yool Choi
Molecular and Nano Device Laboratory
https://www.mndl.kaist.ac.kr/
School of Electrical Engineering
KAIST
2021.08.05
View 9449
Prof. Sang Wan Lee Selected for 2021 IBM Academic Award
Professor Sang Wan Lee from the Department of Bio and Brain Engineering was selected as the recipient of the 2021 IBM Global University Program Academic Award. The award recognizes individual faculty members whose emerging science and technology contains significant interest for universities and IBM.

Professor Lee, whose research focuses on artificial intelligence and computational neuroscience, won the award for his research proposal titled A Neuroscience-Inspired Approach for Metacognitive Reinforcement Learning. IBM provides a gift of $40,000 to the recipient’s institution in recognition of the selection of the project, but not as a contract for services.

Professor Lee’s project aims to exploit the unique characteristics of human reinforcement learning. Specifically, he plans to examine the hypothesis that metacognition, a human’s ability to estimate their uncertainty level, serves to guide sample-efficient and near-optimal exploration, making it possible to achieve an optimal balance between model-based and model-free reinforcement learning.

He was also selected as the winner of the Google Research Award in 2016 and has been working with DeepMind and University College London to conduct basic research on the brain science of decision-making to establish a theory of frontal lobe meta-reinforcement learning. "We plan to conduct joint research for utilizing brain-based artificial intelligence technology and frontal lobe meta-reinforcement learning technology modeling in collaboration with an international research team including IBM, DeepMind, MIT, and Oxford,” Professor Lee said.
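As a rough illustration of the balance the proposal describes, the sketch below blends model-based and model-free action values in inverse proportion to each system's current uncertainty. The weighting rule, the `arbitrate` helper, and all numbers are hypothetical; they illustrate the general concept of uncertainty-guided arbitration, not Professor Lee's actual model.

```python
def arbitrate(q_mb, q_mf, unc_mb, unc_mf):
    """Blend action values from a model-based (mb) and a model-free
    (mf) system, weighting the less uncertain system more heavily."""
    w_mb = (1.0 / unc_mb) / (1.0 / unc_mb + 1.0 / unc_mf)
    return [w_mb * a + (1.0 - w_mb) * b for a, b in zip(q_mb, q_mf)]

# When the model-based system is far more certain (low uncertainty),
# the blended values track its estimates.
q = arbitrate(q_mb=[1.0, 0.0], q_mf=[0.0, 1.0], unc_mb=0.1, unc_mf=0.9)
assert q[0] > q[1]
```

The interesting scientific question, which the sketch sidesteps, is how the brain estimates those uncertainty terms in the first place; that is the metacognitive signal the proposal targets.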
2021.06.25
View 10332
Research Day Highlights the Most Impactful Technologies of the Year
Technology Converting Full HD Image to 4-Times Higher UHD Via Deep Learning Cited as the Research of the Year

The technology converting a full HD image into a four-times higher UHD image in real time via AI deep learning was recognized as the Research of the Year. Professor Munchurl Kim from the School of Electrical Engineering, who developed the technology, won the Research of the Year Grand Prize during the 2021 KAIST Research Day ceremony on May 25. Professor Kim was lauded for conducting creative research on machine learning and deep learning-based image processing.

KAIST’s Research Day recognizes the most notable research outcomes of the year, while creating opportunities for researchers to immerse themselves in interdisciplinary research projects with their peers. The ceremony was broadcast online due to Covid-19 and announced the Ten R&D Achievements of the Year that are expected to make a significant impact.

To celebrate the award, Professor Kim gave a lecture on “Computational Imaging through Deep Learning for the Acquisition of High-Quality Images.” Focusing on the fact that advancements in artificial intelligence technology can show superior performance when used to convert low-quality videos to higher quality, he introduced some of the AI technologies that are currently being applied in the field of image restoration and quality improvement.

Professors Eui-Cheol Shin from the Graduate School of Medical Science and Engineering and In-Cheol Park from the School of Electrical Engineering each received Research Awards, and Professor Junyong Noh from the Graduate School of Culture Technology was selected for the Innovation Award. Professors Dong Ki Yoon from the Department of Chemistry and Hyungki Kim from the Department of Mechanical Engineering were awarded the Interdisciplinary Award as a team for their joint research.
Meanwhile, out of KAIST’s ten most notable R&D achievements, those from the field of natural and biological sciences included research on rare earth element-platinum nanoparticle catalysts by Professor Ryong Ryoo from the Department of Chemistry, real-time observations of the locational changes in all of the atoms in a molecule by Professor Hyotcherl Ihee from the Department of Chemistry, and an investigation on memory retention mechanisms after synapse removal by astrocytes by Professor Won-Suk Chung from the Department of Biological Sciences.

Awardees from the engineering field were a wearable robot for paraplegics with the world’s best functionality and walking speed by Professor Kyoungchul Kong from the Department of Mechanical Engineering, fair machine learning by Professor Changho Suh from the School of Electrical Engineering, and a generative adversarial networks processing unit (GANPU), an AI semiconductor that can learn even on mobile devices by processing multiple deep networks, by Professor Hoi-Jun Yoo from the School of Electrical Engineering.

Others selected as part of the ten research studies were the development of epigenetic reprogramming technology in tumors by Professor Pilnam Kim from the Department of Bio and Brain Engineering, the development of an original technology for reverse cell aging by Professor Kwang-Hyun Cho from the Department of Bio and Brain Engineering, a heterogeneous metal element catalyst for atmospheric purification by Professor Hyunjoo Lee from the Department of Chemical and Biomolecular Engineering, and the Mobile Clinic Module (MCM), a negative pressure ward for epidemic hospitals (reported in The Wall Street Journal), by Professor Taek-jin Nam from the Department of Industrial Design.
2021.05.31
View 13884
Streamlining the Process of Materials Discovery
The materials platform M3I3 reduces the time for materials discovery by reverse engineering future materials using multiscale/multimodal imaging and machine learning of the processing-structure-properties relationship

Developing new materials and novel processes has continued to change the world. The M3I3 Initiative at KAIST has led to new insights into advancing materials development by implementing breakthroughs in materials imaging that have created a paradigm shift in the discovery of materials. The Initiative features the multiscale modeling and imaging of structure and property relationships and materials hierarchies combined with the latest material-processing data.

The research team led by Professor Seungbum Hong analyzed the materials research projects reported by leading global institutes and research groups, and derived a quantitative model using machine learning with a scientific interpretation. This process embodies the research goal of the M3I3: Materials and Molecular Modeling, Imaging, Informatics and Integration.

The researchers discussed the role of multiscale materials and molecular imaging combined with machine learning and also presented a future outlook for developments and the major challenges of M3I3. By building this model, the research team envisions creating desired sets of properties for materials and obtaining the optimum processing recipes to synthesize them.

“The development of various microscopy and diffraction tools with the ability to map the structure, property, and performance of materials at multiscale levels and in real time enabled us to think that materials imaging could radically accelerate materials discovery and development,” says Professor Hong.
“We plan to build an M3I3 repository of searchable structural and property maps using FAIR (Findable, Accessible, Interoperable, and Reusable) principles to standardize best practices as well as streamline the training of early career researchers.” One of the examples that shows the power of structure-property imaging at the nanoscale is the development of future materials for emerging nonvolatile memory devices. Specifically, the research team focused on microscopy using photons, electrons, and physical probes on the multiscale structural hierarchy, as well as structure-property relationships to enhance the performance of memory devices. “M3I3 is an algorithm for performing the reverse engineering of future materials. Reverse engineering starts by analyzing the structure and composition of cutting-edge materials or products. Once the research team determines the performance of our targeted future materials, we need to know the candidate structures and compositions for producing the future materials.” The research team has built a data-driven experimental design based on traditional NCM (nickel, cobalt, and manganese) cathode materials. With this, the research team expanded their future direction for achieving even higher discharge capacity, which can be realized via Li-rich cathodes. However, one of the major challenges was the limitation of available data that describes the Li-rich cathode properties. To mitigate this problem, the researchers proposed two solutions: First, they should build a machine-learning-guided data generator for data augmentation. Second, they would use a machine-learning method based on ‘transfer learning.’ Since the NCM cathode database shares a common feature with a Li-rich cathode, one could consider repurposing the NCM trained model for assisting the Li-rich prediction. With the pretrained model and transfer learning, the team expects to achieve outstanding predictions for Li-rich cathodes even with the small data set. 
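The transfer-learning strategy described above can be sketched in miniature: pretrain a model on a plentiful source dataset (standing in for the NCM cathode database), then fine-tune the resulting weights on a few target points (standing in for Li-rich cathodes). The linear model, the `fit` helper, and the synthetic data below are illustrative assumptions, not the team's actual models or measurements.

```python
def fit(xs, ys, w=0.0, b=0.0, lr=0.01, epochs=200):
    """1-D linear regression by per-sample gradient descent,
    starting from the given initial weights."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# "Source" task: plentiful, clean data from y = 2x + 1.
src_x = [i / 10.0 for i in range(50)]
src_y = [2.0 * x + 1.0 for x in src_x]
w0, b0 = fit(src_x, src_y)

# "Target" task: only three points from a related trend, y = 2.2x + 1.
tgt_x = [0.5, 1.0, 1.5]
tgt_y = [2.2 * x + 1.0 for x in tgt_x]

# Fine-tuning from the pretrained weights starts near the target
# trend, so a few epochs on the small dataset suffice.
w_ft, b_ft = fit(tgt_x, tgt_y, w=w0, b=b0, epochs=20)
```

The design point mirrors the article's argument: because the source and target tasks share structure, the pretrained weights are a far better starting point than random initialization, which is what makes a small Li-rich dataset workable.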
With advances in experimental imaging and the availability of well-resolved information and big data, along with significant advances in high-performance computing and a worldwide thrust toward a general, collaborative, integrative, and on-demand research platform, there is a clear confluence in the required capabilities of advancing the M3I3 Initiative.

Professor Hong said, “Once we succeed in using the inverse “property−structure−processing” solver to develop cathode, anode, electrolyte, and membrane materials for high energy density Li-ion batteries, we will expand our scope of materials to battery/fuel cells, aerospace, automobiles, food, medicine, and cosmetic materials.”

The review was published in ACS Nano in March. This study was conducted through collaborations with Dr. Chi Hao Liow, Professor Jong Min Yuk, Professor Hye Ryung Byon, Professor Yongsoo Yang, Professor EunAe Cho, Professor Pyuck-Pa Choi, and Professor Hyuck Mo Lee at KAIST, Professor Joshua C. Agar at Lehigh University, Dr. Sergei V. Kalinin at Oak Ridge National Laboratory, Professor Peter W. Voorhees at Northwestern University, and Professor Peter Littlewood at the University of Chicago (Article title: Reducing Time to Discovery: Materials and Molecular Modeling, Imaging, Informatics, and Integration). This work was supported by the KAIST Global Singularity Research Program for 2019 and 2020.

Publication: “Reducing Time to Discovery: Materials and Molecular Modeling, Imaging, Informatics and Integration,” S. Hong, C. H. Liow, J. M. Yuk, H. R. Byon, Y. Yang, E. Cho, J. Yeom, G. Park, H. Kang, S. Kim, Y. Shim, M. Na, C. Jeong, G. Hwang, H. Kim, H. Kim, S. Eom, S. Cho, H. Jun, Y. Lee, A. Baucour, K. Bang, M. Kim, S. Yun, J. Ryu, Y. Han, A. Jetybayeva, P.-P. Choi, J. C. Agar, S. V. Kalinin, P. W. Voorhees, P. Littlewood, and H. M. Lee, ACS Nano 15, 3, 3971–3995 (2021) https://doi.org/10.1021/acsnano.1c00211

Profile:
Seungbum Hong, PhD
Associate Professor
seungbum@kaist.ac.kr
http://mii.kaist.ac.kr
Department of Materials Science and Engineering
KAIST
2021.04.05
View 12185
Deep-Learning and 3D Holographic Microscopy Beats Scientists at Analyzing Cancer Immunotherapy
Live tracking and analysis of the dynamics of chimeric antigen receptor (CAR) T-cells targeting cancer cells can open new avenues for the development of cancer immunotherapy. However, imaging via conventional microscopy approaches can result in cellular damage, and assessments of cell-to-cell interactions are extremely difficult and labor-intensive. When researchers applied deep learning and 3D holographic microscopy to the task, however, they not only avoided these difficulties but found that AI was better at it than humans were.

Artificial intelligence (AI) is helping researchers decipher images from a new holographic microscopy technique needed to investigate a key process in cancer immunotherapy “live” as it takes place. The AI transformed work that, if performed manually by scientists, would otherwise be incredibly labor-intensive and time-consuming into one that is not only effortless but done better than they could have done it themselves. The research, conducted by the team of Professor YongKeun Park from the Department of Physics, appeared in the journal eLife last December.

A critical stage in the development of the human immune system’s ability to respond not just generally to any invader (such as pathogens or cancer cells) but specifically to that particular type of invader, and to remember it should it attempt to invade again, is the formation of a junction between an immune cell called a T-cell and a cell that presents the antigen, or part of the invader that is causing the problem, to it. This process is like when a picture of a suspect is sent to a police car so that the officers can recognize the criminal they are trying to track down. The junction between the two cells, called the immunological synapse, or IS, is the key process in teaching the immune system how to recognize a specific type of invader.
Since the formation of the IS junction is such a critical step for the initiation of an antigen-specific immune response, various techniques allowing researchers to observe the process as it happens have been used to study its dynamics. Most of these live imaging techniques rely on fluorescence microscopy, where genetic tweaking causes part of a protein from a cell to fluoresce, in turn allowing the subject to be tracked via fluorescence rather than via the reflected light used in many conventional microscopy techniques. However, fluorescence-based imaging can suffer from effects such as photo-bleaching and photo-toxicity, preventing the assessment of dynamic changes in the IS junction process over the long term. Fluorescence-based imaging still involves illumination, whereupon the fluorophores (chemical compounds that cause the fluorescence) emit light of a different color. Photo-bleaching or photo-toxicity occur when the subject is exposed to too much illumination, resulting in chemical alteration or cellular damage. One recent option that does away with fluorescent labelling and thereby avoids such problems is 3D holographic microscopy or holotomography (HT). In this technique, the refractive index (the way that light changes direction when encountering a substance with a different density—why a straw looks like it bends in a glass of water) is recorded in 3D as a hologram. Until now, HT has been used to study single cells, but never cell-cell interactions involved in immune responses. One of the main reasons is the difficulty of “segmentation,” or distinguishing the different parts of a cell and thus distinguishing between the interacting cells; in other words, deciphering which part belongs to which cell. Manual segmentation, or marking out the different parts manually, is one option, but it is difficult and time-consuming, especially in three dimensions. 
To overcome this problem, automatic segmentation has been developed, in which simple computer algorithms perform the identification. “But these basic algorithms often make mistakes,” explained Professor YongKeun Park, “particularly with respect to adjoining segmentation, which of course is exactly what is occurring here in the immune response we’re most interested in.” So the researchers applied a deep learning framework to the HT segmentation problem. Deep learning is a type of machine learning in which artificial neural networks inspired by the human brain recognize patterns in a way similar to how humans do. Conventional machine learning requires input data that has already been labeled: the AI “learns” from the labeled data and then recognizes the labeled concept when it is fed novel data. For example, an AI trained on a thousand images labeled “cat” should be able to recognize a cat the next time it encounters an image containing one. Deep learning stacks multiple layers of artificial neural networks and can tackle much larger, even unlabeled, datasets, in which case the AI develops its own ‘labels’ for the concepts it encounters. In essence, the deep learning framework that the KAIST researchers developed, called DeepIS, came up with its own concepts by which it distinguishes the different parts of the IS junction process. To validate this method, the research team applied it to the dynamics of a particular IS junction formed between chimeric antigen receptor (CAR) T-cells and target cancer cells. They then compared the results to what they would normally have done: the laborious process of performing the segmentation manually. They found not only that DeepIS was able to define areas within the IS with high accuracy, but also that the technique was able to capture information about the total distribution of proteins within the IS that may not be easily measured using conventional techniques.
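Comparing an automatic segmentation like DeepIS against a manually drawn reference mask is typically quantified with an overlap metric such as the Dice similarity coefficient. Here is a minimal sketch using toy 3D masks; the volumes, shapes, and values are illustrative placeholders, not data from the study:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary 3D masks (1.0 = identical)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy volumes standing in for one cell's automatic vs. manual segmentation
auto_mask = np.zeros((4, 4, 4), dtype=bool)
manual_mask = np.zeros((4, 4, 4), dtype=bool)
auto_mask[1:3, 1:3, 1:3] = True    # 8 voxels predicted
manual_mask[1:3, 1:3, 1:4] = True  # 12 voxels in the reference, 8 overlapping

print(round(dice_score(auto_mask, manual_mask), 3))  # -> 0.8
```

A score near 1.0 indicates that the automatic segmentation closely matches the manual reference, which is the kind of agreement the validation step described above is checking for.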
“In addition to allowing us to avoid the drudgery of manual segmentation and the problems of photo-bleaching and photo-toxicity, we found that the AI actually did a better job,” Professor Park added. The next step will be to combine the technique with methods of measuring how much physical force is applied by different parts of the IS junction, such as holographic optical tweezers or traction force microscopy.

-Profile
Professor YongKeun Park
Department of Physics
Biomedical Optics Laboratory
http://bmol.kaist.ac.kr
KAIST
2021.02.24
Experts to Help Asia Navigate the Post-COVID-19 and 4IR Eras
Risk Quotient 2020, an international conference co-hosted by KAIST and the National University of Singapore (NUS), will bring together world-leading experts from academia and industry to help Asia navigate the post-COVID-19 and Fourth Industrial Revolution (4IR) eras. The online conference will be held on October 29 from 10 a.m. Korean time under the theme “COVID-19 Pandemic and A Brave New World”. It will be streamed live on YouTube at https://www.youtube.com/c/KAISTofficial and https://www.youtube.com/user/NUScast. The Korea Policy Center for the Fourth Industrial Revolution (KPC4IR) at KAIST organized this conference in collaboration with the Lloyd's Register Foundation Institute for the Public Understanding of Risk (IPUR) at NUS. During the conference, global leaders will examine the socioeconomic impacts of the COVID-19 pandemic on areas including digital innovation, education, the workforce, and the economy. They will then highlight digital and 4IR technologies that could be utilized to effectively mitigate the risks and challenges associated with the pandemic, while harnessing the opportunities that these socioeconomic effects may present. Their discussions will mainly focus on the Asian region. In his opening remarks, KAIST President Sung-Chul Shin will express his appreciation for the Asian populations’ greater trust in and compliance with their governments, which have given the continent a leg up against the coronavirus. He will then emphasize that by working together through the exchange of ideas and global collaboration, we will be able to shape ‘a brave new world’ to better humanity. Welcoming remarks by Prof. Sang Yup Lee (Dean, KAIST Institutes) and Prof. Tze Yun Leong (Director, AI Technology at AI Singapore) will follow. For the keynote speech, Prof. Lan Xue (Dean, Schwarzman College, Tsinghua University) will share China’s response to COVID-19 and lessons for crisis management. Prof. 
Danny Quah (Dean, Lee Kuan Yew School of Public Policy, NUS) will present possible ways to overcome these difficult times. Dr. Kak-Soo Shin (Senior Advisor, Shin & Kim LLC, Former Ambassador to the State of Israel and Japan, and Former First and Second Vice Minister of the Ministry of Foreign Affairs of the Republic of Korea) will stress the importance of the international community’s solidarity to ensure peace, prosperity, and safety in this new era. Panel Session I will address the impact of COVID-19 on digital innovation. Dr. Carol Soon (Senior Research Fellow, Institute of Policy Studies, NUS) will present her interpretation of recent technological developments as both opportunities for our society as a whole and challenges for vulnerable groups such as low-income families. Dr. Christopher SungWook Chang (Managing Director, Kakao Mobility) will show how changes in mobility usage patterns can be captured by Kakao Mobility’s big data analysis. He will illustrate how the data can be used to interpret citizens’ behaviors and how risks can be transformed into opportunities by utilizing technology. Mr. Steve Ledzian (Vice President and Chief Technology Officer, FireEye) will discuss the dangers posed by threat actors and other cyber risk implications of COVID-19. Dr. June Sung Park (Chairman, Korea Software Technology Association (KOSTA)) will share how COVID-19 has accelerated digital transformations across all industries and why software education should be reformed to improve Korea’s competitiveness. Panel Session II will examine the impact on education and the workforce. Dr. Sang-Jin Ban (President, Korean Educational Development Institute (KEDI)) will explain Korea’s educational response to the pandemic and the concept of “blended learning” as a new paradigm, and present both positive and negative impacts of online education on students’ learning experiences. Prof.
Reuben Ng (Professor, Lee Kuan Yew School of Public Policy, NUS) will present on graduate underemployment, which seems to have worsened during COVID-19. Dr. Michael Fung (Deputy Chief Executive (Industry), SkillsFuture SG) will introduce the promotion of lifelong learning in Singapore through a new national initiative known as the ‘SkillsFuture Movement’. This movement serves as an example of a national response to disruptions in the job market and the accelerating pace of skills obsolescence triggered by AI and COVID-19. Panel Session III will touch on technology leadership and Asia’s digital economy and society. Prof. Naubahar Sharif (Professor, Division of Social Science and Division of Public Policy, Hong Kong University of Science and Technology (HKUST)) will share his views on China’s potential to take over global technological leadership based on its massive domestic market, its government support, and the globalization process. Prof. Yee Kuang Heng (Professor, Graduate School of Public Policy, University of Tokyo) will illustrate how different legal and political needs in China and Japan have shaped the ways technologies have been deployed in responding to COVID-19. Dr. Hayun Kang (Head, International Cooperation Research Division, Korea Information Society Development Institute (KISDI)) will explain Korea’s relative success in containing the pandemic compared to other countries, and how policy leaders and institutions that embrace digital technologies in the pursuit of public welfare objectives can produce positive outcomes while minimizing side effects. Prof. Kyung Ryul Park (Graduate School of Science and Technology Policy, KAIST) will host the entire conference, while Prof. Alice Hae Yun Oh (Director, MARS Artificial Intelligence Research Center, KAIST), Prof. Wonjoon Kim (Dean, Graduate School of Innovation and Technology Management, College of Business, KAIST), Prof. Youngsun Kwon (Dean, KAIST Academy), and Prof.
Taejun Lee (Korea Development Institute (KDI) School of Public Policy and Management) are to chair discussions with the keynote speakers and panelists. Closing remarks will be delivered by Prof. Chan Ghee Koh (Director, NUS IPUR), Prof. So Young Kim (Director, KAIST KPC4IR), and Prof. Joungho Kim (Director, KAIST Global Strategy Institute (GSI)). “This conference is expected to serve as a springboard to help Asian countries recover from global crises such as the COVID-19 pandemic through active cooperation and joint engagement among scholars, experts, and policymakers,” according to Director So Young Kim. (END)
2020.10.22
KAIST Joins IBM Q Network to Accelerate Quantum Computing Research and Foster Quantum Industry
KAIST has joined the IBM Q Network, a community of Fortune 500 companies, academic institutions, startups, and research labs working with IBM to advance quantum computing for business and science. As the IBM Q Network’s first academic partner in Korea, KAIST will use IBM's advanced quantum computing systems to carry out research projects that advance quantum information science and explore early applications. KAIST will also utilize IBM Quantum resources for talent training and education in preparation for building a quantum workforce for the quantum computing era, which will bring huge changes to science and business. By joining the network, KAIST will take a leading role in fostering the quantum computing ecosystem in Korea, which is expected to be a necessary enabler of the Fourth Industrial Revolution. Professor June-Koo Rhee, who also serves as Director of the KAIST Information Technology Research Center (ITRC) of Quantum Computing for AI, led the agreement for KAIST to join the IBM Q Network. Director Rhee described quantum computing as "a new technology that can calculate mathematical challenges at very high speed and low power” and as “one that will change the future.” Director Rhee said, “Korea started investing in quantum computing relatively late, and thus needs to take bold steps with innovative R&D strategies to pave the roadmap for the next technological leap in the field”. With KAIST joining the IBM Q Network, “Korea will be better equipped to establish a quantum industry, an important foundation for securing national competitiveness,” he added. The KAIST ITRC of Quantum Computing for AI has been using the publicly available IBM Quantum Experience, delivered over the IBM Cloud, for research, development, and training in quantum algorithms such as quantum artificial intelligence and quantum chemical calculation, as well as for quantum computing education.
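The quantum algorithms mentioned above are expressed as circuits of quantum gates. As a rough illustration of the underlying linear algebra (a plain-numpy sketch, not IBM’s actual Qiskit API), the canonical two-qubit Bell-state circuit, a Hadamard gate followed by a CNOT, can be simulated with matrix products:

```python
import numpy as np

# Single-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)                                # identity (no-op on the other qubit)

# Two-qubit CNOT gate (control = qubit 0, target = qubit 1)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to qubit 0, then CNOT -> entangled Bell state
state = np.zeros(4)
state[0] = 1.0
state = CNOT @ np.kron(H, I2) @ state

# Measurement probabilities for the basis states |00>, |01>, |10>, |11>
probs = np.abs(state) ** 2
print(np.round(probs, 3))  # -> [0.5 0.  0.  0.5]
```

The result is the hallmark of entanglement: the two qubits are always measured with the same value, each outcome occurring half the time. On IBM’s systems, the same circuit would be built with quantum gates and run on real hardware rather than simulated.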
KAIST will have access to the most advanced IBM Quantum systems to explore practical research and experiments such as diagnosis of diseases based on quantum artificial intelligence, quantum computational chemistry, and quantum machine learning technology. In addition, knowledge exchanges and sharing with overseas universities and companies under the IBM Q Network will help KAIST strengthen the global presence of Korean technology in quantum computing. About IBM Quantum IBM Quantum is an industry-first initiative to build quantum systems for business and science applications. For more information about IBM's quantum computing efforts, please visit www.ibm.com/ibmq. For more information about the IBM Q Network, as well as a full list of all partners, members, and hubs, visit https://www.research.ibm.com/ibm-q/network/ ©Thumbnail Image: IBM. (END)
2020.09.29
Deep Learning Helps Explore the Structural and Strategic Bases of Autism
Psychiatrists typically diagnose autism spectrum disorders (ASD) by observing a person’s behavior and by leaning on the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), widely considered the “bible” of mental health diagnosis. However, there are substantial differences among individuals on the spectrum, and a great deal remains unknown about the causes of autism, or even about what autism is. As a result, an accurate diagnosis of ASD and a prognosis prediction for patients can be extremely difficult. But what if artificial intelligence (AI) could help? Deep learning, a type of AI, deploys artificial neural networks inspired by the human brain to recognize patterns in a way that is akin to, and in some cases can surpass, human ability. The technique, or rather suite of techniques, has enjoyed remarkable success in recent years in fields as diverse as voice recognition, translation, autonomous vehicles, and drug discovery. A group of researchers from KAIST, in collaboration with the Yonsei University College of Medicine, has applied these deep learning techniques to autism diagnosis. Their findings were published on August 14 in the journal IEEE Access. Magnetic resonance imaging (MRI) scans of the brains of people known to have autism have been used by researchers and clinicians to try to identify structures of the brain they believed were associated with ASD. These researchers have achieved considerable success, identifying abnormal grey and white matter volume and irregularities in cerebral cortex activation and connections as associated with the condition. These findings have subsequently been deployed in studies attempting more consistent diagnoses of patients than has been achieved via psychiatrist observations during counseling sessions.
While such studies have reported high levels of diagnostic accuracy, the number of participants in these studies has been small, often under 50, and diagnostic performance drops markedly when the methods are applied to larger sample sizes or to datasets that include people from a wide variety of populations and locations. “There must be something about what defines autism that human researchers and clinicians have been overlooking,” said Keun-Ah Cheon, one of the two corresponding authors and a professor in the Department of Child and Adolescent Psychiatry at Severance Hospital of the Yonsei University College of Medicine. “And humans poring over thousands of MRI scans won’t be able to pick up on what we’ve been missing,” she continued. “But we thought AI might be able to.” So the team applied five different categories of deep learning models to an open-source dataset of more than 1,000 MRI scans from the Autism Brain Imaging Data Exchange (ABIDE) initiative, which has collected brain imaging data from laboratories around the world, and to a smaller but higher-resolution MRI dataset (84 images) from the Child Psychiatric Clinic at Severance Hospital, Yonsei University College of Medicine. In both cases, the researchers used both structural MRIs (examining the anatomy of the brain) and functional MRIs (examining brain activity in different regions). The models allowed the team to explore the structural bases of ASD brain region by brain region, focusing in particular on structures below the cerebral cortex, including the basal ganglia, which are involved in motor function (movement) as well as learning and memory. Crucially, these specific types of deep learning models also offered possible explanations of how the AI had arrived at its findings. “Understanding the way that the AI has classified these brain structures and dynamics is extremely important,” said Sang Wan Lee, the other corresponding author and an associate professor at KAIST.
“It’s no good if a doctor can tell a patient that the computer says they have autism, but not be able to say why the computer knows that.” The deep learning models were also able to describe how much a particular aspect contributed to the ASD classification, an analysis tool that can assist psychiatric physicians during the diagnosis process in identifying the severity of the autism. “Doctors should be able to use this to offer a personalized diagnosis for patients, including a prognosis of how the condition could develop,” Lee said. “Artificial intelligence is not going to put psychiatrists out of a job,” he explained. “But using AI as a tool should enable doctors to better understand and diagnose complex disorders than they could on their own.”

-Profile
Professor Sang Wan Lee
Department of Bio and Brain Engineering
Laboratory for Brain and Machine Intelligence
https://aibrain.kaist.ac.kr/
KAIST
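The idea of reporting how much each aspect contributed to a classification can be sketched with a linear model, where a per-feature contribution is simply the feature value times its learned weight. All region names, values, and weights below are hypothetical placeholders for illustration, not results from the study:

```python
# Hypothetical standardized brain-region measurements for one subject
features = {"basal_ganglia": 1.8, "amygdala": -0.4, "hippocampus": 0.9}

# Hypothetical weights from a trained linear classifier (illustrative only)
weights = {"basal_ganglia": 0.7, "amygdala": 0.2, "hippocampus": -0.1}

# Per-feature contribution to the decision score: weight * feature value
contrib = {region: weights[region] * features[region] for region in features}

# Rank regions by the magnitude of their contribution, most influential first
ranked = sorted(contrib.items(), key=lambda kv: abs(kv[1]), reverse=True)
for region, c in ranked:
    print(f"{region}: {c:+.2f}")
```

Deep networks need more elaborate attribution methods than this weight-times-value decomposition, but the output is the same in spirit: a ranked list telling a clinician which measurements drove the model’s decision and in which direction.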
2020.09.23