KAIST Proposes AI Training Method that will Drastically Shorten Time for Complex Quantum Mechanical Calculations
- Professor Yong-Hoon Kim's team from the School of Electrical Engineering succeeded for the first time in accelerating quantum mechanical electronic structure calculations using a convolutional neural network (CNN) model - Presenting an AI learning principle for quantum mechanical 3D chemical bonding information, the work is expected to accelerate the computer-aided design of next-generation materials and devices The close relationship between AI and high-performance scientific computing can be seen in the fact that both the 2024 Nobel Prizes in Physics and Chemistry were awarded to scientists for AI-related research contributions in their respective fields of study. KAIST researchers succeeded in dramatically reducing the computation time of highly sophisticated quantum mechanical computer simulations by predicting atomic-level chemical bonding information distributed in 3D space with a novel AI approach. KAIST (President Kwang-Hyung Lee) announced on the 30th of October that Professor Yong-Hoon Kim's team from the School of Electrical Engineering developed a 3D computer-vision, artificial neural network-based computational methodology that bypasses the complex algorithms required for atomic-level quantum mechanical calculations, traditionally performed on supercomputers, to derive the properties of materials. < Figure 1. Various methodologies are utilized in the simulation of materials and devices, such as quantum mechanical calculations at the nanometer (nm) level, classical mechanical force fields at the scale of tens to hundreds of nanometers, continuum dynamics calculations at the macroscopic scale, and calculations that mix simulations at different scales. These simulations already play a key role in a wide range of basic research and application development fields in combination with informatics techniques. 
Recently, there have been active efforts to introduce machine learning techniques to radically accelerate such simulations, but research on applying machine learning to the quantum mechanical electronic structure calculations that form the basis of multiscale simulations remains insufficient. > Quantum mechanical density functional theory (DFT) calculations performed on supercomputers have become an essential, standard tool in a wide range of research and development fields, including advanced materials and drug design, as they allow fast and accurate prediction of material properties. *Density functional theory (DFT): A representative theory of ab initio (first-principles) calculations, which compute quantum mechanical properties starting from the atomic level. However, practical DFT calculations require generating the 3D electron density and solving the quantum mechanical equations through a complex, iterative self-consistent field (SCF)* process that must be repeated tens to hundreds of times. This restricts its application to systems with only a few hundred to a few thousand atoms. *Self-consistent field (SCF): A scientific computing method widely used to solve complex many-body problems that must be described by a number of interconnected simultaneous differential equations. Professor Yong-Hoon Kim's research team asked whether recent advances in AI techniques could be used to bypass the SCF process. As a result, they developed the DeepSCF model, which accelerates calculations by learning chemical bonding information distributed in 3D space using neural network algorithms from the field of computer vision. < Figure 2. The DeepSCF methodology developed in this study rapidly accelerates DFT calculations by replacing the self-consistent field process (orange box), which traditional quantum mechanical electronic structure calculations had to perform repeatedly, with artificial neural network techniques (green box). 
The self-consistent field process predicts the 3D electron density, constructs the corresponding potential, and then solves the quantum mechanical Kohn-Sham equations, repeating these steps tens to hundreds of times. The core idea of the DeepSCF methodology is that the residual electron density (δρ), the difference between the electron density (ρ) and the sum of the electron densities of the constituent atoms (ρ0), corresponds to chemical bonding information, so the self-consistent field process can be replaced with a 3D convolutional neural network model. > The research team focused on the fact that, according to density functional theory, the electron density contains all quantum mechanical information about the electrons, and that the residual electron density, i.e., the difference between the total electron density and the sum of the electron densities of the constituent atoms, contains the chemical bonding information. They used it as the target for machine learning. They then adopted a dataset of organic molecules with various chemical bonding characteristics, and applied random rotations and deformations to the atomic structures of these molecules to further enhance the model's accuracy and generalization capability. Ultimately, the research team demonstrated the validity and efficiency of the DeepSCF methodology on large, complex systems. < Figure 3. An example of applying the DeepSCF methodology to a carbon nanotube-based DNA sequencing device model (top left). In addition to classical mechanical interatomic forces (bottom right), the residual electron density (top right) and quantum mechanical electronic structure properties such as the electronic density of states (DOS) (bottom left), which contain chemical bonding information, are rapidly predicted with an accuracy matching standard DFT calculations that perform the SCF process. 
> Professor Yong-Hoon Kim, who supervised the research, explained that his team had found a way to map quantum mechanical chemical bonding information in 3D space onto artificial neural networks. He noted, “Since quantum mechanical electronic structure calculations underpin materials simulations across all scales, this research establishes a foundational principle for accelerating material calculations using artificial intelligence.” Ryong-Gyu Lee, a PhD candidate in the School of Electrical Engineering, served as the first author of this research, which was published online on October 24 in npj Computational Materials, a prestigious journal in the field of materials computation. (Paper title: “Convolutional network learning of self-consistent electron density via grid-projected atomic fingerprints”) This research was conducted with support from the KAIST High-Risk Research Program for Graduate Students and the National Research Foundation of Korea's Mid-career Researcher Support Program.
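The machine-learning target described above can be illustrated with a toy calculation. The sketch below (an illustration, not the authors' code) builds the residual electron density δρ = ρ − ρ0 on a 3D voxel grid, the quantity a DeepSCF-style 3D CNN would learn to predict; the Gaussian "atomic" densities, grid size, and bond-charge model are invented for demonstration.

```python
import numpy as np

# Minimal sketch of the DeepSCF-style regression target: the residual
# electron density delta_rho = rho - rho0, where rho0 is the sum of
# isolated-atom densities. All numerical choices here are illustrative.

def atomic_density(grid, center, width=0.5):
    """Spherical Gaussian stand-in for an isolated-atom electron density."""
    r2 = sum((g - c) ** 2 for g, c in zip(grid, center))
    return np.exp(-r2 / (2 * width ** 2))

n = 17
axis = np.linspace(-2.0, 2.0, n)
grid = np.meshgrid(axis, axis, axis, indexing="ij")

atoms = [(-0.5, 0.0, 0.0), (0.5, 0.0, 0.0)]   # a toy diatomic "molecule"
rho0 = sum(atomic_density(grid, a) for a in atoms)

# Stand-in for a converged SCF density: slight charge buildup in the bond region.
bond_buildup = 0.1 * atomic_density(grid, (0.0, 0.0, 0.0), width=0.3)
rho = rho0 + bond_buildup

delta_rho = rho - rho0   # the CNN's target: a 3D "image" of chemical bonding
print(delta_rho.shape)
print(float(delta_rho.max()))
```

Because δρ lives on a regular 3D grid, exactly like a volumetric image, computer-vision architectures such as 3D CNNs apply to it directly, which is the bridge the article describes.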
2024.10.30
KAIST Proposes a New Way to Circumvent a Long-time Frustration in Neural Computing
The human brain begins learning through spontaneous random activity even before it receives sensory information from the external world. The technology developed by the KAIST research team enables much faster and more accurate learning when exposed to actual data by pre-training a brain-mimicking artificial neural network on random information, and is expected to be a breakthrough for the future development of brain-based artificial intelligence and neuromorphic computing. KAIST (President Kwang-Hyung Lee) announced on the 23rd of October that Professor Se-Bum Paik's research team in the Department of Brain and Cognitive Sciences solved the weight transport problem*, a long-standing challenge in neural network learning, and through this explained the principles that enable resource-efficient learning in biological brain neural networks. *Weight transport problem: The biggest obstacle to the development of artificial intelligence that mimics the biological brain, and the fundamental reason why learning in general artificial neural networks, unlike biological brains, requires large-scale memory and computation. Over the past several decades, the development of artificial intelligence has been based on error backpropagation learning, proposed by Geoffrey Hinton, who won the Nobel Prize in Physics this year. However, error backpropagation learning was thought to be impossible in biological brains because it requires the unrealistic assumption that individual neurons know all the connection information across multiple layers in order to calculate the error signal for learning. < Figure 1. Illustration depicting the method of random noise training and its effects > This difficult problem, called the weight transport problem, was raised by Francis Crick, who won the Nobel Prize in Physiology or Medicine for the discovery of the structure of DNA, after error backpropagation learning was proposed by Hinton in 1986. 
Since then, it has been considered the reason why the operating principles of natural and artificial neural networks would forever remain fundamentally different. At the border of artificial intelligence and neuroscience, researchers including Hinton have continued to attempt biologically plausible models that can implement the learning principles of the brain by solving the weight transport problem. In 2016, a joint research team from Oxford University and DeepMind in the UK first proposed the concept that error backpropagation learning is possible without weight transport, drawing attention from the academic world. However, biologically plausible error backpropagation learning without weight transport was inefficient, with slow learning speeds and low accuracy, making it difficult to apply in practice. The KAIST research team noted that the biological brain begins learning through internal, spontaneous random neural activity even before it experiences external sensory input. To mimic this, the research team pre-trained a biologically plausible neural network without weight transport on meaningless random information (random noise). As a result, they showed that this pre-training creates symmetry between the forward and backward neuronal connections of the network, an essential condition for error backpropagation learning. In other words, learning without weight transport becomes possible through random pre-training. < Figure 2. Illustration depicting the meta-learning effect of random noise training > The research team revealed that learning random information before learning actual data has the property of meta-learning: 'learning how to learn.' Neural networks that pre-learned random noise were shown to learn much faster and more accurately when exposed to actual data, achieving high learning efficiency without weight transport. < Figure 3. 
Illustration depicting research on understanding the brain's operating principles through artificial neural networks > Professor Se-Bum Paik said, “This breaks the conventional understanding in machine learning that only data learning matters, and provides a new perspective focused on the neuroscientific principle of creating appropriate conditions before learning,” adding, “It is significant in that it solves an important problem in artificial neural network learning through clues from developmental neuroscience, while at the same time offering insight into the brain's learning principles through artificial neural network models.” This study, with Jeonghwan Cheon, a Master's candidate in the KAIST Department of Brain and Cognitive Sciences, as the first author and Professor Sang Wan Lee of the same department as a co-author, will be presented at the 38th Conference on Neural Information Processing Systems (NeurIPS), the world's top artificial intelligence conference, to be held in Vancouver, Canada from December 10 to 15, 2024. (Paper title: Pretraining with random noise for fast and robust learning without weight transport) This study was conducted with the support of the National Research Foundation of Korea's Basic Research Program in Science and Engineering, the Institute of Information & Communications Technology Planning & Evaluation's Talent Development Program, and the KAIST Singularity Professor Program.
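The core mechanism the article describes, namely that training on pure noise aligns a network's forward weights with a fixed random feedback pathway, can be sketched in a few lines. The toy below (an illustrative assumption, not the authors' published code) uses feedback alignment, a well-known learning rule without weight transport in which a fixed random matrix B replaces the transpose of the forward weights, and measures how forward/backward alignment emerges during random-noise training; network sizes, learning rate, and step count are arbitrary.

```python
import numpy as np

# Feedback alignment without weight transport: the backward pass uses a
# FIXED random matrix B instead of W2.T. Training on meaningless random
# inputs/targets drives W2 toward alignment with B.T, creating the
# forward/backward symmetry that backpropagation normally assumes.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 16, 32, 8

W1 = np.zeros((n_hid, n_in))                     # forward weights, layer 1
W2 = 0.01 * rng.standard_normal((n_out, n_hid))  # forward weights, layer 2
B = rng.standard_normal((n_hid, n_out))          # fixed random feedback matrix

def alignment(W2, B):
    """Cosine similarity between vec(W2) and vec(B.T)."""
    a, b = W2.ravel(), B.T.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Random-noise "pretraining": meaningless inputs and targets.
X = rng.standard_normal((n_in, 256))
T = rng.standard_normal((n_out, 256))

align_before = alignment(W2, B)
lr = 0.01
for _ in range(200):
    H = W1 @ X                     # linear hidden layer (kept simple)
    Y = W2 @ H
    E = Y - T                      # output error
    W2 -= lr * E @ H.T / X.shape[1]
    W1 -= lr * (B @ E) @ X.T / X.shape[1]   # uses B, not W2.T: no weight transport
align_after = alignment(W2, B)

print(f"alignment before: {align_before:+.3f}, after: {align_after:+.3f}")
```

The alignment score rises from near zero toward a clearly positive value, which is the "symmetry of forward and backward connections" the article says random pre-training creates.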
2024.10.23
KAIST and NYU Set Out to Establish Korea's First Joint Degree Program in AI
< (From left) New York University President Linda Mills and KAIST President Kwang-Hyung Lee > KAIST (President Kwang-Hyung Lee) and New York University (NYU, President Linda G. Mills) signed an MOU on the afternoon of the 9th to introduce a graduate program for a joint degree in the field of artificial intelligence. The agreement grew out of the two universities' shared view that strengthening capabilities in AI and fostering global talent are essential elements that can drive great progress across future society, beyond simple technical education. The two universities have been operating joint research groups in various AI-related and AI-convergence industrial fields, and based on this agreement they plan to establish an operating committee within the year to design a joint degree program for graduate courses in artificial intelligence. A KAIST official said, “If the joint degree program in AI is implemented, it is expected to be an unprecedented, innovative experiment in which KAIST and NYU join forces to create ‘a single AI degree.’” The committee will consist of an equal number of faculty members from both schools and will discuss the overall strategic planning of the joint degree program, including ▴curriculum structure and course composition ▴course completion roadmap ▴estimation of faculty and student numbers ▴estimation of budget size ▴estimation of operating facility size and details ▴legal matters regarding accreditation. In addition, a new logo symbolizing the KAIST-NYU joint degree in AI will be developed. 
The two schools expect that the joint degree program will advance education and research capabilities in the field of artificial intelligence, jointly discover and foster talent in related fields that are currently scarce worldwide, and become an exemplary case of global cooperation in education and research. The highly capable faculty members of both schools will provide innovative and creative education in the field of artificial intelligence, and students will be supported in gaining top-level research experience by participating in various international joint research projects led by the faculty of both schools. At its core, the joint degree program aims to continuously cultivate excellent people who will lead the future global society. Since signing a cooperation agreement for the establishment of a joint campus in June 2022, KAIST and NYU have been pursuing campus sharing, joint research, and joint bachelor's degree programs, developing an innovative joint campus model and an active model of international cooperation. In particular, an exchange student system for undergraduates began in the second semester of the 2023 academic year: 30 students from KAIST and 11 students from NYU were selected through a competitive process and are participating. KAIST students who complete one of the six minor programs at NYU will receive a degree stating the completion of the minor upon graduation. Building on the undergraduate exchange program, the two schools have also agreed to introduce a dual degree system for master's and doctoral students, and specific procedures are currently in progress. 
In addition, from 2023 to the present, the two schools have been carrying out future-oriented joint research projects in 15 AI-convergence fields, and plan to begin international joint research in 10 fields centered on AI and bio from the fourth quarter of this year. NYU President Linda Mills said, “AI technology can play a significant role in addressing various social challenges such as climate change, health care, and education inequality,” adding, “The global talent cultivated by our two schools will go on to make innovative contributions to solving these social problems.” KAIST President Kwang-Hyung Lee said, “In the era of competition for global technological hegemony, the development of AI technology is essential for countries and companies to secure competitiveness,” and “Through long-term cooperation with NYU, we will take the lead in fostering world-class, advanced talent who can innovatively apply and develop AI in various fields.” The signing ceremony, held at the Four Seasons Hotel in Seoul, was attended by KAIST officials including President Kwang-Hyung Lee and Hyun Deok Yeo, Director of the G-School, and NYU officials including President Linda Mills, Kyunghyun Cho, Professor of Computer Science and Data Science, and Dr. Karin Pavese, Executive Director of the NYU-KAIST Innovation Research Institute, along with other key figures from industries based in Korea.
2024.09.10
KAIST Develops 'MetaVRain', an AI Semiconductor that Renders Vivid, Lifelike 3D Images
KAIST (President Kwang Hyung Lee) announced that a research team developed MetaVRain, a high-speed, low-power artificial intelligence (AI) semiconductor* that implements AI-based 3D rendering capable of producing near-lifelike images on mobile devices. * AI semiconductor: A semiconductor equipped with artificial intelligence processing functions such as recognition, reasoning, learning, and judgment, implemented with technology optimized for superintelligence, ultra-low power, and ultra-high reliability The AI semiconductor developed by the research team replaces conventional GPU-driven, ray-tracing*-based 3D rendering with AI-based 3D rendering on a newly fabricated chip. Because a 3D video capture studio, which requires enormous cost, is no longer needed, the cost of 3D model production can be greatly reduced, and memory usage is cut by more than 180 times. In particular, 3D graphic editing and design, which used to rely on complex software such as Blender, is replaced with simple AI learning, so the general public can easily apply and edit a desired style. * Ray-tracing: A technology that obtains near-lifelike images by tracing the trajectories of all light rays as they change according to the light source and the shape and texture of objects This research, in which doctoral student Donghyeon Han participated as the first author, was presented at the International Solid-State Circuits Conference (ISSCC), held in San Francisco, USA from February 18th to 22nd and attended by semiconductor researchers from all over the world. 
(Paper Number 2.7, Paper Title: MetaVRain: A 133mW Real-time Hyper-realistic 3D NeRF Processor with 1D-2D Hybrid Neural Engines for Metaverse on Mobile Devices (Authors: Donghyeon Han, Junha Ryu, Sangyeob Kim, Sangjin Kim, and Hoi-Jun Yoo)) Professor Hoi-Jun Yoo's team identified the inefficient operations that occur when 3D rendering is implemented through artificial intelligence, and developed a new kind of semiconductor that reduces them by incorporating the way humans recognize what they see. When a person recalls an object, they start from a rough outline and gradually specify its shape, and if it is an object they saw a moment earlier, they can immediately guess what it currently looks like. Imitating this human cognitive process, the newly developed semiconductor adopts an operation scheme that grasps the rough shape of an object in advance through low-resolution voxels and minimizes the computation required for the current frame by reusing the results of past rendering. MetaVRain achieved the world's best performance through a state-of-the-art CMOS chip combined with a hardware architecture that mimics the human visual recognition process. MetaVRain is optimized for AI-based 3D rendering and achieves a rendering speed of up to 100 FPS or more, 911 times faster than conventional GPUs. In addition, its energy efficiency, the energy consumed per rendered frame, is 26,400 times higher than that of a GPU, opening the possibility of AI-based real-time rendering on VR/AR headsets and mobile devices. As an example of using MetaVRain, the research team also developed a smart 3D rendering application system and demonstrated changing the style of a 3D model according to the user's preference. 
Since you only need to give the AI an image of the desired style and perform re-training, you can easily change the style of a 3D model without the help of complicated software. Beyond the application system implemented by Professor Yoo's team, various applications are expected, such as creating a realistic 3D avatar modeled after a user's face, creating 3D models of various structures, and changing the weather to match a film production environment. Starting with MetaVRain, the research team expects that the field of 3D graphics will also begin to be transformed by artificial intelligence, noting that the combination of AI and 3D graphics is a major technological innovation for realizing the metaverse. Professor Hoi-Jun Yoo of the School of Electrical Engineering at KAIST, who led the research, said, “Currently, 3D graphics focus on depicting what an object looks like, not on how people actually see it. This study is significant in that it enables efficient 3D graphics by imitating the way people recognize and represent objects.” He added, “The realization of the metaverse will be achieved through innovation in artificial intelligence technology and in AI semiconductors, as this study shows.” < Figure 1. The MetaVRain demo screen > < Photo. Presentation at the International Solid-State Circuits Conference (ISSCC) >
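The "grasp the rough shape first, then compute only where needed" idea described above can be illustrated in software. The sketch below (an illustrative assumption, not the MetaVRain hardware) marches a ray through a scene and skips the expensive per-sample network evaluation wherever a low-resolution occupancy voxel grid says the space is empty; the sphere scene, grid resolution, and sample counts are all invented for demonstration.

```python
import numpy as np

# Coarse-voxel pruning for NeRF-style ray marching: a low-resolution
# occupancy grid gives the rough object shape, so costly per-sample
# evaluations are only performed inside occupied voxels.

def in_sphere(p, r=0.6):
    return np.linalg.norm(p, axis=-1) < r

# Coarse occupancy grid over [-1, 1]^3 (slightly dilated to be conservative).
N = 8
centers = (np.arange(N) + 0.5) / N * 2 - 1
gx, gy, gz = np.meshgrid(centers, centers, centers, indexing="ij")
occupancy = in_sphere(np.stack([gx, gy, gz], axis=-1), r=0.6 + 2 / N)

def voxel_index(p):
    i = np.clip(((p + 1) / 2 * N).astype(int), 0, N - 1)
    return tuple(i)

# March one ray through the volume, counting "network evaluations".
origin = np.array([-1.0, 0.05, 0.0])
direction = np.array([1.0, 0.0, 0.0])
ts = np.linspace(0.0, 2.0, 128)
naive_evals, pruned_evals = 0, 0
for t in ts:
    p = origin + t * direction
    naive_evals += 1                      # dense sampling: always evaluate
    if occupancy[voxel_index(p)]:
        pruned_evals += 1                 # evaluate only in occupied voxels

print(f"evaluations: naive={naive_evals}, with coarse grid={pruned_evals}")
```

Samples that fall in empty voxels are skipped outright, which is one source of the reduced workload the article attributes to the low-resolution voxel pre-pass (the chip additionally reuses results from previously rendered frames).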
2023.03.13
Shaping the AI Semiconductor Ecosystem
- With the marriage of AI and semiconductors highlighted as a national strategic technology, KAIST's achievements in the related fields, accumulated through top-class education and research capabilities that surpass those of peer universities around the world, set it far apart from the rest of the pack. As AI semiconductors, i.e., semiconductors designed specifically for the highly complicated computations AI needs for its learning and inference, (hereafter AI semiconductors) stand out as a national strategic technology, the related achievements of KAIST, headed by President Kwang Hyung Lee, are attracting attention. The Ministry of Science and ICT (MSIT) of Korea initiated a program last year to support the advancement of AI semiconductors, with the goal of occupying 20% of the global AI semiconductor market by 2030. This year, through industry-university-research discussions, the Ministry expanded the program with an additional 1.2 trillion won of investment over five years through the 'Support Plan for AI Semiconductor Industry Promotion'. Accordingly, major universities began putting together programs devised to train students with expertise in AI semiconductors. KAIST has accumulated top-notch educational and research capabilities in the two core fields behind AI semiconductors: semiconductors and artificial intelligence. Notably, in the field of semiconductors, the International Solid-State Circuits Conference (ISSCC) is the world's most prestigious conference on semiconductor integrated-circuit design. Established in 1954, with more than 60% of participants coming from companies including Samsung, Qualcomm, TSMC, and Intel, the conference naturally focuses on the practical value of studies from an industrial point of view, earning it the nickname 'the Semiconductor Design Olympics'. 
At such a conference of legacy and influence, KAIST has kept its presence widely visible, leading all participating universities in the number of accepted papers, ahead of world-class schools such as the Massachusetts Institute of Technology (MIT) and Stanford, for the past 17 years. Number of papers published at the International Solid-State Circuits Conference (ISSCC) in 2022, sorted by nation and by institution Number of papers by universities presented at the International Solid-State Circuits Conference (ISSCC) in 2006~2022 In terms of the number of papers accepted at the ISSCC, KAIST has ranked among the top two universities every year since 2006. Looking at the average number of accepted papers over the past 17 years, KAIST stands out as an unparalleled leader: the average number of KAIST papers accepted from 2006 through 2022 was 8.4, almost double that of competitors like MIT (4.6) and UCLA (3.6). In Korea, it holds second place overall after Samsung, the undisputed number one in the semiconductor design field. Also, this year KAIST ranked first among universities participating in the Symposium on VLSI Technology and Circuits, an integrated-circuits conference that rivals the ISSCC. Number of papers from universities accepted at the Symposium on VLSI Technology and Circuits in 2022 With KAIST researchers working and presenting new technologies at the frontiers of all key areas of the semiconductor industry, the quality of KAIST research is also maintained at the highest level. Professor Myoungsoo Jung's research team in the School of Electrical Engineering is actively developing heterogeneous computing environments with high energy efficiency in response to the industry's demand for high performance at low power. 
In the field of materials, a research team led by Professor Byong-Guk Park of the Department of Materials Science and Engineering developed Spin-Orbit Torque (SOT)-based Magnetic RAM (MRAM), a memory that operates at least 10 times faster than conventional memories, suggesting a way to overcome the limitations of the existing 'von Neumann architecture'. While providing solutions to major challenges in the current semiconductor industry, KAIST is also very actively developing the new technologies needed to claim emerging fields of the semiconductor industry. In quantum computing, which is attracting attention as the next-generation computing technology needed to take the lead in cryptography and nonlinear computation, Professor Sanghyeon Kim's research team in the School of Electrical Engineering presented the world's first 3D-integrated quantum computing system at the 2021 VLSI Symposium. In neuromorphic computing, which is expected to bring remarkable advances in artificial intelligence by exploiting the principles of the nervous system, the research team of Professor Shinhyun Choi of the School of Electrical Engineering is developing a next-generation memristor that mimics neurons. Number of papers at the International Conference on Machine Learning (ICML) and the Conference on Neural Information Processing Systems (NeurIPS), two of the world's most prestigious venues in the field of artificial intelligence (KAIST 6th in the world, 1st in Asia, in 2020) The field of artificial intelligence has also grown rapidly. Based on the number of papers at ICML and NeurIPS, two of the world's most prestigious conferences in the field of artificial intelligence, KAIST ranked 6th in the world and 1st in Asia in 2020. 
Since 2012, KAIST's ranking has climbed steadily from 37th to 6th, a rise of 31 places over eight years. In 2021, 129 papers, or about 40% of the Korean papers published at 11 top artificial intelligence conferences, were presented by KAIST. Thanks to KAIST's efforts, Korea ranked sixth in 2021, after the United States, China, the United Kingdom, Canada, and Germany, in the number of papers published at global AI conferences. Number of papers from Korea (and by KAIST) published at 11 top conferences in the field of artificial intelligence in 2021 In terms of content, KAIST's AI research is also at the forefront. Professor Hoi-Jun Yoo's research team in the School of Electrical Engineering compensated for the shortcomings of “edge networks” by implementing real-time artificial intelligence learning on mobile devices. Materializing artificial intelligence requires data accumulation and a huge amount of computation; a high-performance server handles the massive computation, while user terminals run the “edge network” that collects data and performs simple computations. Professor Yoo's research greatly increased AI's processing speed and performance by allotting part of the learning task to the user terminal as well. In June, a research team led by Professor Min-Soo Kim of the School of Computing presented a solution essential for processing super-scale artificial intelligence models. The super-scale machine learning system developed by the research team is expected to achieve speeds up to 8.8 times faster than Google's TensorFlow or IBM's System DS, which are mainly used in the industry. KAIST is also making remarkable achievements in the field of AI semiconductors. In 2020, Professor Minsoo Rhu's research team in the School of Electrical Engineering succeeded in developing the world's first AI semiconductor optimized for AI recommendation systems. 
Because an AI recommendation system must handle vast amounts of content and user information, it quickly hits its limits due to information bottlenecks when run on a general-purpose artificial intelligence system. Professor Minsoo Rhu's team developed a semiconductor that achieves speeds 21 times faster than existing systems using 'Processing-In-Memory (PIM)' technology. PIM improves efficiency by performing calculations inside RAM (random-access memory), which is usually used only to store data temporarily just before processing. When PIM technology reaches the market, it is expected to drastically fortify the competitiveness of Korean companies in the AI semiconductor market, as they already hold great strength in memory. KAIST does not plan to rest on these achievements, and is making various plans to widen its lead over competitors in artificial intelligence, semiconductors, and AI semiconductors. Following the establishment of the first artificial intelligence research center in Korea in 1990, the Kim Jaechul AI Graduate School was opened in 2019 to sustain the supply of experts in the field. In 2020, the Artificial Intelligence Semiconductor System Research Center was launched to conduct convergent research on AI and semiconductors, followed by the establishment of the AI Institutes to promote “AI+X” research. Based on the internal capabilities accumulated through these efforts, KAIST is also working to train the human resources needed in these areas, establishing joint research centers with companies such as Naver while collaborating with local governments such as Hwaseong City to nurture professionals. 
Back in 2021, KAIST signed an agreement with Samsung Electronics to establish the Department of Semiconductor System Engineering and is preparing a new semiconductor specialist training program. The newly established department will select around 100 new students every year from 2023 and provide special scholarships to all of them so that they can develop their professional skills. In addition, through close cooperation with industry, students will receive special support including field trips and internships at Samsung Electronics, joint workshops, and on-site training. KAIST has made a significant contribution to the growth of the Korean semiconductor industry ecosystem, producing 25% of doctoral-level workers in the domestic semiconductor field and 20% of the CEOs with doctoral degrees at mid-sized and venture companies. As dawn breaks on the AI semiconductor ecosystem, whether KAIST will reprise that pivotal role is the crucial question.
2022.08.05
An AI-based, Indoor/Outdoor-Integrated (IOI) GPS System to Bring a Seismic Shift to the Terrain of Positioning Technology
KAIST breaks new ground in positioning technology with an AI-integrated GPS board that works both indoors and out. KAIST (President Kwang Hyung Lee) announced on the 8th that Professor Dong-Soo Han's research team (Intelligent Service Integration Lab) from the School of Computing has developed a GPS system that works both indoors and outdoors with high precision regardless of the environment. This Indoor/Outdoor-Integrated GPS System, or IOI GPS System for short, uses GPS signals outdoors and estimates locations indoors using signals from multiple sources such as an inertial sensor, pressure sensors, geomagnetic sensors, and light sensors. To this end, the research team developed techniques to detect environmental changes such as entering a building, and artificial intelligence techniques to detect entrances, ground floors, stairs, elevators, and the floors of buildings. These landmark-detection techniques were combined with pedestrian dead reckoning (PDR), a navigation method for pedestrians, to devise the so-called “Sensor-Fusion Positioning Algorithm”. Until now, it has been common to estimate locations from wireless LAN signals or base-station signals in spaces the GPS signal cannot reach. The IOI GPS, however, enables positioning even in buildings with neither wireless signals nor indoor maps. The algorithm developed by the research team can provide accurate floor information within a building, something even the positioning services of big tech companies like Google and Apple do not offer. Unlike positioning methods that rely on visual data, geomagnetic positioning techniques, or wireless LAN, this system also has the advantage of requiring no prior preparation. In other words, the foundation for a universal GPS system that works both indoors and outdoors anywhere in the world is now ready.
The research team also produced a circuit board to run the IOI GPS System, mounted with chips that receive and process GPS, Wi-Fi, and Bluetooth signals, along with an inertial sensor, a barometer, a magnetometer, and a light sensor. The sensor-fusion positioning algorithm the lab developed is also embedded in the board. When the accuracy of the IOI GPS board was tested in the N1 building of KAIST’s main campus in Daejeon, it achieved about 95% accuracy in floor estimation and about 3 to 6 meters in position estimation. As for the indoor/outdoor transition, the navigation-mode change completed in about 0.3 seconds. Combined with the PDR technique, the estimation accuracy improved further, to within about one meter. The research team is now assembling a tag with the positioning board built in and applying it to location-based docent services for visitors at museums, science centers, and art galleries. The IOI GPS tag can be used to keep track of children or the elderly, and to locate people or rescue workers lost in disaster-stricken or hazardous sites. Separately, a sensor-fusion positioning algorithm and a positioning board for vehicles are under development for tracking vehicles entering indoor areas such as underground parking lots. Once the IOI GPS board for vehicles is manufactured, the research team will seek collaborations with car manufacturers and car rental companies, and will also develop a sensor-fusion positioning algorithm for smartphones. Telecommunication companies seeking to diversify their location-based services will also be interested in using the IOI GPS.
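The article does not disclose the sensor-fusion algorithm itself, but the PDR component it builds on is well documented in the literature: each detected step advances the position estimate by an assumed stride length along the heading reported by the inertial and magnetic sensors. A minimal sketch of that update (the function name and the 0.7 m stride are illustrative assumptions, not the team's implementation):

```python
import math

def pdr_step(position, heading_rad, stride_m=0.7):
    """Advance a 2-D position estimate by one detected step.

    Pedestrian dead reckoning chains step events: each step moves the
    estimate by an assumed stride length along the current heading.
    """
    x, y = position
    return (x + stride_m * math.cos(heading_rad),
            y + stride_m * math.sin(heading_rad))

# Walk four steps east, then two steps north, starting at the origin.
pos = (0.0, 0.0)
for _ in range(4):
    pos = pdr_step(pos, heading_rad=0.0)
for _ in range(2):
    pos = pdr_step(pos, heading_rad=math.pi / 2)
```

Because stride-length and heading errors accumulate, plain PDR drifts over time, which is why the team fuses it with landmark detections (entrances, stairs, elevators) that periodically re-anchor the estimate.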
Professor Dong-Soo Han of the School of Computing, who leads the research team, said, “This is the first indoor/outdoor-integrated GPS system that can pinpoint locations in a building with no wireless signal or indoor map, and the number of areas it can be applied to is limitless. When its integration with the Korea Augmentation Satellite System (KASS) and the Korean GPS (KPS) System, which began this year, is completed, Korea can become the leader in the field of GPS both indoors and outdoors. We also plan to manufacture semiconductor chips for the IOI GPS System to maintain the technology gap between Korea and its followers.” He added, "The guidance services at science centers, museums, and art galleries that use IOI GPS tags can provide data that is very helpful for analyzing visitors’ viewing traces, which is essential information when deciding how to organize the next exhibit. We will work on having it applied to the National Science Museum first.” The projects to develop the IOI GPS system and the trace analysis system for science centers were supported through the Science, Culture, Exhibition and Service Capability Enhancement Program of the Ministry of Science and ICT.

Profile: Dong-Soo Han, Ph.D.
Professor
ddsshhan@kaist.ac.kr
http://isilab.kaist.ac.kr
Intelligent Service Integration Lab., School of Computing
Korea Advanced Institute of Science and Technology (KAIST)
http://kaist.ac.kr/en/
Daejeon, Republic of Korea
2022.07.13
Neuromorphic Memory Device Simulates Neurons and Synapses
Simultaneous emulation of neuronal and synaptic properties promotes the development of brain-like artificial intelligence. Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward the goal of neuromorphic computing: rigorously mimicking the human brain with semiconductor devices. Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of the neurons and synapses that make up the human brain. Inspired by cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanism by introducing neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting separate artificial neuronal and synaptic devices. Like commercial graphics cards, previously studied artificial synaptic devices were often used to accelerate parallel computations, which differs clearly from the operational mechanism of the human brain. The research team instead implemented the synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanism of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency. The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses.
The functions and structures of neurons and synapses can flexibly change according to external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist, using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device serves as the volatile memory and phase-change memory as the non-volatile device. The two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory. Professor Keon Jae Lee explained, "Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of forgotten information by implementing a positive feedback effect between neurons and synapses.” This result, entitled “Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse,” was published in the May 19, 2022 issue of Nature Communications.

-Publication:
Sang Hyun Sung, Tae Jin Kim, Hyera Shin, Tae Hong Im, and Keon Jae Lee (2022) “Simultaneous emulation of synaptic and intrinsic plasticity using a memristive synapse,” Nature Communications, May 19, 2022 (DOI: 10.1038/s41467-022-30432-2)

-Profile:
Professor Keon Jae Lee
http://fand.kaist.ac.kr
Department of Materials Science and Engineering
KAIST
2022.05.20
Energy-Efficient AI Hardware Technology Via a Brain-Inspired Stashing System
Researchers demonstrate a neuromodulation-inspired stashing system for the energy-efficient learning of a spiking neural network using a self-rectifying memristor array. Researchers have proposed a novel system inspired by the neuromodulation of the brain, referred to as a ‘stashing system,’ that requires less energy consumption. The research group led by Professor Kyung Min Kim from the Department of Materials Science and Engineering has developed a technology that can efficiently handle mathematical operations for artificial intelligence by imitating how the topology of a neural network continuously changes with the situation. The human brain changes its neural topology in real time, learning to store or recall memories as needed. The research group presented a new artificial intelligence learning method that directly implements these neural coordination circuit configurations. Research on artificial intelligence is very active, and the development of AI-based electronic devices and product releases is accelerating, especially in the age of the Fourth Industrial Revolution. Implementing artificial intelligence in electronic devices also requires customized hardware development. However, most electronic devices for artificial intelligence require high power consumption and highly integrated memory arrays for large-scale tasks. Solving these power-consumption and integration limitations has been challenging, and efforts have been made to find out how the human brain solves such problems. To prove the efficiency of the developed technology, the research group created artificial neural network hardware equipped with a self-rectifying synaptic array and an algorithm called a ‘stashing system’ developed to conduct artificial intelligence learning. As a result, the stashing system reduced energy use by 37% without any accuracy degradation.
This result proves that emulating human neuromodulation is possible. Professor Kim said, "In this study, we implemented the learning method of the human brain with only a simple circuit composition, and through this we were able to reduce the energy needed by nearly 40 percent.” This neuromodulation-inspired stashing system that mimics the brain’s neural activity is compatible with existing electronic devices and commercialized semiconductor hardware. It is expected to be used in the design of next-generation semiconductor chips for artificial intelligence. This study was published in Advanced Functional Materials in March 2022 and supported by KAIST, the National Research Foundation of Korea, the National NanoFab Center, and SK Hynix.

-Publication:
Woon Hyung Cheong, Jae Bum Jeon†, Jae Hyun In, Geunyoung Kim, Hanchan Song, Janho An, Juseong Park, Young Seok Kim, Cheol Seong Hwang, and Kyung Min Kim (2022) “Demonstration of Neuromodulation-inspired Stashing System for Energy-efficient Learning of Spiking Neural Network using a Self-Rectifying Memristor Array,” Advanced Functional Materials, March 31, 2022 (DOI: 10.1002/adfm.202200337)

-Profile:
Professor Kyung Min Kim
http://semi.kaist.ac.kr
https://scholar.google.com/citations?user=BGw8yDYAAAAJ&hl=ko
Department of Materials Science and Engineering
KAIST
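The published circuit cannot be condensed into software, but the gist the article describes (temporarily setting aside part of the network's topology so that only a task-relevant subnetwork is computed, then restoring it on demand) can be illustrated with a software analogy. The magnitude-based masking rule and the 37% fraction below are assumptions for illustration, not the paper's mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64))  # a toy fully connected layer

def forward(x, w):
    """Ordinary dense layer with ReLU: every synapse participates."""
    return np.maximum(w @ x, 0.0)

def forward_stashed(x, w, stash_fraction=0.37):
    """Software analogy of 'stashing': the weakest connections are set
    aside for now, so only the active sub-topology is computed.
    Stashed weights are untouched and can be restored at any time."""
    k = int(w.size * (1.0 - stash_fraction))           # connections kept
    cutoff = np.partition(np.abs(w).ravel(), w.size - k)[w.size - k]
    active = np.where(np.abs(w) >= cutoff, w, 0.0)     # stash the rest
    return np.maximum(active @ x, 0.0)

x = rng.normal(size=64)
dense_out = forward(x, weights)
stashed_out = forward_stashed(x, weights)
```

In hardware, skipping the stashed synapses is what saves energy; here the mask only shows that a large fraction of connections can sit idle while the layer still produces output of the same shape.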
2022.05.18
Professor Lik-Hang Lee Offers Metaverse Course for Hong Kong Productivity Council
Professor Lik-Hang Lee from the Department of Industrial and Systems Engineering will offer a metaverse course in partnership with the Hong Kong Productivity Council (HKPC) from the Spring 2022 semester to Hong Kong-based professionals. “The Metaverse Course for Professionals” aims to nurture world-class metaverse talent in response to surging demand for virtual worlds and virtual-physical blended environments. The HKPC’s R&D scientists, consultants, software engineers, and related professionals will attend the course, and will receive a professional certificate on managing and developing metaverse skills upon completing this intensive course. The course will provide essential skills and knowledge about the parallel virtual universe and how to leverage digitalization and industrialization in the metaverse era. It includes comprehensive modules such as designing and implementing virtual-physical blended environments, metaverse technology and ecosystems, immersive smart cities, token economies, and intelligent industrialization in the metaverse era. Professor Lee believes that in the decades to come we will see rising numbers of virtual worlds in cyberspace, known as the ‘Immersive Internet’, characterized by high levels of immersiveness, user interactivity, and user-machine collaboration. “Consumers in virtual worlds will create novel content as well as personalized products and services, becoming a catalyst for ‘hyperpersonalization’ in the next industrial revolution,” he said. Professor Lee said he will continue offering world-class education related to the metaverse to students at KAIST and professionals from various industrial sectors, as his Augmented Reality and Media Lab focuses on a variety of metaverse topics such as metaverse campuses and industrial metaverses.
The HKPC has worked to deliver innovative solutions for Hong Kong industries and enterprises since 1967, helping them achieve optimized resource utilization, effectiveness, and cost reduction, as well as enhanced productivity and competitiveness in both local and international markets. The HKPC has advocated for facilitating Hong Kong’s reindustrialization powered by Industry 4.0 and e-commerce 4.0, with a strong emphasis on R&D, IoT, AI, and digital manufacturing. The Augmented Reality and Media Lab led by Professor Lee will continue its close partnerships with the HKPC and its other partners to help build the epicentre of the metaverse in the region. Furthermore, the lab will fully leverage its well-established research niches in user-centric, virtual-physical cyberspace (https://www.lhlee.com/projects-8 ) to serve upcoming projects related to industrial metaverses, which aligns with the departmental focus on smart factories and artificial intelligence.
2022.04.06
KAIST ISPI Releases Report on the Global AI Innovation Landscape
Providing key insights for building a successful AI ecosystem. The KAIST Innovation Strategy and Policy Institute (ISPI) has released a report on the global innovation landscape of artificial intelligence in collaboration with Clarivate Plc. The report shows that AI has become a key technology and that cross-industry learning is an important driver of AI innovation. It also stresses that the quality of innovation, not its volume, is a critical success factor in technological competitiveness. Key findings of the report include:

• Neural networks and machine learning have been unrivaled in terms of scale and growth (more than 46%), and most other AI technologies show a growth rate of more than 20%.
• Although Mainland China has shown the highest growth rate in AI inventions, the influence of Chinese AI is relatively low. In contrast, the United States holds a leading position in AI-related inventions in terms of both quantity and influence.
• The U.S. and Canada have built an industry-oriented AI technology development ecosystem through organic cooperation with both academia and government. Mainland China and South Korea, by contrast, have government-driven AI technology development ecosystems with relatively low qualitative output from industry.
• The U.S., the U.K., and Canada have a relatively high proportion of inventions in robotics and autonomous control, whereas in Mainland China and South Korea, machine learning and neural networks are making progress. Each country/region produces high-quality inventions in its predominant AI fields, while the U.S. has produced high-impact inventions in almost all AI fields.

“The driving forces in building a sustainable AI innovation ecosystem are important national strategies. A country’s future AI capabilities will be determined by how quickly and robustly it develops its own AI ecosystem and how well it transforms its existing industry with AI technologies.
Countries that build a successful AI ecosystem have the potential to accelerate growth while absorbing the AI capabilities of other countries. AI talent is already moving to countries with excellent AI ecosystems,” said Wonjoon Kim, Director of the ISPI. “AI, together with other high-tech IT technologies including big data and the Internet of Things, is accelerating the digital transformation by leading an intelligent hyper-connected society and enabling the convergence of technology and business. With the rapid growth of AI innovation, AI applications are also expanding in various ways across industries and in our lives,” added Justin Kim, Special Advisor at the ISPI and a co-author of the report.
2021.12.21
Prof. Sang Wan Lee Selected for 2021 IBM Academic Award
Professor Sang Wan Lee from the Department of Bio and Brain Engineering was selected as the recipient of the 2021 IBM Global University Program Academic Award. The award recognizes individual faculty members whose emerging science and technology is of significant interest to universities and IBM. Professor Lee, whose research focuses on artificial intelligence and computational neuroscience, won the award for his research proposal titled A Neuroscience-Inspired Approach for Metacognitive Reinforcement Learning. IBM provides a gift of $40,000 to the recipient’s institution in recognition of the selected project, though not as a contract for services. Professor Lee’s project aims to exploit the unique characteristics of human reinforcement learning. Specifically, he plans to examine the hypothesis that metacognition, a human’s ability to estimate their own level of uncertainty, serves to guide sample-efficient and near-optimal exploration, making it possible to achieve an optimal balance between model-based and model-free reinforcement learning. He was also a winner of the Google Research Award in 2016 and has been working with DeepMind and University College London on basic research into the neuroscience of decision-making, to establish a theory of frontal lobe meta-reinforcement learning. "We plan to conduct joint research on brain-based artificial intelligence technology and the modeling of frontal lobe meta-reinforcement learning in collaboration with an international research team including IBM, DeepMind, MIT, and Oxford,” Professor Lee said.
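The proposal itself is not public, but the arbitration idea it builds on weighs the model-based and model-free controllers by the reliability of their current predictions, so the less uncertain system tends to take control. A hedged sketch of that idea, where the softmax rule and the temperature value are illustrative assumptions rather than Professor Lee's actual model:

```python
import math

def p_model_based(uncertainty_mb, uncertainty_mf, temperature=0.2):
    """Probability of handing control to the model-based system.

    Each controller is scored by how reliable it currently is (lower
    uncertainty gives a higher score); a softmax over the scores yields
    the probability of choosing the model-based controller.
    """
    score_mb = math.exp(-uncertainty_mb / temperature)
    score_mf = math.exp(-uncertainty_mf / temperature)
    return score_mb / (score_mb + score_mf)

# A confident model-based system dominates...
p_hi = p_model_based(0.1, 0.5)
# ...and cedes control as its uncertainty grows.
p_lo = p_model_based(0.5, 0.1)
```

The temperature controls how sharply control switches between the two systems: a small value approximates a hard winner-take-all handover, while a large one keeps both systems in play.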
2021.06.25
Professor Alice Haeyun Oh to Join GPAI Expert Group
Professor Alice Haeyun Oh will participate in the Global Partnership on Artificial Intelligence (GPAI), an international and multi-stakeholder initiative hosted by the OECD to guide the responsible development and use of AI. In collaboration with partners and international organizations, GPAI will bring together leading experts from industry, civil society, government, and academia. The Korean Ministry of Science and ICT (MSIT) officially announced that South Korea will take part in GPAI as one of the 15 founding members that include Canada, France, Japan, and the United States. Professor Oh has been appointed as a new member of the Responsible AI Committee, one of the four committees that GPAI established along with the Data Governance Committee, Future of Work Committee, and Innovation and Commercialization Committee. (END)
2020.06.22