KAIST NEWS
Artificial Intelligence
KAIST Proposes a New Way to Circumvent a Long-time Frustration in Neural Computing
The human brain begins learning through spontaneous random activity even before it receives sensory information from the external world. The technology developed by the KAIST research team enables much faster and more accurate learning when exposed to actual data by pre-learning random information in a brain-mimicking artificial neural network, and is expected to be a breakthrough in the development of brain-based artificial intelligence and neuromorphic computing technology.

KAIST (President Kwang-Hyung Lee) announced on October 23rd that Professor Se-Bum Paik's research team in the Department of Brain and Cognitive Sciences solved the weight transport problem*, a long-standing challenge in neural network learning, and through this explained the principles that enable resource-efficient learning in biological brain neural networks.

*Weight transport problem: The biggest obstacle to the development of artificial intelligence that mimics the biological brain, and the fundamental reason why, unlike biological brains, general artificial neural networks require large-scale memory and computation for learning.

Over the past several decades, the development of artificial intelligence has been based on error backpropagation learning, proposed by Geoffrey Hinton, who won the Nobel Prize in Physics this year. However, error backpropagation learning was thought to be impossible in biological brains because it requires the unrealistic assumption that individual neurons must know all the connection information across multiple layers in order to calculate the error signal for learning.

< Figure 1. Illustration depicting the method of random noise training and its effects >

This difficult problem, called the weight transport problem, was raised by Francis Crick, who won the Nobel Prize in Physiology or Medicine for the discovery of the structure of DNA, after error backpropagation learning was proposed by Hinton in 1986. Since then, it has been considered the reason why the operating principles of natural neural networks and artificial neural networks will forever be fundamentally different.

At the borderline of artificial intelligence and neuroscience, researchers including Hinton have continued to attempt to create biologically plausible models that can implement the learning principles of the brain by solving the weight transport problem. In 2016, a joint research team from Oxford University and DeepMind in the UK first proposed the concept of error backpropagation learning without weight transport, drawing attention from the academic world. However, biologically plausible error backpropagation learning without weight transport was inefficient, with slow learning speeds and low accuracy, making it difficult to apply in practice.

The KAIST research team noted that the biological brain begins learning through internal spontaneous random neural activity even before it has any external sensory experience. To mimic this, the research team pre-trained a biologically plausible neural network without weight transport on meaningless random information (random noise). As a result, they showed that the symmetry between the forward and backward neural connections of the network, an essential condition for error backpropagation learning, can be created. In other words, learning without weight transport becomes possible through random pre-training.

< Figure 2. Illustration depicting the meta-learning effect of random noise training >

The research team revealed that learning random information before learning actual data has the property of meta-learning, which is 'learning how to learn.' Neural networks that pre-learned random noise performed much faster and more accurate learning when exposed to actual data, and achieved high learning efficiency without weight transport.

< Figure 3. Illustration depicting research on understanding the brain's operating principles through artificial neural networks >

Professor Se-Bum Paik said, "It breaks the conventional understanding of existing machine learning that only data learning is important, and provides a new perspective that focuses on the neuroscience principle of creating appropriate conditions before learning," and added, "It is significant in that it solves important problems in artificial neural network learning through clues from developmental neuroscience, and at the same time provides insight into the brain's learning principles through artificial neural network models."

This study, in which Jeonghwan Cheon, a Master's candidate in the KAIST Department of Brain and Cognitive Sciences, participated as the first author and Professor Sang Wan Lee of the same department as a co-author, will be presented at the 38th Conference on Neural Information Processing Systems (NeurIPS), the world's top artificial intelligence conference, to be held in Vancouver, Canada from December 10 to 15, 2024. (Paper title: Pretraining with random noise for fast and robust learning without weight transport)

This study was conducted with the support of the National Research Foundation of Korea's Basic Research Program in Science and Engineering, the Information and Communications Technology Planning and Evaluation Institute's Talent Development Program, and the KAIST Singularity Professor Program.
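To make the mechanism concrete, the sketch below illustrates the general idea under simple, assumed settings (a tiny two-layer network in plain NumPy; the layer sizes, learning rate, and number of steps are made-up placeholders). Errors are propagated backward through a fixed random feedback matrix instead of the transposed forward weights, the network is first trained on pure random noise, and the resulting alignment between the forward and feedback weights is then measured. It is an illustration of the principle reported in the paper, not the authors' implementation.

# Minimal sketch (not the authors' code): feedback-alignment learning, where the
# backward pass uses a fixed random matrix B instead of W2.T (no weight transport),
# with a pretraining phase on pure random noise. All sizes and rates are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 100, 64, 10

W1 = rng.normal(0, 0.1, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # forward weights, layer 2
B  = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback weights

def relu(x):      return np.maximum(x, 0.0)
def relu_grad(x): return (x > 0).astype(float)

def fa_step(x, target, lr=0.005):
    """One feedback-alignment update: the error is routed backward through B, not W2.T."""
    global W1, W2
    a1 = W1 @ x
    h  = relu(a1)
    y  = W2 @ h
    e  = y - target                      # output error
    dW2 = np.outer(e, h)
    dh  = (B @ e) * relu_grad(a1)        # no weight transport here
    dW1 = np.outer(dh, x)
    W2 -= lr * dW2
    W1 -= lr * dW1

# Phase 1: pretraining on meaningless random noise (random inputs, random targets).
for _ in range(3000):
    fa_step(rng.normal(size=n_in), rng.normal(size=n_out))

# The paper reports that this kind of noise pretraining aligns the forward weights
# with the fixed feedback weights, restoring the forward/backward symmetry that
# backpropagation normally obtains by weight transport.
alignment = np.sum(W2 * B.T) / (np.linalg.norm(W2) * np.linalg.norm(B))
print(f"cosine alignment between W2 and B.T: {alignment:.3f}")

# Phase 2 (not shown): the same update rule is then applied to actual data,
# where the pre-aligned weights are reported to speed up and stabilize learning.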
2024.10.23
KAIST’s Robo-Dog “RaiBo” runs through the sandy beach
KAIST (President Kwang Hyung Lee) announced on the 25th that a research team led by Professor Jemin Hwangbo of the Department of Mechanical Engineering developed a quadrupedal robot control technology that can walk robustly and with agility even on deformable terrain such as a sandy beach.

< Photo. RAI Lab Team with Professor Hwangbo in the middle of the back row. >

Professor Hwangbo's research team developed a technology to model the force that a walking robot receives from ground made of granular materials such as sand and to simulate it for a quadrupedal robot. The team also designed an artificial neural network structure suitable for making the real-time decisions needed to adapt to various types of ground without prior information while walking, and applied it to reinforcement learning. The trained neural network controller is expected to expand the scope of application of quadrupedal walking robots by proving its robustness on changing terrain, such as the ability to move at high speed even on a sandy beach and to walk and turn on soft ground like an air mattress without losing balance.

This research, with Ph.D. student Suyoung Choi of the KAIST Department of Mechanical Engineering as the first author, was published in January in Science Robotics. (Paper title: Learning quadrupedal locomotion on deformable terrain)

Reinforcement learning is an AI training method in which a machine collects data on the results of various actions in an arbitrary situation and uses that data to perform a task. Because the amount of data required for reinforcement learning is vast, a method of collecting data through simulations that approximate physical phenomena in the real environment is widely used. In particular, learning-based controllers in the field of walking robots have been applied to real environments after learning from data collected in simulation, successfully performing walking control on various terrains. However, since the performance of a learning-based controller drops rapidly when the actual environment deviates from the simulated environment it was trained in, it is important to implement an environment similar to the real one in the data collection stage. Therefore, in order to create a learning-based controller that can maintain balance on deforming terrain, the simulator must provide a similar contact experience.

The research team defined a contact model that predicts the force generated upon contact from the motion dynamics of the walking body, based on a ground reaction force model that accounts for the additional mass effect of granular media defined in previous studies. Furthermore, by calculating the force generated from one or several contacts at each time step, the deforming terrain was simulated efficiently. The research team also introduced an artificial neural network structure that implicitly predicts ground characteristics by using a recurrent neural network to analyze time-series data from the robot's sensors.

The learned controller was mounted on the robot 'RaiBo', built hands-on by the research team, and demonstrated high-speed walking of up to 3.03 m/s on a sandy beach where the robot's feet were completely submerged in the sand. Even when applied to harder ground, such as grassy fields and a running track, it was able to run stably by adapting to the characteristics of the ground without any additional programming or revision of the control algorithm.
In addition, the robot rotated stably at 1.54 rad/s (approximately 90° per second) on an air mattress, demonstrating quick adaptability even when the terrain suddenly turned soft. The research team demonstrated the importance of providing a suitable contact experience during the learning process by comparing against a controller that assumed the ground to be rigid, and showed that the proposed recurrent neural network adjusts the controller's walking method according to the ground properties.

The simulation and learning methodology developed by the research team is expected to contribute to robots performing practical tasks by expanding the range of terrains on which various walking robots can operate.

The first author, Suyoung Choi, said, "It has been shown that providing a learning-based controller with a contact experience close to that of real deforming ground is essential for application to deforming terrain." He added, "The proposed controller can be used without prior information on the terrain, so it can be applied to various robot walking studies."

This research was carried out with the support of the Samsung Research Funding & Incubation Center of Samsung Electronics.

< Figure 1. Adaptability of the proposed controller to various ground environments. The controller, trained on a wide range of randomized granular-media simulations, showed adaptability to various natural and artificial terrains and demonstrated high-speed walking ability and energy efficiency. >

< Figure 2. Contact model definition for simulation of granular substrates. The research team used a model that considers the additional mass effect for the vertical force and a Coulomb friction model for the horizontal direction, while approximating the contact with the granular medium as occurring at a point. In addition, a model that simulates the ground resistance that can arise on the side of the foot was introduced and used in the simulation. >
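As a rough illustration of the kind of per-foot contact model described above (and in the Figure 2 caption), the sketch below computes a vertical force from a depth-dependent resistance term plus a velocity-dependent term standing in for the additional-mass effect, and caps the horizontal force with Coulomb friction. All coefficients are hypothetical placeholders, not the values identified in the paper.

# Minimal sketch (illustrative only, not the RAI Lab code) of a point-contact
# force model for granular ground. k_res, c_gran, and mu are assumed constants.
import numpy as np

def granular_contact_force(depth, vel,
                           k_res=2000.0,   # depth-dependent resistance [N/m] (assumed)
                           c_gran=150.0,   # velocity term standing in for the
                                           # additional-mass effect [N*s/m] (assumed)
                           mu=0.6):        # Coulomb friction coefficient (assumed)
    """Return the 3D contact force on one foot.

    depth : penetration depth into the granular medium (m, >= 0)
    vel   : 3D foot velocity; vel[2] is the vertical component (m/s)
    """
    if depth <= 0.0:
        return np.zeros(3)                 # no contact, no force

    # Vertical force: resistance grows with depth; pushing down faster adds force.
    f_z = k_res * depth + c_gran * max(-vel[2], 0.0)

    # Horizontal force: opposes sliding, limited by Coulomb friction on f_z.
    v_xy = np.asarray(vel[:2], dtype=float)
    speed = np.linalg.norm(v_xy)
    f_xy = np.zeros(2) if speed < 1e-6 else -mu * f_z * v_xy / speed

    return np.array([f_xy[0], f_xy[1], f_z])

# Example: a foot 2 cm deep in the sand, moving forward and slightly downward.
print(granular_contact_force(depth=0.02, vel=np.array([0.5, 0.0, -0.1])))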
2023.01.26
AI-based Digital Watermarking to Beat Fake News
(from left: PhD candidates Ji-Hyeon Kang, Seungmin Mun, Sangkeun Ji and Professor Heung-Kyu Lee)

The illegal use of images has become a prevalent issue with the rise of fake news distribution, creating social and economic problems. A KAIST team has succeeded in embedding and detecting digital watermarks based on deep-learning artificial intelligence that adaptively responds to a variety of attack types, such as watermark removal and hacking. Their research shows that the technology has reached a level of reliability suitable for commercialization.

Conventional watermarking technologies show limitations in practicality, scalability, and usefulness because they require a predetermined set of conditions, such as the attack type and intensity, and are designed and implemented to satisfy only those specific conditions. The technology itself is also vulnerable to security issues because upgraded hacking techniques are constantly emerging, such as watermark removal, copying, and substitution.

Professor Heung-Kyu Lee from the School of Computing and his team provide a web service that responds to new attacks through deep-learning artificial intelligence. The service offers a two-dimensional image watermarking technique based on neural networks, with high security derived from the nonlinear characteristics of artificial neural networks. To protect images from varying viewpoints, the service offers a depth-image-based rendering (DIBR) three-dimensional image watermarking technique. Lastly, it provides a stereoscopic three-dimensional (S3D) image watermarking technique that minimizes visual fatigue due to the embedded watermarks.

Their two-dimensional image watermarking technology is the first of its kind to be based upon artificial neural networks. It acquires robustness by training the neural network on various attack scenarios. At the same time, the team has greatly improved on existing security vulnerabilities, achieving high security against watermark hacking through the deep structure of artificial neural networks. They have also developed a watermarking technique that can be embedded whenever needed to provide proof in possible disputes.

Users can upload their images to the web service and insert watermarks. When necessary, they can detect the watermarks as proof in a dispute. Moreover, the service provides additional features, including simulation tools, watermark adjustment, and image quality comparisons before and after a watermark is embedded. This study maximized the usefulness of watermarking technology by facilitating additional editing and demonstrating robustness against hacking. Hence, the technology can be applied to a variety of content for certification, authentication, distribution tracking, and copyright. It can help spur the content industry and promote a digital society by reducing the socio-economic losses caused by the illegal use of image materials.

Professor Lee said, "Disputes related to images are now beyond the conventional realm of copyright. Interest has recently expanded rapidly to authentication, certification, integrity inspection, and distribution tracking because of the fake video problem. We will lead digital watermarking research that can overcome the technical limitations of conventional watermarking techniques."

Until now the technology had only been tested in the lab, but after years of study it is now open to the public, and the team has been conducting a test run on its webpage. Moving beyond testing under specific lab conditions, the technology will next be applied to real environments where conditions constantly change.

Figure 1. 2D image using the watermarking technique: a) original image b) watermark-embedded image c) signal from the embedded watermark

Figure 2. Result of watermark detection according to the password

Figure 3. Example of a center image using the DIBR 3D image watermarking technique: a) original image b) depth image c) watermark-embedded image d) signal from the embedded watermark

Figure 4. Example of using the S3D image watermarking technique: a) original left image b) original right image c) watermark-embedded left image d) watermark-embedded right image e) signal from the embedded watermark (left) f) signal from the embedded watermark (right)
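To sketch how robustness can be "learned" rather than hand-designed, the toy example below follows the general recipe used in neural watermarking research: an encoder network hides a bit string in an image, a simulated attack perturbs the result, and a decoder network is trained to recover the bits, so the watermark becomes robust to the attacks it saw during training. The architecture, sizes, and attack layer are assumptions for illustration, not the KAIST team's system.

# Minimal sketch (assumed architecture) of an encoder/attack/decoder watermarking
# pipeline trained end to end in PyTorch.
import torch
import torch.nn as nn

class WatermarkEncoder(nn.Module):
    def __init__(self, msg_bits=32):
        super().__init__()
        self.msg_fc = nn.Linear(msg_bits, 64 * 64)           # spread the message over the image plane
        self.conv = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, image, message):
        # image: (B, 3, 64, 64), message: (B, msg_bits) in {0, 1}
        m = self.msg_fc(message).view(-1, 1, 64, 64)
        residual = self.conv(torch.cat([image, m], dim=1))
        return image + 0.05 * residual                        # small, nearly invisible perturbation

class WatermarkDecoder(nn.Module):
    def __init__(self, msg_bits=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 16 * 16, msg_bits),
        )

    def forward(self, image):
        return self.net(image)                                # one logit per message bit

def attack(image):
    # Stand-in attack layer: additive noise; a real system would also simulate
    # compression, cropping, resizing, and other removal attempts.
    return image + 0.02 * torch.randn_like(image)

# One illustrative training step on random data.
enc, dec = WatermarkEncoder(), WatermarkDecoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
img = torch.rand(8, 3, 64, 64)
msg = torch.randint(0, 2, (8, 32)).float()
marked = enc(img, msg)
logits = dec(attack(marked))
loss = nn.functional.binary_cross_entropy_with_logits(logits, msg) \
     + 10.0 * nn.functional.mse_loss(marked, img)             # keep the watermark imperceptible
opt.zero_grad(); loss.backward(); opt.step()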
2018.12.05
Mathematical Principle behind AI's 'Black Box'
(from left: Professor Jong Chul Ye, PhD candidates Yoseob Han and Eunju Cha)

A KAIST research team identified the geometric structure of artificial intelligence (AI) and discovered the mathematical principles behind high-performing artificial neural networks, which can be applied in fields such as medical imaging.

Deep neural networks are an exemplary method of implementing deep learning, which is at the core of AI technology, and have shown explosive growth in recent years. The technique has been used in various fields, such as image and speech recognition as well as image processing. Despite their excellent performance and usefulness, the exact working principles of deep neural networks have not been well understood, and they often suffer from unexpected results or errors. Hence, there is an increasing social and technical demand for interpretable deep neural network models.

To address these issues, Professor Jong Chul Ye from the Department of Bio & Brain Engineering and his team attempted to find a geometric structure in a higher-dimensional space in which the structure of a deep neural network can be easily understood. They proposed a general deep learning framework, called deep convolutional framelets, to understand the mathematical principles of deep neural networks in terms of the tools of harmonic analysis.

As a result, it was found that the structure of a deep neural network emerges during the decomposition of a signal lifted to a high-dimensional space via a Hankel matrix, a structure that has been studied intensively in the field of signal processing. In the process of decomposing the lifted signal, two kinds of bases, local and non-local, emerge. The researchers found that the non-local and local basis functions play the roles of the pooling and filtering operations of a convolutional neural network, respectively.

Previously, when implementing AI, deep neural networks were usually constructed through empirical trial and error. The significance of the research lies in the fact that it provides a mathematical understanding of the neural network structure in a high-dimensional space, which guides users in designing an optimized neural network. The team demonstrated improved performance of deep convolutional framelet neural networks in image denoising, image inpainting, and medical image restoration.

Professor Ye said, "Unlike conventional neural networks designed through trial and error, our theory shows that the neural network structure can be optimized for each desired application, and its effects are easily predictable, by exploiting the high-dimensional geometry. This technology can be applied to a variety of fields requiring interpretation of the architecture, such as medical imaging."

This research, led by PhD candidates Yoseob Han and Eunju Cha, was published in the SIAM Journal on Imaging Sciences on April 26th.

Figure 1. The design of a deep neural network using mathematical principles

Figure 2. The results of image denoising

Figure 3. The artificial neural network restoration results in the case where 80% of the pixels are lost
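To give a concrete feel for the Hankel-matrix "lifting" on which the framework rests, the toy example below shows, for a 1-D signal, that filtering the signal is the same as multiplying its (wrap-around) Hankel matrix by the filter, and that an SVD of that matrix supplies one choice of non-local (left) and local (right) bases. The signal, filter, and window length are arbitrary; this is an illustration of the lifting idea, not code from the paper.

# Minimal sketch (illustrative only) of Hankel-matrix lifting of a 1-D signal.
import numpy as np

def hankel(signal, window):
    """Wrap-around Hankel matrix: row i holds signal[i : i + window] (circularly)."""
    n = len(signal)
    return np.array([[signal[(i + j) % n] for j in range(window)] for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=16)          # a toy 1-D signal
h = rng.normal(size=4)           # a toy filter of length 4

H = hankel(x, window=4)

# Circular filtering computed two ways: directly, and as a Hankel matrix product.
y_direct = np.array([x[np.arange(i, i + 4) % 16] @ h for i in range(16)])
y_hankel = H @ h
print(np.allclose(y_direct, y_hankel))   # True: filtering = Hankel matrix times filter

# An SVD of the lifted signal supplies non-local (U) and local (V) bases; the paper
# shows that CNN pooling and filtering operations play exactly these two roles.
U, s, Vt = np.linalg.svd(H, full_matrices=False)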
2018.09.12
Dr. Demis Hassabis, the Developer of AlphaGo, Lectures at KAIST
AlphaGo, a computer program developed by Google DeepMind in London to play the traditional Chinese board game Go, played five matches against Se-Dol Lee, a professional Go player in Korea, from March 8-15, 2016. AlphaGo won four of the five games, a significant result showcasing the advances achieved in the field of general-purpose artificial intelligence (GAI), according to the company. Dr. Demis Hassabis, the Chief Executive Officer of Google DeepMind, visited KAIST on March 11, 2016, and gave an hour-long talk to students and faculty. In the lecture, entitled "Artificial Intelligence and the Future," he gave an overview of GAI and some of its applications in Atari video games and Go. He said that the ultimate goal of GAI is to become a useful tool that helps society solve some of the biggest and most pressing problems facing humanity, from climate change to disease diagnosis.
2016.03.11
Discovery of New Therapeutic Targets for Alzheimer's Disease
A Korean research team headed by Professor Dae-Soo Kim of the Department of Biological Sciences at KAIST and Dr. Chang-Jun Lee of the Korea Institute of Science and Technology (KIST) identified that reactive astrocytes, commonly observed in brains affected by Alzheimer's disease, produce abnormal amounts of the inhibitory neurotransmitter gamma-aminobutyric acid (GABA) via the enzyme monoamine oxidase B (MAO-B) and release it through the Bestrophin-1 channel, suppressing the normal signal transmission of brain nerve cells. By suppressing GABA production or release in reactive astrocytes, the research team was able to reverse the memory and learning impairments caused by Alzheimer's disease in model mice. This discovery will allow the development of new drugs to treat Alzheimer's and related diseases.

The research results were published in the June 29, 2014 edition of Nature Medicine (Title: GABA from Reactive Astrocytes Impairs Memory in Mouse Models of Alzheimer's Disease).

For details, please read the article below:
Technology News, July 10, 2014, "Discovery of New Drug Targets for Memory Impairment in Alzheimer's Disease"
http://technews.tmcnet.com/news/2014/07/10/7917811.htm
2014.07.16