KAIST NEWS
KAIST Secures Core Technology for Ultra-High-Resolution Image Sensors
A joint research team from Korea and the United States has developed next-generation, high-resolution image sensor technology that is smaller and more power-efficient than existing sensors. Notably, the team has secured foundational technology for ultra-high-resolution shortwave infrared (SWIR) image sensors, an area currently dominated by Sony, paving the way for future market entry.

KAIST (represented by President Kwang Hyung Lee) announced on the 20th of November that a research team led by Professor SangHyeon Kim from the School of Electrical Engineering, in collaboration with Inha University and Yale University in the U.S., has developed an ultra-thin broadband photodiode (PD), marking a significant breakthrough in high-performance image sensor technology.

This research drastically improves the trade-off between absorption layer thickness and quantum efficiency found in conventional photodiode technology. Specifically, it achieved a high quantum efficiency of over 70% even with an absorption layer thinner than one micrometer (μm), reducing the thickness of the absorption layer by approximately 70% compared to existing technologies. A thinner absorption layer simplifies pixel processing, allowing for higher resolution, and shortens the carrier diffusion path, which is advantageous for photocarrier collection while also reducing cost. However, a fundamental issue with thinner absorption layers is the reduced absorption of long-wavelength light.

< Figure 1. Schematic diagram of the InGaAs photodiode image sensor integrated on the Guided-Mode Resonance (GMR) structure proposed in this study (left), a photograph of the fabricated wafer, and a scanning electron microscope (SEM) image of the periodic patterns (right) >

The research team introduced a guided-mode resonance (GMR) structure* that enables high-efficiency light absorption across a wide spectral range from 400 to 1,700 nanometers (nm). This wavelength range covers not only visible light but also the SWIR region, making it valuable for various industrial applications.

*Guided-Mode Resonance (GMR) structure: an electromagnetic structure in which a wave of a specific wavelength resonates, forming a strong electric and magnetic field. Because the field energy is maximized under this condition, the effect has long been used to increase antenna and radar efficiency.

The improved performance in the SWIR region is expected to play a significant role in developing next-generation image sensors with increasingly high resolutions. The GMR structure, in particular, holds potential for further enhancing resolution and other performance metrics through hybrid integration and monolithic 3D integration with complementary metal-oxide-semiconductor (CMOS)-based readout integrated circuits (ROICs).

< Figure 2. Benchmark of state-of-the-art InGaAs-based SWIR pixels with simulated EQE lines as a function of TAL variation. Performance is maintained while the absorption layer thickness is reduced by 50% to 70%, from 2.1 micrometers or more to 1 micrometer or less >

The research team has significantly enhanced international competitiveness in low-power devices and ultra-high-resolution imaging technology, opening up possibilities for applications in digital cameras, security systems, and medical and industrial image sensors, as well as future ultra-high-resolution sensors for autonomous driving, aerospace, and satellite observation.
Professor SangHyeon Kim, the lead researcher, commented, “This research demonstrates that significantly higher performance than existing technologies can be achieved even with ultra-thin absorption layers.”

< Figure 3. Top optical microscope image and cross-sectional scanning electron microscope image of the InGaAs photodiode image sensor fabricated on the GMR structure (left). Improved quantum efficiency of the ultra-thin image sensor (red) fabricated with the technology proposed in this study (right) >

The results of this research were published on the 15th of November in the prestigious international journal Light: Science & Applications (JCR top 2.9%, IF=20.6), with Professor Dae-Myung Geum of Inha University (formerly a KAIST postdoctoral researcher) and Dr. Jinha Lim (currently a postdoctoral researcher at Yale University) as co-first authors. (Paper title: “Highly-efficient (>70%) and Wide-spectral (400 nm–1700 nm) sub-micron-thick InGaAs photodiodes for future high-resolution image sensors”)

This study was supported by the National Research Foundation of Korea.
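As a rough illustration of the thickness-versus-efficiency trade-off described above, the Python sketch below applies the textbook Beer-Lambert relation; the absorption coefficient and the pass counts are assumed round numbers for illustration, not values from the paper.

```python
import math

# Illustrative Beer-Lambert estimate of single-pass absorption in an
# InGaAs layer: eta = 1 - exp(-alpha * d). The absorption coefficient
# below is an assumed round number, not a value from the paper.
alpha = 0.7e4          # 1/cm, assumed absorption coefficient near 1550 nm
for d_um in (2.1, 1.0, 0.5):
    d_cm = d_um * 1e-4
    single_pass = 1 - math.exp(-alpha * d_cm)
    print(f"{d_um} um layer: single-pass absorption ~ {single_pass:.0%}")

# A resonant structure such as a GMR grating recirculates light, so the
# effective optical path is several passes long; this is why a
# sub-micron layer can still reach high quantum efficiency.
def resonant_absorption(alpha_per_cm, d_cm, n_passes):
    return 1 - math.exp(-alpha_per_cm * d_cm * n_passes)

print(f"1.0 um with ~5 effective passes: {resonant_absorption(alpha, 1e-4, 5):.0%}")
```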
2024.11.22
KAIST Succeeds in the Real-time Observation of Organoids using Holotomography
Organoids, 3D miniature organs that mimic the structure and function of human organs, play an essential role in disease research and drug development. A Korean research team has overcome the limitations of existing imaging technologies, succeeding in the real-time, high-resolution observation of living organoids.

KAIST (represented by President Kwang Hyung Lee) announced on the 14th of October that Professor YongKeun Park’s research team from the Department of Physics, in collaboration with the Genome Editing Research Center (Director Bon-Kyoung Koo) of the Institute for Basic Science (IBS, President Do-Young Noh) and Tomocube Inc., has developed an imaging technology that uses holotomography to observe live small intestinal organoids in real time at high resolution. Existing imaging techniques have struggled to observe living organoids at high resolution over extended periods and often required additional treatments such as fluorescent staining.

< Figure 1. Overview of the low-coherence HT workflow. Using holotomography, 3D morphological restoration and quantitative analysis of organoids can be performed. To overcome the limited field of view of the microscope, the research team used a large-area field-of-view stitching algorithm and performed 3D reconstruction by acquiring multi-focus holographic images. The organoids were then segmented into the regions needed for analysis, and the protein concentration, measurable from the refractive index, and the survival rate of the organoids were quantitatively evaluated. >

The research team introduced holotomography to address these issues: it provides high-resolution images without the need for fluorescent staining and allows the long-term observation of dynamic changes in real time without causing cell damage. The team validated this technology using small intestinal organoids from experimental mice and was able to observe various cell structures inside the organoids in detail. It also captured dynamic changes such as growth processes, cell division, and cell death in real time. Additionally, the technology allowed the precise analysis of the organoids' responses to drug treatments, verifying the survival of the cells.

The researchers believe that this breakthrough will open new horizons in organoid research, enabling the greater utilization of organoids in drug development, personalized medicine, and regenerative medicine. Future research is expected to more accurately replicate the in vivo environment of organoids, contributing significantly to a more detailed understanding of various life phenomena at the cellular level through more precise 3D imaging.

< Figure 2. Real-time organoid morphology analysis. Using holotomography, the lumen and villus development of intestinal organoids, which are difficult to observe with a conventional microscope, can be observed in real time. In addition, various information about intestinal organoids can be obtained by quantifying their size and protein content through image analysis. >

Dr. Mahn Jae Lee, a graduate of KAIST's Graduate School of Medical Science and Engineering, currently at Chungnam National University Hospital and the first author of the paper, commented, "This research represents a new imaging technology that surpasses previous limitations and is expected to make a major contribution to disease modeling, personalized treatments, and drug development research using organoids."

The research results were published online in the international journal Experimental & Molecular Medicine on October 1, 2024, and the technology has been recognized for its applicability in various fields of the life sciences. (Paper title: “Long-term three-dimensional high-resolution imaging of live unlabeled small intestinal organoids via low-coherence holotomography”)

This research was supported by the National Research Foundation of Korea, KAIST Institutes, and the Institute for Basic Science.
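For readers curious how a refractive index map yields a protein concentration, the sketch below uses the standard refractive-index-increment relation from quantitative phase imaging; the medium RI and the ~0.19 mL/g increment are common literature values, not numbers reported in this paper.

```python
# Minimal sketch of how holotomography maps refractive index (RI) to
# protein concentration. The specific refractive index increment
# ~0.19 mL/g is a widely used literature value for proteins; it is not
# taken from this paper.
RI_MEDIUM = 1.337          # assumed RI of the culture medium
RI_INCREMENT = 0.19        # mL/g, typical protein RI increment

def protein_concentration(ri_voxel: float) -> float:
    """Protein concentration in g/mL estimated from a voxel's RI."""
    return (ri_voxel - RI_MEDIUM) / RI_INCREMENT

# Example: a voxel measured at RI 1.360 inside an organoid cell
c = protein_concentration(1.360)
print(f"~{c * 1000:.0f} mg/mL protein")   # ~121 mg/mL
```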
2024.10.14
KAIST Employs Image-recognition AI to Determine Battery Composition and Conditions
An international collaborative research team has developed an image recognition technology that can accurately determine the elemental composition and the number of charge-discharge cycles of a battery by examining only its surface morphology with AI.

KAIST (President Kwang Hyung Lee) announced on July 2nd that Professor Seungbum Hong from the Department of Materials Science and Engineering, in collaboration with the Electronics and Telecommunications Research Institute (ETRI) and Drexel University in the United States, has developed a method to predict the major elemental composition and charge-discharge state of NCM cathode materials with 99.6% accuracy using convolutional neural networks (CNNs)*.

*Convolutional Neural Network (CNN): A type of multi-layer, feed-forward artificial neural network used for analyzing visual images.

The research team noted that while scanning electron microscopy (SEM) is used in semiconductor manufacturing to inspect wafer defects, it is rarely used in battery inspections. For batteries, SEM is used only in research settings to analyze particle size or, in the case of degraded battery materials, to predict reliability from broken particles and the shape of the fractures. The research team reasoned that it would be groundbreaking if automated SEM could be used in battery production, as it is in semiconductor manufacturing, to inspect the surface of the cathode material and determine whether it was synthesized with the desired composition and whether its lifespan would be reliable, thereby reducing the defect rate.

< Figure 1. Example images of true cases and their grad-CAM overlays from the best trained network. >

The researchers trained a CNN-based AI, of the kind applied in autonomous vehicles, on surface images of battery materials, enabling it to predict the major elemental composition and charge-discharge cycle states of the cathode materials. They found that while the method could accurately predict the composition of materials with additives, it had lower accuracy for predicting charge-discharge states. The team plans to further train the AI with various battery material morphologies produced through different processes and ultimately use it for inspecting the compositional uniformity and predicting the lifespan of next-generation batteries.

Professor Joshua C. Agar, one of the collaborating researchers on the project from the Department of Mechanical Engineering and Mechanics of Drexel University, said, "In the future, artificial intelligence is expected to be applied not only to battery materials but also to various dynamic processes in functional materials synthesis, clean energy generation in fusion, and understanding the foundations of particles and the universe."

Professor Seungbum Hong from KAIST, who led the research, stated, "This research is significant as it is the first in the world to develop an AI-based methodology that can quickly and accurately predict the major elemental composition and the state of a battery from the structural data of micron-scale SEM images. The methodology developed in this study for identifying the composition and state of battery materials from microscopic images is expected to play a crucial role in improving the performance and quality of battery materials in the future."

< Figure 2. Accuracies of CNN model predictions on SEM images of NCM cathode materials with additives under various conditions. >

This research was conducted by KAIST Materials Science and Engineering graduates Dr. Jimin Oh and Dr. Jiwon Yeom, the co-first authors, in collaboration with Professor Joshua C. Agar and Dr. Kwang Man Kim from ETRI. It was supported by the National Research Foundation of Korea, the KAIST Global Singularity project, and international collaboration with the US research team. The results were published in the international journal npj Computational Materials on May 4. (Paper title: “Composition and state prediction of lithium-ion cathode via convolutional neural network trained on scanning electron microscopy images”)
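As a sketch of what such a classifier can look like, the following PyTorch snippet builds a small CNN that maps grayscale SEM crops to composition/state classes; the architecture, input size, and class count are placeholder assumptions, not the authors' published model.

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN classifier for SEM surface images, in the
# spirit of the study; layer sizes and class counts are assumptions.
class SEMClassifier(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> 64 features
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Grayscale 256x256 SEM crops; classes could be, e.g., NCM compositions
# crossed with charge-discharge cycle states.
model = SEMClassifier(n_classes=6)
logits = model(torch.randn(8, 1, 256, 256))   # batch of 8 crops
print(logits.shape)                           # torch.Size([8, 6])
```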
2024.07.02
North Korea and Beyond: AI-Powered Satellite Analysis Reveals the Unseen Economic Landscape of Underdeveloped Nations
- A joint research team in computer science, economics, and geography has developed an artificial intelligence (AI) technology to measure grid-level economic development within six-square-kilometer regions.
- This AI technology is applicable in regions with limited statistical data (e.g., North Korea), supporting international efforts to propose policies for economic growth and poverty reduction in underdeveloped countries.
- The research team plans to make this technology freely available to contribute to the United Nations' Sustainable Development Goals (SDGs).

The United Nations reports that more than 700 million people live in extreme poverty, earning less than two dollars a day. However, an accurate assessment of poverty remains a global challenge. For example, 53 countries have not conducted agricultural surveys in the past 15 years, and 17 countries have not published a population census. To fill this data gap, new technologies are being explored to estimate poverty from alternative sources such as street views, aerial photos, and satellite images.

The paper, published in Nature Communications, demonstrates how artificial intelligence (AI) can help analyze economic conditions from daytime satellite imagery. This new technology can be applied even to the least developed countries, such as North Korea, that do not have reliable statistical data for typical machine learning training.

The researchers used publicly available Sentinel-2 satellite images from the European Space Agency (ESA). They split these images into small six-square-kilometer grids. At this zoom level, visual information such as buildings, roads, and greenery can be used to quantify economic indicators. As a result, the team obtained the first ever fine-grained economic map of regions like North Korea. The same algorithm was applied to other underdeveloped countries in Asia: North Korea, Nepal, Laos, Myanmar, Bangladesh, and Cambodia (see Image 1).

The key feature of the research model is its "human-machine collaborative approach," which combines human input with AI predictions for areas with scarce data. In this research, ten human experts compared satellite images and judged the economic conditions of each area, with the AI learning from this human data and assigning economic scores to each image. The results showed that the human-AI collaborative approach outperformed machine-only learning algorithms.

< Image 1. Nightlight satellite images of North Korea (top-left: background photo provided by NASA's Earth Observatory). South Korea appears brightly lit compared to North Korea, which is mostly dark except for Pyongyang. In contrast, the model developed by the research team uses daytime satellite imagery to produce more detailed economic predictions for North Korea (top-right) and five Asian countries (bottom: background photo from Google Earth). >

The research was led by an interdisciplinary team of computer scientists, economists, and a geographer from KAIST & IBS (Donghyun Ahn, Meeyoung Cha, Jihee Kim), Sogang University (Hyunjoo Yang), HKUST (Sangyoon Park), and NUS (Jeasurk Yang). Dr. Charles Axelsson, Associate Editor at Nature Communications, handled this paper during the journal's peer review process.

The research team found that the scores showed a strong correlation with traditional socio-economic metrics such as population density, employment, and number of businesses. This demonstrates the wide applicability and scalability of the approach, particularly in data-scarce countries. Furthermore, the model's strength lies in its ability to detect annual changes in economic conditions at a detailed geospatial level without using any survey data (see Image 2).

< Image 2. Differences in satellite imagery and economic scores in North Korea between 2016 and 2019. Significant development was found in the Wonsan-Kalma area (top), one of the tourist development zones, but no changes were observed in the Wiwon Industrial Development Zone (bottom). (Background photo: Sentinel-2 satellite imagery provided by the European Space Agency (ESA)) >

This model would be especially valuable for rapidly monitoring progress on Sustainable Development Goals such as reducing poverty and promoting more equitable and sustainable growth on an international scale. The model can also be adapted to measure various social and environmental indicators. For example, it can be trained to identify regions highly vulnerable to climate change and disasters to provide timely guidance for disaster relief efforts.

As an example, the researchers explored how North Korea changed before and after the United Nations sanctions against the country. By applying the model to satellite images of North Korea from both 2016 and 2019, the researchers discovered three key trends in the country's economic development over that period. First, economic growth in North Korea became more concentrated in Pyongyang and major cities, exacerbating the urban-rural divide. Second, satellite imagery revealed significant changes in areas designated for tourism and economic development, such as new building construction and other meaningful alterations. Third, traditional industrial and export development zones showed relatively minor changes.

Meeyoung Cha, a data scientist on the team, explained, "This is an important interdisciplinary effort to address global challenges like poverty. We plan to apply our AI algorithm to other international issues, such as monitoring carbon emissions, disaster damage detection, and the impact of climate change."

Jihee Kim, an economist on the research team, commented that this approach would enable detailed examinations of economic conditions in the developing world at a low cost, reducing data disparities between developed and developing nations. She further emphasized that this is essential because many public policies require economic measurements to achieve their goals, whether they concern growth, equality, or sustainability.

The research team has made the source code publicly available via GitHub and plans to continue improving the technology, applying it to new satellite images updated annually. The results of this study, with Ph.D. candidate Donghyun Ahn at KAIST and Ph.D. candidate Jeasurk Yang at NUS as joint first authors, were published in Nature Communications under the title "A human-machine collaborative approach measures economic development using satellite imagery."

< Photos of the main authors. 1. Donghyun Ahn, Ph.D. candidate at the KAIST School of Computing 2. Jeasurk Yang, Ph.D. candidate at the Department of Geography of the National University of Singapore 3. Meeyoung Cha, Professor of the KAIST School of Computing and CI at IBS 4. Jihee Kim, Professor of the KAIST School of Business and Technology Management 5. Sangyoon Park, Professor of the Division of Social Science at the Hong Kong University of Science and Technology 6. Hyunjoo Yang, Professor of the Department of Economics at Sogang University >
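The pairwise, human-in-the-loop idea can be sketched in a few lines of PyTorch: a scoring network is trained so that grids judged more developed by human annotators receive higher scores (a RankNet-style loss). The feature dimensions and the loss choice are illustrative assumptions, not the published model.

```python
import torch
import torch.nn as nn

# Sketch of the "human-machine collaborative" idea: humans provide
# pairwise judgments ("grid A looks more developed than grid B"), and a
# network learns a scalar economic score consistent with them. The
# feature extractor and dimensions are placeholders.
scorer = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))

def pairwise_rank_loss(feat_a, feat_b):
    """feat_a was judged more developed than feat_b by annotators."""
    margin = scorer(feat_a) - scorer(feat_b)
    return nn.functional.softplus(-margin).mean()   # -log sigmoid(margin)

# Toy step: 32 human-labeled pairs of precomputed image embeddings
a, b = torch.randn(32, 512), torch.randn(32, 512)
loss = pairwise_rank_loss(a, b)
loss.backward()
print(float(loss))
```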
2023.12.07
KAIST builds a high-resolution 3D holographic sensor using a single mask
Holographic cameras can provide more realistic images than ordinary cameras thanks to their ability to acquire 3D information about objects. However, existing holographic cameras rely on interferometers, which measure the phase of light through the interference of light waves, making them complex and sensitive to their surrounding environment.

On August 23, a KAIST research team led by Professor YongKeun Park from the Department of Physics announced a new leap forward in 3D holographic imaging sensor technology. The team proposed an innovative holographic camera technology that does not use complex interferometry. Instead, it uses a mask to precisely measure the phase information of light and reconstruct the 3D information of an object with high accuracy.

< Figure 1. Structure and principle of the proposed holographic camera. The amplitude and phase information of light scattered from an object can be measured. >

The team incorporated a mask that fulfills certain mathematical conditions into an ordinary camera; laser light scattered from the object is measured through the mask and analyzed using a computer. This does not require a complex interferometer and allows the phase information of light to be collected through a simplified optical system.

In this technique, the mask, placed between two lenses behind the object, plays an important role. The mask selectively filters specific parts of the light, and the intensity of the light passing through the lens can be measured with an ordinary commercial camera. The technique then combines the image data received from the camera with the known pattern of the mask and reconstructs the object’s precise 3D information using an algorithm. This method allows a high-resolution 3D image of an object to be captured at any position.

In practical situations, one can construct a laser-based holographic 3D image sensor by adding a mask with a simple design to a general image sensor. This makes the design and construction of the optical system much easier. In particular, this novel technology can capture high-resolution holographic images of objects moving at high speeds, which widens its potential field of application.

< Figure 2. A moving doll captured by a conventional camera and the proposed holographic camera. When a picture is taken without focusing on the object, a general camera produces only a blurred image of the doll, but the proposed holographic camera can restore the blurred image into a clear one. >

The results of this study, conducted with Dr. Jeonghun Oh from the KAIST Department of Physics as the first author, were published in Nature Communications on August 12 under the title "Non-interferometric stand-alone single-shot holographic camera using reciprocal diffractive imaging".

Dr. Oh said, “The holographic camera module we are suggesting can be built by adding a filter to an ordinary camera, which would allow even non-experts to handle it easily in everyday life if it were to be commercialized.” He added, “In particular, it is a promising candidate with the potential to replace existing remote sensing technologies.”

This research was supported by the National Research Foundation’s Leader Research Project, the Korean Ministry of Science and ICT’s Core Hologram Technology Support Project, and the Nano and Material Technology Development Project.
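A grossly simplified numerical sketch of mask-based computational imaging follows: if the mask and optics are modeled as a known complex transmission matrix, the object field can be recovered from the speckle measurement by a regularized inverse. The actual paper solves a harder, intensity-only problem; this toy assumes complex field measurements purely to expose the linear-algebra idea.

```python
import numpy as np

# Toy model: sensor measurement y = A x, where A is a known random
# complex transmission matrix standing in for the mask + optics, and x
# is the unknown object field. Dimensions are arbitrary assumptions.
rng = np.random.default_rng(0)
n_pix, n_obj = 2048, 512
A = rng.normal(size=(n_pix, n_obj)) + 1j * rng.normal(size=(n_pix, n_obj))
x_true = rng.normal(size=n_obj) + 1j * rng.normal(size=n_obj)
y = A @ x_true                                 # speckle field at the sensor

# Tikhonov-regularized least squares: x = (A^H A + eps I)^-1 A^H y
eps = 1e-3
x_rec = np.linalg.solve(A.conj().T @ A + eps * np.eye(n_obj), A.conj().T @ y)
print("relative error:",
      np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```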
2023.09.05
A KAIST research team unveils new path for dense photonic integration
Integrated optical semiconductor (hereinafter referred to as optical semiconductor) technology is a next-generation semiconductor technology attracting research and investment worldwide, because it can condense complex optical systems such as LiDAR, quantum sensors, and quantum computers into a single small chip. Whereas the key question in conventional electronic semiconductor technology has been how small devices can be made, down to 5-nanometer or 2-nanometer nodes, in optical semiconductor devices it is the degree of integration that determines performance, price, and energy efficiency.

KAIST (President Kwang Hyung Lee) announced on the 19th that a research team led by Professor Sangsik Kim of the School of Electrical Engineering discovered a new optical coupling mechanism that can increase the degree of integration of optical semiconductor devices by more than 100 times. The number of elements that can be placed on a single chip is called the degree of integration. It is, however, very difficult to increase the degree of integration of optical semiconductor devices, because the wave nature of light causes crosstalk between photons in adjacent devices. Previous studies could reduce the crosstalk of light only for specific polarizations; in this study, by discovering a new light coupling mechanism, the research team developed a method to increase the degree of integration even under polarization conditions that were previously considered impossible.

This study, led by Professor Sangsik Kim as corresponding author and conducted with students he taught at Texas Tech University, was published in the international journal Light: Science & Applications [IF=20.257] on June 2nd. (Paper title: Anisotropic leaky-like perturbation with subwavelength gratings enables zero crosstalk)

Professor Sangsik Kim said, "The interesting thing about this study is that it paradoxically eliminated crosstalk through leaky waves (light that tends to spread sideways), which were previously thought to increase crosstalk." He went on to add, “If the optical coupling method using the leaky wave revealed in this study is applied, it will be possible to develop various optical semiconductor devices that are smaller and have less noise.”

Professor Sangsik Kim is a researcher recognized for his expertise in optical semiconductor integration. In his previous research, he developed an all-dielectric metamaterial that can control how much light spreads laterally by patterning a semiconductor structure at a size smaller than the wavelength, and proved through experiments that it improves the degree of integration of optical semiconductors. These studies were reported in Nature Communications (Vol. 9, Article 1893, 2018) and Optica (Vol. 7, pp. 881-887, 2020). In recognition of these achievements, Professor Kim has received the NSF CAREER Award from the National Science Foundation (NSF) and the Young Scientist Award from the Association of Korean-American Scientists and Engineers.

Meanwhile, this research was carried out with support from the New Research Project of Excellence of the National Research Foundation of Korea and the National Science Foundation of the US.

< Figure 1. Illustration depicting light propagation without crosstalk in the waveguide array of the developed metamaterial-based optical semiconductor >
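For context, the textbook coupled-mode picture below shows why reducing the coupling coefficient between adjacent waveguides suppresses crosstalk; it is a generic model, not the anisotropic leaky-wave formulation developed in the paper, and the coefficients are assumed values.

```python
import numpy as np

# Coupled-mode theory for two parallel waveguides: power launched into
# waveguide 1 leaks into its neighbor as P2(z) = sin^2(kappa * z).
# Shrinking the coupling coefficient kappa stretches the crosstalk
# length, which is what skin-depth / leaky-wave engineering achieves.
def crosstalk_power(kappa_per_um: float, z_um: np.ndarray) -> np.ndarray:
    return np.sin(kappa_per_um * z_um) ** 2

z = np.linspace(0, 100, 5)            # propagation distance, um
for kappa in (0.05, 0.0005):          # assumed coupling coefficients
    print(f"kappa={kappa}/um ->", np.round(crosstalk_power(kappa, z), 4))
```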
2023.06.21
KAIST debuts “DreamWaQer” - a quadrupedal robot that can walk in the dark
- The team led by Professor Hyun Myung of the School of Electrical Engineering developed “DreamWaQ”, a deep reinforcement learning-based walking robot control technology that can walk in atypical environments without visual or tactile information
- Utilizing the “DreamWaQ” technology can enable the mass production of various types of “DreamWaQers”
- Expected to be used for exploration in atypical environments under extreme circumstances such as fire disasters

A team of Korean engineering researchers has developed a quadrupedal robot technology that can climb up and down steps and move without falling over uneven terrain, such as tree roots, without the help of visual or tactile sensors, even in disaster situations where visibility is impeded by darkness or thick smoke. KAIST (President Kwang Hyung Lee) announced on the 29th of March that Professor Hyun Myung's research team at the Urban Robotics Lab in the School of Electrical Engineering has developed a walking robot control technology that enables robust 'blind locomotion' in various atypical environments.

< (From left) Prof. Hyun Myung, Doctoral Candidates I Made Aswin Nahrendra, Byeongho Yu, and Minho Oh. In the foreground is the DreamWaQer, a quadrupedal robot equipped with DreamWaQ technology. >

The KAIST research team named the technology "DreamWaQ" because it enables walking robots to move about even in the dark, just as a person fresh out of bed can walk to the bathroom in the dark without visual help. With this technology installed on any legged robot, it becomes possible to create various types of "DreamWaQers".

Existing walking robot controllers are based on kinematics and/or dynamics models; these are known as model-based control methods. In atypical environments such as open, uneven fields in particular, the robot must obtain the terrain's feature information quickly in order to maintain stability while walking, which makes such controllers depend heavily on the ability to perceive the surrounding environment.

In contrast, the controller developed by Professor Hyun Myung's research team, based on deep reinforcement learning (RL), can quickly compute appropriate control commands for each motor of the walking robot from data of various environments obtained in simulation. Whereas existing controllers learned in simulation require a separate re-tuning process to work on an actual robot, the controller developed by the research team is expected to be easily applied to various walking robots because it needs no additional tuning.

DreamWaQ, the controller developed by the research team, consists largely of a context estimation network that estimates the ground and robot information, and a policy network that computes control commands. The context-aided estimator network estimates the ground information implicitly, and the robot’s state explicitly, from inertial and joint information. This information is fed into the policy network to generate optimal control commands. Both networks are trained together in simulation.

While the context-aided estimator network is trained through supervised learning, the policy network is trained through an actor-critic architecture, a deep RL methodology. The actor network can only implicitly infer the surrounding terrain information. In the simulation, the surrounding terrain information is known, and the critic, or value network, which has the exact terrain information, evaluates the policy of the actor network. This whole learning process takes only about an hour on a GPU-enabled PC, and the actual robot carries only the trained actor network. Without looking at the surrounding terrain, the robot uses only its internal inertial measurement unit (IMU) and joint angle measurements to imagine which of the various environments learned in simulation the current one resembles. If it suddenly encounters a change in height, such as a staircase, it will not know until its foot touches the step, but it rapidly infers the terrain information the moment its foot touches the surface. The control command suited to the estimated terrain is then transmitted to each motor, enabling rapidly adaptive walking.

The DreamWaQer robot walked not only in the laboratory environment, but also in an outdoor environment around the campus with many curbs and speed bumps, and over fields with many tree roots and gravel, demonstrating its abilities by overcoming stairs with a rise of two-thirds of its body height. In addition, regardless of the environment, the research team confirmed that the robot was capable of stable walking at speeds from a slow 0.3 m/s to a rather fast 1.0 m/s.

The results of this study, with doctoral student I Made Aswin Nahrendra as the first author and his colleague Byeongho Yu as a co-author, have been accepted for presentation at the upcoming IEEE International Conference on Robotics and Automation (ICRA), scheduled to be held in London at the end of May. (Paper title: DreamWaQ: Learning Robust Quadrupedal Locomotion With Implicit Terrain Imagination via Deep Reinforcement Learning)

Videos of the walking robot DreamWaQer equipped with DreamWaQ can be found at the addresses below.
Main Introduction: https://youtu.be/JC1_bnTxPiQ
Experiment Sketches: https://youtu.be/mhUUZVbeDA0

Meanwhile, this research was carried out with support from the Robot Industry Core Technology Development Program of the Ministry of Trade, Industry and Energy (MOTIE). (Task title: Development of Mobile Intelligence SW for Autonomous Navigation of Legged Robots in Dynamic and Atypical Environments for Real Application)

< Figure 1. Overview of DreamWaQ, the controller developed by the research team. The network consists of an estimator network that learns implicit and explicit estimates together, a policy network that acts as the controller, and a value network that guides the policy during training. When implemented on a real robot, only the estimator and policy networks are used. Both networks run in less than 1 ms on the robot's on-board computer. >

< Figure 2. Since the estimator can implicitly estimate the ground information as the foot touches the surface, the robot can adapt quickly to rapidly changing ground conditions. >

< Figure 3. Results showing that even a small walking robot was able to overcome steps with height differences of about 20 cm. >
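The division of labor described above can be sketched structurally as follows; all dimensions are placeholders, and the networks are reduced to bare multilayer perceptrons rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

# Structural sketch of DreamWaQ-style asymmetric training (dimensions
# are assumptions): the critic sees privileged terrain info available
# only in simulation, while the estimator and actor use only
# proprioception (IMU + joint states), so only they run on the robot.
PROPRIO, PRIV, LATENT, ACT = 45, 187, 16, 12

estimator = nn.Sequential(nn.Linear(PROPRIO, 128), nn.ELU(),
                          nn.Linear(128, LATENT))      # implicit terrain context
actor = nn.Sequential(nn.Linear(PROPRIO + LATENT, 128), nn.ELU(),
                      nn.Linear(128, ACT))             # motor commands
critic = nn.Sequential(nn.Linear(PROPRIO + PRIV, 128), nn.ELU(),
                       nn.Linear(128, 1))              # value, sim-only

obs = torch.randn(1, PROPRIO)            # IMU + joint angles/velocities
privileged = torch.randn(1, PRIV)        # exact terrain map (simulation only)
z = estimator(obs)
action = actor(torch.cat([obs, z], dim=-1))
value = critic(torch.cat([obs, privileged], dim=-1))
print(action.shape, value.shape)
```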
2023.05.18
KAIST researchers find the key to overcome the limits in X-ray microscopy
X-ray microscopes have the advantage of penetrating most substances, so internal organs and skeletons can be observed non-invasively through chest X-rays or CT scans. Recently, studies to increase the resolution of X-ray imaging technology have been actively conducted in order to precisely observe the internal structure of semiconductors and batteries at the nanoscale.

KAIST (President Kwang Hyung Lee) announced on April 12th that a joint research team led by Professor YongKeun Park of the Department of Physics and Dr. Jun Lim of the Pohang Accelerator Laboratory has succeeded in developing a core technology that can overcome the resolution limitations of existing X-ray microscopes. This study, with Dr. KyeoReh Lee as the first author, was published on the 6th of April in Light: Science & Applications, a world-renowned academic journal in optics and photonics. (Paper title: Direct high-resolution X-ray imaging exploiting pseudorandomness)

X-ray nanomicroscopes have no refractive lenses; instead of a lens, an X-ray microscope uses a circular grating called a concentric zone plate. The resolution of an image obtained using the zone plate is determined by the quality of the nanostructures that comprise the plate. These nanostructures are difficult to fabricate and maintain, which sets the resolution limit for X-ray microscopy.

The research team developed a new X-ray nanomicroscopy technology to overcome this problem. The X-ray lens proposed by the research team takes the form of numerous holes punched in a thin tungsten film, and it generates random diffraction patterns by diffracting incident X-rays. The research team showed mathematically that, paradoxically, the high-resolution information of the sample is fully contained in these random diffraction patterns, and succeeded in extracting that information to image the internal structure of samples. This imaging method exploiting the mathematical properties of random diffraction was first proposed and implemented in the visible light band by Dr. KyeoReh Lee and Professor YongKeun Park in 2016*. The present study builds on those earlier results to solve a difficult, lingering problem in the field of X-ray imaging.

※ "Exploiting the speckle-correlation scattering matrix for a compact reference-free holographic image sensor." Nature Communications 7, 13359 (2016).

The resolution of the reconstructed image has no direct correlation with the size of the pattern etched on the random lens. Based on this idea, the research team succeeded in acquiring images with 14 nm resolution (approximately 1/7 the size of a coronavirus particle) using random lenses patterned with circular holes 300 nm in diameter. The imaging technology developed by the research team is a key fundamental technology that can push the resolution of X-ray nanomicroscopy beyond the fabrication limits of existing zone plates.

The first author and one of the co-corresponding authors, Dr. KyeoReh Lee of the KAIST Department of Physics, said, “In this study, the resolution was limited to 14 nm, but if a next-generation X-ray light source and a high-performance X-ray detector are used, the resolution would exceed that of conventional X-ray nano-imaging and approach the resolution of an electron microscope.” He added, “Unlike an electron microscope, X-rays can probe internal structure without damaging the sample, so this will be able to set a new standard for non-invasive nanostructure observation processes such as quality inspections for semiconductors.”

The co-corresponding author, Dr. Jun Lim of the Pohang Accelerator Laboratory, said, “In the same context, the developed imaging technology is expected to greatly increase the performance of the 4th-generation multipurpose radiation accelerator set to be established in Ochang, North Chungcheong Province.”

This research was conducted with support from the Research Leader Program and the Sejong Science Fellowship of the National Research Foundation of Korea.

Fig. 1. Designed diffuser as an X-ray imaging lens. a, Schematic of full-field transmission X-ray microscopy. The attenuation (amplitude) map of a sample is measured. The image resolution (dx) is limited by the outermost zone width of the zone plate (D). b, Schematic of the proposed method. A designed diffuser is used instead of a zone plate. The image resolution is finer than the hole size of the diffuser (dx << D).

Fig. 2. The left panel is a scanning electron microscopy (SEM) image of the X-ray diffuser used in the experiment. The middle panel shows the design of the X-ray diffuser, with an inset showing the corresponding part of the SEM image. The right panel shows an experimental random X-ray diffraction pattern, also known as a speckle pattern, obtained from the X-ray diffuser.

Fig. 3. Images taken with the proposed randomness-based X-ray imaging (bottom) and the corresponding scanning electron microscope (SEM) images (top).
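The following toy example (not the paper's algorithm) hints at why a known random diffuser is so powerful: with a random sensing matrix and heavy oversampling, even a naive alternating-projection loop can recover an object from amplitude-only speckle data. All sizes and the recovery scheme are illustrative assumptions.

```python
import numpy as np

# Toy phase retrieval with a known random "diffuser" matrix A:
# only speckle amplitudes |A x| are measured, and x is recovered by
# alternating projections (impose measured magnitudes, project back).
rng = np.random.default_rng(1)
m, n = 4096, 128                               # 32x oversampling, assumed
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
A_pinv = np.linalg.pinv(A)
x_true = rng.normal(size=n) + 1j * rng.normal(size=n)
y_mag = np.abs(A @ x_true)                     # measured speckle amplitudes

x = rng.normal(size=n) + 1j * rng.normal(size=n)   # random initial guess
for _ in range(200):
    field = A @ x
    field = y_mag * np.exp(1j * np.angle(field))   # impose measured magnitudes
    x = A_pinv @ field                             # project to object space

# Correlation is insensitive to the unrecoverable global phase
corr = abs(np.vdot(x, x_true)) / (np.linalg.norm(x) * np.linalg.norm(x_true))
print(f"correlation with ground truth: {corr:.3f}")
```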
2023.04.12
A Quick but Clingy Creepy-Crawler that will MARVEL You
Engineered by KAIST mechanical engineers, a quadrupedal robot climbs steel walls and crawls across metal ceilings at the fastest speed the world has ever seen.

< Photo 1. (From left) KAIST ME Prof. Hae-Won Park, Ph.D. Student Yong Um, Ph.D. Student Seungwoo Hong >

- Professor Hae-Won Park's team at the Department of Mechanical Engineering developed a quadrupedal robot that can move at high speed on ferrous walls and ceilings.
- The robot is expected to make a wide variety of contributions, conducting inspections and repairs of large steel structures such as ships, bridges, and transmission towers, and offering an alternative to dangerous work in hazardous environments while maintaining productivity and efficiency through automation.
- The study was published as the cover paper of the December issue of Science Robotics.

KAIST (President Kwang Hyung Lee) announced on the 26th that a research team led by Professor Hae-Won Park of the Department of Mechanical Engineering has developed a quadrupedal walking robot that can move at high speed on steel walls and ceilings, named M.A.R.V.E.L., rightly so, as it is a Magnetically Adhesive Robot for Versatile and Expeditious Locomotion, as described in their paper, “Agile and Versatile Climbing on Ferromagnetic Surfaces with a Quadrupedal Robot.” (DOI: 10.1126/scirobotics.add1017)

To make this happen, Professor Park's research team developed a foot pad that can quickly switch its magnetic adhesion on and off while retaining high adhesive force even on uneven surfaces. It combines an electro-permanent magnet (EPM), a device that can be magnetized and demagnetized with very little power, with a magnetorheological elastomer (MRE), an elastic material made by mixing a magnetically responsive filler, such as iron powder, into an elastomer, such as rubber. The team mounted these pads on a small quadrupedal robot built in-house at their laboratory.

These walking robots are expected to be put to a wide variety of uses, including inspection, repair, and maintenance tasks on large structures made of steel, such as ships, bridges, transmission towers, large storage areas, and construction sites. This study, in which Seungwoo Hong and Yong Um of the Department of Mechanical Engineering participated as co-first authors, was published as the cover paper of the December issue of Science Robotics.

< Image on the cover of the December 2022 issue of Science Robotics >

Existing wall-climbing robots use wheels or continuous tracks, so their mobility is limited on surfaces with steps or irregularities. Walking robots, by contrast, promise improved mobility over such obstacles, but existing climbing walkers have moved significantly more slowly or could not perform varied movements. To enable fast movement of a walking robot, the sole of the foot must provide strong adhesion and be able to switch quickly between sticking to the surface and releasing from it. It must also maintain its adhesive force on rough or uneven surfaces. To solve this problem, the research team used an EPM and an MRE for the first time in the design of walking robot soles.

An EPM is a magnet whose magnetic force can be turned on and off with a short current pulse; unlike an ordinary electromagnet, it requires no energy to maintain its magnetic force. The research team proposed a new EPM with a rectangular structure arrangement, enabling faster switching while significantly lowering the switching voltage compared to existing electromagnets. In addition, by covering the sole with an MRE, the team increased the frictional force without significantly reducing the magnetic force of the sole. The proposed sole weighs only 169 g, but provides a vertical holding force of about 535 newtons (N)* and a frictional force of 445 N, sufficient gripping force for a quadrupedal robot weighing 8 kg.

* 535 N corresponds to 54.5 kgf, and 445 N to 45.4 kgf. In other words, even if an external force of up to 54.5 kg in the vertical direction or up to 45.4 kg in the horizontal direction is applied (or a corresponding weight is hung from it), the sole does not come off the steel plate.

MARVEL climbed a vertical wall at a speed of 70 cm per second and walked while hanging upside down from a ceiling at up to 50 cm per second, the world's fastest speeds for a walking climbing robot. The research team also demonstrated that the robot can climb at up to 35 cm per second on painted, dusty, and rust-tainted water tank surfaces, proving its performance in a real environment. Experiments further showed that the robot can not only move fast, but also make transitions from floor to wall and from wall to ceiling, and overcome 5-cm-high obstacles protruding from walls without difficulty.

The new climbing quadrupedal robot is expected to be widely used for the inspection, repair, and maintenance of large steel structures such as ships, bridges, transmission towers, oil pipelines, large storage areas, and construction sites. As the work required in these places involves risks such as falls, suffocation, and other accidents that may result in serious injuries or casualties, the need for automation is of utmost urgency.

One of the co-first authors of the paper, Ph.D. student Yong Um of KAIST’s Department of Mechanical Engineering, said, "Using magnetic soles made of the EPM and MRE and a non-linear model predictive controller suited to climbing, the robot can move quickly over a variety of ferromagnetic surfaces, including walls and ceilings, not just level ground. We believe this will become a cornerstone that expands the mobility of legged robots and the places they can venture into." He added, “These robots can be put to good use executing dangerous and difficult tasks on steel structures in places like shipbuilding yards.”

This research was carried out with support from the National Research Foundation of Korea's Basic Research in Science & Engineering Program for Mid-Career Researchers and Korea Shipbuilding & Offshore Engineering Co., Ltd.

< Figure 1. The quadrupedal robot (MARVEL) walking over various ferrous surfaces. (A) vertical wall (B) ceiling (C) over obstacles on a vertical wall (D) making floor-to-wall and wall-to-ceiling transitions (E) moving over a storage tank (F) walking on a wall with a 2-kg weight and over a ceiling with a 3-kg load >

< Figure 2. Description of the magnetic foot. (A) Components of the magnetic sole: ankle, Square Electro-Permanent Magnet (S-EPM), MRE footpad. (B) Components of the S-EPM and MRE footpad. (C) Working principle of the S-EPM.
When the magnetization direction is aligned as shown in the left figure, magnetic flux comes out of the keeper and circulates through the steel plate, generating holding force (ON state). Conversely, if the magnetization direction is aligned as shown in the figure on the right, the magnetic flux circulates inside the S-EPM and the holding force disappears (OFF state). > Video Introduction: Agile and versatile climbing on ferromagnetic surfaces with a quadrupedal robot - YouTube
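The force figures quoted above can be sanity-checked with a few lines of arithmetic; the per-foot safety factors below are our own illustrative calculation, not numbers from the paper.

```python
# Converting the footpad's reported holding forces to the equivalent
# hanging mass, and the margin for an 8 kg robot hanging upside down
# with one to four feet attached (illustrative calculation only).
G = 9.81                                  # m/s^2
normal_force_N, shear_force_N = 535.0, 445.0
robot_mass_kg = 8.0

print(f"normal: {normal_force_N / G:.1f} kgf")   # ~54.5 kgf
print(f"shear:  {shear_force_N / G:.1f} kgf")    # ~45.4 kgf

for feet in (1, 2, 4):
    margin = feet * normal_force_N / (robot_mass_kg * G)
    print(f"ceiling safety factor with {feet} foot/feet attached: {margin:.1f}x")
```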
2022.12.30
See-through exhibitions using smartphones: KAIST develops the AR magic lens, WonderScope
WonderScope shows what’s underneath the surface of an object through augmented reality technology.

< Photo 1. Demonstration at ACM SIGGRAPH >

- A KAIST research team led by Professor Woohun Lee from the Department of Industrial Design and Professor Geehyuk Lee from the School of Computing has developed a smartphone “appcessory” called WonderScope that can easily add an augmented reality (AR) perspective to the surface of exhibits
- The research won an Honorable Mention for Emerging Technologies Best in Show at ACM SIGGRAPH, one of the largest international conferences on computer graphics and interaction
- The technology was improved and validated through real-life applications in three special exhibitions: one at the Geological Museum at the Korea Institute of Geoscience and Mineral Resources (KIGAM) held in 2020, and two at the National Science Museum, in 2021 and 2022
- The technology is expected to be used for public science exhibitions and museums as well as for interactive teaching materials that stimulate children’s curiosity

A KAIST research team led by Professor Woohun Lee from the Department of Industrial Design and Professor Geehyuk Lee from the School of Computing has developed a novel augmented reality (AR) device, WonderScope, which displays the insides of an object directly from its surface. By installing WonderScope on a mobile device and connecting it through Bluetooth, users can see through exhibits as if looking through a magic lens.

Many science museums nowadays have incorporated AR apps for mobile devices. Such apps add digital information to the exhibition, providing a unique experience. However, visitors must watch the screen at a certain distance from the exhibited items, which often causes them to focus more on the digital content than on the exhibits themselves. In other words, the distance and distractions between the exhibit and the mobile device may actually make visitors feel detached from the exhibition. To solve this problem, museums needed a magic AR lens that could be used directly on the surface of the item.

To accomplish this, a smartphone must know exactly where on the surface of an object it is placed. Generally, this would require an additional recognition device either inside the item or on its surface, or a special pattern printed on the surface. Realistically speaking, these are impractical solutions, as exhibits would either appear overly complex or face spatial restrictions. WonderScope, on the other hand, uses a much more practical method to identify the location of a smartphone on the surface of an exhibit. First, it reads a small RFID tag attached to the surface of the object, then it tracks the smartphone as it moves by accumulating its relative displacement, based on readings from an optical displacement sensor and an acceleration sensor. The research team also took into account the height of the smartphone and the characteristics of the surface profile in order to calculate the device’s position more accurately. By attaching or embedding RFID tags in exhibits, visitors can easily experience the effect of a magic AR lens through their smartphones.

For wider use, WonderScope must be able to locate itself on various types of exhibit surfaces. To this end, WonderScope fuses readings from an optical displacement sensor and an acceleration sensor with complementary characteristics, allowing stable localization on various textures including paper, stone, wood, plastic, acrylic, and glass, as well as on surfaces with physical patterns or irregularities. As a result, WonderScope can identify its location even when lifted up to 4 centimeters from an object, also enabling simple three-dimensional interactions near the surface of the exhibits.

The research team developed various case project templates and WonderScope support tools to allow the easy production of smartphone apps using general-purpose virtual reality (VR) tools and the game engine Unity. WonderScope is also compatible with various types of devices running the Android operating system, including smartwatches, smartphones, and tablets, allowing it to be applied to exhibitions in many forms.

< Photo 2. Demonstration with a human body model >
< Photo 3. Demonstration of the underground mineral exploration game >
< Photo 4. Demonstration of the Apollo 11 moon exploration experience >

The research team developed WonderScope with funding from the science and culture exhibition enhancement support project of the Ministry of Science and ICT. Between October 27, 2020 and February 28, 2021, WonderScope was used to observe underground volcanic activity and the insides of volcanic rocks at “There Once Was a Volcano”, a special exhibition held at the Geological Museum of the Korea Institute of Geoscience and Mineral Resources (KIGAM). From September 28 to October 3, 2021, it was used to observe the surface of Jung-moon-kyung (a bronze mirror with a fine linear design) at the special exhibition “A Bronze Mirror Shines on Science” at the National Science Museum. And from August 2 to October 3, 2022, it was applied to a moon landing simulation at “The Special Exhibition on Moon Exploration”, also at the National Science Museum. Through these field demonstrations over the years, the research team has improved the performance and usability of WonderScope.

< Photo 5. Observation of surface corrosion of the main gate >

The research team demonstrated WonderScope at the Emerging Technologies forum during ACM SIGGRAPH 2022, a computer graphics and interaction technology conference held in Vancouver, Canada between August 8 and 11 this year. At this conference, where the latest interactive technologies are introduced, the team won an Honorable Mention for Best in Show. The judges commented that “WonderScope will be a new technology that provides the audience with a unique joy of participation during their visits to exhibitions and museums.”

< Photo 6. Cover of Digital Creativity >

WonderScope is a cylindrical “appcessory” module, 5 cm in diameter and 4.5 cm in height. It is small enough to be easily attached to a smartphone and embedded in most exhibits. Professor Woohun Lee from the KAIST Department of Industrial Design, who supervised the research, said, “WonderScope can be applied in many ways, not only to educational exhibitions but also to industrial ones.” He added, “We also expect it to be used as an interactive teaching tool that stimulates children’s curiosity.”

Introductory video of WonderScope: https://www.youtube.com/watch?v=X2MyAXRt7h4&t=7s
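The localization scheme described above, an absolute RFID anchor plus relative dead reckoning from two complementary sensors, can be sketched as follows; the constant fusion weight is an assumption of this sketch, whereas the actual device uses a more careful sensor model.

```python
# Sketch of WonderScope-style surface localization as described in the
# article: an RFID tag gives an absolute anchor point on the exhibit,
# and the position is then dead-reckoned by accumulating relative
# motion from two sensors with complementary strengths.
from dataclasses import dataclass

@dataclass
class SurfaceTracker:
    x: float = 0.0               # position on the exhibit surface, mm
    y: float = 0.0
    alpha: float = 0.7           # assumed trust in the optical sensor

    def reset_on_tag(self, tag_x: float, tag_y: float) -> None:
        """RFID tag read: snap to the tag's known surface coordinates."""
        self.x, self.y = tag_x, tag_y

    def update(self, opt_dx, opt_dy, acc_dx, acc_dy) -> tuple:
        """Fuse two relative-displacement estimates for one time step."""
        self.x += self.alpha * opt_dx + (1 - self.alpha) * acc_dx
        self.y += self.alpha * opt_dy + (1 - self.alpha) * acc_dy
        return self.x, self.y

tracker = SurfaceTracker()
tracker.reset_on_tag(120.0, 80.0)            # tag at a known spot
print(tracker.update(1.8, -0.4, 2.2, -0.1))  # one frame of sliding motion
```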
2022.10.24
KAIST Honors BMW and Hyundai with the 2022 Future Mobility of the Year Award
BMW ‘iVision Circular’, Commercial Vehicle-Hyundai Motors ‘Trailer Drone’ selected as winners of the international awards for concept cars established by KAIST Cho Chun Shik Graduate School of Mobility to honor car makers that strive to present new visions in the field of eco-friendly design of automobiles and unmanned logistics. KAIST (President Kwang Hyung Lee) hosted the “2022 Future Mobility of the Year (FMOTY) Awards” at the Convention Hall of the BEXCO International Motor Show at Busan in the afternoon of the 14th. The Future Mobility of the Year Awards is an award ceremony that selects a model that showcases useful transportation technology and innovative service concepts for the future society among the set of concept cars exhibited at the motor show. As a one-of-a-kind international concept car awards established by KAIST's Cho Chun Shik Graduate School of Mobility (Headed by Professor Jang In-Gwon), the auto journalists from 11 countries were invited to be the jurors to select the winner. With the inaugural awards ceremony held in 2019, over the past three years, automakers from around the globe, including internationally renowned automakers, such as, Volvo/Toyota (2019), Honda/Hyundai (2020), and Renault (2021), even a new start-up car manufacturer like Canoo, the winner of last year’s award for commercial vehicles, were honored for their award-winning works. At this year’s awards ceremony, the 4th of its kind, BMW's “iVision Circular” and Hyundai's “'Trailer Drone” were selected as the best concept cars of the year, the former from the Private Mobility category and the latter from the Public & Commercial Vehicles category. The jury consisting of 16 domestic and foreign auto journalists, including BBC Top Gear's Paul Horrell and Car Magazine’s Georg Kacher, evaluated 53 concept car contestants that made their entry last year. The jurors’ general comment was that while the trend of the global automobile market flowing fast towards electric vehicles, this year's award-winning works presented a new vision in the field of eco-friendly design and unmanned logistics. Private Mobility Categry Winner: BMW iVision Circular BMW's 'iVision Circular', the winner of the Private Mobility category, is an eco-friendly compact car in which all parts of the vehicle are designed with recycled and/or natural materials. It has received favorable reviews for its in-depth implementation of the concept of a futuristic eco-friendly car by manufacturing the tires from natural rubber and adopting a design that made recycling of its parts very easily when the car is to be disposed of. Public & Commercial Vehicles Categry Winner: Hyundai Trailer Drone Hyundai Motor Company’s “Trailer Drone”, the winner of the Public & Commercial Vehicles category, is an eco-friendly autonomous driving truck that can transport large-scale logistics from a port to a destination without a human driver while two unmanned vehicles push and drag a trailer. The concept car won supports from a large number of judges for the blueprint it presented for a groundbreaking logistics service that applied both eco-friendly hydrogen fuel cell and fully autonomous driving technology. Jurors from overseas congratulated the development team of BMW and Hyundai Motor Company via a video message for providing a new direction for the global automobile industry as it strives to transform in line with the changes in the post-pandemic era. 
Professor Bo-Won Kim, the Vice President for Planning and Budget of KAIST, who presented the awards, said, “It is time for the K-Mobility wave to sweep over the global mobility industry.” He added, “KAIST will lead in the various fields of mobility technology to support global automakers.”

< Photo. At the center stand KAIST Vice President Bo-Won Kim (right) and Seong-Kwon Lee, Deputy Mayor of the City of Busan (left). To Kim’s left is Jean-Philippe Parain, Senior Vice President of BMW for Asia-Pacific, Eastern Europe, Middle East, and Africa; to Lee’s right is Sangyup Lee, Head of the Hyundai Motor Design Center and Executive Vice President of Hyundai Motor Company. >

Along with KAIST officials including Vice President Bo-Won Kim and Professor In-Gwon Jang, Head of the Cho Chun Shik Graduate School of Mobility, the ceremony was attended by Deputy Mayor Seong-Kwon Lee of the City of Busan and figures from the automobile industry, including Jean-Philippe Parain, who visited Korea to receive the 2022 Future Mobility award, and Sangyup Lee.

More information about the awards ceremony and the winning works is available at the official website of this year’s Future Mobility of the Year Awards (www.fmoty.org).

Profile:
In-Gwon Jang, Ph.D.
President, the Organizing Committee, the Future Mobility of the Year Awards
http://www.fmoty.org/
Head Professor, KAIST Cho Chun Shik Graduate School of Mobility
https://gt.kaist.ac.kr
2022.07.14
Atomically-Smooth Gold Crystals Help to Compress Light for Nanophotonic Applications
Highly compressed mid-infrared optical waves in a thin dielectric crystal on a monocrystalline gold substrate were investigated for the first time using a high-resolution scattering-type scanning near-field optical microscope.

KAIST researchers and their collaborators at home and abroad have successfully demonstrated a new platform for guiding compressed light waves in very thin van der Waals crystals. Their method for guiding mid-infrared light with minimal loss will provide a breakthrough for the practical application of ultra-thin dielectric crystals in next-generation optoelectronic devices based on strong light-matter interactions at the nanoscale.

Phonon-polaritons are collective oscillations of ions in polar dielectrics coupled to electromagnetic waves of light, whose electromagnetic field is far more compressed than the free-space light wavelength. Recently, it was demonstrated that phonon-polaritons in thin van der Waals crystals can be compressed even further when the material is placed on top of a highly conductive metal. In such a configuration, charges in the polaritonic crystal are “reflected” in the metal, and their coupling with light results in a new type of polariton wave called image phonon-polaritons. Highly compressed image modes provide strong light-matter interactions, but they are very sensitive to substrate roughness, which has hindered their practical application. Challenged by these limitations, four research groups combined their efforts to develop a unique experimental platform using advanced fabrication and measurement methods. Their findings were published in Science Advances on July 13.

A KAIST research team led by Professor Min Seok Jang from the School of Electrical Engineering used a highly sensitive scattering-type scanning near-field optical microscope (SNOM) to directly measure the optical fields of hyperbolic image phonon-polaritons (HIP) propagating in a 63 nm-thick slab of hexagonal boron nitride (h-BN) on a monocrystalline gold substrate, observing mid-infrared light waves compressed by a factor of one hundred inside the dielectric crystal.

Professor Jang and Sergey Menabde, a research professor in his group, successfully obtained direct images of HIP waves propagating over many wavelengths, and detected the signal of ultra-compressed higher-order HIP in regular h-BN crystals for the first time. They showed that phonon-polaritons in van der Waals crystals can be significantly more compressed without sacrificing their lifetime. This became possible thanks to the atomically smooth surfaces of the home-grown gold crystals used as the substrate for the h-BN: practically zero surface scattering and extremely small ohmic loss in gold at mid-infrared frequencies provide a low-loss environment for HIP propagation. The HIP mode probed by the researchers was 2.4 times more compressed than phonon-polaritons on a low-loss dielectric substrate, yet exhibited a similar lifetime, resulting in a twice higher figure of merit in terms of the normalized propagation length.

The ultra-smooth monocrystalline gold flakes used in the experiment were chemically grown by the team of Professor N. Asger Mortensen from the Center for Nano Optics at the University of Southern Denmark.

The mid-infrared spectrum is particularly important for sensing applications, since many important organic molecules have absorption lines in the mid-infrared.
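The “figure of merit in terms of the normalized propagation length” can be read with a convention that is standard in the polaritonics literature in mind; the paper’s exact definition is not quoted here, so the following is a sketch under that assumption:

\[
  \mathrm{FOM} = \frac{L_p}{\lambda_p},
\]

where \(L_p\) is the polariton propagation (\(1/e\) decay) length and \(\lambda_p\) the polariton wavelength. Under this convention, a mode that is 2.4 times more compressed (smaller \(\lambda_p\)) at a comparable lifetime can still travel for more of its own wavelengths before decaying, which is consistent with the roughly twofold higher figure of merit reported relative to phonon-polaritons on a low-loss dielectric substrate.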
However, conventional detection methods require a large number of molecules for successful operation, whereas ultra-compressed phonon-polariton fields can provide strong light-matter interactions at the microscopic level, significantly improving the detection limit, potentially down to a single molecule. The long lifetime of the HIP on monocrystalline gold will further improve the detection performance.

Furthermore, the study conducted by Professor Jang and the team demonstrated a striking similarity between the HIP and image graphene plasmons. Both image modes possess a significantly more confined electromagnetic field, yet their lifetime remains unaffected by the shorter polariton wavelength. This observation provides a broader perspective on image polaritons in general, and highlights their superiority for nanolight waveguiding compared to conventional low-dimensional polaritons in van der Waals crystals on a dielectric substrate.

Professor Jang said, “Our research demonstrated the advantages of image polaritons, and especially the image phonon-polaritons. These optical modes can be used in future optoelectronic devices where both low-loss propagation and strong light-matter interaction are necessary. I hope that our results will pave the way for the realization of more efficient nanophotonic devices such as metasurfaces, optical switches, sensors, and other applications operating at infrared frequencies.”

This research was funded by the Samsung Research Funding & Incubation Center of Samsung Electronics and the National Research Foundation of Korea (NRF). The Korea Institute of Science and Technology, the Ministry of Education, Culture, Sports, Science and Technology of Japan, and the Villum Foundation, Denmark, also supported the work.

< Figure. A nano-tip is used for ultra-high-resolution imaging of the image phonon-polaritons in h-BN launched by the gold crystal edge. >

Publication: Menabde, S. G., et al. (2022) “Near-field probing of image phonon-polaritons in hexagonal boron nitride on gold crystals.” Science Advances 8, Article ID: eabn0627. Available online at https://science.org/doi/10.1126/sciadv.abn0627

Profile:
Min Seok Jang, MS, PhD
Associate Professor
jang.minseok@kaist.ac.kr
http://janglab.org/
Min Seok Jang Research Group
School of Electrical Engineering
http://kaist.ac.kr/en/
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon, Republic of Korea
2022.07.13