KAIST NEWS
School of Electrical Engineering
A System for Stable Simultaneous Communication among Thousands of IoT Devices
A mmWave backscatter system developed by a team led by Professor Song Min Kim is exciting news for the IoT market, as it can provide fast and stable connectivity even for massive networks, finally allowing IoT devices to reach their full potential.

A research team led by Professor Song Min Kim of the KAIST School of Electrical Engineering developed a system that can support concurrent communication for tens of millions of IoT devices by backscattering millimeter waves (mmWave). With their mmWave backscatter method, the team built a design enabling simultaneous signal demodulation in complex communication environments where tens of thousands of IoT devices are arranged indoors.

The wide frequency range of mmWave exceeds 10 GHz, which provides great scalability. In addition, backscattering reflects radiated signals instead of generating its own wirelessly, which allows operation at ultralow power. The mmWave backscatter system can therefore offer internet connectivity to IoT devices on a mass scale at a low installation cost.

This research by Kangmin Bae et al. was presented at ACM MobiSys 2022, a world-renowned conference for mobile systems, where it won the Best Paper Award under the title “OmniScatter: Extreme Sensitivity mmWave Backscattering Using Commodity FMCW Radar”. Members of the KAIST School of Electrical Engineering have now won the Best Paper Award at ACM MobiSys for two consecutive years; last year was the first time the award had been presented to an institute from Asia.

IoT, a core component of 5G/6G networks, is growing exponentially and is expected to reach a trillion devices by 2035. To support the connection of IoT devices on this scale, 5G and 6G aim to support ten and 100 times the network density of 4G, respectively, which has raised the importance of practical systems for large-scale communication.

mmWave is a next-generation communication technology expected to be incorporated into 5G/6G standards, as it utilizes carrier waves at frequencies between 30 and 300 GHz. However, due to signal attenuation at high frequencies and reflection loss, existing mmWave backscatter systems can communicate only in limited environments: they cannot operate in complex environments where various obstacles and reflectors are present, which rules out the large-scale IoT deployments that require relatively free device placement.

The research team found the solution in the high coding gain of FMCW (frequency-modulated continuous-wave) radar. The team developed a signal processing method that fundamentally separates backscatter signals from ambient noise while maintaining the coding gain of the radar, achieving a receiver sensitivity over 100 thousand times that of previously reported FMCW radars, enough to support communication in practical environments. Additionally, exploiting the radar's property that the frequency of the demodulated signal changes with the physical location of the tag, the team designed a system that passively assigns channels to the tags. This lets the ultralow-power backscatter communication system take full advantage of the frequency range at 10 GHz or higher. The system can use the radar of existing commercial products as a gateway, making it easily compatible.
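As a rough illustration of why an FMCW radar gives each tag a channel essentially for free, the Python sketch below simulates the standard dechirp operation: a tag at range R returns the radar's chirp delayed, and mixing with the transmit chirp produces a beat tone at f_b = 2RS/c (S being the chirp slope), so tags at different locations land in different FFT bins. All radar parameters and tag positions here are hypothetical, and this models only the generic FMCW ranging principle, not OmniScatter's actual signal processing.

```python
import numpy as np

# Generic FMCW dechirp demo (hypothetical parameters, not OmniScatter's):
# each backscattering tag at range R yields a beat tone f_b = 2*R*S/c after
# the received chirp is mixed with the transmitted one, so tags separate
# naturally into distinct FFT bins -- a passive form of channelization.
c = 3e8                    # speed of light (m/s)
B, T = 1e9, 1e-3           # chirp bandwidth (Hz) and duration (s), assumed
S = B / T                  # chirp slope (Hz/s)
fs = 10e6                  # ADC sampling rate (Hz), assumed
t = np.arange(0, T, 1 / fs)

tag_ranges_m = [3.0, 7.5, 12.0]        # hypothetical tag locations
beat = np.zeros_like(t)
for R in tag_ranges_m:
    f_b = S * (2 * R / c)              # beat frequency for this tag
    beat += np.cos(2 * np.pi * f_b * t)

spectrum = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
top3 = np.sort(freqs[np.argsort(spectrum)[-3:]])
print("recovered beat tones (Hz):", top3)
print("expected  beat tones (Hz):", [2 * R * S / c for R in tag_ranges_m])
```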
In addition, since the backscatter system works at ultralow power levels of 10 µW or below, it can operate for over 40 years on a single button cell, drastically reducing installation and maintenance costs.

The research team confirmed that mmWave backscatter devices arranged randomly in an office with various obstacles and reflectors could communicate effectively. The team then went one step further and conducted a successful trace-driven evaluation in which they simultaneously received information sent by 1,100 devices; the demodulation results of all 1,100 tags, plotted as red triangles in the paper's figure, show every tag communicating successfully without collision. This connectivity greatly exceeds the network density required by next-generation standards like 5G and 6G, and the system is expected to become a stepping stone toward the hyper-connected future to come.

Professor Kim said, “mmWave backscatter is the technology we've dreamt of. The mass scalability and ultralow power at which it can operate IoT devices is unmatched by any existing technology.” He added, “We look forward to this system being actively utilized to enable the wide availability of IoT in the hyper-connected generation to come.”

This work was supported by the Samsung Research Funding & Incubation Center of Samsung Electronics and by the ITRC (Information Technology Research Center) support program supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation).

Profile:
Song Min Kim, Ph.D.
Professor
songmin@kaist.ac.kr
https://smile.kaist.ac.kr
SMILE Lab.
School of Electrical Engineering
2022.07.28
View 7653
Atomically-Smooth Gold Crystals Help to Compress Light for Nanophotonic Applications
Highly compressed mid-infrared optical waves in a thin dielectric crystal on a monocrystalline gold substrate were investigated for the first time using a high-resolution scattering-type scanning near-field optical microscope.

KAIST researchers and their collaborators at home and abroad have successfully demonstrated a new platform for guiding compressed light waves in very thin van der Waals crystals. Their method for guiding mid-infrared light with minimal loss will provide a breakthrough for the practical application of ultra-thin dielectric crystals in next-generation optoelectronic devices based on strong light-matter interactions at the nanoscale.

Phonon-polaritons are collective oscillations of ions in polar dielectrics coupled to electromagnetic waves of light, whose electromagnetic field is much more compressed than the light's wavelength. Recently, it was demonstrated that phonon-polaritons in thin van der Waals crystals can be compressed even further when the material is placed on top of a highly conductive metal. In such a configuration, charges in the polaritonic crystal are “reflected” in the metal, and their coupling with light results in a new type of polariton wave called image phonon-polaritons. Highly compressed image modes provide strong light-matter interactions, but they are very sensitive to substrate roughness, which has hindered their practical application.

Challenged by these limitations, four research groups combined their efforts to develop a unique experimental platform using advanced fabrication and measurement methods. Their findings were published in Science Advances on July 13.

A KAIST research team led by Professor Min Seok Jang from the School of Electrical Engineering used a highly sensitive scanning near-field optical microscope (SNOM) to directly measure the optical fields of hyperbolic image phonon-polaritons (HIP) propagating in a 63 nm-thick slab of hexagonal boron nitride (h-BN) on a monocrystalline gold substrate, showing mid-infrared light waves compressed a hundredfold inside the dielectric crystal.

Professor Jang and a research professor in his group, Sergey Menabde, successfully obtained direct images of HIP waves propagating for many wavelengths, and detected a signal from ultra-compressed high-order HIP in regular h-BN crystals for the first time. They showed that phonon-polaritons in van der Waals crystals can be significantly more compressed without sacrificing their lifetime. This became possible thanks to the atomically-smooth surfaces of the home-grown gold crystals used as a substrate for the h-BN: practically zero surface scattering and extremely small ohmic loss in gold at mid-infrared frequencies provide a low-loss environment for HIP propagation. The HIP mode probed by the researchers was 2.4 times more compressed, yet exhibited a lifetime similar to that of phonon-polaritons on a low-loss dielectric substrate, resulting in a figure of merit twice as high in terms of normalized propagation length.

The ultra-smooth monocrystalline gold flakes used in the experiment were chemically grown by the team of Professor N. Asger Mortensen from the Center for Nano Optics at the University of Southern Denmark.

The mid-infrared spectrum is particularly important for sensing applications, since many important organic molecules have absorption lines in the mid-infrared.
However, conventional detection methods require a large number of molecules for successful operation, whereas ultra-compressed phonon-polariton fields can provide strong light-matter interactions at the microscopic level, significantly improving the detection limit, potentially down to a single molecule. The long lifetime of the HIP on monocrystalline gold will further improve detection performance.

Furthermore, the study demonstrated a striking similarity between HIP and image graphene plasmons: both image modes possess a significantly more confined electromagnetic field, yet their lifetime remains unaffected by the shorter polariton wavelength. This observation provides a broader perspective on image polaritons in general, and highlights their superiority for nanolight waveguiding over conventional low-dimensional polaritons in van der Waals crystals on dielectric substrates.

Professor Jang said, “Our research demonstrated the advantages of image polaritons, and especially the image phonon-polaritons. These optical modes can be used in future optoelectronic devices where both low-loss propagation and strong light-matter interaction are necessary. I hope that our results will pave the way for the realization of more efficient nanophotonic devices such as metasurfaces, optical switches, sensors, and other applications operating at infrared frequencies.”

This research was funded by the Samsung Research Funding & Incubation Center of Samsung Electronics and the National Research Foundation of Korea (NRF). The Korea Institute of Science and Technology; the Ministry of Education, Culture, Sports, Science and Technology of Japan; and the Villum Foundation, Denmark, also supported the work.

Figure: A nano-tip is used for ultra-high-resolution imaging of the image phonon-polaritons in h-BN launched by the gold crystal edge.

Publication: Menabde, S. G., et al. (2022) “Near-field probing of image phonon-polaritons in hexagonal boron nitride on gold crystals.” Science Advances 8, Article ID: eabn0627. Available online at https://science.org/doi/10.1126/sciadv.abn0627

Profile:
Min Seok Jang, MS, PhD
Associate Professor
jang.minseok@kaist.ac.kr
http://janglab.org/
Min Seok Jang Research Group
School of Electrical Engineering
KAIST, Daejeon, Republic of Korea
2022.07.13
View 9259
KAIST & LG U+ Team Up for Quantum Computing Solution for Ultra-Space 6G Satellite Networking
KAIST quantum computing scientists, in partnership with LG U+, have optimized ultra-space 6G low-Earth orbit (LEO) satellite networking, finding the shortest path to transfer data from one city to another via multi-satellite hops. The research team led by Professor June-Koo Kevin Rhee and Professor Dongsu Han verified the possibility of ultra-performance, ultra-precision communication over satellite networks using D-Wave, the first commercialized quantum computer.

Satellite network optimization has remained challenging, since the network needs to be reconfigured whenever satellites come within connection range of one another in three-dimensional space. Moreover, LEO satellites orbiting at 200~2,000 km above the Earth change their positions dynamically, whereas geostationary orbit (GSO) satellites do not. LEO satellite network optimization therefore needs to be solved in real time.

The research team formulated the problem as a Quadratic Unconstrained Binary Optimization (QUBO) problem and solved it by incorporating the connectivity and link-distance limits as constraints. The proposed optimization algorithm is reported to be much more efficient in terms of hop count and path length than previously reported classical solutions. These results verify that a satellite network can provide ultra-performance (over 1 Gbps user-perceived speed) and ultra-precision (less than 5 ms end-to-end latency) network services comparable to terrestrial communication.

Once QUBO is applied, “ultra-space networking” is expected to be realized with 6G. The researchers note that an ultra-space network provides communication services for objects moving at altitudes of up to 10 km and at extreme speeds (~1,000 km/h). Optimized LEO satellite networks can thus bring 6G communication services to currently unserved settings such as aircraft in flight and deserts.

Professor Rhee, who is also the CEO of Qunova Computing, noted, “Collaboration with LG U+ was meaningful as we were able to find an industrial application for a quantum computer. We look forward to more quantum application research on real problems such as in communications, drug and material discovery, logistics, and fintech industries.”
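For readers unfamiliar with the formulation, the sketch below shows what a QUBO version of satellite path selection can look like on a toy four-node network. It is only illustrative: the graph, distances, penalty weight, and slack-variable encoding are assumptions for this example rather than the team's actual model, and a brute-force loop stands in for the D-Wave annealer.

```python
import itertools

# Toy QUBO sketch of satellite path selection (illustrative only; the
# team's actual formulation and coefficients are not reproduced here).
# Binary x_l selects link l; quadratic penalties enforce one SRC-DST path.
edges = [(0, 1, 500), (0, 2, 900), (1, 2, 400), (1, 3, 800), (2, 3, 450)]
SRC, DST, RELAYS = 0, 3, [1, 2]        # hypothetical 4-node mini-network
P = 10_000                             # penalty weight dominating distances

def energy(x, y):
    """QUBO energy: total link distance + quadratic constraint penalties."""
    dist = sum(w * xi for (_, _, w), xi in zip(edges, x))
    def deg(v):
        return sum(xi for (a, b, _), xi in zip(edges, x) if v in (a, b))
    pen = (deg(SRC) - 1) ** 2 + (deg(DST) - 1) ** 2
    # slack bit y_v makes (deg - 2*y_v)^2 vanish iff a relay's degree is 0 or 2
    pen += sum((deg(v) - 2 * yv) ** 2 for v, yv in zip(RELAYS, y))
    return dist + P * pen

x, y = min(
    ((x, y) for x in itertools.product((0, 1), repeat=len(edges))
     for y in itertools.product((0, 1), repeat=len(RELAYS))),
    key=lambda xy: energy(*xy),
)
print("selected links:", [(a, b) for (a, b, _), xi in zip(edges, x) if xi])
print("path length  :", sum(w for (_, _, w), xi in zip(edges, x) if xi), "km")
```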
2022.06.17
View 6349
Professor Jae-Woong Jeong Receives Hyonwoo KAIST Academic Award
Professor Jae-Woong Jeong from the School of Electrical Engineering has been selected for the Hyonwoo KAIST Academic Award, funded by the HyonWoo Cultural Foundation (Chairman Soo-il Kwak, honorary professor at the Seoul National University Business School).

The Hyonwoo KAIST Academic Award, presented for the first time in 2021, was founded through donations by Chairman Soo-il Kwak of the HyonWoo Cultural Foundation to reward KAIST scholars who have made outstanding academic achievements. Every year, through the strict evaluations of the selection committee of the HyonWoo Cultural Foundation and the faculty reward recommendation board, KAIST chooses one faculty member whose academic achievement best represents the school, and rewards them with a plaque and 100 million won.

Professor Jeong, the winner of this year's award, developed the first IoT-based wireless remote brain neural network control system for overcoming brain diseases, and has been leading the field. The research was published in 2021 in Nature Biomedical Engineering, one of the world's leading scientific journals, and has been recognized as a novel technology suggesting a new vision for the automation of brain research and disease treatment. The study was part of the KAIST College of Engineering Global Initiative Interdisciplinary Research Project and was conducted jointly with the Washington University School of Medicine through an international research collaboration. The technology was covered more than 60 times by domestic and international media, including Medical Xpress, MBC News, and Maeil Business News.

Professor Jeong has also developed a wirelessly rechargeable soft device for brain implantation, with the results published in Nature Communications. He has thereby opened a new paradigm for semi-permanent implantable devices and continues to produce unprecedented research achievements.
2022.06.13
View 5594
Professor Iickho Song Publishes a Book on Probability and Random Variables in English
Professor Iickho Song from the School of Electrical Engineering has published a book on probability and random variables in English. It is the translated version of his Korean book ‘Theory of Random Variables’, which was selected as an Excellent Book of Basic Sciences by the National Academy of Sciences and the Ministry of Education in 2020.

The book discusses diverse concepts and applications concerning probability and random variables, explaining basic concepts and results in a clear and complete manner. Readers will also find unique results on the explicit general formula of joint moments and the expected values of nonlinear functions for normal random vectors, as well as interesting applications of the step and impulse functions in discussions of random vectors. Thanks to a wealth of examples and a total of 330 practice problems of varying difficulty, readers will have the opportunity to significantly expand their knowledge and skills. The book includes an extensive index, allowing readers to quickly and easily find what they are looking for, and offers a valuable reference guide for experienced scholars and professionals, helping them review and refine their expertise.

Link: https://link.springer.com/book/10.1007/978-3-030-97679-8
2022.06.13
View 3400
Machine Learning-Based Algorithm to Speed up DNA Sequencing
The algorithm constitutes the first full-fledged short-read alignment software that leverages learned indices to solve the exact-match search problem for efficient seeding.

The human genome consists of a complete set of DNA that is about 6.4 billion letters long. Because of its size, reading the whole genome sequence at once is challenging, so scientists use DNA sequencers to produce hundreds of millions of DNA sequence fragments, or short reads, up to 300 letters long. The short reads are then assembled like a giant jigsaw puzzle to reconstruct the entire genome sequence. Even with very fast computers, this job can take hours to complete.

A research team at KAIST has achieved up to 3.45x faster speeds by developing the first short-read alignment software that uses a recent advance in machine learning called a learned index. The team reported their findings on March 7, 2022 in the journal Bioinformatics. The software has been released as open source on GitHub (https://github.com/kaist-ina/BWA-MEME).

Next-generation sequencing (NGS) is a state-of-the-art DNA sequencing method, and projects are underway with the goal of producing genome sequencing at population scale. Modern NGS hardware is capable of generating billions of short reads in a single run, and the short reads then have to be aligned with the reference DNA sequence. With large-scale sequencing operations running hundreds of next-generation sequencers, the need for an efficient short-read alignment tool has become even more critical: accelerating DNA sequence alignment would be a step toward population-scale sequencing. However, existing algorithms are limited in their performance by their frequent memory accesses.

BWA-MEM2 is a popular software package currently used to align short reads. State-of-the-art alignment has two phases, seeding and extending. During the seeding phase, searches find exact matches of short reads in the reference DNA sequence; during the extending phase, the seeds found in the seeding phase are extended. In the current process, bottlenecks occur in the seeding phase, because finding the exact matches slows the process.

To speed up the process, the researchers applied machine learning techniques to create an algorithmic improvement. Their algorithm, BWA-MEME (BWA-MEM emulated), leverages learned indices to solve the exact-match search problem. Where the original software compared one character at a time for an exact-match search, the new algorithm achieves up to 3.45x higher seeding throughput than BWA-MEM2 by reducing the number of instructions by 4.60x and memory accesses by 8.77x.

“Through this study, it has been shown that full genome big data analysis can be performed faster and at lower cost than conventional methods by applying machine learning technology,” said Professor Dongsu Han from the School of Electrical Engineering at KAIST. The researchers' ultimate goal is to develop efficient software that scientists from academia and industry can use on a daily basis for analyzing big data in genomics.
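The sketch below illustrates the learned-index idea in miniature. It is a simplification under stated assumptions: a single linear model over a sorted integer array stands in for BWA-MEME's actual multi-level model and suffix-array encoding, and the error-bounded window search is the generic learned-index technique rather than the paper's exact procedure.

```python
import bisect
import numpy as np

# Miniature learned-index lookup (the generic technique, not BWA-MEME's
# actual data structure; the random keys are stand-ins for encoded
# suffixes). A model predicts where a key sits in the sorted array, and
# the exact-match search then scans only the model's worst-case error
# window instead of binary-searching the whole array.
rng = np.random.default_rng(0)
keys = np.sort(rng.integers(0, 2**32, size=100_000))
pos = np.arange(keys.size)

# "Model": one linear fit of position ~ key (real learned indices use a
# hierarchy of models, each with its own error bound).
slope, intercept = np.polyfit(keys.astype(float), pos, 1)
pred = np.clip(slope * keys + intercept, 0, keys.size - 1).astype(int)
max_err = int(np.abs(pred - pos).max())      # worst-case prediction error

def lookup(q: int) -> int:
    """Exact-match search: predict, then search only the error window."""
    guess = int(np.clip(slope * q + intercept, 0, keys.size - 1))
    lo, hi = max(0, guess - max_err), min(keys.size, guess + max_err + 1)
    i = lo + bisect.bisect_left(keys[lo:hi].tolist(), q)
    return i if i < keys.size and keys[i] == q else -1

q = int(keys[12_345])
assert keys[lookup(q)] == q
print(f"found key {q} at index {lookup(q)}; searched window of "
      f"{2 * max_err + 1} entries instead of all {keys.size} keys")
```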
“With the recent advances in artificial intelligence and machine learning, we see so many opportunities for designing better software for genomic data analysis. The potential is there for accelerating existing analysis as well as enabling new types of analysis, and our goal is to develop such software,” Han added.

Whole genome sequencing has traditionally been used for discovering genomic mutations and identifying the root causes of diseases, which leads to the discovery and development of new drugs and cures, and there could be many more potential applications. Whole genome sequencing is used not only for research but also for clinical purposes. “The science and technology for analyzing genomic data is making rapid progress to make it more accessible for scientists and patients. This will enhance our understanding of diseases and help develop better cures for patients of various diseases,” Han said.

The research was funded by the National Research Foundation of the Korean government's Ministry of Science and ICT.

-Publication
Youngmok Jung and Dongsu Han, “BWA-MEME: BWA-MEM emulated with a machine learning approach,” Bioinformatics, Volume 38, Issue 9, May 2022 (https://doi.org/10.1093/bioinformatics/btac137)

-Profile
Professor Dongsu Han
School of Electrical Engineering
KAIST
2022.05.10
View 7017
A New Strategy for Active Metasurface Design Provides a Full 360° Phase Tunable Metasurface
The new strategy achieves an unprecedented range of dynamic phase modulation with no significant variation in optical amplitude.

An international team of researchers led by Professor Min Seok Jang of KAIST and Professor Victor W. Brar of the University of Wisconsin-Madison has demonstrated a widely applicable methodology enabling full 360° active phase modulation for metasurfaces while maintaining significant, uniform levels of light amplitude. The strategy can be applied in any spectral region, with any structures and resonances that fit the bill.

Metasurfaces are optical components with specialized functionalities indispensable for real-life applications ranging from LIDAR and spectroscopy to futuristic technologies such as invisibility cloaks and holograms. They are compact and micro/nano-sized, which enables them to be integrated into electronic computerized systems whose sizes keep decreasing, as predicted by Moore's law. To enable such innovations, a metasurface must be capable of manipulating the impinging light, by modulating either its amplitude or its phase (or both), and emitting it back out. However, dynamically modulating the phase over the full circle has been a notoriously difficult task, and the few works that have managed it sacrificed a substantial amount of amplitude control.

Challenged by these limitations, the team proposed a general methodology that enables metasurfaces to implement dynamic phase modulation over the complete 360° range while uniformly maintaining significant levels of amplitude.

The underlying difficulty is a fundamental trade-off in dynamically controlling the optical phase. Metasurfaces generally perform this function through optical resonances: excitations of electrons inside the metasurface structure that oscillate harmonically together with the incident light. To modulate through the entire range of 0-360°, the optical resonance frequency (the center of the spectrum) must be tuned by a large amount while the linewidth (the width of the spectrum) is kept to a minimum. However, electrically tuning the resonance frequency on demand requires a controllable influx and outflux of electrons into the metasurface, and this inevitably broadens the linewidth of the resonance. The problem is further compounded by the fact that the phase and amplitude of optical resonances are closely correlated in a complex, non-linear fashion, making it very difficult to retain substantial control over the amplitude while changing the phase.

The team's work circumvented both problems by using two optical resonances, each with specifically designated properties. One resonance decouples the phase from the amplitude, so that the phase can be tuned while significant and uniform levels of amplitude are maintained, and it also provides a narrow linewidth. The other resonance can be tuned over a sufficiently large range to make the complete full circle of phase modulation achievable. The quintessence of the work is to combine the different properties of the two resonances through a phenomenon called avoided crossing, so that the interaction between the resonances amalgamates the desired traits and achieves, and even surpasses, full 360° phase modulation with uniform amplitude.
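The trade-off is easy to see in the textbook single-resonance case. The sketch below uses standard temporal coupled-mode theory for one resonance in reflection (a generic model with assumed decay rates, not the paper's two-resonance design): sweeping the resonance frequency covers nearly the full 360° of phase, but the amplitude collapses on resonance, which is precisely the limitation the avoided-crossing scheme removes.

```python
import numpy as np

# Temporal coupled-mode theory for ONE resonance in reflection (standard
# textbook model, not the paper's design): r = -1 + 2*g_r / (1j*(w - w0)
# + g_r + g_nr). Sweeping the resonance frequency w0 (the electrically
# tuned knob) at a fixed operating frequency w shows the trade-off: an
# over-coupled resonance covers ~360 deg of phase, but |r| dips strongly.
w = 0.0                              # operating frequency (normalized)
w0 = np.linspace(-20, 20, 2001)      # tuned resonance frequency
g_r, g_nr = 1.0, 0.3                 # radiative / absorptive rates (assumed)

r = -1 + 2 * g_r / (1j * (w - w0) + g_r + g_nr)
phase = np.unwrap(np.angle(r))

print(f"phase coverage: {np.degrees(phase.max() - phase.min()):.0f} deg")
print(f"|r| range     : {np.abs(r).min():.2f} .. {np.abs(r).max():.2f}")
# -> nearly 360 deg of phase, but |r| collapses to (g_r-g_nr)/(g_r+g_nr)
#    ~ 0.54 on resonance. The paper's avoided-crossing scheme combines two
#    resonances so the full 2*pi is reached with nearly uniform amplitude.
```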
Professor Jang said, “Our research proposes a new methodology in dynamic phase modulation that breaks through the conventional limits and trade-offs, while being broadly applicable in diverse types of metasurfaces. We hope that this idea helps researchers implement and realize many key applications of metasurfaces, such as LIDAR and holograms, so that the nanophotonics industry keeps growing and provides a brighter technological future.”

The research paper, authored by Ju Young Kim and Juho Park, et al., and titled “Full 2π Tunable Phase Modulation Using Avoided Crossing of Resonances”, was published in Nature Communications on April 19. The research was funded by the Samsung Research Funding & Incubation Center of Samsung Electronics.

-Publication
Ju Young Kim, Juho Park, Gregory R. Holdman, Jacob T. Heiden, Shinho Kim, Victor W. Brar, and Min Seok Jang, “Full 2π Tunable Phase Modulation Using Avoided Crossing of Resonances,” Nature Communications, April 19, 2022 (https://doi.org/10.1038/s41467-022-29721-7)

-Profile
Professor Min Seok Jang
School of Electrical Engineering
KAIST
2022.05.02
View 5930
LightPC Presents a Resilient System Using Only Non-Volatile Memory
The Lightweight Persistence Centric System (LightPC) ensures both data and execution persistence for energy-efficient full-system persistence.

A KAIST research team has developed hardware and software technology that ensures both data and execution persistence. LightPC makes systems resilient against power failures by utilizing only non-volatile memory as the main memory. “We mounted non-volatile memory on a system board prototype and created an operating system to verify the effectiveness of LightPC,” said Professor Myoungsoo Jung. The team confirmed that LightPC continued executing correctly while being powered up and down in the middle of execution, offering up to eight times more memory, 4.3 times faster application execution, and 73% lower power consumption compared to traditional systems. Professor Jung said that LightPC can be utilized in a variety of fields such as data centers and high-performance computing to provide large-capacity memory, high performance, low power consumption, and service reliability.

In general, power failures on legacy systems lead to the loss of data stored in DRAM-based main memory. Unlike volatile memory such as DRAM, non-volatile memory retains its data without power. Although non-volatile memory offers lower power consumption and larger capacity than DRAM, it is typically relegated to secondary storage because of its lower write performance, and is therefore usually paired with DRAM. Moreover, modern systems that do employ non-volatile main memory experience unexpected performance degradation due to its complicated memory microarchitecture.

To make both data and execution persistent on legacy systems, data must be transferred from the volatile memory to the non-volatile memory. Checkpointing is one possible solution: it periodically transfers the data in preparation for a sudden power failure. While this technique is essential for ensuring high mobility and reliability for users, it also has serious drawbacks: it takes additional time and power to move the data, and it requires a data recovery process as well as a system restart.

To address these issues, the research team developed a processor and memory controller that raise the performance of non-volatile-memory-only main memory. LightPC matches the performance of DRAM by minimizing the internal volatile components of the non-volatile memory, exposing the non-volatile memory (PRAM) media to the host, and increasing parallelism to service on-the-fly requests as quickly as possible. The team also presented operating system technology that quickly makes the execution states of running processes persistent without a checkpointing process. The operating system prevents all modifications to execution states and data by keeping all program executions idle before transferring data, in order to achieve consistency within a period much shorter than the standard power hold-up time (on the order of 16 ms). When power is recovered, the computer revives itself almost immediately and re-executes all the offline processes without the need for a boot process.

The researchers will present their work (LightPC: Hardware and Software Co-Design for Energy-Efficient Full System Persistence) at the International Symposium on Computer Architecture (ISCA) 2022 in New York in June. More information is available at the CAMELab website (http://camelab.org).
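For a sense of what checkpointing costs even when tuned optimally, the back-of-envelope sketch below applies Young's classical approximation for the checkpoint interval. The checkpoint cost and failure rate are hypothetical, not figures from the paper; the point is that a checkpointing system pays a steady runtime tax that a persistence-centric design like LightPC avoids.

```python
import math

# Back-of-envelope for the overhead LightPC eliminates, using Young's
# classical approximation for the optimal checkpoint interval
# (all numbers are hypothetical). A checkpointing system loses time to
# (a) writing checkpoints and (b) recomputing work done since the last
# checkpoint when power fails; a persistence-centric system keeps state
# in non-volatile main memory and skips both.
C = 60.0          # seconds to write one checkpoint (assumed)
MTBF = 24 * 3600  # mean time between power failures: one/day (assumed)

T_opt = math.sqrt(2 * C * MTBF)            # Young's optimal interval
overhead = C / T_opt + T_opt / (2 * MTBF)  # checkpoint cost + expected rework
print(f"optimal interval : {T_opt / 60:.1f} min")
print(f"runtime overhead : {overhead:.1%}")  # time lost even in the best case
```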
-Profile
Professor Myoungsoo Jung
Computer Architecture and Memory Systems Laboratory (CAMEL)
http://camelab.org
School of Electrical Engineering
KAIST
2022.04.25
View 20801
Professor Hyunjoo Jenny Lee to Co-Chair IEEE MEMS 2025
Professor Hyunjoo Jenny Lee from the School of Electrical Engineering has been appointed General Chair of the 38th IEEE MEMS 2025 (International Conference on Micro Electro Mechanical Systems). Professor Lee, who is 40, is the conference's youngest General Chair to date and will serve jointly with Professor Sheng-Shian Li of Taiwan's National Tsing Hua University as co-chair in 2025.

IEEE MEMS is a top-tier international conference on microelectromechanical systems and serves as the core academic showcase for MEMS research and technology in areas such as microsensors and actuators. Receiving over 800 paper submissions each year, the conference accepts and publishes only about 250 of them after a review process renowned for its rigor, and fewer than 10% of all submissions are chosen for oral presentation.
2022.04.18
View 4695
Professor June-Koo Rhee’s Team Wins the QHack Open Hackathon Science Challenge
A research team consisting of three master's students, Ju-Young Ryu, Jeung-rak Lee, and Eyel Elala, in Professor June-Koo Kevin Rhee's group from the KAIST ITRC of Quantum Computing for AI has won first place in the QHack 2022 Open Hackathon Science Challenge.

The QHack 2022 Open Hackathon, held by the US company Xanadu, is one of the world's most prestigious quantum software hackathons, drawing 250 participants from 100 countries. Major sponsors such as IBM Quantum, AWS, CERN QTI, and Google Quantum AI proposed challenging problems, and a winning team was selected for each of the 13 challenges based on its team project. The KAIST team, supervised by Professor Rhee, received the first-place prize in the Science Challenge, which was organized by CERN QTI; the team will be awarded a one-week tour of CERN's research lab in Europe along with an online internship.

The students presented “Learning-Based Error Mitigation for VQE,” in which they implemented an LBEM protocol to lower the error in quantum computing and leveraged the protocol in the VQE algorithm used to calculate the ground-state energy of a given molecule. Their work successfully demonstrated effective error mitigation on IBM Quantum hardware and in a virtual error model.

In conjunction, Professor June-Koo (Kevin) Rhee founded a quantum computing venture start-up, Qunova Computing (https://qunovacomputing.com), with technology transferred from the KAIST ITRC of Quantum Computing for AI. Qunova Computing is one of the frontrunners of the quantum software industry in Korea.
2022.04.08
View 4807
CXL-Based Memory Disaggregation Technology Opens Up a New Direction for Big Data Solution Frameworks
A KAIST team's compute express link (CXL) solution provides new insights into memory disaggregation, ensuring direct accessibility and high performance.

A team from the Computer Architecture and Memory Systems Laboratory (CAMEL) at KAIST presented a new compute express link (CXL) solution whose directly accessible, high-performance memory disaggregation opens new directions for big data memory processing. Professor Myoungsoo Jung said the team's technology significantly improves performance compared to existing remote direct memory access (RDMA)-based memory disaggregation.

CXL is a new dynamic multi-protocol, built on peripheral component interconnect express (PCIe), for efficiently utilizing memory devices and accelerators. Many enterprise data centers and memory vendors are paying attention to it as the next-generation multi-protocol for the era of big data.

Emerging big data applications such as machine learning, graph analytics, and in-memory databases require large memory capacities. However, scaling out memory capacity via a conventional memory interface like double data rate (DDR) is limited by the number of central processing units (CPUs) and memory controllers. Therefore, memory disaggregation, which allows a host to connect to another host's memory or to memory nodes, has emerged.

RDMA is a way for a host to directly access another host's memory via InfiniBand, the network protocol commonly used in data centers, and most existing memory disaggregation technologies employ RDMA to obtain a large memory capacity: a host shares another host's memory by transferring data between local and remote memory. Although RDMA-based memory disaggregation provides a large memory capacity to a host, two critical problems remain. First, scaling out the memory still requires adding CPUs, because passive memory such as dynamic random-access memory (DRAM) cannot operate by itself and must be controlled by a CPU. Second, redundant data copies and software fabric interventions make access latency much longer: remote memory access in RDMA-based disaggregation is multiple orders of magnitude slower than local memory access.

To address these issues, Professor Jung's team developed a CXL-based memory disaggregation framework, including CXL-enabled customized CPUs, CXL devices, CXL switches, and CXL-aware operating system modules. The team's CXL device is a purely passive, directly accessible memory node containing multiple DRAM dual inline memory modules (DIMMs) and a CXL memory controller. Since the CXL memory controller manages the memory in the device, a host can utilize the memory node without processor or software intervention. The team's CXL switch scales out a host's memory capacity by hierarchically connecting multiple CXL devices, allowing more than hundreds of devices. Atop the switches and devices, the team's CXL-enabled operating system removes the redundant data copies and protocol conversions exhibited by conventional RDMA, which significantly decreases access latency to the memory nodes.

In a test loading 64 B (one cache line) of data from memory pooling devices, CXL-based memory disaggregation showed 8.2 times higher data-load performance than RDMA-based memory disaggregation, and performance similar to local DRAM.
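The latency gap is easiest to see with a rough per-access model. The numbers below are illustrative assumptions rather than the team's measurements, chosen only to reproduce the order of the reported 8.2x gap: an RDMA path pays network and software overhead on every access, while a CXL load traverses only the switch and device controller.

```python
# Rough per-access latency model for loading one 64 B cache line
# (all numbers are assumptions for illustration, not measurements).
local_dram_ns = 90          # ordinary local DRAM load (assumed)
cxl_hop_ns = 70             # per CXL switch/device-controller hop (assumed)
rdma_path_ns = 1_900        # RDMA read incl. software fabric + copies (assumed)

cxl_load_ns = local_dram_ns + 2 * cxl_hop_ns   # host -> switch -> device
print(f"local DRAM : {local_dram_ns:>5} ns")
print(f"CXL load   : {cxl_load_ns:>5} ns")
print(f"RDMA load  : {rdma_path_ns:>5} ns "
      f"({rdma_path_ns / cxl_load_ns:.1f}x slower than CXL)")
```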
In the team's evaluations on big data benchmarks such as a machine learning-based test, CXL-based memory disaggregation also showed up to 3.7 times higher performance than prior RDMA-based memory disaggregation technologies.

“Escaping from the conventional RDMA-based memory disaggregation, our CXL-based memory disaggregation framework can provide high scalability and performance for diverse data centers and cloud service infrastructures,” said Professor Jung. He went on to stress, “Our CXL-based memory disaggregation research will bring about a new paradigm for memory solutions that will lead the era of big data.”

-Profile
Professor Myoungsoo Jung
Computer Architecture and Memory Systems Laboratory (CAMEL)
http://camelab.org
School of Electrical Engineering
KAIST
2022.03.16
View 19683
Team KAIST Makes Its Presence Felt in the Self-Driving Tech Industry
Team KAIST finishes 4th at the inaugural CES autonomous racing competition.

Team KAIST, led by Professor Hyunchul Shim and the Unmanned Systems Research Group (USRG), placed fourth in an autonomous race car competition in Las Vegas last week, making its presence felt in the self-driving automotive tech industry. Team KAIST beat its first competitor, Auburn University, with speeds of up to 131 mph at the Autonomous Challenge at CES, held at the Las Vegas Motor Speedway. However, the team failed to advance to the final round when it lost to PoliMOVE, a team from the Polytechnic University of Milan and the University of Alabama that went on to win the $150,000 race.

A total of eight teams competed in the self-driving race, conducted as a single-elimination tournament of head-to-head matches. Two cars took turns playing the roles of defender and attacker, and each attempted to outpace the other until one of them was unable to complete the mission. Each team designed the algorithm controlling its racecar, the Dallara-built AV-21, which can reach speeds of up to 173 mph, so that it drove safely around the track at high speed without crashing into the other car.

The event is the CES edition of the Indy Autonomous Challenge, a competition first held in October last year to encourage university students from around the world to develop sophisticated software for autonomous driving and to advance the relevant technologies. Team KAIST placed 4th at the Indy Autonomous Challenge, which qualified it for this race.

“The technical level of the CES race is much higher than last October's, and we had a very tough race. We advanced to the semifinals in two consecutive races. I think our autonomous vehicle technology is proving itself to the world,” said Professor Shim.

Professor Shim's research group has been working on the development of autonomous aerial and ground vehicles for the past 12 years, and a self-driving car developed by the lab was certified by the South Korean government to run on public roads. The vehicle the team used cost more than $1 million to build. Many of the other teams had to repair their vehicles more than once after accidents and spent heavily on repairs. “We are the only ones who did not have any accidents, and this is a testament to our technological prowess,” said Professor Shim. He said securing the funding to purchase pricey parts and equipment for the racecar is a constant challenge given the very tight research budget and the absence of corporate sponsorships. Nevertheless, Professor Shim and his research group plan to participate in the next race in September and in the 2023 CES race. “I think we need more systematic and proactive research and support systems to earn better results, but there is nothing better than the group of passionate students who are taking part in this project with us,” Shim added.
2022.01.12
View 8189