KAIST
NEWS
To Talk or Not to Talk: Smart Speaker Determines Optimal Timing to Talk
A KAIST research team has developed a new context-awareness technology that enables AI assistants to determine when to talk to their users based on user circumstances. This technology can contribute to developing advanced AI assistants that offer pre-emptive services, such as reminding users to take medication on time or modifying schedules based on the actual progress of planned tasks.

Unlike conventional AI assistants that act passively upon users’ commands, today’s AI assistants are evolving to provide more proactive services by reasoning about user circumstances on their own. This opens up new opportunities for AI assistants to better support users in their daily lives. However, an AI assistant that talks at the wrong time may interrupt its user rather than help. The right time to talk is more difficult for an AI assistant to determine than it appears, because the appropriate moment depends on the state of the user and the surrounding environment.

A group of researchers led by Professor Uichin Lee from the KAIST School of Computing identified the key contextual factors that determine when an AI assistant should start, stop, or resume engaging in voice services in smart home environments. Their findings were published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) in September. The group conducted this study in collaboration with Professor Jae-Gil Lee’s group in the KAIST School of Computing, Professor Sangsu Lee’s group in the KAIST Department of Industrial Design, and Professor Auk Kim’s group at Kangwon National University.

After developing smart speakers equipped with AI assistant functions for experimental use, the researchers installed them in the rooms of 40 students living in double-occupancy campus dormitories and collected a total of 3,500 in-situ user response data records over a period of a week.
The smart speakers repeatedly asked the students, “Is now a good time to talk?” at random intervals or whenever a student’s movement was detected. Students answered “yes” or “no” and then explained why, describing what they had been doing before the speaker’s question. Data analysis revealed that 47% of user responses were “no,” indicating that the students did not want to be interrupted.

The research team then created 19 home activity categories to cross-analyze the key contextual factors that determine opportune moments for AI assistants to talk, and classified these factors as ‘personal,’ ‘movement,’ and ‘social’ factors. Personal factors include: 1. the degree of concentration on or engagement in an activity, 2. the degree of urgency and busyness, 3. the user’s mental or physical state, and 4. the ability to talk or listen while multitasking. When users were concentrating on studying, tired, or drying their hair, they found it difficult to engage in conversational interactions with the smart speakers.

Representative movement factors include departure, entrance, and physical activity transitions. Interestingly, in movement scenarios, the team found that the communication range was an important factor. Departure is an outbound movement away from the smart speaker, and entrance is an inbound movement toward it. Users were much more available during inbound movements than during outbound movements.

In general, smart speakers are located in a shared place at home, such as a living room, where multiple family members gather at the same time. In Professor Lee’s group’s experiment, almost half of the in-situ user responses were collected when both roommates were present. The group found that social presence also influenced interruptibility: roommates often wanted to minimize possible interpersonal conflicts, such as disturbing a roommate’s sleep or work.
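The personal, movement, and social factor groups above suggest how a proactive assistant’s decision logic might be organized. The sketch below is purely illustrative: the factor names, thresholds, and rules are hypothetical stand-ins, not taken from the paper.

```python
# Illustrative sketch only: a toy rule-based availability check organized around
# the personal / movement / social factor groups identified in the study.
# All names and thresholds here are hypothetical, not from the paper.
from dataclasses import dataclass

@dataclass
class UserContext:
    concentration: float    # 0.0 (idle) .. 1.0 (deeply focused); personal factor
    is_busy_or_urgent: bool # personal factor
    moving_inbound: bool    # entrance: approaching the speaker
    moving_outbound: bool   # departure: leaving the speaker's audible range
    roommate_resting: bool  # social factor: avoid disturbing others

def is_opportune_moment(ctx: UserContext) -> bool:
    """Return True if the assistant may proactively start talking."""
    if ctx.roommate_resting:        # social: never disturb a resting roommate
        return False
    if ctx.is_busy_or_urgent or ctx.concentration > 0.7:  # personal factors
        return False
    if ctx.moving_outbound:         # movement: user is leaving audible range
        return False
    return True                     # inbound movement was found to be favorable

print(is_opportune_moment(UserContext(0.2, False, True, False, False)))   # True
print(is_opportune_moment(UserContext(0.9, False, False, False, False)))  # False
```

A real system would of course infer these context variables from multi-modal sensor data rather than receive them directly, as the authors note below.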
Narae Cha, the lead author of this study, explained, “By considering personal, movement, and social factors, we can envision a smart speaker that can intelligently manage the timing of conversations with users.” She believes that this work lays the foundation for the future of AI assistants, adding, “Multi-modal sensory data can be used for context sensing, and this context information will help smart speakers proactively determine when it is a good time to start, stop, or resume conversations with their users.” This work was supported by the National Research Foundation (NRF) of Korea.

Publication: Cha, N., et al. (2020) “Hello There! Is Now a Good Time to Talk?”: Opportune Moments for Proactive Interactions with Smart Speakers. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Vol. 4, No. 3, Article No. 74, pp. 1-28. Available online at https://doi.org/10.1145/3411810

Link to Introductory Video: https://youtu.be/AA8CTi2hEf0

Profile:
Uichin Lee
Associate Professor
uclee@kaist.ac.kr
http://ic.kaist.ac.kr
Interactive Computing Lab., School of Computing
https://www.kaist.ac.kr
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon, Republic of Korea
(END)
2020.11.05
Chemical Scissors Snip 2D Transition Metal Dichalcogenides into Nanoribbons
New ‘nanoribbon’ catalyst should slash the cost of hydrogen production for clean fuels.

Researchers have identified a potential catalyst alternative – and an innovative way to produce it using chemical ‘scissors’ – that could make hydrogen production more economical. The research team led by Professor Sang Ouk Kim at the Department of Materials Science and Engineering published their work in Nature Communications.

Hydrogen is likely to play a key role in the clean transition away from fossil fuels and other processes that produce greenhouse gas emissions. Transportation sectors such as long-haul shipping and aviation are difficult to electrify and so will require cleanly produced hydrogen as a fuel or as a feedstock for other carbon-neutral synthetic fuels. Likewise, fertilizer production and the steel sector are unlikely to be “de-carbonized” without cheap and clean hydrogen. The problem is that by far the cheapest method of producing hydrogen gas is currently from natural gas, a process that itself produces the greenhouse gas carbon dioxide – which defeats the purpose.

Alternative techniques of hydrogen production are well established, such as electrolysis, which passes an electric current between two electrodes plunged into water to overcome the chemical bonds holding water together, splitting it into its constituent elements, oxygen and hydrogen. But one of the factors contributing to its high cost, beyond it being extremely energy-intensive, is the need for the very expensive and relatively rare precious metal platinum. The platinum is used as a catalyst – a substance that kicks off or speeds up a chemical reaction – in the hydrogen production process. As a result, researchers have long been on the hunt for a substitute for platinum: another catalyst that is abundant in the earth and thus much cheaper.
Transition metal dichalcogenides, or TMDs, in nanomaterial form have for some time been considered a good candidate to replace platinum as a catalyst. These are substances composed of one atom of a transition metal (the elements in the middle part of the periodic table) and two atoms of a chalcogen element (the elements in the third-to-last column of the periodic table, specifically sulfur, selenium, and tellurium). What makes TMDs a good bet as a platinum replacement is not just that they are much more abundant, but also that their electrons are structured in a way that gives the electrodes a boost. In addition, a TMD in nanomaterial form is essentially a two-dimensional, super-thin sheet only a few atoms thick, just like graphene. The ultrathin nature of a 2D TMD nanosheet exposes far more TMD molecules during catalysis than a bulk block of the material would, kicking off and speeding up the hydrogen-making chemical reaction that much more.

However, even here the TMD molecules are only reactive at the four edges of a nanosheet; in the flat interior, not much is going on. To increase the chemical reaction rate in the production of hydrogen, the nanosheet would need to be cut into very thin, almost one-dimensional strips, thereby creating many edges. In response, the research team developed what is in essence a pair of chemical scissors that can snip TMDs into tiny strips.

“Up to now, the only substances that anyone has been able to turn into these ‘nano-ribbons’ are graphene and phosphorene,” said Professor Sang Ouk Kim, one of the researchers involved in devising the process. “But they’re both made up of just one element, so it’s pretty straightforward.
Figuring out how to do it for TMD, which is made of two elements, was going to be much harder.” The ‘scissors’ involve a two-step process: first inserting lithium ions into the layered structure of the TMD sheets, and then using ultrasound to cause a spontaneous ‘unzipping’ in straight lines. “It works sort of like splitting a plank of plywood: it breaks easily in one direction, along the grain,” Professor Kim continued. “It’s actually really simple.”

The researchers then tried the technique with various types of TMDs, including those made of molybdenum, selenium, sulfur, tellurium, and tungsten. All worked just as well, with a catalytic efficiency as effective as platinum’s. Because of the simplicity of the procedure, this method should be usable not just for the large-scale production of TMD nanoribbons, but also for making similar nanoribbons from other multi-elemental 2D materials, for purposes beyond hydrogen production.

Profile:
Professor Sang Ouk Kim
Soft Nanomaterials Laboratory (http://snml.kaist.ac.kr)
Department of Materials Science and Engineering
KAIST
2020.10.29
'Mini-Lungs' Reveal Early Stages of SARS-CoV-2 Infection
Researchers in Korea and the UK have successfully grown miniature models of critical lung structures called alveoli, and used them to study how the coronavirus that causes COVID-19 infects the lungs. To date, there have been more than 40 million cases of COVID-19 and almost 1.13 million deaths worldwide. The main target tissues of SARS-CoV-2, the virus that causes COVID-19, especially in patients who develop pneumonia, appear to be the alveoli – tiny air sacs in the lungs that take up the oxygen we breathe and exchange it with the carbon dioxide we exhale.

To better understand how SARS-CoV-2 infects the lungs and causes disease, a team led by Professor Young Seok Ju from the Graduate School of Medical Science and Engineering at KAIST, in collaboration with the Wellcome-MRC Cambridge Stem Cell Institute at the University of Cambridge, turned to organoids – ‘mini-organs’ grown in three dimensions to mimic the behaviour of tissue and organs. The team used tissue donated to tissue banks at the Royal Papworth Hospital NHS Foundation Trust and Addenbrooke’s Hospital, Cambridge University NHS Foundation Trust, UK, and Seoul National University Hospital to extract a type of lung cell known as human lung alveolar type 2 cells. By reprogramming these cells back to their earlier ‘stem cell’ stage, the researchers were able to grow self-organizing, alveolar-like 3D structures that mimic the behaviour of key lung tissue.

“The research community now has a powerful new platform to study precisely how the virus infects the lungs, as well as explore possible treatments,” said Professor Ju, co-senior author of the research. Dr. Joo-Hyeon Lee, another co-senior author at the Wellcome-MRC Cambridge Stem Cell Institute, said: “We still know surprisingly little about how SARS-CoV-2 infects the lungs and causes disease.
Our approach has allowed us to grow 3D models of key lung tissue – in a sense, ‘mini-lungs’ – in the lab and study what happens when they become infected.”

The team infected the organoids with a strain of SARS-CoV-2 taken from a patient in Korea who was diagnosed with COVID-19 on January 26 after traveling to Wuhan, China. Using a combination of fluorescence imaging and single-cell genetic analysis, they were able to study how the cells responded to the virus. When the 3D models were exposed to SARS-CoV-2, the virus began to replicate rapidly, reaching full cellular infection just six hours after exposure. Replication enables the virus to spread throughout the body, infecting other cells and tissue. Around the same time, the cells began to produce interferons – proteins that act as warning signals to neighbouring cells, telling them to activate their antiviral defences. After 48 hours, the interferons triggered the innate immune response – the cells’ first line of defence – and the cells started fighting back against infection. Sixty hours after infection, a subset of alveolar cells began to disintegrate, leading to cell death and damage to the lung tissue.

Although the researchers observed changes to the lung cells within three days of infection, clinical symptoms of COVID-19 rarely occur so quickly and can sometimes take more than ten days after exposure to appear. The team say there are several possible reasons for this. It may take several days for the virus to travel from the upper respiratory tract to the alveoli. It may also require a substantial proportion of alveolar cells to be infected, or further interactions with immune cells resulting in inflammation, before a patient displays symptoms.
“Based on our model we can tackle many unanswered key questions, such as understanding genetic susceptibility to SARS-CoV-2, assessing relative infectivity of viral mutants, and revealing the damage processes of the virus in human alveolar cells,” said Professor Ju. “Most importantly, it provides the opportunity to develop and screen potential therapeutic agents against SARS-CoV-2 infection.” “We hope to use our technique to grow these 3D models from cells of patients who are particularly vulnerable to infection, such as the elderly or people with diseased lungs, and find out what happens to their tissue,” added Dr. Lee.

The research was a collaboration involving scientists from KAIST, the University of Cambridge, the Korea National Institute of Health, the Institute for Basic Science (IBS), Seoul National University Hospital, and Genome Insight in Korea.

Profile:
Professor Young Seok Ju
Laboratory of Cancer Genomics (https://julab.kaist.ac.kr)
Graduate School of Medical Science and Engineering
KAIST
2020.10.26
Slippery When Wet: Fish and Seaweed Inspire Ships to Reduce Fluid Friction
Faster ships could be on the horizon after KAIST scientists developed a slippery surface, inspired by fish and seaweed, that reduces a hull's drag through the water.

Long-distance cargo ships lose a significant amount of energy to fluid friction. The drag reduction mechanisms employed by aquatic life can provide inspiration for improving efficiency: fish and seaweed secrete a layer of mucus to create a slippery surface, reducing their friction as they travel through water. A potential way to mimic this is to create lubricant-infused surfaces covered with cavities. As the cavities are continuously filled with the lubricant, a layer forms over the surface. Though this method has previously been shown to work, reducing drag by up to 18%, the underlying physics is not fully understood.

KAIST researchers, in collaboration with a team from POSTECH, conducted simulations of this process to help explain the effects, and their findings were published in the journal Physics of Fluids on September 15. The group looked at the average speed of a cargo ship with realistic material properties and simulated how it behaves under various lubrication setups. Specifically, they monitored the effects of the open area of the lubricant-filled cavities, as well as the thickness of the cavity lids. They found that for larger open areas, the lubricant spreads more than it does with smaller open areas, leading to a slipperier surface. The lid thickness, on the other hand, does not have much of an effect on the slip, though a thicker lid does create a thicker lubricant buildup layer.
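To get a feel for what an 18% drag reduction could mean in practice, here is a back-of-the-envelope calculation. To first order, the power needed to overcome drag at constant speed is drag force times speed, so an 18% drag cut translates into roughly 18% less propulsive power. The ship figures below are hypothetical illustration values, not from the paper.

```python
# Back-of-the-envelope sketch (hypothetical numbers, not from the paper):
# power saved at constant speed if hull drag drops by 18%.
def towing_power_kw(drag_force_kn: float, speed_m_s: float) -> float:
    """Power needed to overcome drag: P = F * v (kN * m/s = kW)."""
    return drag_force_kn * speed_m_s

baseline_drag_kn = 1500.0   # hypothetical total hull drag of a large cargo ship
speed = 10.3                # roughly 20 knots, in m/s
reduction = 0.18            # drag reduction reported for lubricant-infused surfaces

p_before = towing_power_kw(baseline_drag_kn, speed)
p_after = towing_power_kw(baseline_drag_kn * (1 - reduction), speed)
print(f"Power saved: {p_before - p_after:.0f} kW ({reduction:.0%})")
```

Over a long voyage, a saving of this order compounds into substantial fuel savings, which is why even modest percentage drag reductions matter for shipping.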
Professor Emeritus Hyung Jin Sung from the KAIST Department of Mechanical Engineering, who led this study, said, “Our investigation of the hydrodynamics of a lubricant layer and how it results in drag reduction with a slippery surface in a basic configuration has provided significant insight into the benefits of a lubricant-infused surface.” Now that they have worked on optimizing the lubricant secretion design, the authors hope it can be implemented in real-life marine vehicles. “If the present design parameters are adopted, the drag reduction rate will increase significantly,” Professor Sung added. This work was supported by the National Research Foundation (NRF) of Korea.

Source: Materials provided by the American Institute of Physics.

Publication: Kim, Seung Joong, et al. (2020). A lubricant-infused slip surface for drag reduction. Physics of Fluids. Available online at https://doi.org/10.1063/5.0018460

Profile:
Hyung Jin Sung
Professor Emeritus
hyungjin@kaist.ac.kr
http://flow.kaist.ac.kr/index.php
Flow Control Lab. (FCL), Department of Mechanical Engineering
http://kaist.ac.kr
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon, Republic of Korea
(END)
2020.10.12
E. coli Engineered to Grow on CO₂ and Formic Acid as Sole Carbon Sources
- An E. coli strain that can grow to a relatively high cell density solely on CO₂ and formic acid was developed through metabolic engineering. -

Most biorefinery processes have relied on biomass as the raw material for producing chemicals and materials. Even though using CO₂ as a carbon source in biorefineries is desirable, it has not been possible to make common microbial strains such as E. coli grow on CO₂. Now, a metabolic engineering research group at KAIST has developed a strategy to grow an E. coli strain to higher cell density solely on CO₂ and formic acid. Formic acid is a one-carbon carboxylic acid that can easily be produced from CO₂ by a variety of methods. Since it is easier to store and transport than CO₂, formic acid is a good liquid-form alternative to CO₂.

With support from the C1 Gas Refinery R&D Center and the Ministry of Science and ICT, a research team led by Distinguished Professor Sang Yup Lee developed an engineered E. coli strain capable of growing to a cell density up to 11-fold higher than those previously reported, using CO₂ and formic acid as its sole carbon sources. This work was published in Nature Microbiology on September 28.

Despite recent reports by several research groups on E. coli strains capable of growing on CO₂ and formic acid, the maximum cell growth remained too low (an optical density of around 1), and thus the production of chemicals from CO₂ and formic acid has been far from realized. The team previously reported reconstructing the tetrahydrofolate cycle and reverse glycine cleavage pathway to build an engineered E. coli strain that can sustain growth on CO₂ and formic acid. To further enhance growth, the research team introduced their previously designed synthetic CO₂ and formic acid assimilation pathway, along with two formate dehydrogenases.
Metabolic fluxes were also fine-tuned, the gluconeogenic flux was enhanced, and the levels of cytochrome bo3 and bd-I ubiquinol oxidase for ATP generation were optimized. The engineered E. coli strain was able to grow to a relatively high OD600 of 7–11, showing promise as a platform strain growing solely on CO₂ and formic acid.

Professor Lee said, “We engineered E. coli that can grow to a higher cell density using only CO₂ and formic acid. We think this is an important step forward, but it is not the end. The engineered strain we developed still needs further engineering so that it can grow faster to a much higher density.” Professor Lee’s team is continuing to develop such a strain. “In the future, we would be delighted to see the production of chemicals from an engineered E. coli strain using CO₂ and formic acid as sole carbon sources,” he added.

Profile:
Distinguished Professor Sang Yup Lee
http://mbel.kaist.ac.kr
Department of Chemical and Biomolecular Engineering
KAIST
2020.09.29
Deep Learning Helps Explore the Structural and Strategic Bases of Autism
Psychiatrists typically diagnose autism spectrum disorders (ASD) by observing a person’s behavior and by leaning on the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), widely considered the “bible” of mental health diagnosis. However, there are substantial differences among individuals on the spectrum, and a great deal remains unknown about the causes of autism, or even what autism is. As a result, an accurate diagnosis of ASD and a prognosis prediction for patients can be extremely difficult. But what if artificial intelligence (AI) could help?

Deep learning, a type of AI, deploys artificial neural networks inspired by the human brain to recognize patterns in a way that is akin to, and in some cases can surpass, human ability. The technique, or rather suite of techniques, has enjoyed remarkable success in recent years in fields as diverse as voice recognition, translation, autonomous vehicles, and drug discovery. A group of researchers from KAIST, in collaboration with the Yonsei University College of Medicine, has applied these deep learning techniques to autism diagnosis. Their findings were published on August 14 in the journal IEEE Access.

Magnetic resonance imaging (MRI) scans of the brains of people known to have autism have long been used by researchers and clinicians to try to identify brain structures associated with ASD. These researchers have achieved considerable success, identifying abnormal grey and white matter volume and irregularities in cerebral cortex activation and connections as being associated with the condition. Such findings have subsequently been deployed in studies attempting more consistent diagnoses of patients than psychiatrist observations during counseling sessions can achieve.
While such studies have reported high levels of diagnostic accuracy, the number of participants has been small, often under 50, and diagnostic performance drops markedly when the models are applied to large sample sizes or to datasets that include people from a wide variety of populations and locations.

“There was something about what defines autism that human researchers and clinicians must have been overlooking,” said Keun-Ah Cheon, one of the two corresponding authors and a professor in the Department of Child and Adolescent Psychiatry at Severance Hospital of the Yonsei University College of Medicine. “And humans poring over thousands of MRI scans won’t be able to pick up on what we’ve been missing,” she continued. “But we thought AI might be able to.”

So the team applied five different categories of deep learning models to an open-source dataset of more than 1,000 MRI scans from the Autism Brain Imaging Data Exchange (ABIDE) initiative, which has collected brain imaging data from laboratories around the world, and to a smaller but higher-resolution MRI dataset (84 images) from the Child Psychiatric Clinic at Severance Hospital, Yonsei University College of Medicine. In both cases, the researchers used both structural MRIs (examining the anatomy of the brain) and functional MRIs (examining brain activity in different regions). The models allowed the team to explore the structural bases of ASD brain region by brain region, focusing in particular on structures below the cerebral cortex, including the basal ganglia, which are involved in motor function (movement) as well as learning and memory. Crucially, these specific types of deep learning models also offered up possible explanations of how the AI had come up with its rationale for these findings. “Understanding the way that the AI has classified these brain structures and dynamics is extremely important,” said Sang Wan Lee, the other corresponding author and an associate professor at KAIST.
“It’s no good if a doctor can tell a patient that the computer says they have autism, but not be able to say why the computer knows that.” The deep learning models were also able to describe how much a particular aspect contributed to ASD, an analysis that can help psychiatrists identify the severity of the autism during diagnosis. “Doctors should be able to use this to offer a personalized diagnosis for patients, including a prognosis of how the condition could develop,” Lee said. “Artificial intelligence is not going to put psychiatrists out of a job,” he explained. “But using AI as a tool should enable doctors to better understand and diagnose complex disorders than they could do on their own.”

Profile:
Professor Sang Wan Lee
Laboratory for Brain and Machine Intelligence (https://aibrain.kaist.ac.kr/)
Department of Bio and Brain Engineering
KAIST
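The pairing of classification with per-feature explanations described in this article can be caricatured with a far simpler model. The sketch below trains a plain logistic regression on synthetic "regional" features and reads per-region contributions off the learned weights. Everything here is a hypothetical stand-in: the region names, the data, and the model are illustrative only, not the authors' deep learning pipeline (which used ABIDE MRI data).

```python
# Minimal conceptual sketch (synthetic data, hypothetical feature names):
# a linear classifier over brain-region features plus per-feature
# contributions, mirroring the two ideas in the article -- classification
# and explainability. NOT the authors' model.
import numpy as np

rng = np.random.default_rng(0)
regions = ["basal_ganglia", "cerebellum", "amygdala", "thalamus"]

# Synthetic "regional feature" data: 200 subjects x 4 regions.
X = rng.normal(size=(200, 4))
true_w = np.array([1.5, 0.0, 0.8, 0.0])  # only two regions matter in this toy
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Plain logistic regression trained by gradient descent (no ML libraries).
w = np.zeros(4)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # gradient step on log-loss

# "Explanation": per-region contribution to one subject's prediction,
# read off as weight * feature value.
subject = X[0]
contrib = dict(zip(regions, w * subject))
top = max(contrib, key=lambda r: abs(contrib[r]))
print("most influential region for this subject:", top)
```

In the same spirit, the explainable deep learning models in the study attribute a prediction back to brain structures, which is what lets a clinician see why the model flagged a particular scan.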
2020.09.23
Biomarker Predicts Who Will Have Severe COVID-19
- Airway cell analyses showing an activated immune axis could pinpoint the COVID-19 patients who will most benefit from targeted therapies. -

KAIST researchers have identified key markers that could help pinpoint patients who are bound to have a severe reaction to COVID-19 infection. This would help doctors provide the right treatments at the right time, potentially saving lives. The findings were published in the journal Frontiers in Immunology on August 28.

People’s immune systems react differently to infection with SARS-CoV-2, the virus that causes COVID-19, ranging from mild to severe, life-threatening responses. To understand these differences, Professor Heung Kyu Lee and PhD candidate Jang Hyun Park from the Graduate School of Medical Science and Engineering at KAIST analysed ribonucleic acid (RNA) sequencing data extracted from individual airway cells of healthy controls and of mildly and severely ill patients with COVID-19. The data was available in a public database previously published by a group of Chinese researchers.

“Our analyses identified an association between immune cells called neutrophils and special cell receptors that bind to the steroid hormone glucocorticoid,” Professor Lee explained. “This finding could be used as a biomarker for predicting disease severity in patients and thus for selecting a targeted therapy that can help treat them at an appropriate time,” he added.

Severe illness in COVID-19 is associated with an exaggerated immune response that leads to excessive airway-damaging inflammation. This condition, known as acute respiratory distress syndrome (ARDS), accounts for 70% of deaths in fatal COVID-19 infections. Scientists already know that this excessive inflammation involves heightened neutrophil recruitment to the airways, but the detailed mechanisms of this reaction are still unclear.
Lee and Park’s analyses found that a group of immune cells called myeloid cells produced excess amounts of neutrophil-recruiting chemicals in severely ill patients, including a cytokine called tumour necrosis factor (TNF) and a chemokine called CXCL8. Further RNA analyses of neutrophils in severely ill patients showed they were less able to recruit the T cells needed for attacking the virus. At the same time, the neutrophils produced too many of the extracellular molecules that normally trap pathogens but damage airway cells when produced in excess.

The researchers additionally found that the airway cells of severely ill patients were not expressing enough glucocorticoid receptors. This was correlated with increased CXCL8 expression and neutrophil recruitment. Glucocorticoids, like the well-known drug dexamethasone, are anti-inflammatory agents that could play a role in treating COVID-19. However, using them in early or mild forms of the infection could suppress the immune reactions needed to combat the virus, while in more severe cases where airway damage has already occurred, glucocorticoid treatment would be ineffective. Knowing who to give this treatment to, and when, is therefore critical. COVID-19 patients showing reduced glucocorticoid receptor expression, increased CXCL8 expression, and excess neutrophil recruitment to the airways could benefit from treatment with glucocorticoids to prevent airway damage. Further research is needed, however, to confirm the relationship between glucocorticoids and neutrophil inflammation at the protein level.

“Our study could serve as a springboard towards more accurate and reliable COVID-19 treatments,” Professor Lee said. This work was supported by the National Research Foundation of Korea and the Mobile Clinic Module Project funded by KAIST.

Figure: Low glucocorticoid receptor (GR) expression led to excessive inflammation and lung damage by neutrophils through enhanced expression of CXCL8 and other cytokines.
Image credit: Professor Heung Kyu Lee, KAIST. Created with Biorender.com. Image usage restrictions: News organizations may use or redistribute these figures and images, with proper attribution, as part of news coverage of this paper only.

Publication: Jang Hyun Park and Heung Kyu Lee. (2020). Re-analysis of Single Cell Transcriptome Reveals That the NR3C1-CXCL8-Neutrophil Axis Determines the Severity of COVID-19. Frontiers in Immunology. Available online at https://doi.org/10.3389/fimmu.2020.02145

Profile:
Heung Kyu Lee
Associate Professor
heungkyu.lee@kaist.ac.kr
https://www.heungkyulee.kaist.ac.kr/
Laboratory of Host Defenses
Graduate School of Medical Science and Engineering (GSMSE)
The Center for Epidemic Preparedness at KAIST Institute
http://kaist.ac.kr
Korea Advanced Institute of Science and Technology (KAIST)
Daejeon, Republic of Korea

Profile:
Jang Hyun Park
PhD Candidate
janghyun.park@kaist.ac.kr
GSMSE, KAIST
2020.09.17
Sturdy Fabric-Based Piezoelectric Energy Harvester Takes Us One Step Closer to Wearable Electronics
KAIST researchers have presented a highly flexible but sturdy wearable piezoelectric harvester made using a simple, easy fabrication process of hot pressing and tape casting. This energy harvester, which has record-high interfacial adhesion strength, takes us one step closer to manufacturing embedded wearable electronics. A research team led by Professor Seungbum Hong said that the novelty of this result lies in its simplicity, applicability, durability, and in a new way of characterizing wearable electronic devices.

Wearable devices are increasingly being used in a wide array of applications, from small electronics to embedded devices such as sensors, actuators, displays, and energy harvesters. Despite their many advantages, high costs and complex fabrication processes have remained obstacles to commercialization, and their durability has frequently been questioned. To address these issues, Professor Hong’s team developed a new fabrication process and an analysis technique for testing the mechanical properties of affordable wearable devices.

For this process, the research team used a hot pressing and tape casting procedure to bond a polyester fabric to a polymer film. Hot pressing has usually been used when making batteries and fuel cells due to its high adhesiveness; above all, the process takes only two to three minutes. The newly developed process enables a device to be applied directly to ordinary garments using hot pressing, just as graphic patches can be attached to garments with a heat press. In particular, when the polymer film is hot pressed onto a fabric below its crystallization temperature, it transforms into an amorphous state. In this state, it attaches compactly to the concave surface of the fabric and infiltrates the gaps between the transverse wefts and longitudinal warps. These features result in high interfacial adhesion strength.
For this reason, hot pressing has the potential to reduce fabrication costs by allowing fabric-based wearable devices to be applied directly to common garments. In addition to the conventional durability test of bending cycles, the newly introduced surface and interfacial cutting analysis system confirmed the high mechanical durability of the fabric-based wearable device by measuring the high interfacial adhesion strength between the fabric and the polymer film.

Professor Hong said the study lays a new foundation for the manufacturing process and analysis of wearable devices using fabrics and polymers. He added that his team was the first to use the surface and interfacial cutting analysis system (SAICAS) in the field of wearable electronics to test the mechanical properties of polymer-based wearable devices. SAICAS is more precise than conventional methods (the peel test, tape test, and microscratch test) because it measures adhesion strength both qualitatively and quantitatively.

Professor Hong explained, “This study could enable the commercialization of highly durable wearable devices based on the analysis of their interfacial adhesion strength. Our study lays a new foundation for the manufacturing process and analysis of other devices using fabrics and polymers. We look forward to fabric-based wearable electronics hitting the market very soon.”

The results of this study were registered as a domestic patent in Korea last year and published in Nano Energy this month. The study was conducted in collaboration with Professor Yong Min Lee in the Department of Energy Science and Engineering at DGIST, Professor Kwangsoo No in the Department of Materials Science and Engineering at KAIST, and Professor Seunghwa Ryu in the Department of Mechanical Engineering at KAIST.
This study was supported by the High-Risk High-Return Project and the Global Singularity Research Project at KAIST, the National Research Foundation, and the Ministry of Science and ICT in Korea. -Publication: Jaegyu Kim, Seoungwoo Byun, Sangryun Lee, Jeongjae Ryu, Seongwoo Cho, Chungik Oh, Hongjun Kim, Kwangsoo No, Seunghwa Ryu, Yong Min Lee, Seungbum Hong*, Nano Energy 75 (2020), 104992. https://doi.org/10.1016/j.nanoen.2020.104992 -Profile: Professor Seungbum Hong seungbum@kaist.ac.kr http://mii.kaist.ac.kr/ Department of Materials Science and Engineering KAIST
2020.09.17
Advanced NVMe Controller Technology for Next Generation Memory Devices
KAIST researchers advanced non-volatile memory express (NVMe) controller technology for next-generation information storage devices, and made this new technology, named ‘OpenExpress,’ freely available to all universities and research institutes around the world to help reduce research costs in related fields.

NVMe is a communication protocol made for high-performance storage devices based on the Peripheral Component Interconnect Express (PCIe) interface. NVMe was developed to take the place of the Serial ATA (SATA) protocol, which was designed for processing data on hard disk drives (HDDs) and performs poorly on solid state drives (SSDs). Unlike HDDs, which use spinning magnetic disks, SSDs use semiconductor memory, allowing data to be read and written rapidly. SSDs also generate less heat and noise, and are much more compact and lightweight. Since data processing in SSDs using NVMe is up to six times faster than with SATA, NVMe has become the standard protocol for ultra-high-speed, high-volume data processing, and is currently used in many flash-based information storage devices.

Studies on NVMe continue at both the academic and industrial levels; however, its poor accessibility is a drawback. Major information and communications technology (ICT) companies around the world spend astronomical sums to procure the intellectual property (IP) for the hardware NVMe controllers necessary to use NVMe. However, such IP is not publicly disclosed, making it difficult for universities and research institutes to use it for research purposes. Although a small number of U.S. Silicon Valley startups provide parts of their independently developed IP for research, the cost of usage is around 34,000 USD per month. The costs skyrocket even further because each copy of single-use source code purchased for IP modification costs approximately 84,000 USD.
To address these issues, a group of researchers led by Professor Myoungsoo Jung from the School of Electrical Engineering at KAIST developed a next-generation NVMe controller technology that achieves parallel data input/output processing for SSDs in a fully hardware-automated form. The researchers presented their work at the 2020 USENIX Annual Technical Conference (USENIX ATC ’20) in July, and released it as an open research framework named ‘OpenExpress.’

The NVMe controller technology developed by Professor Jung’s team comprises a wide range of basic hardware IP and key NVMe IP cores. To examine its actual performance, the team built an NVMe hardware controller prototype using OpenExpress and designed all of the logic provided by OpenExpress to operate at high frequency. The field-programmable gate array (FPGA) memory card prototype developed using OpenExpress demonstrated increased input/output processing capacity per second, supporting up to 7 gigabytes per second (GB/s) of bandwidth. This makes it suitable for research on ultra-high-speed, high-volume next-generation memory devices. In a test comparing various storage server workloads, the team’s FPGA also showed 76% higher bandwidth and 68% lower input/output latency than Intel’s new high-performance SSD (the Optane SSD), which is sufficient for many researchers studying systems employing future memory devices. Depending on user needs, silicon devices can be synthesized as well, which is expected to further enhance performance.

The NVMe controller technology of Professor Jung’s team can be freely used and modified under the OpenExpress open-source end-user agreement for non-commercial use by all universities and research institutes. This makes it extremely useful for research on next-generation memory-compatible NVMe controllers and software stacks.
“With the product of this study disclosed to the world, universities and research institutes can now use, at no cost, controllers that used to be exclusive to only the world’s biggest companies,” said Professor Jung. He went on to stress, “This is a meaningful first step in research on information storage systems such as high-speed, high-volume next-generation memory.”

This work was supported by a grant from MemRay, a company specializing in the development and distribution of next-generation memory. More details about the study can be found at http://camelab.org.

Image credit: Professor Myoungsoo Jung, KAIST
Image usage restrictions: News organizations may use or redistribute these figures and images, with proper attribution, as part of news coverage of this paper only.
-Publication: Myoungsoo Jung. (2020). OpenExpress: Fully Hardware Automated Open Research Framework for Future Fast NVMe Devices. Presented in the Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC ’20), available online at https://www.usenix.org/system/files/atc20-jung.pdf
-Profile: Myoungsoo Jung, Associate Professor, m.jung@kaist.ac.kr, http://camelab.org, Computer Architecture and Memory Systems Laboratory, School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
(END)
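To make the protocol concrete: an NVMe controller like the one described above fetches and decodes fixed 64-byte submission queue entries that the host posts over PCIe. The sketch below packs a simplified NVMe Read command following the field layout of the NVMe specification; it is a generic illustration of the wire format, not OpenExpress code, and the field values are placeholders.

```python
import struct

# One NVMe submission queue entry (SQE) is always 64 bytes, little-endian:
# opcode(1) flags(1) cid(2) nsid(4) reserved(8) mptr(8) prp1(8) prp2(8) cdw10-15(24)
SQE_FORMAT = "<BBHIQQQQIIIIII"

def build_read_sqe(cid, nsid, slba, nlb, prp1):
    """Pack a simplified NVMe Read command (opcode 0x02)."""
    cdw10 = slba & 0xFFFFFFFF      # starting logical block address, low 32 bits
    cdw11 = slba >> 32             # starting logical block address, high 32 bits
    cdw12 = (nlb - 1) & 0xFFFF     # number of logical blocks, zero-based
    return struct.pack(
        SQE_FORMAT,
        0x02,    # opcode: Read
        0,       # flags: no fused operation, PRP addressing
        cid,     # command identifier, echoed back in the completion entry
        nsid,    # target namespace
        0,       # reserved dwords 2-3
        0,       # metadata pointer (unused here)
        prp1,    # PRP entry 1: physical page of the data buffer
        0,       # PRP entry 2 (unused for a one-page transfer)
        cdw10, cdw11, cdw12, 0, 0, 0,
    )

sqe = build_read_sqe(cid=7, nsid=1, slba=4096, nlb=8, prp1=0x1000)
assert len(sqe) == 64   # the spec-mandated entry size
```

A hardware controller's job, in essence, is to DMA-fetch records like this from host memory, dispatch the flash operations they describe, and post 16-byte completion entries back; OpenExpress implements that pipeline fully in hardware.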
2020.09.04
Before Eyes Open, They Get Ready to See
- Spontaneous retinal waves can generate long-range horizontal connectivity in the visual cortex. -

A KAIST research team’s computational simulations demonstrated that waves of spontaneous neural activity in the retinas of still-closed eyes in mammals develop long-range horizontal connections in the visual cortex during early developmental stages. This new finding, featured in the August 19 edition of the Journal of Neuroscience as a cover article, resolves a long-standing puzzle in visual neuroscience: how functional architectures in the mammalian visual cortex, especially the long-range horizontal connectivity known as “feature-specific” circuitry, are organized before eye-opening.

To prepare an animal to see when its eyes open, neural circuits in the brain’s visual system must begin developing earlier. However, the proper development of many brain regions involved in vision generally requires sensory input through the eyes. In the primary visual cortex of higher mammals, cortical neurons with similar functional tuning to a visual feature are linked together by long-range horizontal circuits that play a crucial role in visual information processing. Surprisingly, these long-range horizontal connections emerge before the onset of sensory experience, and the mechanism underlying this phenomenon has remained elusive.

To investigate this mechanism, a group of researchers led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering at KAIST implemented computational simulations of early visual pathways using data obtained from the retinal circuits of young animals before eye-opening, including cats, monkeys, and mice.
From these simulations, the researchers found that spontaneous waves propagating across ON and OFF retinal mosaics can initialize the wiring of long-range horizontal connections by selectively co-activating cortical neurons with similar functional tuning, whereas equivalent random activity cannot induce such organization. The simulations also showed that the emerging long-range horizontal connections can induce patterned cortical activity matching the topography of the underlying functional maps, even in the salt-and-pepper organizations observed in rodents. This result implies that the model developed by Professor Paik and his group provides a universal principle for the developmental mechanism of long-range horizontal connections in higher mammals as well as rodents.

Professor Paik said, “Our model provides a deeper understanding of how the functional architectures in the visual cortex can originate from the spatial organization of the periphery, without sensory experience, during early developmental periods.” He continued, “We believe that our findings will be of great interest to scientists working in a wide range of fields such as neuroscience, vision science, and developmental biology.”

This work was supported by the National Research Foundation of Korea (NRF). Undergraduate student Jinwoo Kim participated in this research project and presented the findings as the lead author as part of the Undergraduate Research Participation (URP) Program at KAIST.

Figures and image credit: Professor Se-Bum Paik, KAIST
Image usage restrictions: News organizations may use or redistribute these figures and images, with proper attribution, as part of news coverage of this paper only.
Publication: Jinwoo Kim, Min Song, and Se-Bum Paik. (2020). Spontaneous retinal waves generate long-range horizontal connectivity in visual cortex.
Journal of Neuroscience, available online at https://www.jneurosci.org/content/early/2020/07/17/JNEUROSCI.0649-20.2020
Profile: Se-Bum Paik, Assistant Professor, sbpaik@kaist.ac.kr, http://vs.kaist.ac.kr/, VSNN Laboratory, Department of Bio and Brain Engineering, Program of Brain and Cognitive Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
Profile: Jinwoo Kim, Undergraduate Student, bugkjw@kaist.ac.kr, Department of Bio and Brain Engineering, KAIST
Profile: Min Song, Ph.D. Candidate, night@kaist.ac.kr, Program of Brain and Cognitive Engineering, KAIST
(END)
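The wiring principle described above can be illustrated with a toy Hebbian simulation. This is not the authors' model: the map layout, wave responses, and learning rule below are simplified placeholder assumptions. It only demonstrates the core idea that activity events which selectively co-activate similarly tuned neurons can produce long-range, feature-specific connections, which purely random activity would not.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200                                           # cortical units along one axis
positions = np.arange(n)
preferred = (positions * 8 * np.pi / n) % np.pi   # smooth periodic orientation map

def circ_diff(a, b):
    """Orientation difference on the half-circle (period pi)."""
    return np.abs(np.angle(np.exp(2j * (a - b)))) / 2

W = np.zeros((n, n))                    # horizontal connection weights
for _ in range(500):
    theta = rng.uniform(0, np.pi)       # feature carried by one spontaneous event
    # Units whose tuning matches the event respond strongly; others stay quiet.
    resp = np.exp(-circ_diff(preferred, theta) ** 2 / 0.1)
    W += np.outer(resp, resp)           # Hebbian rule: co-active units strengthen
np.fill_diagonal(W, 0)

# Among distant pairs, connections between similarly tuned units end up
# stronger than those between differently tuned units: long-range,
# feature-specific horizontal connectivity.
far = np.abs(positions[:, None] - positions[None, :]) > 20
dtheta = circ_diff(preferred[:, None], preferred[None, :])
w_similar = W[far & (dtheta < 0.2)].mean()
w_dissimilar = W[far & (dtheta > 0.6)].mean()
assert w_similar > w_dissimilar
```

Replacing the tuned responses with independent random activity removes the correlation structure, and the learned weights then show no preference for similarly tuned distant pairs, mirroring the paper's contrast between waves and random activity.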
2020.08.25
Microscopy Approach Poised to Offer New Insights into Liver Diseases
Researchers have developed a new way to visualize the progression of nonalcoholic fatty liver disease (NAFLD) in mouse models of the disease. The new microscopy method provides a high-resolution 3D view that could lead to important new insights into NAFLD, a condition in which too much fat is stored in the liver.

“It is estimated that a quarter of the adult global population has NAFLD, yet an effective treatment strategy has not been found,” said Professor Pilhan Kim from the Graduate School of Medical Science and Engineering at KAIST. “NAFLD is associated with obesity and type 2 diabetes and can sometimes progress to liver failure in serious cases.”

In the Optical Society (OSA) journal Biomedical Optics Express, Professor Kim and colleagues reported their new imaging technique and showed that it can be used to observe how tiny droplets of fat, or lipids, accumulate in the liver cells of living mice over time. “It has been challenging to find a treatment strategy for NAFLD because most studies examine excised liver tissue that represents just one timepoint in disease progression,” said Professor Kim. “Our technique can capture details of lipid accumulation over time, providing a highly useful research tool for identifying the multiple parameters that likely contribute to the disease and could be targeted with treatment.”

Capturing the dynamics of NAFLD in living mouse models requires the ability to observe quickly changing interactions of biological components in intact tissue in real time. To accomplish this, the researchers developed a custom intravital confocal and two-photon microscopy system that acquires images of multiple fluorescent labels at video rate with cellular resolution. “With video-rate imaging capability, the continuous movement of liver tissue in live mice due to breathing and heartbeat could be tracked in real time and precisely compensated,” said Professor Kim.
“This provided motion-artifact-free, high-resolution images of cellular and sub-cellular sized individual lipid droplets.”

The key to fast imaging was a polygonal mirror that rotated at more than 240 miles per hour to provide extremely fast laser scanning. The researchers also incorporated four different lasers and four high-sensitivity optical detectors into the setup so that they could acquire multi-color images, capturing the different fluorescent probes used to label the lipid droplets and microvasculature in the livers of live mice.

“Our approach can capture real-time changes in cell behavior and morphology, vascular structure and function, and the spatiotemporal localization of biological components while directly visualizing lipid droplet development in NAFLD progression,” said Professor Kim. “It also allows the analysis of the highly complex behaviors of various immune cells as NAFLD progresses.”

The researchers demonstrated their approach by using it to observe the development and spatial distribution of lipid droplets in individual mice with NAFLD induced by a methionine- and choline-deficient diet. Next, they plan to use it to study how the liver microenvironment changes during NAFLD progression by imaging the same mouse over time. They also want to use the technique to visualize various immune cells and lipid droplets to better understand the complex liver microenvironment in NAFLD progression.
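The motion compensation mentioned above, cancelling breathing- and heartbeat-induced tissue movement between frames, can be sketched with a standard frame-registration technique. The article does not detail the team's actual compensation pipeline, so the phase-correlation approach below is only an assumed illustration of the general idea, using synthetic frames in place of real microscope images.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_shift(ref, frame):
    """Return the integer (dy, dx) shift that re-aligns `frame` onto `ref`,
    estimated by phase correlation (FFT cross-power spectrum)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.abs(cross) + 1e-12          # keep phase only (spectral whitening)
    corr = np.fft.ifft2(cross).real         # sharp peak at the displacement
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape                        # map wrap-around peaks to signed shifts
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

ref = rng.standard_normal((64, 64))               # stand-in for a reference frame
frame = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulated rigid tissue motion
dy, dx = estimate_shift(ref, frame)
# Applying the estimated shift restores alignment with the reference frame.
assert (np.roll(frame, shift=(dy, dx), axis=(0, 1)) == ref).all()
```

At video rate, an estimate like this can be computed per frame and used to re-register the image stream, which is what makes stable tracking of individual lipid droplets possible despite continuous organ motion.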
2020.08.21
Deep Learning-Based Cough Recognition Model Helps Detect the Location of Coughing Sounds in Real Time
The Center for Noise and Vibration Control at KAIST announced that its coughing detection camera recognizes where coughing happens and visualizes the locations. The resulting cough recognition camera can track and record information about the person who coughed, their location, and the number of coughs in real time.

Professor Yong-Hwa Park from the Department of Mechanical Engineering developed a deep learning-based cough recognition model that classifies coughing sounds in real time. The cough event classification model is combined with a sound camera that visualizes the cough event and indicates its location in the video image. The research team achieved a best test accuracy of 87.4%. Professor Park said it will be useful medical equipment during epidemics in public places such as schools, offices, and restaurants, and for constantly monitoring patients’ conditions in a hospital room.

Fever and coughing are the most relevant respiratory disease symptoms, among which fever can be recognized remotely using thermal cameras. This new technology is expected to be very helpful for detecting epidemic transmission in a non-contact way.

To develop the cough recognition model, supervised learning was conducted with a convolutional neural network (CNN). The model performs binary classification on an input of a one-second sound profile feature, labeling it as either a cough event or something else. For training and evaluation, various datasets were collected from Audioset, DEMAND, ETSI, and TIMIT. Coughing and other sounds were extracted from Audioset, and the remaining datasets were used as background noise for data augmentation so that the model could generalize to the various background noises found in public places.
The dataset was augmented by mixing coughing sounds and other sounds from Audioset with background noises at ratios of 0.15 to 0.75; the overall volume was then scaled by a factor of 0.25 to 1.0 to generalize the model over various distances. The training and evaluation datasets were constructed by splitting the augmented dataset 9:1, and the test dataset was recorded separately in a real office environment.

To optimize the network model, training was conducted with various combinations of five acoustic features, including the spectrogram, Mel-scaled spectrogram, and Mel-frequency cepstral coefficients, together with seven optimizers, and the performance of each combination was compared on the test dataset. The best test accuracy of 87.4% was achieved with the Mel-scaled spectrogram as the acoustic feature and ASGD as the optimizer.

The trained cough recognition model was combined with a sound camera composed of a microphone array and a camera module. A beamforming process is applied to the collected acoustic data to find the direction of the incoming sound source. The integrated cough recognition model then determines whether the sound is a cough; if it is, the cough location is visualized as a contour image with a ‘cough’ label at the position of the sound source in the video image.

A pilot test of the cough recognition camera in an office environment showed that it successfully distinguishes cough events from other events even in a noisy environment. In addition, it can track the location of the person who coughed and count the number of coughs in real time. The performance will be improved further with additional training data obtained from other real environments such as hospitals and classrooms.

Professor Park said, “In a pandemic situation like we are experiencing with COVID-19, a cough detection camera can contribute to the prevention and early detection of epidemics in public places. Especially when applied to a hospital room, the patient's condition can be tracked 24 hours a day, supporting more accurate diagnoses while reducing the effort of the medical staff.”

This study was conducted in collaboration with SM Instruments Inc.

Profile: Yong-Hwa Park, Ph.D., Associate Professor, yhpark@kaist.ac.kr, http://human.kaist.ac.kr/, Human-Machine Interaction Laboratory (HuMaN Lab.), Department of Mechanical Engineering (ME), Korea Advanced Institute of Science and Technology (KAIST), https://www.kaist.ac.kr/en/, Daejeon 34141, Korea
Profile: Gyeong Tae Lee, PhD Candidate, hansaram@kaist.ac.kr, HuMaN Lab., ME, KAIST
Profile: Seong Hu Kim, PhD Candidate, tjdgnkim@kaist.ac.kr, HuMaN Lab., ME, KAIST
Profile: Hyeonuk Nam, PhD Candidate, frednam@kaist.ac.kr, HuMaN Lab., ME, KAIST
Profile: Young-Key Kim, CEO, sales@smins.co.kr, http://en.smins.co.kr/, SM Instruments Inc., Daejeon 34109, Korea
(END)
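The augmentation recipe described above can be sketched in a few lines. Only the mixing ratio range (0.15 to 0.75), the volume scaling range (0.25 to 1.0), the one-second input window, and the 9:1 train/evaluation split come from the article; the sampling rate, the synthetic stand-in signals, and the dataset size are placeholder assumptions for illustration.

```python
import numpy as np

SR = 16000                          # assumed sampling rate (Hz); not stated in the article
rng = np.random.default_rng(42)

def augment(cough, noise, rng):
    """Mix a one-second cough clip with background noise, then rescale volume."""
    ratio = rng.uniform(0.15, 0.75)   # cough-to-background mixing ratio
    volume = rng.uniform(0.25, 1.0)   # volume scaling emulates varying source distance
    return volume * (ratio * cough + (1.0 - ratio) * noise)

# Placeholder signals standing in for Audioset coughs and DEMAND/ETSI/TIMIT noise.
cough = rng.standard_normal(SR)
noise = rng.standard_normal(SR)

dataset = [augment(cough, noise, rng) for _ in range(100)]
train, evaluation = dataset[:90], dataset[90:]    # 9:1 split as in the study
assert len(train) == 90 and len(evaluation) == 10
assert dataset[0].shape == (SR,)
```

Each augmented one-second clip would then be converted to an acoustic feature such as a Mel-scaled spectrogram before being fed to the CNN classifier, with a separately recorded office dataset reserved for testing.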
2020.08.13