KAIST NEWS
Association for Computing Machinery
KAIST Professor Uichin Lee Receives Distinguished Paper Award from ACM
< Photo. Professor Uichin Lee (left) receiving the award >

KAIST (President Kwang Hyung Lee) announced on October 25th that Professor Uichin Lee's research team from the School of Computing received the Distinguished Paper Award at the International Joint Conference on Pervasive and Ubiquitous Computing / International Symposium on Wearable Computing (UbiComp/ISWC), hosted by the Association for Computing Machinery (ACM) in Melbourne, Australia on October 8.

The ACM UbiComp/ISWC conference is the most prestigious international conference in the field of human-computer interaction (HCI), where leading universities and global companies present the latest research on ubiquitous computing and wearable technologies. The main conference program is composed of invited papers published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (PACM IMWUT), which covers the latest research in ubiquitous and wearable computing.

The Distinguished Paper Award Selection Committee chose eight papers out of the 205 published in Vol. 7 of PACM IMWUT for making outstanding and exemplary contributions to the research community. The committee, composed of 16 prominent experts who are current and former members of the journal's editorial board, made the selection after a rigorous, month-long review of all the papers.

< Figure 1. BeActive mobile app to promote physical activity to form active lifestyle habits >

The award-winning paper, titled "Understanding Disengagement in Just-in-Time Mobile Health Interventions," was written with Dr. Junyoung Park, a graduate of the KAIST Graduate School of Data Science, as first author. Professor Uichin Lee's research team explored user engagement with just-in-time mobile health interventions, which use sensor data collected by health-management apps to proactively deliver interventions at opportune moments; such interventions are effective only as long as users actually keep using the apps.

< Figure 2. Traditional user-requested digital behavior change intervention (DBCI) delivery (Pull) vs. automatic delivery (Push) for just-in-time (JIT) mobile DBCI using smartphone sensing technologies >

The research team conducted a systematic analysis of user disengagement, i.e., the decline of user engagement, in digital behavior change interventions. They developed BeActive, an app that promotes physical activity to help users form active lifestyle habits, and systematically analyzed how users' self-control ability and boredom proneness affect compliance with behavioral interventions over time.

The results of an eight-week field trial revealed that even when just-in-time interventions are matched to the user's situation, a decline in participation cannot be avoided. However, among users with high self-control and low boredom proneness, compliance with the just-in-time interventions delivered through the app was significantly higher than in the other groups. In particular, users with high boredom proneness tired quickly of the repeated push interventions, and their compliance declined faster than that of the other groups.

< Figure 3. Just-in-time mobile health intervention, a demonstrative case from the BeActive system: when a user is detected to have been sitting for more than 50 minutes, an automatic push notification recommends a short active break that can be completed for reward points. >

Professor Uichin Lee explained, "As the first study on user engagement in digital therapeutics and wellness services utilizing mobile just-in-time health interventions, this research provides a foundation for exploring ways to strengthen user engagement." He added, "By leveraging large language models (LLMs) and comprehensive context-aware technologies, it will be possible to develop user-centered AI technologies that can significantly boost engagement."

< Figure 4. A conceptual illustration of user engagement in digital health apps. Engagement consists of (1) engagement in using the app itself and (2) engagement in the behavioral interventions the app provides, i.e., compliance with behavioral interventions. Repeated adherence to the behavioral interventions recommended by digital health apps can help achieve distal health goals. >

This study was conducted with the support of the 2021 Biomedical Technology Development Program and the 2022 Basic Research and Development Program of the National Research Foundation of Korea, funded by the Ministry of Science and ICT.

< Figure 5. A conceptual illustration of user disengagement and engagement with digital behavior change intervention (DBCI) apps. In general, user engagement consists of two components: engagement in using the app and engagement in the behavioral interventions it recommends (known as behavioral compliance or intervention adherence). The distinctive stages of use can be divided into adoption, abandonment, and attrition. >

< Figure 6. Trends in app usage frequency and adherence to behavioral interventions over 8 weeks. SC: self-control (High-SC: users with high self-control; Low-SC: users with low self-control). BD: boredom proneness (High-BD: users with high boredom proneness; Low-BD: users with low boredom proneness). App usage frequency declined over time, but the adherence rates of High-SC and Low-BD participants remained significantly higher than those of the other groups. >
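The 50-minute sitting trigger illustrated in Figure 3 boils down to a simple threshold rule. The sketch below is a minimal, hypothetical rendering of that rule in Python; the function names and notification text are assumptions for illustration, not the actual BeActive code.

```python
import time

SITTING_LIMIT_MIN = 50  # sedentary threshold from the BeActive example (Figure 3)

def check_and_intervene(sitting_started_at, now, send_push):
    """Send a just-in-time intervention once sedentary time exceeds the limit."""
    sitting_min = (now - sitting_started_at) / 60.0
    if sitting_min >= SITTING_LIMIT_MIN:
        send_push("You've been sitting for a while - take a short "
                  "active break to earn reward points!")
        return True   # intervention delivered
    return False      # below threshold: keep monitoring

# Example: the sensing pipeline reports the user has been sitting for 55 minutes.
check_and_intervene(time.time() - 55 * 60, time.time(), send_push=print)
```

In a real deployment the sitting state would come from activity-recognition sensors, and the study's point is precisely that firing such triggers at the right moment is not enough: engagement still declines over time.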
2024.10.25
Professor Dongsu Han Named Program Chair for ACM CoNEXT 2020
Professor Dongsu Han from the School of Electrical Engineering has been appointed program chair for the 16th Association for Computing Machinery International Conference on emerging Networking EXperiments and Technologies (ACM CoNEXT 2020). Professor Han is the first program chair appointed from an Asian institution. ACM CoNEXT is hosted by ACM SIGCOMM, ACM's Special Interest Group on Data Communications, which specializes in communication and computer networks. Professor Han will serve as program co-chair alongside Professor Anja Feldmann of the Max Planck Institute for Informatics. Together, they have appointed 40 world-leading researchers as program committee members, including Professor Song Min Kim of the KAIST School of Electrical Engineering. Paper submissions are open until the end of June, and the conference itself will take place from the 1st to the 4th of December. Conference website: https://conferences2.sigcomm.org/co-next/2020/#!/home
2020.06.02
Sound-based Touch Input Technology for Smart Tables and Mirrors
(from left: MS candidate Anish Byanjankar, Research Assistant Professor Hyosu Kim, and Professor Insik Shin)

Time passes quickly in the morning. Your hands are busy brushing your teeth and checking the weather on your smartphone. You might wish your mirror could turn into a touch screen and free up your hands. That wish may soon come true. A KAIST team has developed a smartphone-based touch sound localization technology that facilitates ubiquitous interactions, turning objects like furniture and mirrors into touch input tools.

The technology analyzes the sound generated when a user touches a surface and identifies the location of the touch input. For instance, users can turn nearby tables or walls into virtual keyboards and write lengthy e-mails far more conveniently, using only the built-in microphones of their smartphones or tablets. Family members can play chess or other board games on a virtual board on their dining table. Traditional smart devices such as smart TVs or mirrors, which provide only simple display functions, can also play a smarter role once touch input support is added (see the image below).

Figure 1. Examples of the touch input technology in use: with only a smartphone, surrounding objects can serve as a touch screen anytime and anywhere.

The most important requirement for sound-based touch input is identifying the location of touch inputs precisely (within about 1 cm of error). Meeting this requirement is challenging, mainly because the technology may be used in diverse and dynamically changing environments. Users may adopt desks, walls, or mirrors as touch input tools, and the surrounding conditions (e.g., the location of nearby objects or the ambient noise level) can vary. Such environmental changes affect the characteristics of touch sounds.

To address this challenge, Professor Insik Shin from the School of Computing and his team analyzed the fundamental properties of touch sounds, especially how they propagate through solid surfaces. On solid surfaces, sound undergoes dispersion, which makes different frequency components travel at different speeds. Based on this phenomenon, the team observed that the time difference of arrival (TDoA) between frequency components increases in proportion to the distance the sound travels, and that this linear relationship is unaffected by variations in the surrounding environment.

Building on these observations, Research Assistant Professor Hyosu Kim proposed a sound-based touch input technique that records touch sounds transmitted through a solid surface, runs a simple calibration to identify the relationship between TDoA and transmission distance, and then localizes touch inputs accurately.

The accuracy of the proposed system was then measured. The average localization error was below about 0.4 cm on a 17-inch touch screen. In particular, the error stayed under 1 cm across a variety of objects, such as wooden desks, glass mirrors, and acrylic boards, even when the position of nearby objects and the noise level changed dynamically. User studies also showed positive responses on all measured factors, including user experience and accuracy.
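To make the calibration-then-localize idea concrete, here is a minimal Python sketch. It assumes the TDoA between a low- and a high-frequency band has already been extracted from each touch sound; the calibration numbers and microphone layout are invented for illustration and are not taken from the paper.

```python
import numpy as np

# --- 1. Per-surface calibration: tap at known distances, record TDoA ---
# The paper's key observation: TDoA grows linearly with travel distance,
# so a 1-D least-squares fit is enough. Values below are hypothetical.
cal_dist = np.array([0.05, 0.10, 0.20, 0.30])           # metres
cal_tdoa = np.array([0.4e-3, 0.8e-3, 1.6e-3, 2.4e-3])   # seconds
slope, intercept = np.polyfit(cal_tdoa, cal_dist, 1)    # linear model

def dist_from_tdoa(dt: float) -> float:
    """Map one measured TDoA to a touch-to-microphone distance."""
    return slope * dt + intercept

# --- 2. Localization: distances to three or more mics -> 2-D position ---
def locate(mic_xy: np.ndarray, tdoas: np.ndarray) -> np.ndarray:
    """Least-squares touch position from per-microphone TDoA readings.

    Linearizes the circle equations by subtracting the first one,
    a standard trilateration trick.
    """
    d = np.array([dist_from_tdoa(t) for t in tdoas])
    x0, y0 = mic_xy[0]
    A, b = [], []
    for (xi, yi), di in zip(mic_xy[1:], d[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d[0]**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

mics = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.3]])   # assumed layout (m)
print(locate(mics, np.array([1.0e-3, 1.4e-3, 1.2e-3])))
```

The single-surface calibration step is what lets the same linear model survive changes in nearby objects and ambient noise, which is the property the team verified experimentally.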
Professor Shin said, "This is a novel touch interface technology that enables a touch input system simply by installing three or four microphones, so it can easily turn nearby objects into touch screens." The proposed system was presented at ACM SenSys, a top-tier conference in the field of mobile computing and sensing, and was selected as a best paper runner-up in November 2018. (Demonstration video of the sound-based touch input technology)
2018.12.26
It's Time to 3D Sketch with Air Scaffolding
People often use their hands when describing an object, while pens are great tools for describing objects in detail. Taking this idea, a KAIST team introduced a new 3D sketching workflow that combines the strengths of hand and pen input. The technique eases ideation in three dimensions, leading to product design that is more efficient in both time and cost.

For a designer's drawing to become a real product, the designer's 2D drawing must be transformed into a 3D shape; however, it is difficult to infer an accurate 3D shape that matches the original intention from an imprecise hand-made 2D drawing. Creating a 3D shape from a planar 2D drawing requires information the drawing does not contain, while depth information is lost when a 3D shape is expressed as a 2D drawing using perspective techniques. To fill in these "missing links" during the conversion, "3D sketching" techniques have been actively studied. Their main purpose is to help designers naturally supply the missing 3D shape information in a 2D drawing. For example, if a designer draws two symmetric curves from a single point of view, or draws the same curve from different points of view, the geometric clues left in the process are collected and mathematically interpreted to define the proper 3D curve. As a result, designers can use 3D sketching to draw a 3D shape directly, as if using pen and paper.

Among 3D sketching tools, sketching with hand motions, particularly in VR environments, has drawn attention because it is easy and quick. Its biggest limitation, however, is that rough hand motions alone cannot articulate a design, so such tools are difficult to apply to product design. Moreover, users tire from holding their hands in the air throughout the drawing process.

To use hand motions while still producing detailed designs, Professor Seok-Hyung Bae and his team from the Department of Industrial Design integrated hand motions with pen-based sketching, allocating roles according to their strengths. The new technique is called Agile 3D Sketching with Air Scaffolding. Designers move their hands in the air to create rough 3D shapes that serve as scaffolds, and then add details with pen-based 3D sketching on a tablet (Figure 1).

Figure 1. In the agile 3D sketching workflow with air scaffolding, the user (a) makes unconstrained hand movements in the air to quickly generate rough shapes to be used as scaffolds, (b) uses the scaffolds as references and draws finer details over them, and (c) produces a high-fidelity 3D concept sketch of a steering wheel in an iterative and progressive manner.

The team devised an algorithm that distinguishes descriptive hand motions from transitory ones and extracts only the intended shapes from unconstrained hand motions, building air scaffolds from the identified motions; a rough sketch of this idea appears below. Through user tests, the team verified that the technique is easy to learn and use and demonstrates good applicability. Most importantly, users can save time while defining the proportions and scale of products more accurately. The tool could eventually be applied to fields including the automobile industry, home appliances, animation and film making, and robotics. It can also be linked to smart production technologies such as 3D printing to make manufacturing faster and more flexible.
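The paper's actual motion-segmentation algorithm is more involved, but a rough, hypothetical sketch conveys the core idea: treat slow, deliberate movement as shape-describing and fast movement between strokes as transitory. The speed threshold and data layout below are assumptions for illustration.

```python
import numpy as np

def descriptive_segments(points, times, speed_thresh=0.5):
    """Return index ranges where hand speed stays below speed_thresh (m/s).

    points: (N, 3) array of tracked hand positions; times: (N,) timestamps.
    Slow segments are kept as candidate scaffold strokes; fast, transitory
    movement between strokes is dropped.
    """
    v = np.linalg.norm(np.diff(points, axis=0), axis=1) / np.diff(times)
    slow = v < speed_thresh
    segments, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i                      # a slow segment begins
        elif not s and start is not None:
            segments.append((start, i))    # a slow segment ends
            start = None
    if start is not None:
        segments.append((start, len(slow)))
    return segments
```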
PhD candidate Yongkwan Kim, who led the research project, said, "I believe the system will enhance product quality and work efficiency because designers can express their 3D ideas quickly yet accurately without using complex 3D CAD modeling software. I will make it into a product that every designer wants to use in various fields." "There have been many attempts to encourage creative activities in various fields by using advanced computer technology. Based on an in-depth understanding of designers, we will take the lead in innovating the design process by applying cutting-edge technology," Professor Bae added.

Professor Bae's team in the Department of Industrial Design has long been developing better 3D sketching tools. They started with ILoveSketch, a 3D curve sketching system for professional designers, and moved on to SketchingWithHands for designing handheld products with first-person hand postures captured by a hand-tracking sensor. They then took the project to the next level with Agile 3D Sketching with Air Scaffolding, a 3D sketching workflow combining hand motion and pen drawing, which was chosen as one of the CHI (Conference on Human Factors in Computing Systems) 2018 Best Papers by the Association for Computing Machinery. - Click the link to watch a video clip of SketchingWithHands
2018.07.25
A New Theory Improves Button Designs
Pressing a button appears effortless, and people easily dismiss how challenging it actually is. Researchers at KAIST and Aalto University in Finland created detailed simulations of button-pressing with the goal of producing human-like presses.

The researchers argue that the brain's key capability is a probabilistic model. The brain learns a model that allows it to predict a suitable motor command for a button; if a press fails, it can pick a very good alternative and try it out. "Without this ability, we would have to learn to use every button as if it were new," says Professor Byungjoo Lee from the Graduate School of Culture Technology at KAIST. After successfully activating the button, the brain can tune the motor command to be more precise, use less energy, and avoid stress or pain. "These factors together, with practice, produce the fast, minimum-effort, elegant touch people are able to perform."

The brain also uses probabilistic models to extract information optimally from the sensations that arise when the finger moves and its tip touches the button. It "enriches" these ephemeral sensations, based on prior experience, to estimate the time at which the button was impacted. For example, tactile sensation from the fingertip is a better predictor of button activation than proprioception (joint angle) or visual feedback, but the best performance is achieved when all sensations are considered together. To adapt, the brain must fuse their information using prior experience. Professor Lee explains, "We believe that the brain picks up these skills over repeated button pressings that start already as a child. What appears easy for us now has been acquired over years."

The research was triggered by admiration of our remarkable ability to adapt button-pressing. Professor Antti Oulasvirta of Aalto University said, "We push a button on a remote controller differently than a piano key. The press of a skilled user is surprisingly elegant when looked at in terms of timing, reliability, and energy use. We successfully press buttons without ever knowing the inner workings of a button. It is essentially a black box to our motor system. On the other hand, we also fail to activate buttons, and some buttons are known to be worse than others." Previous research has shown that touch buttons are worse than push-buttons, but there has been no adequate theoretical explanation. "In the past, there has been very little attention to buttons, although we use them all the time," says Dr. Sunjun Kim from Aalto University.

The new theory and simulations can be used to design better buttons. "One exciting implication of the theory is that activating the button at the moment when the sensation is strongest will help users better rhythm their keypresses." To test this hypothesis, the researchers created a new method for changing the way buttons are activated, called Impact Activation. Instead of activating the button at first contact, it activates when the button cap or finger hits the floor with maximum impact. The technique was 94% better in rapid tapping than the regular activation method for a push-button (a Cherry MX switch) and 37% better than a regular touchscreen button using a capacitive touch sensor. The technique can easily be deployed on touchscreens. Regular physical keyboards, however, do not offer the required sensing capability, although special products exist (e.g., the Wooting keyboard) on which it can be implemented.

The simulations shed new light on what happens during a button press.
One problem the brain must overcome is that muscles do not activate as perfectly as we will them to; every press is slightly different. Moreover, a button press is very fast, occurring within 100 milliseconds, which is too fast to correct the movement mid-press. The key to understanding button-pressing is therefore to understand how the brain adapts based on the limited sensations that are the residue of the brief press event.

The researchers also used the simulation to explain differences between physical and touchscreen-based button types. Both provide clear tactile signals from the impact of the fingertip with the button floor, but with the physical button this signal is more pronounced and longer. "Where the two button types also differ is the starting height of the finger, and this makes a difference," explains Professor Lee. "When we lift the finger from a touchscreen, it ends up at a different height every time. Its down-press cannot be controlled as accurately in time as with a push-button, where the finger can rest on top of the key cap."

Three scientific articles, "Neuromechanics of a Button Press," "Impact activation improves rapid button pressing," and "Moving target selection: A cue integration model," will be presented at the CHI Conference on Human Factors in Computing Systems in Montréal, Canada, in April 2018.
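To make the Impact Activation scheme concrete, the sketch below registers a press at the moment of peak impact within a contact episode rather than at first contact. The signal model and threshold are illustrative assumptions, not the published implementation.

```python
import numpy as np

def impact_activation_time(t, signal, contact_thresh=0.1):
    """Return the timestamp of peak impact within the contact episode.

    t: (N,) sample timestamps; signal: (N,) sensed impact trace, e.g.
    from a force or capacitive sensor (an assumption for illustration).
    """
    t, signal = np.asarray(t), np.asarray(signal)
    in_contact = signal > contact_thresh          # samples where the finger touches
    if not in_contact.any():
        return None                               # no press detected
    idx = np.flatnonzero(in_contact)
    peak = idx[0] + np.argmax(signal[idx[0]:idx[-1] + 1])
    return t[peak]                                # activate at maximum impact

# The conventional first-contact scheme would instead return
# t[np.flatnonzero(signal > contact_thresh)[0]].
```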
2018.03.22
Sangeun Oh Recognized as a 2017 Google Fellow
Sangeun Oh, a Ph.D. candidate in the School of Computing, was selected as a 2017 Google PhD Fellow, one of 47 awardees worldwide. The Google PhD Fellowship recognizes students showing outstanding performance in computer science and related research fields. Since its establishment in 2009, the program has provided various benefits, including a $10,000 USD scholarship and one-on-one research discussions with mentors from Google. Oh's work on a mobile system that allows interactions among various kinds of smart devices was recognized in the field of mobile computing. He developed a mobile platform that lets smart devices share diverse functions, including logins, payments, and sensors, providing user experiences that existing mobile platforms could not offer. Through cross-device functionality sharing, users can utilize multiple smart devices in a more convenient manner. The research was presented at the Annual International Conference on Mobile Systems, Applications, and Services (MobiSys) of the Association for Computing Machinery in July 2017. Oh said, "I would like to express my gratitude to my advisor, the professors in the School of Computing, and my lab colleagues. I will devote myself to carrying out more research in order to contribute to society." His advisor, Insik Shin, a professor in the School of Computing, said, "Being recognized as a Google PhD Fellow is an honor to both the student and KAIST. I strongly believe that Oh will take the next step by carrying out high-quality research."
2017.09.27
Multi-Device Mobile Platform for App Functionality Sharing
Case 1. Mr. Kim, an employee, logged on to his SNS account using a tablet PC at the airport while traveling overseas. However, a malicious virus was installed on the tablet PC, and some photos posted on his SNS were deleted by someone else.

Case 2. Mr. and Mrs. Brown are busy contacting credit card and game companies because their son, who likes games, purchased a million dollars' worth of game items using his smartphone.

Case 3. Mr. Park, who enjoys games, bought a sensor-based racing game through his tablet PC. However, he could not enjoy the racing game on his tablet because it was uncomfortable to tilt the device for game control.

The above cases illustrate some of the problems that can arise in a society filled with diverse smart devices, including smartphones. Recently, new technology has been developed to solve them easily. Professor Insik Shin from the School of Computing has developed Mobile Plus, a mobile platform that can share the functionalities of applications between smart devices. It is a novel technology that allows applications to share their functionalities without any modifications.

Smartphone users often use Facebook to log in to another SNS account like Instagram, or use a gallery app to post photos on their SNS. These examples are possible because the applications share their login and photo management functionalities. Functionality sharing lets users utilize smartphones in various convenient ways and allows app developers to create applications easily. However, current mobile platforms such as Android and iOS only support functionality sharing within a single device. Sharing functionalities across devices is burdensome for both developers and users, because developers would need to build more complex applications and users would need to install the same applications on every device.

To address this problem, Professor Shin's team developed platform technology that supports functionality sharing between devices. The main idea is to use virtualization to give applications running on separate devices the illusion that they run on a single device. The team achieved this by extending an RPC (Remote Procedure Call) scheme to multi-device environments (see the sketch below). This virtualization enables existing applications to share their functionalities without any modifications, regardless of the type of application, so users can employ them without additional purchases or updates. Mobile Plus can share hardware functionalities like cameras, microphones, and GPS as well as application functionalities such as logins, payments, and photo sharing, and its greatest advantage is this wide range of possible applications.

Professor Shin said, "Mobile Plus is expected to have great synergy with smart home and smart car technologies. It can provide novel user experiences (UXs) so that users can easily utilize the various applications of smart home/vehicle infotainment systems by using a smartphone as their hub."

This research was presented at ACM MobiSys, an international conference on mobile computing hosted in the United States on June 21.

Figure 1. Users can securely log on to SNS accounts by using their personal devices.
Figure 2. Parents can control the impulse shopping of their children.
Figure 3. Users can enjoy games more by using their smartphone as a controller.
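As a toy illustration of extending RPC across devices, the sketch below forwards a method call from a local proxy to a service running on another device, so the calling app behaves as if the functionality were local. The class name, wire format, and example call are all invented for illustration; Mobile Plus performs this transparently inside the platform rather than in app code.

```python
import json
import socket

class RemoteServiceProxy:
    """Forwards method calls to a service running on another device."""

    def __init__(self, host: str, port: int):
        self.addr = (host, port)

    def call(self, method: str, **kwargs):
        # One JSON request per connection: {"method": ..., "args": {...}}
        with socket.create_connection(self.addr) as s:
            s.sendall(json.dumps({"method": method, "args": kwargs}).encode())
            s.shutdown(socket.SHUT_WR)            # signal end of request
            reply = b"".join(iter(lambda: s.recv(4096), b""))
        return json.loads(reply)

# The app's own code is unchanged: it believes the camera is local.
# camera = RemoteServiceProxy("tablet.local", 9000)
# photo = camera.call("take_picture", resolution="1080p")
```

The virtualization argument is that because the proxy preserves the call interface, neither the calling app nor the serving app needs modification, which is what removes the burden from developers and users.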
2017.08.09
Students from Science Academies Shed a Light on KAIST
Recent KAIST statistics show that graduates of science academies distinguish themselves not only in their academic performance at KAIST but also in their professional careers after graduation. Every year, approximately 20% of newly enrolled KAIST students come from science academies; in the class of 2017, 170 students from science academies accounted for 22% of new enrollees. They also form a top-tier student group on campus: the proportion of students graduating early, either to enroll in graduate programs or to take jobs, indicates their excellent performance at KAIST.

There are eight science academies in Korea: Korea Science Academy of KAIST in Busan, Seoul Science High School, Gyeonggi Science High School, Daegu Science High School, Gwangju Science High School, Daejeon Science High School, Sejong Academy of Science and Arts, and Incheon Arts and Sciences Academy.

Recently, KAIST analyzed the 532 university graduates of the class of 2012. It found that 23 of the 63 graduates who came from science academies finished their degrees early, putting the early-graduation ratio for that group at 36.5%, significantly higher than that of students from other high schools.

Among the notable graduates is a student who made headlines by donating 30 million KRW to KAIST, the largest donation from an enrolled student on record. His story goes back to when Android smartphones were first being distributed. Seung-Gyu Oh, then a student in the School of Electrical Engineering, felt that existing subway apps were inconvenient, so in 2015 he built his own subway app that found the nearest subway lines. The app hit the market, ranked second in the subway app category, and attracted approximately five million users, generating advertising revenue. After the successful launch, Oh accepted a takeover offer from Daum Kakao and donated 30 million KRW to his alma mater. "Since high school, I have always felt that I received many benefits from my country and felt heavily responsible for it," the alumnus of Korea Science Academy and KAIST said. "I decided to make a donation to my alma mater, KAIST, because I wanted to return what I had received from my country." Oh now works for the web firm Daum Kakao.

On May 24, 2017, the 41st International Collegiate Programming Contest, hosted by the Association for Computing Machinery (ACM) and sponsored by IBM, was held in Rapid City, South Dakota in the US. It is a prestigious contest that has been held annually since 1977. In 2017, a total of 50,000 students from 2,900 universities in 104 countries participated in regional competitions, and approximately 400 students advanced to the fiercely contested final round. A KAIST team comprising Ji-Hoon Ko, Jong-Won Lee, and Han-Pil Kang from the School of Computing, all alumni of Gyeonggi Science High School, received the 'First Problem Solver' award and a bronze medal with a 3,000 USD cash prize.

Sung-Jin Oh, who also graduated from Korea Science Academy of KAIST, is a research professor at the Korea Institute for Advanced Study (KIAS).
He is the youngest recipient of the 'Young Scientist Award', which he received at the age of 27 for mathematically proving a hypothesis from Einstein's theory of general relativity. After graduating from KAIST, Oh earned his master's and doctoral degrees from Princeton University, completed a post-doctoral fellowship at UC Berkeley, and is now immersed in research at KIAS.

Heui-Kwang Noh from the Department of Chemistry and Kang-Min Ahn from the School of Computing, who were both selected for the presidential scholarship for science in 2014, graduated from Gyeonggi Science High School. Noh was recognized for his outstanding academic capacity and was also chosen for the 'GE Foundation Scholar-Leaders Program' in 2015. The program, established in 1992 by the GE Foundation, aims to foster talented post-secondary students who combine creativity and leadership; it selects five outstanding students and provides 3 million KRW per year for up to three years. Its grantees have become influential people in various fields, including professors, executives and staff members of national and international firms, and researchers, and they are making a substantial contribution to the development of science and engineering. Noh continues various activities, including an internship at 'Harvard-MIT Biomedical Optics' and the publication of a paper (as 3rd author) in ACS Omega of the American Chemical Society (ACS).

Ahn, a member of the Young Engineers Honor Society (YEHS) of the National Academy of Engineering of Korea, had an interest in startup businesses. In 2015 he founded DataStorm, a firm specializing in developing data solutions, and in 2016 it merged with the cloud back-office firm Jobis & Villains. Ahn is continuing his business activities, and this year he founded, and is successfully running, cocKorea.

"KAIST students who come from science academies form a top-tier group on campus and produce excellent results," said Associate Vice President for Admissions Hayong Shin. "KAIST is making every effort to assist these students so that they can perform to the best of their ability."

(Clockwise from top left: Seung-Gyu Oh, Sung-Jin Oh, Heui-Kwang Noh and Kang-Min Ahn)
2017.08.09
Professor Otfried Cheong Named as Distinguished Scientist by ACM
Professor Otfried Cheong (Schwarzkopf) of the School of Computing was named a 2016 Distinguished Scientist by the Association for Computing Machinery (ACM). The ACM recognized 45 Distinguished Members in the categories of Distinguished Scientist, Distinguished Educator, and Distinguished Engineer for their individual contributions to the field of computing. Professor Cheong is the sole recipient from a Korean institution. The recipients were selected from among the top 10 percent of ACM members with at least 15 years of professional experience and five years of continuous professional membership. He is known as one of the authors of the widely used textbook Computational Geometry: Algorithms and Applications and as the developer of Ipe, a vector graphics editor. Professor Cheong joined KAIST in 2005, after earning his doctorate from the Free University of Berlin in 1992. He previously taught at Utrecht University, Pohang University of Science and Technology, Hong Kong University of Science and Technology, and the Eindhoven University of Technology.
2017.04.17
Improving Traffic Safety with a Crowdsourced Traffic Violation Reporting App
KAIST researchers revealed that crowdsourced traffic violation reporting with smartphone-based continuous video capturing can dramatically change current policing practice on the road and significantly improve traffic safety.

Professor Uichin Lee of the Department of Industrial and Systems Engineering and the Graduate School of Knowledge Service Engineering at KAIST and his research team designed and evaluated Mobile Roadwatch, a mobile app that helps citizens record traffic violations with their smartphones and report the recorded videos to the police. The app supports continuous video recording, just like an onboard vehicle dashboard camera, and allows drivers to safely capture traffic violations by simply touching the smartphone screen while driving. Captured videos are automatically tagged with contextual information such as location and time, which serves as important evidence for the police to ticket violators. All captured videos can be conveniently reviewed, letting users decide which events to report to the police.

The team conducted a two-week field study to understand how drivers use Mobile Roadwatch. They found that drivers tended to capture all traffic risks, regardless of their own level of involvement or the seriousness of the risk. When it came to actual reporting, however, they tended to report only serious violations that could have led to accidents, such as traffic signal violations and illegal U-turns. After receiving feedback about their reports from the police, drivers typically felt very good about their contributions to traffic safety; some were also pleased to learn that offenders they considered deserving had received tickets. While participating in the Mobile Roadwatch campaign, drivers reported that they tried to drive as safely as possible and abide by traffic laws, both because they wanted to be fair enough to capture others' violations without feeling guilty and because they were afraid other drivers might capture their own violations.

Professor Lee said, "Our study participants answered that Mobile Roadwatch served as a very useful tool for reporting traffic violations, and they were highly satisfied with its features. Beyond simple reporting, our tool can be extended to support online communities, which help people actively discuss various local safety issues and work with the police and local authorities to solve them."

Korea and India were early adopters of video-based reporting of traffic violations to the police, and the number of reports has increased dramatically in recent years. For example, Korea's 'Looking for a Witness' service (released in April 2015) had received more than half a million reported violations as of November 2016. In the US, authorities started tapping into smartphone recordings by releasing video-based reporting apps such as ICE Blackbox and Mobile Justice. Professor Lee noted that these existing services cannot be used while driving, because none of them support continuous video recording and safe event capturing behind the wheel. His team has been incorporating advanced computer vision techniques into Mobile Roadwatch to automatically capture traffic violations and safety risks, including potholes and obstacles.
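The capture model described above, continuous recording plus one-touch tagging, can be sketched as a rolling frame buffer that snapshots and tags the surrounding footage on demand. Everything below (class and field names, window length) is an assumption for illustration, not the actual Mobile Roadwatch code.

```python
import collections
import time

class DashcamBuffer:
    """Rolling video buffer with one-touch, context-tagged event capture."""

    def __init__(self, fps=30, window_sec=20):
        # Keep only the most recent window of frames, like a dashcam.
        self.frames = collections.deque(maxlen=fps * window_sec)

    def add_frame(self, frame):
        self.frames.append(frame)                 # continuous recording

    def capture_event(self, gps_fix):
        """Freeze the last window of footage with contextual tags."""
        return {
            "time": time.time(),                  # when the driver tapped
            "location": gps_fix,                  # e.g., (lat, lon)
            "clip": list(self.frames),            # frames around the event
        }

# buf = DashcamBuffer()
# report = buf.capture_event(gps_fix=(36.37, 127.36))  # one screen touch
```

The ring buffer is what makes the single touch safe: the driver does not have to start recording before the event, because the preceding seconds are already in memory.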
The researchers will present their results in May at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2017) in Denver, CO, USA. Their research was supported by the KAIST-KUSTAR fund. (Caption: A driver capturing an event by touching the screen. Mobile Roadwatch supports continuous video recording and safe event capturing behind the wheel.)
2017.04.10
Furniture That Learns to Move by Itself
A novel strategy for displacing large objects is to attach relatively small vibration sources. After learning how several random bursts of vibration affect an object's pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a specified position.

Displacements of large objects induced by vibration are a common occurrence but generally result in unpredictable motion; think, for instance, of an unbalanced front-loading washing machine. For controlled movement, wheels or legs are usually preferred. Professor Daniel Saakes of the Department of Industrial Design and his team explored a strategy for moving everyday objects by harvesting external vibration rather than using a mechanical system with wheels. The principle may be useful for displacing large objects in situations where attaching wheels or lifting the object entirely is impossible, assuming the speed of the process is not a concern.

His team designed vibration modules that can easily be attached to furniture and objects, which could be a welcome creation for people with limited mobility, including the elderly. Embedding these vibration modules in mass-produced objects may provide a low-cost way to make almost any object mobile.

Vibration as a principle for directed locomotion has previously been applied in micro-robots. For instance, the three-legged Kilobots move thanks to centrifugal forces alternately generated by a pair of vibration motors on two of their legs. The unbalanced weight turns the robot into a ratchet, and the resulting motion is deterministic with respect to the input vibration. The team believes it is the first to add vibratory actuators to deterministically steer large objects regardless of their structural properties. The perturbation resulting from a particular pattern of vibration depends on myriad parameters, including, but not limited to, the microscopic properties of the contact surfaces. The key challenge is to empirically discover and select the sequence of vibration patterns that brings the object to the target pose.

The approach is as follows. In the first step, the system systematically explores the object's response by manipulating the amplitudes of the motors. This generates a pool of available moves (translations and rotations). From this pool, it then calculates the most efficient way (either in terms of path length or number of moves) to go from pose A to pose B using optimization strategies such as genetic algorithms; a toy version of this planning step is sketched below. The learning process may be repeated from time to time to account for changes in the mechanical response, at least for the patterns of vibration that contribute most to the change.

Prototype modules are made with eccentric rotating motors (type 345-002 Precision Microdrive) with a nominal force of 115 g, which proved sufficient to shake (and eventually locomote) four-legged IKEA chairs and small furniture such as tables and stools. The motors are powered by NiMH batteries and communicate wirelessly with a low-cost ESP8266 WiFi module. The team designed modules that are externally attached using straps, as well as motors embedded in furniture. To study the general method, the team used an overhead camera to track the chair and generate the pool of available moves. The team demonstrated that the system discovered pivot-like gaits, among others. However, as one might imagine, using a pre-computed sequence to move to a target pose does not produce perfect matches.
This is because the contact properties vary with location. Although this can be considered a secondary disturbance, it may in some cases be necessary to recompute the pool of moves from time to time; the chair could, for instance, move into a wet area, over a plastic carpet, and so on.

The principle, applied to furniture, is called "Ratchair," a portmanteau of "ratchet" and "chair." Ratchair was demonstrated at the 2016 ACM SIGGRAPH Emerging Technologies and won the DC-EXPO award, jointly organized by the Japanese Ministry of Economy, Trade and Industry (METI) and the Digital Content Association of Japan (DCAJ). At the DCEXPO Exhibition in fall 2016, the work was one of 20 Innovative Technologies and the only non-Japanese contribution.

*This article is from the KAIST Breakthroughs, the research newsletter of the College of Engineering. For more stories from the KAIST Breakthroughs, please visit http://breakthroughs.kaist.ac.kr

http://mid.kaist.ac.kr/projects/ratchair/
http://s2016.siggraph.org/content/emerging-technologies
https://www.dcexpo.jp/ko/15184

Figure 1. The vibration modules embedded in and attached to furniture.
Figure 2. A close-up of the vibration module.
Figure 3. A close-up of the embedded modules.
Figure 4. A close-up of the vibration motor.
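As a toy illustration of the planning step referenced above, the sketch below chains moves from a learned pool until the pose error is small. A greedy planner stands in for the genetic algorithms mentioned in the article, and the move-pool values are invented for illustration.

```python
import numpy as np

# pose = (x, y, heading); each learned move is the average pose change
# its vibration pattern produces (hypothetical values).
move_pool = {
    "pattern_A": np.array([0.02, 0.00, 0.0]),   # small forward slide (m)
    "pattern_B": np.array([0.00, 0.02, 0.0]),   # small sideways slide (m)
    "pattern_C": np.array([0.00, 0.00, 0.1]),   # small rotation (rad)
}

def plan(start, target, max_steps=200, tol=0.01):
    """Greedily chain vibration patterns until close to the target pose."""
    pose, sequence = np.array(start, dtype=float), []
    for _ in range(max_steps):
        error = np.array(target) - pose
        if np.linalg.norm(error) < tol:
            break
        # choose the move that most reduces the remaining pose error
        name, delta = min(move_pool.items(),
                          key=lambda kv: np.linalg.norm(error - kv[1]))
        pose += delta
        sequence.append(name)
    return sequence

print(plan(start=(0, 0, 0), target=(0.1, 0.06, 0.2)))
```

In the real system the pool is re-learned when the surface response drifts, which is exactly the recomputation discussed above.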
2017.03.23
Professor Naehyuck Chang Appointed a 2015 Fellow by the ACM
The Association for Computing Machinery (ACM), the world's largest educational and scientific computing society, announced its 2015 class of ACM Fellows on December 8, 2015. Professor Naehyuck Chang of the School of Electrical Engineering at KAIST was among the 42 new members recognized for their contributions to the development and application of computing in areas ranging from data management and spoken-language processing to robotics and cryptography. Professor Chang is known for his leading research in power and energy optimization, from embedded systems applications to large-scale energy systems, including device- and system-level power and energy measurement and estimation, liquid crystal display power reduction, dynamic voltage scaling, hybrid electrical energy storage systems, and photovoltaic cell arrays. He is the fourth Korean to be named an ACM Fellow. Professor Chang is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Editor-in-Chief of the journal ACM Transactions on Design Automation of Electronic Systems (TODAES). He served as President of the ACM Special Interest Group on Design Automation in 2012. For additional information about the 2015 ACM Fellows, go to http://www.acm.org/press-room/news-releases/2015/fellows-2015
2015.12.11