KAIST
NEWS
augmented reality
See-through exhibitions using smartphones: KAIST develops the AR magic lens, WonderScope
WonderScope shows what’s underneath the surface of an object through augmented reality technology.

< Photo 1. Demonstration at ACM SIGGRAPH >

- A KAIST research team led by Professor Woohun Lee from the Department of Industrial Design and Professor Geehyuk Lee from the School of Computing has developed a smartphone “appcessory” called WonderScope that can easily add an augmented reality (AR) perspective to the surface of exhibits
- The research won an Honorable Mention for Emerging Technologies Best in Show at ACM SIGGRAPH, one of the largest international conferences on computer graphics and interaction
- The technology was improved and validated through real-life applications in three special exhibitions: one at the Geological Museum of the Korea Institute of Geoscience and Mineral Resources (KIGAM) in 2020, and two at the National Science Museum in 2021 and 2022
- The technology is expected to be used in public science exhibitions and museums, as well as in interactive teaching materials that stimulate children’s curiosity

A KAIST research team led by Professor Woohun Lee from the Department of Industrial Design and Professor Geehyuk Lee from the School of Computing has developed a novel augmented reality (AR) device, WonderScope, which displays the inside of an object directly on its surface. By attaching WonderScope to a mobile device and connecting it via Bluetooth, users can see through exhibits as if looking through a magic lens.

Many science museums have incorporated AR apps for mobile devices. Such apps add digital information to an exhibition, providing a unique experience. However, visitors must watch the screen from some distance away from the exhibited items, which often causes them to focus more on the digital content than on the exhibits themselves. In other words, the distance and distractions between the exhibit and the mobile device can make visitors feel detached from the exhibition. To solve this problem, museums needed a magic AR lens that could be used directly on the surface of an item.

To accomplish this, a smartphone must know exactly where on the surface of an object it is placed. Generally, this would require an additional recognition device inside the item or on its surface, or a special pattern printed on the surface. Realistically speaking, these are impractical solutions, as exhibits would either appear overly complex or face spatial restrictions. WonderScope, on the other hand, uses a much more practical method to identify the location of a smartphone on the surface of an exhibit. First, it reads a small RFID tag attached to the surface of the object, and then it tracks the moving smartphone by accumulating its relative movements based on readings from an optical displacement sensor and an acceleration sensor. The research team also took into account the height of the smartphone and the characteristics of the surface profile in order to calculate the device’s position more accurately. By attaching or embedding RFID tags on exhibits, visitors can easily experience the effect of a magic AR lens through their smartphones. For wider use, WonderScope must be able to locate itself on various types of exhibit surfaces.
To this end, WonderScope uses readings from an optical displacement sensor and an acceleration sensor with complementary characteristics, allowing stable localization on various textures including paper, stone, wood, plastic, acrylic, and glass, as well as on surfaces with physical patterns or irregularities. As a result, WonderScope can identify its location from as close as 4 centimeters above an object, also enabling simple three-dimensional interactions near the surface of exhibits. The research team developed various case project templates and WonderScope support tools to make it easy to build smartphone apps with Unity, a general-purpose virtual reality (VR) and game engine. WonderScope is also compatible with various types of devices running the Android operating system, including smartwatches, smartphones, and tablets, allowing it to be applied to exhibitions in many forms.

< Photo 2. Demonstration with a human body model >
< Photo 3. Demonstration of the underground mineral exploration game >
< Photo 4. Demonstration of the Apollo 11 moon exploration experience >

The research team developed WonderScope with funding from the science and culture exhibition enhancement support project of the Ministry of Science and ICT. Between October 27, 2020 and February 28, 2021, WonderScope was used to observe underground volcanic activity and the insides of volcanic rocks at “There Once was a Volcano”, a special exhibition held at the Geological Museum of the Korea Institute of Geoscience and Mineral Resources (KIGAM). From September 28 to October 3, 2021, it was used to observe the surface of Jung-moon-kyung (a bronze mirror with a fine linear design) at the special exhibition “A Bronze Mirror Shines on Science” at the National Science Museum. And from August 2 to October 3, 2022, it was applied to a moon landing simulation at “The Special Exhibition on Moon Exploration”, also at the National Science Museum. Through these field demonstrations, the research team has improved the performance and usability of WonderScope.

< Photo 5. Observation of surface corrosion of the main gate >

The research team demonstrated WonderScope at the Emerging Technologies forum of ACM SIGGRAPH 2022, a computer graphics and interaction technology conference held in Vancouver, Canada on August 8-11, 2022. At this conference, where the latest interactive technologies are introduced, the team won an Honorable Mention for Best in Show. The judges commented that “WonderScope will be a new technology that provides the audience with a unique joy of participation during their visits to exhibitions and museums.”

< Photo 6. Cover of Digital Creativity >

WonderScope is a cylindrical “appcessory” module, 5 cm in diameter and 4.5 cm in height. It is small enough to be easily attached to a smartphone and embedded in most exhibits. Professor Woohun Lee of the KAIST Department of Industrial Design, who supervised the research, said, “WonderScope can be applied in many ways, not only to educational exhibitions but also to industrial ones.” He added, “We also expect it to be used as an interactive teaching tool that stimulates children’s curiosity.”

Introductory video of WonderScope: https://www.youtube.com/watch?v=X2MyAXRt7h4&t=7s
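The localization scheme described above is essentially dead reckoning anchored by RFID: reading a tag fixes an absolute position on the exhibit surface, and the phone’s subsequent motion is accumulated from the displacement sensors. Below is a minimal Python sketch of that idea; the class and method names and the fixed blending weight are hypothetical illustrations, not WonderScope’s actual API or fusion algorithm.

    # Hypothetical sketch of RFID-anchored dead reckoning on an exhibit surface.
    # An RFID tag read gives an absolute anchor point; optical-displacement and
    # accelerometer readings are then accumulated from that anchor.
    from dataclasses import dataclass

    @dataclass
    class SurfacePosition:
        x_mm: float
        y_mm: float

    class SurfaceTracker:
        def __init__(self):
            self.position = None  # unknown until the first RFID tag is read

        def on_rfid_tag(self, tag_x_mm: float, tag_y_mm: float) -> None:
            """A tag read gives an absolute position on the exhibit surface."""
            self.position = SurfacePosition(tag_x_mm, tag_y_mm)

        def on_displacement(self, dx_optical, dy_optical, dx_accel, dy_accel,
                            optical_weight: float = 0.8) -> None:
            """Accumulate relative motion, blending the two sensors.

            The fixed weight is a stand-in for the surface-dependent fusion
            the article alludes to (e.g. trusting the accelerometer more on
            glass, where optical readings tend to be unreliable).
            """
            if self.position is None:
                return  # cannot dead-reckon before the first tag read
            dx = optical_weight * dx_optical + (1 - optical_weight) * dx_accel
            dy = optical_weight * dy_optical + (1 - optical_weight) * dy_accel
            self.position = SurfacePosition(self.position.x_mm + dx,
                                            self.position.y_mm + dy)

    # Usage: read a tag at (120, 80) mm, then slide the phone about 5 mm right.
    tracker = SurfaceTracker()
    tracker.on_rfid_tag(120.0, 80.0)
    tracker.on_displacement(5.0, 0.0, 4.6, 0.2)
    print(tracker.position)  # roughly SurfacePosition(x_mm=124.9, y_mm=80.0)

The height and surface-profile corrections mentioned in the article are omitted here for brevity.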
2022.10.24
View 7013
K-Glass 3 Offers Users a Keyboard to Type Text
KAIST researchers have upgraded their smart glasses with a low-power multicore processor that employs stereo vision and deep-learning algorithms, making the user interface and experience more intuitive and convenient.

K-Glass, the augmented reality (AR) smart glasses first developed by KAIST in 2014, with a second version released in 2015, is back with an even stronger model. The latest version, which KAIST researchers are calling K-Glass 3, allows users to text a message or type in keywords for Internet surfing by offering a virtual text keyboard and even a virtual piano keyboard.

Currently, most wearable head-mounted displays (HMDs) suffer from limited user interfaces, short battery life, and heavy weight. Some HMDs, such as Google Glass, use a touch panel and voice commands as an interface, but they are considered merely an extension of smartphones and are not optimized for wearable smart glasses. Recently, gaze recognition was proposed for HMDs including K-Glass 2, but gaze alone cannot serve as a natural user interface (UI) and experience (UX) due to its limited interactivity and lengthy gaze-calibration time, which can take up to several minutes.

As a solution, Professor Hoi-Jun Yoo and his team from the Electrical Engineering Department recently developed K-Glass 3 with a low-power natural UI and UX processor. This processor is composed of a pre-processing core that implements stereo vision, seven deep-learning cores that accelerate real-time scene recognition within 33 milliseconds, and one rendering engine for the display.

The stereo-vision camera, located on the front of K-Glass 3, works in a manner similar to three-dimensional (3D) sensing in human vision. The camera’s two lenses, offset horizontally from one another just like the left and right eyes that produce depth perception, capture the same objects or scenes and combine the two different images to extract spatial depth information, which is necessary to reconstruct 3D environments. The camera’s vision algorithm consumes 20 milliwatts on average, allowing it to operate in the Glass for more than 24 hours without interruption.

The research team adopted deep-learning multi-core technology dedicated to mobile devices. This technology has greatly improved the Glass’s recognition accuracy for images and speech, while shortening the time needed to process and analyze data. In addition, the Glass’s multi-core processor is advanced enough to become idle when it detects no motion from the user, and it executes complex deep-learning algorithms with minimal power to achieve high performance.

Professor Yoo said, “We have succeeded in fabricating a low-power multi-core processor that consumes only 126 milliwatts of power with a high efficiency rate. It is essential to develop a smaller, lighter, and low-power processor if we want to incorporate the widespread use of smart glasses and wearable devices into everyday life. K-Glass 3’s more intuitive UI and convenient UX permit users to enjoy enhanced AR experiences such as a keyboard or a better, more responsive mouse.”

Along with the research team, UX Factory, a Korean UI and UX developer, participated in the K-Glass 3 project.
These research results, entitled “A 126.1mW Real-Time Natural UI/UX Processor with Embedded Deep-Learning Core for Low-Power Smart Glasses” (lead author: Seong-Wook Park, a doctoral student in the Electrical Engineering Department, KAIST), were presented at the 2016 IEEE (Institute of Electrical and Electronics Engineers) International Solid-State Circuits Conference (ISSCC), which took place January 31-February 4, 2016 in San Francisco, California.

YouTube Link: https://youtu.be/If_anx5NerQ

Figure 1: K-Glass 3. K-Glass 3 is equipped with a stereo camera, dual microphones, a WiFi module, and eight batteries to offer higher recognition accuracy and enhanced augmented reality experiences than previous models.

Figure 2: Architecture of the Low-Power Multi-Core Processor. K-Glass 3’s processor is designed to include several cores for pre-processing, deep-learning, and graphic rendering.

Figure 3: Virtual Text and Piano Keyboard. K-Glass 3 can detect hands and recognize their movements to provide users with augmented reality applications such as a virtual text or piano keyboard.
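The depth-from-disparity principle behind K-Glass 3’s stereo camera reduces to the classic triangulation formula Z = f × B / d, where f is the focal length, B the baseline between the two lenses, and d the horizontal disparity of a point between the two images. The Python sketch below illustrates the calculation; the focal length and baseline values are illustrative placeholders, not K-Glass 3’s actual camera parameters.

    def depth_from_disparity(disparity_px: float,
                             focal_length_px: float = 700.0,
                             baseline_m: float = 0.06) -> float:
        """Classic stereo triangulation: Z = f * B / d.

        disparity_px: horizontal shift of the same point between the left
        and right images, in pixels. focal_length_px and baseline_m are
        illustrative placeholders, not K-Glass 3's camera parameters.
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_length_px * baseline_m / disparity_px

    # A point that shifts 42 px between the two views of a 6 cm-baseline rig
    # with a 700 px focal length lies about 1 metre away.
    print(round(depth_from_disparity(42.0), 2))  # -> 1.0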
2016.02.26
View 11963
Professor Woontack Woo Demonstrates an Optical Platform Technology for Augmented Reality at Smart Cloud Show
Professor Woontack Woo of the Graduate School of Culture Technology at KAIST participated in the Smart Cloud Show, a technology exhibition hosted by the university’s Augmented Human Research Center, and presented the latest development of his research: an optical platform system for augmented reality. The event took place on September 16-17, 2015 at the Grand Seoul Nine Tree Convention Center in Seoul.

At the event, Professor Woo introduced smart glasses with an embedded augmented reality system that permit remote collaboration between an avatar and the user’s hand. Previous remote collaboration systems were difficult for ordinary users to employ because of their two-dimensional screens and complicated virtual reality setups. With the new technology, however, the camera attached to the augmented reality (AR) glasses recognizes and tracks the user’s hand for collaboration. The avatar in the virtual space and the user’s hand in real space interact in real time.

The key to this technology is a stable, real-time hand-tracking technique that detects the hand’s location and recognizes finger movements even in situations of self-occlusion. Through this method, a user can touch and manipulate augmented content as if it were a real-life object, thereby collaborating remotely with another, physically distant user by linking his or her movements to an avatar. If this technology is adopted widely, it may bring economic benefits such as increased productivity due to lower mobility costs and a reduction in social overhead costs as the need for long-distance travel decreases.

Professor Woo said, “This technology will provide us with a greater opportunity for collaboration, without necessarily requiring physical travel, which can be widely used in the fields of medicine, education, entertainment, and tourism.”

Professor Woo plans to present his research results on hand-movement tracking and detection at the 12th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2015), to be held on October 28-30, 2015 at KINTEX in Goyang, Korea. He will also present a research paper on remote collaboration at the ICAT-EGVE 2015 conference, the merger of the 25th International Conference on Artificial Reality and Telexistence (ICAT 2015) and the 20th Eurographics Symposium on Virtual Environments (EGVE 2015), which will take place on October 28-30, 2015 at the Kyoto International Community House, Kyoto, Japan.
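The team’s hand-tracking method itself is not public in this article. As a rough modern analogue, the off-the-shelf MediaPipe Hands library can extract per-frame fingertip positions from a single camera, which is the kind of signal a remote-collaboration avatar could be driven by. The snippet below is an illustrative sketch only, not the KAIST team’s implementation.

    # Illustrative only: a modern off-the-shelf hand tracker (MediaPipe Hands),
    # not the KAIST team's own method. It prints index-fingertip coordinates
    # per frame, the kind of input an avatar-driven collaboration system uses.
    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(static_image_mode=False,
                                     max_num_hands=1,
                                     min_detection_confidence=0.5)
    cap = cv2.VideoCapture(0)  # the camera worn on (or pointed at) the user

    for _ in range(300):  # roughly ten seconds of video at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # Landmark 8 is the index fingertip; coordinates are normalized 0..1.
            tip = results.multi_hand_landmarks[0].landmark[8]
            print(f"index fingertip at ({tip.x:.2f}, {tip.y:.2f})")

    cap.release()
    hands.close()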
2015.09.16
View 7661
'Mirror or Mirror' Exhibition at Dongdaemun Design Plaza
An exhibition called “Mirror or Mirror,” displaying the integration of fashion design and technology, took place at Dongdaemun Design Plaza (DDP) in Seoul from July 18-25, 2015. DDP is at the center of Korea’s fashion hub. The exhibition was created by Professor Daniel Pieter Saakes of the Industrial Design Department at KAIST and introduced a new design system reinforced with an interactive technology that incorporates augmented reality into the design process.

Users stand before the Mirror or Mirror system and, using augmented reality, can design their own fashion items, including clothes, based on their needs and fashion preferences. Augmented reality allows users to draw their favorite patterns or new designs over their body, enabling them to check the result immediately and try out a variety of different designs right away.

Professor Saakes said, “Fashion has always been a way to express individual and personal style. With our system, people can easily fulfill such desires, customizing their own designs.”

At the exhibition, visitors also had the opportunity to produce their own shirts using the Mirror or Mirror system.

Picture 1: A user wears a newly designed virtual shirt over her body using augmented reality provided by the Mirror or Mirror system.
Picture 2: The shirt was designed and produced through the Mirror or Mirror system.
2015.07.31
View 10007
KAIST develops an extremely low-power, high-performance head-mounted display embedding an augmented reality chip
Walking around the streets searching for a place to eat will be no hassle when a head-mounted display (HMD) becomes affordable and ubiquitous. Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed K-Glass, a wearable, hands-free HMD that enables users to find restaurants while checking out their menus. If a K-Glass user walks up to a restaurant and looks at its name, today’s menu and a 3D image of the food pop up. The Glass can even show the number of tables available inside the restaurant.

K-Glass makes this possible because of its built-in augmented reality (AR) processor. Unlike virtual reality, which replaces the real world with a computer-simulated environment, AR incorporates digital data generated by the computer into the user’s reality. With computer-generated sensory inputs such as sound, video, graphics, or GPS data, the user’s real, physical world becomes live and interactive. Augmentation takes place in real time and in semantic context with the surrounding environment: for example, a menu is overlaid on a restaurant’s signboard as the user passes by, rather than irrelevant information such as an airline flight schedule.

Most commonly, location-based or computer-vision services are used to generate AR effects. Location-based services activate motion sensors to identify the user’s surroundings, whereas computer-vision services use algorithms such as facial, pattern, and optical character recognition, or object and motion tracking, to distinguish images and objects. Many current HMDs deliver augmented reality experiences by scanning markers or barcodes printed on the backs of objects. The AR system tracks the codes or markers to identify objects and then aligns them with virtual content. However, this approach cannot recognize objects or spaces that lack barcodes, QR codes, or markers, particularly in outdoor environments.

To solve this problem, Hoi-Jun Yoo, Professor of Electrical Engineering at KAIST, and his team developed, for the first time in the world, an AR chip that works just like human vision. The processor is based on the Visual Attention Model (VAM), which duplicates the human brain’s ability to process visual data. The VAM, almost unconsciously or automatically, extracts the most salient and relevant information about the environment, discarding data that do not need to be processed. As a result, the processor can dramatically speed up the computation of complex AR algorithms.

The AR processor has a data-processing network similar to the central nervous system of the human brain. When the brain perceives visual data, different sets of interconnected neurons work concurrently on each fragment of a decision-making process; one group’s work is relayed to another group of neurons for the next round of the process, which continues until a set of decider neurons determines the character of the data. Likewise, the artificial neural network allows parallel data processing, alleviating data congestion and reducing power consumption significantly.

KAIST’s AR processor, produced using a 65 nm (nanometer) manufacturing process with an area of 32 mm², delivers 1.22 TOPS (tera-operations per second) of peak performance when running at 250 MHz and consumes 778 milliwatts on a 1.2 V power supply.
The ultra-low-power processor achieves a high energy efficiency of 1.57 TOPS/W under real-time operation with a 30 fps/720p video camera, a 76% improvement in power conservation over other devices. HMDs currently available on the market, including Project Glass, whose battery lasts only two hours, have so far shown poor performance.

Professor Yoo said, “Our processor can work for long hours without sacrificing K-Glass’s high performance, making it an ideal mobile gadget or wearable computer that users can wear for almost the whole day.” He further commented: “HMDs will become the next mobile device, eventually taking over smartphones. Their markets have been growing fast, and it’s really a matter of time before mobile users embrace an optical see-through HMD as part of their daily lives. Through augmented reality, we will have a richer, deeper, and more powerful reality in all aspects of our life, from education, business, and entertainment to art and culture.”

The KAIST team presented a research paper at the International Solid-State Circuits Conference (ISSCC) held on February 9-13, 2014 in San Francisco, CA, entitled “1.22TOPS and 1.52mW/MHz Augmented Reality Multi-Core Processor with Neural Network NoC for HMD Applications.”

YouTube Link: http://www.youtube.com/watch?v=wSqY30FOu2s&feature=c4-overview&list=UUirZA3OFhxP4YFreIJkTtXw
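For reference, the 1.57 TOPS/W efficiency figure quoted above is consistent with the peak-throughput and power numbers reported for the chip; a quick check in Python (values taken from the article):

    # Sanity check of the efficiency figure using the numbers quoted above.
    peak_tops = 1.22   # tera-operations per second at 250 MHz
    power_w = 0.778    # 778 milliwatts on a 1.2 V supply
    print(f"{peak_tops / power_w:.2f} TOPS/W")  # -> 1.57 TOPS/W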
2014.02.20
View 15563