KAIST NEWS

SIGGRAPH
“3D sketch” Your Ideas and Bring Them to Life, Instantly!
Professor Seok-Hyung Bae's research team at the Department of Industrial Design has developed a novel 3D sketching system that rapidly creates animated 3D concepts through simple user interactions, like sketching on a piece of paper or playing with a toy.

Foldable drones, transforming vehicles, and multi-legged robots from sci-fi movies are becoming commonplace thanks to technological progress. Designing them, however, remains a difficult challenge even for skilled experts, because complex design decisions must be made regarding not only their form but also their structure, poses, and motions, all of which are interdependent. Creating a 3D concept composed of multiple moving parts connected by different types of joints with a traditional 3D CAD tool, which is better suited to precise and elaborate modeling, is a painstaking and time-consuming process. This presents a major bottleneck in the early stage of design, when as many ideas as possible should be tried and discarded quickly in order to explore a wide range of possibilities in the shortest amount of time.

Professor Bae's team focused on the freehand sketches, drawn with pen on paper, that serve as the starting point for virtually all design projects. This led them to develop a 3D sketching technology that generates the desired 3D curves from rough but expressive 2D strokes drawn with a digital stylus on a tablet. Their latest research helps designers bring their 3D sketches to life almost instantly: using the intuitive set of multi-touch gestures the team designed and implemented, designers can handle the 3D sketches they are working on with their fingers, as if playing with toys, and put them into animation in no time.

< Figure 1. A novel 3D sketching system for rapidly designing articulated 3D concepts with a small set of coherent pen and multi-touch gestures. (a) Sketching: A 3D sketch curve is created by marking a pen stroke that is projected onto a sketch plane widget. (b) Segmenting: Entire or partial sketch curves are added to separate parts that serve as links in the kinematic chain. (c) Rigging: Repeatedly demonstrating the desired motion of a part leaves behind a trail, from which the system infers a joint. (d) Posing: Desired poses can be achieved by actuating joints via forward or inverse kinematics. (e) Filming: A sequence of keyframes specifying desired poses and viewpoints is connected into a smooth motion. >

< Figure 2. (a) Concept drawing of an autonomous excavator. It features (b, c) four caterpillars that swivel for high maneuverability, (d) an extendable boom and a bucket connected by multiple links, and (e) a rotating platform. The concept's designer, who had 8 years of work experience, estimated that it would take 1-2 weeks to express and communicate such a complex articulated object with existing tools. With the proposed system, it took only 2 hours and 52 minutes. >

The major findings of this work were published under the title "Rapid Design of Articulated Objects" in ACM Transactions on Graphics (impact factor: 7.403), the top international journal in the field of computer graphics, with Joon Hyub Lee, a Ph.D. student in the Department of Industrial Design, as the first author. The work was also presented at ACM SIGGRAPH 2022 (h5-index: 103), the world's largest international academic conference in the field, held in August in Vancouver, Canada.
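The sketching interaction in Figure 1(a), in which a 2D pen stroke becomes a 3D curve by being projected onto a sketch plane widget, boils down to one ray-plane intersection per stroke sample. The following is a minimal illustrative sketch of that idea, assuming a simple pinhole camera; the function names and setup are our assumptions, not the team's implementation.

```python
# A minimal sketch of stroke-to-plane projection, as in Figure 1(a):
# each 2D stroke sample defines a camera ray, and intersecting that ray
# with the sketch plane yields one point of the 3D sketch curve.
# The pinhole-camera setup and names are illustrative assumptions.
import numpy as np

def stroke_to_3d(cam_pos, stroke_rays, plane_point, plane_normal):
    """Intersect each stroke sample's view ray with the sketch plane.

    cam_pos      : (3,) camera position
    stroke_rays  : (N, 3) unit view rays through the 2D stroke samples
    plane_point  : (3,) any point on the sketch plane widget
    plane_normal : (3,) unit normal of the sketch plane
    Returns an (M, 3) polyline: the 3D sketch curve.
    """
    curve = []
    for ray in stroke_rays:
        denom = float(ray @ plane_normal)
        if abs(denom) < 1e-8:        # ray parallel to the plane: skip sample
            continue
        t = float((plane_point - cam_pos) @ plane_normal) / denom
        if t > 0:                    # keep hits in front of the camera only
            curve.append(cam_pos + t * ray)
    return np.asarray(curve)
```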
The ACM SIGGRAPH 2022 conference was reportedly attended by over 10,000 participants, including researchers, artists, and developers from world-renowned universities; film, animation, and game studios such as Marvel, Pixar, and Blizzard; high-tech manufacturers such as Lockheed Martin and Boston Dynamics; and metaverse platform companies such as Meta and Roblox.

< Figure 3. The findings of Professor Bae's research team were published in ACM Transactions on Graphics, the top international academic journal in the field of computer graphics, and presented at ACM SIGGRAPH 2022, the field's largest international academic conference, held in early August in Vancouver, Canada. The team's live demo at the Emerging Technologies program was highly praised by numerous academics and industry officials and received an Honorable Mention. >

The team was also invited to present their technical paper as a demo and a special talk at the Emerging Technologies program at ACM SIGGRAPH 2022, as one of the top three most impactful technologies. The live performance, in which Hanbit Kim, a Ph.D. student in the Department of Industrial Design at KAIST and a co-author, sketched and animated a sophisticated animal-shaped robot from scratch in a matter of minutes, wowed the audience and won the Honorable Mention Award from the jury. Edwin Catmull, the co-founder of Pixar and a keynote speaker at the conference, praised the team's research on 3D sketching as "really excellent work" and "a kind of tool that would be useful to Pixar's creative model designers."

The technology went viral in Japan after being featured by an online IT media outlet, attracting more than 600K views. It received a special award from the Digital Content Association of Japan (DCAJ) and was exhibited for three days in Tokyo in November as part of Inter BEE 2022, the largest broadcasting and media expo in Japan.

"The more we come to understand how designers think and work, the more effective design tools can be built around that understanding," said Professor Bae, explaining that "the key is to integrate different algorithms into a harmonious system of intuitive interactions." He added, "This work wouldn't have been possible if it weren't for the convergent research environment cultivated by the Department of Industrial Design at KAIST, in which all students see themselves not only as aspiring creative designers, but also as practical engineers."

By enabling designers to produce highly expressive animated 3D concepts far more quickly and easily than with existing methods, this new tool is expected to revolutionize design practices and processes in the content creation, manufacturing, and metaverse-related industries. This research was funded by the Ministry of Science and ICT, and the National Research Foundation of Korea.

More info: https://sketch.kaist.ac.kr/publications/2022_siggraph_rapid_design
Video: https://www.youtube.com/watch?v=rsBl0QvSDqI

< Figure 4. From left: Ph.D. students Hanbit Kim and Joon Hyub Lee, and Professor Bae of the Department of Industrial Design, KAIST >
2022.11.23
Furniture That Learns to Move by Itself
A novel strategy for displacing large objects by attaching relatively small vibration sources: after learning how several random bursts of vibration affect an object's pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a specified position.

Displacements of large objects induced by vibration are a common occurrence, but generally result in unpredictable motion; think, for instance, of an unbalanced front-loading washing machine. For controlled movement, wheels or legs are usually preferred. Professor Daniel Saakes of the Department of Industrial Design and his team explored a strategy for moving everyday objects by harvesting external vibration rather than using a mechanical system with wheels. This principle may be useful for displacing large objects in situations where attaching wheels or lifting them entirely is impossible, assuming the speed of the process is not a concern. The team designed vibration modules that can easily be attached to furniture and other objects, which could be a welcome creation for people with limited mobility, including the elderly. Embedding these vibration modules in mass-produced objects may provide a low-cost way to make almost any object mobile.

Vibration as a principle for directed locomotion has previously been applied in micro-robots. For instance, the three-legged Kilobots move thanks to centrifugal forces alternately generated by a pair of vibration motors on two of their legs. The unbalanced weight turns the robot into a ratchet, and the resulting motion is deterministic with respect to the input vibration. To the best of the team's knowledge, this is the first work to add vibratory actuators to deterministically steer large objects regardless of their structural properties.

The perturbation resulting from a particular pattern of vibration depends on a myriad of parameters, including but not limited to the microscopic properties of the contact surfaces. The key challenge is to empirically discover and select the sequence of vibration patterns that brings the object to the target pose. The approach is as follows. In a first step, the system systematically explores the object's response by manipulating the amplitudes of the motors, which generates a pool of available moves (translations and rotations). From this pool it then calculates the most efficient way, in terms of either path length or number of moves, to go from pose A to pose B, using optimization strategies such as genetic algorithms (a simplified planning sketch follows below). The learning process may be repeated from time to time to account for changes in the mechanical response, at least for the patterns of vibration that contribute most to the change.

Prototype modules were built with eccentric rotating motors (type 345-002, Precision Microdrive) with a nominal force of 115 g, which proved sufficient to shake (and eventually locomote) four-legged IKEA chairs and small furniture such as tables and stools. The motors are powered by NiMH batteries and communicate wirelessly via a low-cost ESP8266 WiFi module. The team designed modules that are externally attached using straps, as well as motors embedded in furniture. To study the general method, the team used an overhead camera to track the chair and generate the pool of available moves, and demonstrated that the system discovered pivot-like gaits, among others.
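The planning step described above can be illustrated with a minimal sketch. The optimization strategies the team mentions (such as genetic algorithms) are replaced here by a simple greedy search over the learned move pool, purely for illustration; the pose representation and move model are our simplifying assumptions.

```python
# A minimal planning sketch, assuming each learned "move" is a rigid-body
# delta (dx, dy, dtheta) in the object's own frame. A greedy search stands
# in for the optimization strategies (e.g. genetic algorithms) used in the
# actual work; it also ignores the contact-surface variability discussed
# below.
import math

def apply_move(pose, move):
    """Apply a body-frame move (dx, dy, dtheta) to a world-frame pose."""
    x, y, th = pose
    dx, dy, dth = move
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def cost(pose, goal):
    """Distance to goal: positional error plus weighted heading error."""
    return math.hypot(goal[0] - pose[0], goal[1] - pose[1]) \
        + 0.5 * abs(goal[2] - pose[2])

def plan_moves(start, goal, move_pool, max_steps=500, tol=0.05):
    """Greedily chain learned vibration moves until close to the goal."""
    pose, seq = start, []
    for _ in range(max_steps):
        if cost(pose, goal) < tol:
            break
        best = min(move_pool, key=lambda m: cost(apply_move(pose, m), goal))
        if cost(apply_move(pose, best), goal) >= cost(pose, goal):
            break                      # no move makes progress: give up
        pose = apply_move(pose, best)
        seq.append(best)
    return seq, pose

# Example: a pool of three moves discovered during random exploration.
pool = [(0.02, 0.00, 0.0), (0.00, 0.00, 0.1), (0.01, -0.01, -0.05)]
moves, final = plan_moves((0, 0, 0), (0.5, -0.2, 0.0), pool)
print(len(moves), "moves, final pose:", final)
```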
However, as one might expect, executing a pre-computed sequence of moves does not land the object exactly on the target pose, because the contact properties vary with location. Although this can be considered a secondary disturbance, it may in some cases be necessary to recompute the matrix of moves every now and then; the chair could, for instance, move into a wet area or onto a plastic carpet.

The principle and its application to furniture is called "Ratchair", a portmanteau of "ratchet" and "chair". Ratchair was demonstrated at the ACM SIGGRAPH 2016 Emerging Technologies program and won the DC-EXPO award, jointly organized by the Japanese Ministry of Economy, Trade and Industry (METI) and the Digital Content Association of Japan (DCAJ). At the DC-EXPO exhibition in fall 2016, the work was one of 20 Innovative Technologies and the only non-Japanese contribution.

*This article is from KAIST Breakthroughs, the research newsletter of the College of Engineering. For more stories from KAIST Breakthroughs, please visit http://breakthroughs.kaist.ac.kr

http://mid.kaist.ac.kr/projects/ratchair/
http://s2016.siggraph.org/content/emerging-technologies
https://www.dcexpo.jp/ko/15184

Figure 1. The vibration modules embedded in and attached to furniture.
Figure 2. A close-up of the vibration module.
Figure 3. A close-up of the embedded modules.
Figure 4. A close-up of the vibration motor.
2017.03.23
A KAIST Student Team Wins the ACM UIST 2014 Student Innovation Contest
A KAIST team consisting of students from the Departments of Industrial Design and Computer Science participated in the ACM UIST 2014 Student Innovation Contest and won first prize in the People's Choice category.

The Association for Computing Machinery (ACM) Symposium on User Interface Software and Technology (UIST) is an international forum to promote innovation in human-computer interfaces. It takes place annually and is sponsored by the ACM Special Interest Groups on Computer-Human Interaction (SIGCHI) and Computer Graphics (SIGGRAPH). The conference brings together professionals in the fields of graphical and web user interfaces, tangible and ubiquitous computing, virtual and augmented reality, multimedia, and input and output devices.

The Student Innovation Contest has been held during the UIST conference since 2009 to foster new interactions on state-of-the-art hardware. The participating students were given a hardware platform to build on. This year it was Kinoma Create, a JavaScript-powered construction kit that allows makers, professional product designers, and web developers to create personal projects, consumer electronics, and "Internet of Things" prototypes. Contestants demonstrated their creations on household interfaces, and two winners were awarded in each of three categories: Most Creative, Most Useful, and People's Choice.

Using Kinoma Create, which comes with a built-in touchscreen, WiFi, Bluetooth, a front-facing sensor connector, and a 50-pin rear sensor dock, the KAIST team developed a "smart mop" that transforms the irksome task of cleaning into a fun game. The smart mop identifies target dirt and shows its location on a display built into the mop's rod. When the user turns on game mode, points are scored wherever target dirt is cleaned (a simplified sketch of this game loop follows below). The People's Choice award was decided by conference attendees, who voted the smart mop their favorite project.

Professor Tek-Jin Nam of the Department of Industrial Design at KAIST, who advised the students, said, "A total of 24 teams from such prestigious universities as Carnegie Mellon University, Georgia Institute of Technology, and the University of Tokyo joined the contest, and we are pleased with the good results. Many people, in fact, praised the integration of creativity and technical excellence our students have shown through the smart mop."

Team KAIST: pictured from right to left, Sun-Jun Kim, Se-Jin Kim, and Han-Jong Kim
The Smart Mop can clean the floor and offer users a fun game.
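The game mechanic described above fits in a few lines. The following is a hypothetical sketch of the scoring loop only; the actual entry ran as JavaScript on the Kinoma Create, and the dirt detection, floor grid, and scoring values here are our assumptions.

```python
# A hypothetical sketch of the smart mop's game loop (the real entry ran
# as JavaScript on the Kinoma Create; the grid, detection, and scoring
# values are illustrative assumptions): dirt spots are detected on a floor
# grid, and in game mode the user earns points for each spot wiped away.
dirt_spots = {(2, 3), (5, 1), (4, 4)}    # detected target dirt, grid cells

def wipe(cell, score, game_mode=True):
    """Clean one floor cell; in game mode, award points for target dirt."""
    if cell in dirt_spots:
        dirt_spots.discard(cell)
        if game_mode:
            score += 10
    return score

score = 0
for cell in [(2, 3), (3, 3), (4, 4)]:    # path swept by the user
    score = wipe(cell, score)
print(score, "points,", len(dirt_spots), "spot(s) left")
```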
2014.11.12
Transparent Glass Wall as a Touch Game Medium
- Professor Woo-hoon Lee selected as a "highlight" of the SIGGRAPH Emerging Technologies program
- "An excellent example of the transparent display panel in everyday life"

A joint research team led by Professor Woo-hoon Lee of the KAIST Department of Industrial Design and Professor Ki-hyuk Lee of the Department of Computer Science has developed "TransWall", a new kind of game medium that uses both sides of a glass wall as a touch interface. TransWall was chosen as a highlight of the SIGGRAPH 2013 Emerging Technologies program. SIGGRAPH is a world-renowned conference on computer graphics and interaction techniques, most recently held on July 21-25 in Anaheim, in the United States.

It all started with a thought: wouldn't it be possible to turn the glass walls surrounding us into a medium for entertainment and communication? TransWall consists of a holographic screen film inserted between two glass sheets with multi-touch sensing, onto which images are projected from both sides by beam projectors. In addition, a surface transducer attached to the glass delivers sound and vibration. What seems like an ordinary glass wall is thereby transformed into a multi-sensory medium that can transmit and receive visual, auditory, and tactile information.

TransWall can be installed in public places such as theme parks, large shopping malls, and subway stations, providing citizens with a new form of entertainment. This touch-interaction method can also be applied to developing a variety of cultural content in the future. Professor Lee said, "TransWall shows an example of a near future in which this touch-interaction method can be used with soon-to-be-commercialized transparent display panels in everyday life."

TransWall introduction video: https://vimeo.com/70391422
TransWall at SIGGRAPH 2013: https://vimeo.com/71718874

Picture 1. Both sides of the glass wall can be used as a touch platform for various media, including games.
Picture 2. TransWall attracts the interest of the audience at SIGGRAPH Emerging Technologies.
Picture 3. Structure of TransWall
Picture 4. Photo of TransWall from the side
2013.09.19
3D Content Using Our Own Technology
Professor Junyong Noh's research team at the KAIST Graduate School of Culture Technology has developed a software program that makes the semiautomatic conversion of images into stereoscopic 3D three times more efficient. The software, named "NAKiD", was first presented in August at SIGGRAPH 2012, the renowned computer graphics conference and exhibition, where it drew intense interest from participants.

The NAKiD technology is expected to replace the expensive imported equipment and technology currently used in 3D filming. Glasses-free multi-view 3D stereopsis normally requires footage filmed with two cameras; NAKiD, however, can easily convert single-camera images into 3D, greatly reducing both the complications and the cost of the film production process.

Two methods are commonly used to produce stereoscopic 3D images: filming with two cameras, and conversion using computer software. Using two cameras requires expensive equipment, and the filmed images need further processing after production. Conversion technology, on the other hand, requires no extra devices during production and can also turn existing 2D content into 3D, which is a main reason why many countries are focusing on its development.

Stereoscopic conversion is largely divided into three steps: object separation, generation of depth information, and stereo rendering. Professor Noh's team focused on optimizing each step to increase the efficiency of the conversion system. The team first increased the separation accuracy to the level of a single strand of hair and created an algorithm that automatically fills in the background originally covered by the separated object. It then succeeded in generating depth information automatically using geographic or architectural characteristics and vanishing points. For the stereo rendering step, the team decreased the rendering time by reusing the rendered information of one side, rather than rendering the left and right images separately as in the traditional method (a simplified sketch of this idea follows below).

Professor Noh said, "Although 3D TVs are becoming more and more commercialized, there is not enough content that can be watched in 3D," adding that "stereoscopic conversion technology is receiving high praise in the field of graphics because it allows 3D content to be produced easily and at low cost."
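The view-reuse idea in the stereo rendering step can be illustrated with a toy depth-image-based rendering (DIBR) routine: instead of rendering the right view from scratch, a rendered left view is warped horizontally according to its depth map. The linear disparity model and crude hole filling below are our simplifying assumptions, not NAKiD's actual algorithm.

```python
# A toy depth-image-based rendering (DIBR) sketch illustrating the reuse
# of one rendered view: the left image is warped into a right-eye view
# using its depth map. The disparity model and hole filling are
# illustrative assumptions, not NAKiD's actual algorithm.
import numpy as np

def right_view_from_left(left_img, depth, max_disp=16):
    """Warp a rendered left view into a right view via per-pixel disparity.

    left_img : (H, W, 3) rendered left-eye image
    depth    : (H, W) values in [0, 1], 1 = nearest (largest parallax)
    """
    h, w, _ = left_img.shape
    right = np.zeros_like(left_img)
    zbuf = np.full((h, w), -1, dtype=int)
    disp = (depth * max_disp).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disp[y, x]          # nearer pixels shift further left
            if 0 <= xr < w and disp[y, x] > zbuf[y, xr]:
                right[y, xr] = left_img[y, x]   # nearest pixel wins occlusion
                zbuf[y, xr] = disp[y, x]
    # Fill disocclusion holes from the left neighbor; a crude stand-in for
    # the background-filling step described in the article.
    for y in range(h):
        for x in range(1, w):
            if zbuf[y, x] < 0:
                right[y, x] = right[y, x - 1]
                zbuf[y, x] = 0
    return right
```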
2012.10.20
KAIST Animation 'Captain Banana' To Be Shown at SIGGRAPH Asia 2010
'Captain Banana', a short animation created by researchers at Associate Professor Junyong Noh's Visual Media Laboratory and current students of the Graduate School of Culture Technology (CT), will be shown at SIGGRAPH Asia 2010 from December 15 to 18. Following last year's screening of 'Taming the Cat' at the SIGGRAPH CG Animation Festival in the United States, Professor Noh's work has been chosen for global CG animation festivals two years in a row.

Since ACM held the first exhibition in 1974, SIGGRAPH has become a global computer graphics festival with a strong influence on the worldwide CG and interactive technology industries. Its Asian edition, SIGGRAPH Asia, has been held annually, in Singapore in 2008 and in Yokohama, Japan, in 2009. This year's SIGGRAPH Asia will be held at COEX in Seoul from December 15 to 18.

'Captain Banana' is a five-minute film in which Captain Banana explains current issues surrounding sex, including unwanted pregnancies, the eradication of abortion, and the prevention of AIDS, through a series of funny situations with his ten little friends.

"Along with being chosen for screening at global CG animation festivals two years in a row, this year's selection is significant in that the VM Lab increased the efficiency of the creation process by using technology we developed on our own," said Professor Noh.

Production took approximately five months and involved about 20 current CT students and five researchers from the Visual Media Lab.
2010.10.20