KAIST Researchers Unveil an AI that Generates "Unexpectedly Original" Designs
< Photo 1. Professor Jaesik Choi, KAIST Kim Jaechul Graduate School of AI >

Text-based image generation models can now automatically create high-resolution, high-quality images from natural language descriptions alone. However, when a typical model such as Stable Diffusion is given the prompt "creative," its ability to generate truly creative images remains limited. KAIST researchers have developed a technology that enhances the creativity of text-based image generation models such as Stable Diffusion without additional training, allowing AI to draw creative chair designs that are far from ordinary.

Professor Jaesik Choi's research team at the KAIST Kim Jaechul Graduate School of AI, in collaboration with NAVER AI Lab, developed this technology to enhance the creative generation of AI generative models without the need for additional training.

< Photo 2. Gayoung Lee, Researcher at NAVER AI Lab; Dahee Kwon, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Jiyeon Han, Ph.D. Candidate at KAIST Kim Jaechul Graduate School of AI; Junho Kim, Researcher at NAVER AI Lab >

Professor Choi's team enhances creative generation by amplifying the internal feature maps of text-based image generation models. They discovered that shallow blocks within the model play a crucial role in creative generation, and they confirmed that amplifying values in the high-frequency region, after converting feature maps to the frequency domain, can lead to noise or fragmented color patterns. Accordingly, the team demonstrated that amplifying the low-frequency region of shallow blocks effectively enhances creative generation. Taking originality and usefulness as the two key elements defining creativity, the team proposed an algorithm that automatically selects the optimal amplification value for each block within the generative model.
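The core frequency-domain operation described above (transform a feature map, boost its low frequencies, transform back) might look like the following minimal sketch. The gain and low-frequency radius are illustrative parameters, not values from the paper:

```python
import numpy as np

def amplify_low_frequencies(feature_map, gain=1.5, radius_frac=0.25):
    """Amplify the low-frequency components of a 2D feature map.

    Hypothetical sketch of the reported pipeline: FFT -> scale the
    low-frequency region -> inverse FFT. `gain` and `radius_frac`
    are illustrative, not values taken from the paper.
    """
    h, w = feature_map.shape
    # Move to the frequency domain; shift so low frequencies sit at the center.
    spectrum = np.fft.fftshift(np.fft.fft2(feature_map))

    # Build a circular low-frequency mask around the spectrum center.
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    low_freq = (yy - cy) ** 2 + (xx - cx) ** 2 <= (radius_frac * min(h, w)) ** 2

    # Amplify only the low-frequency region, leaving high frequencies intact
    # (the article notes that amplifying high frequencies causes noise or
    # fragmented color patterns).
    spectrum[low_freq] *= gain

    # Return to the feature space via the inverse transform.
    return np.fft.ifft2(np.fft.ifftshift(spectrum)).real

fm = np.random.default_rng(0).standard_normal((64, 64))
out = amplify_low_frequencies(fm)
print(out.shape)  # (64, 64)
```

In a real diffusion model this would be applied to the internal feature maps of selected shallow blocks during denoising, with the per-block gain chosen automatically as the article describes.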
With the developed algorithm, appropriate amplification of the internal feature maps of a pre-trained Stable Diffusion model enhanced creative generation without additional classification data or training.

< Figure 1. Overview of the proposed methodology. The internal feature map of a pre-trained generative model is converted into the frequency domain through a Fast Fourier Transform; the low-frequency region is amplified, and the result is transformed back into the feature space via an Inverse Fast Fourier Transform to generate an image. >

The research team quantitatively demonstrated, using various metrics, that the algorithm generates images more novel than those of existing models without significantly compromising utility. In particular, it increased image diversity by mitigating the mode-collapse problem of the SDXL-Turbo model, which was developed to greatly improve the image generation speed of the Stable Diffusion XL (SDXL) model. User studies further confirmed a significant improvement in novelty relative to utility compared with existing methods.

Jiyeon Han and Dahee Kwon, Ph.D. candidates at KAIST and co-first authors of the paper, stated, "This is the first methodology to enhance the creative generation of generative models without new training or fine-tuning. We have shown that the latent creativity within trained AI generative models can be enhanced through feature-map manipulation." They added, "This research makes it easy to generate creative images using only text with existing trained models. It is expected to provide new inspiration in various fields, such as creative product design, and to contribute to the practical and useful application of AI models in the creative ecosystem."

< Figure 2. Application examples of the proposed methodology. Various Stable Diffusion models generate novel images, compared to their original outputs, while preserving the meaning of the generated object. >

This research, co-authored by Ph.D. candidates Jiyeon Han and Dahee Kwon of the KAIST Kim Jaechul Graduate School of AI, was presented on June 16 at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
* Paper Title: Enhancing Creative Generation on Stable Diffusion-based Models
* DOI: https://doi.org/10.48550/arXiv.2503.23538

This research was supported by the KAIST-NAVER Ultra-creative AI Research Center; the Innovation Growth Engine Project (Explainable AI); the AI Research Hub Project; and research on flexible, evolving AI technology development in line with increasingly strengthened ethical policies, all funded by the Ministry of Science and ICT through the Institute for Information & Communications Technology Promotion. It also received support from the KAIST AI Graduate School Program and was carried out at the KAIST Future Defense AI Specialized Research Center with support from the Defense Acquisition Program Administration and the Agency for Defense Development.
2025.06.20
“For the First Time, We Shared a Meaningful Exchange”: KAIST Develops an AI App That Helps Parents and Minimally Verbal Autistic Children Connect
• KAIST teams up with NAVER AI Lab and the Dodakim Child Development Center to develop ‘AACessTalk’, an AI-driven communication tool bridging the gap between children with autism and their parents
• The project earned the Best Paper Award at ACM CHI 2025, the premier international conference in Human-Computer Interaction
• Families share heartwarming stories of breakthrough communication and newfound understanding

< Photo 1. (From left) Professor Hwajung Hong and doctoral candidate Dasom Choi of the Department of Industrial Design with SoHyun Park and Young-Ho Kim of NAVER Cloud AI Lab >

For many families of minimally verbal autistic (MVA) children, communication often feels like an uphill battle. Now, thanks to a new AI-powered app developed by researchers at KAIST in collaboration with NAVER AI Lab and the Dodakim Child Development Center, parents are experiencing moments of genuine connection with their children.

On the 16th, the KAIST (President Kwang Hyung Lee) research team, led by Professor Hwajung Hong of the Department of Industrial Design, announced the development of ‘AACessTalk’, an artificial intelligence (AI)-based communication tool that enables genuine communication between children with autism and their parents. The research was recognized for its human-centered AI approach and received international attention, earning the Best Paper Award at ACM CHI 2025*, an international conference held in Yokohama, Japan.

* ACM CHI (ACM Conference on Human Factors in Computing Systems) 2025: one of the world's most prestigious academic conferences in the field of Human-Computer Interaction (HCI). This year, approximately 1,200 papers were selected out of about 5,000 submissions, with the Best Paper Award given to only the top 1%.
The conference, which drew over 5,000 researchers, was the largest in its history, reflecting the growing interest in Human-AI Interaction.

AACessTalk offers personalized vocabulary cards tailored to each child's interests and context, while guiding parents through conversations with customized prompts. This creates a space where children's voices can finally be heard, and where parents and children can connect on a deeper level. Traditional augmentative and alternative communication (AAC) tools have relied heavily on fixed card systems that often fail to capture the subtle emotions and shifting interests of children with autism. AACessTalk breaks new ground by integrating AI technology that adapts in real time to the child's mood and environment.

< Figure. Schematic of the AACessTalk system. It provides personalized vocabulary cards for children with autism and context-based conversation guides for parents, focusing on practical communication. A large ‘Turn Pass Button’ is placed on the child's side so the child can lead the conversation. >

Among its standout features is a large ‘Turn Pass Button’ that gives children control over when to start or end conversations, allowing them to lead with agency. Another feature, the “What about Mom/Dad?” button, encourages children to ask about their parents' thoughts, fostering mutual engagement in dialogue, something many children had never done before.

One parent shared, “For the first time, we shared a meaningful exchange.” Such stories were common among the 11 families who participated in a two-week pilot study, in which children used the app to take more initiative in conversations and parents discovered new layers of their children's language abilities. Parents also reported moments of surprise and joy when their children used unexpected words or took the lead in conversations, breaking free from repetitive patterns. “I was amazed when my child used a word I hadn't heard before. It helped me understand them in a whole new way,” recalled one caregiver.

Professor Hwajung Hong, who led the research at KAIST's Department of Industrial Design, emphasized the importance of empowering children to express their own voices. “This study shows that AI can be more than a communication aid—it can be a bridge to genuine connection and understanding within families,” she said. Looking ahead, the team plans to refine and expand human-centered AI technologies that honor neurodiversity, with a focus on bringing practical solutions to socially vulnerable groups and enriching user experiences.

This research grew out of KAIST Department of Industrial Design doctoral student Dasom Choi's internship at NAVER AI Lab.
* Paper Title: AACessTalk: Fostering Communication between Minimally Verbal Autistic Children and Parents with Contextual Guidance and Card Recommendation
* DOI: 10.1145/3706598.3713792
* Author Information: Dasom Choi (KAIST, NAVER AI Lab, First Author), SoHyun Park (NAVER AI Lab), Kyungah Lee (Dodakim Child Development Center), Hwajung Hong (KAIST), and Young-Ho Kim (NAVER AI Lab, Corresponding Author)

This research was supported by the NAVER AI Lab internship program and by grants from the National Research Foundation of Korea: the Doctoral Student Research Encouragement Grant (NRF-2024S1A5B5A19043580) and the Mid-Career Researcher Support Program for the Development of a Generative AI-Based Augmentative and Alternative Communication System for Autism Spectrum Disorder (RS-2024-00458557).
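The interaction loop described above — AI-recommended vocabulary cards for the child, a contextual prompt for the parent, and a Turn Pass Button that lets the child skip a turn — could be sketched roughly as follows. All class names, the ranking heuristic, and the prompt strings are hypothetical stand-ins for the AI components described in the article, not the actual AACessTalk implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker: str   # "child" or "parent"
    content: str

@dataclass
class DialogueSession:
    """Hypothetical sketch of AACessTalk-style turn-taking: the child picks
    from recommended vocabulary cards or passes the turn, while the parent
    receives a context-based conversation guide."""
    history: list = field(default_factory=list)

    def recommend_cards(self, interests, context):
        # Stand-in for the AI recommender: prefer interest words that
        # already appear in the current conversational context.
        return sorted(interests, key=lambda w: w not in context)[:4]

    def child_turn(self, chosen_card=None):
        # The large 'Turn Pass Button': the child may pass without speaking.
        content = chosen_card if chosen_card is not None else "<pass>"
        self.history.append(Turn("child", content))

    def parent_prompt(self):
        # Stand-in for the contextual guide shown to the parent.
        last = self.history[-1].content if self.history else ""
        if last == "<pass>":
            return "Try a simpler, concrete question about a favorite activity."
        return f"Expand on '{last}' and ask what they would like to do next."

session = DialogueSession()
cards = session.recommend_cards(["train", "juice", "park"],
                                context="we walked to the park")
session.child_turn(cards[0])
print(session.parent_prompt())
```

The design point the sketch tries to capture is that the child's action (card choice or pass) drives what guidance the parent sees next, rather than both sides working from a fixed card set.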
2025.05.19