The latest generative AI models, such as OpenAI's GPT-4 and Google's Gemini 2.5, require not only high memory bandwidth but also large memory capacity. This is why companies operating generative AI clouds, such as Microsoft and Google, purchase hundreds of thousands of NVIDIA GPUs. As a solution to the core challenges of building such high-performance AI infrastructure, Korean researchers have developed an NPU (Neural Processing Unit)* core technology that improves the inference performance of generative AI models by an average of more than 60% while consuming approximately 44% less power than the latest GPUs.
*NPU (Neural Processing Unit): An AI-specific semiconductor chip designed to rapidly process artificial neural networks.
On the 4th, a research team led by Professor Jongse Park of the KAIST School of Computing, in collaboration with HyperAccel Inc. (a startup founded by Professor Joo-Young Kim of the School of Electrical Engineering), announced that it has developed a high-performance, low-power NPU core technology specialized for generative AI clouds such as ChatGPT.
The technology proposed by the research team was accepted to the 2025 International Symposium on Computer Architecture (ISCA 2025), a top-tier international conference in the field of computer architecture.
The key objective of this research is to improve the performance of large-scale generative AI services by making the inference process lightweight, while minimizing accuracy loss and resolving memory bottlenecks. The work is highly regarded for its integrated design of AI semiconductors and AI system software, the key components of AI infrastructure.
While existing GPU-based AI infrastructure requires multiple GPU devices to meet high bandwidth and capacity demands, this technology enables the same level of AI infrastructure to be configured with fewer NPU devices through KV cache quantization*. Because the KV cache accounts for most of the memory usage, quantizing it significantly reduces the cost of building generative AI clouds.
*KV Cache (Key-Value Cache) Quantization: Reducing the size of the data held in a type of temporary storage used to speed up generative AI models (e.g., converting a 16-bit value to a 4-bit value reduces the data size to one quarter).
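To make the idea concrete, the sketch below illustrates KV cache quantization in Python with NumPy, using simple per-group asymmetric 4-bit quantization. The function names and group size are illustrative assumptions; the team's actual Oaken scheme is a more sophisticated online-offline hybrid that also handles outlier values.

```python
import numpy as np

def quantize_kv_4bit(kv: np.ndarray, group_size: int = 64):
    """Illustrative per-group asymmetric 4-bit quantization of a KV tensor.

    A minimal sketch only: the paper's hybrid scheme additionally mixes
    offline-calibrated thresholds with online statistics and separates
    outliers before quantizing.
    """
    flat = kv.astype(np.float32).reshape(-1, group_size)
    lo = flat.min(axis=1, keepdims=True)
    hi = flat.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0                # 4 bits -> 16 levels (0..15)
    scale = np.where(scale == 0, 1.0, scale)
    q = np.clip(np.round((flat - lo) / scale), 0, 15).astype(np.uint8)
    return q, scale, lo

def dequantize_kv_4bit(q, scale, lo, shape):
    return (q.astype(np.float32) * scale + lo).reshape(shape)

# A 16-bit KV block stored as 4-bit codes shrinks to roughly a quarter
# of its original size, at the cost of a small reconstruction error.
kv = np.random.randn(2, 128, 64).astype(np.float16)
q, scale, lo = quantize_kv_4bit(kv)
recon = dequantize_kv_4bit(q, scale, lo, kv.shape)
print("max abs error:", np.abs(kv.astype(np.float32) - recon).max())
```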
The research team designed the technology to integrate with the memory interface without changing the operational logic of existing NPU architectures. The proposed hardware architecture not only implements the quantization algorithm but also adopts page-level memory management techniques* to make efficient use of limited memory bandwidth and capacity, and introduces a new encoding technique optimized for the quantized KV cache.
*Page-level memory management technique: Virtualizes memory addresses, much as a CPU does, so that the NPU can access memory consistently.
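For illustration, the sketch below shows one common software realization of page-level KV cache management: a per-sequence page table maps logical KV pages to physical pages in a shared pool, so sequences of different lengths can share limited memory without contiguous allocation. The class, parameter names, and sizes are hypothetical and not taken from the paper.

```python
import numpy as np

PAGE_TOKENS = 16  # tokens stored per page (illustrative choice)
HEAD_DIM = 64     # quantized bytes per token (illustrative choice)

class PagedKVCache:
    """Hypothetical page-table indirection for per-sequence KV data."""

    def __init__(self, num_pages: int):
        # Shared physical pool of fixed-size pages.
        self.pool = np.zeros((num_pages, PAGE_TOKENS, HEAD_DIM), dtype=np.uint8)
        self.free_pages = list(range(num_pages))
        self.page_table = {}  # seq_id -> list of physical page ids
        self.lengths = {}     # seq_id -> number of tokens stored

    def append(self, seq_id: int, token_kv: np.ndarray):
        n = self.lengths.get(seq_id, 0)
        pages = self.page_table.setdefault(seq_id, [])
        if n % PAGE_TOKENS == 0:       # current page full: allocate a new one
            pages.append(self.free_pages.pop())
        page, slot = pages[-1], n % PAGE_TOKENS
        self.pool[page, slot] = token_kv
        self.lengths[seq_id] = n + 1

    def read(self, seq_id: int) -> np.ndarray:
        n, pages = self.lengths[seq_id], self.page_table[seq_id]
        return np.concatenate([self.pool[p] for p in pages])[:n]

    def release(self, seq_id: int):
        # Freed pages return to the pool for reuse by other sequences.
        self.free_pages.extend(self.page_table.pop(seq_id))
        del self.lengths[seq_id]

# Usage: a sequence grows page by page, with no contiguous reservation.
cache = PagedKVCache(num_pages=8)
for t in range(20):
    cache.append(seq_id=0, token_kv=np.full((HEAD_DIM,), t, dtype=np.uint8))
print(cache.read(0).shape)  # (20, 64)
cache.release(0)
```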
Furthermore, NPU-based AI clouds offer superior cost and power efficiency compared to the latest GPUs, and the high-performance, low-power nature of NPUs is expected to significantly reduce operating costs.
Professor Jongse Park stated, "Through joint work with HyperAccel Inc., this research found a solution in algorithms for lightweight generative AI inference and succeeded in developing a core NPU technology that can solve the 'memory problem.' By combining quantization techniques that reduce memory requirements while maintaining inference accuracy with hardware designs optimized for them, we implemented an NPU with over 60% better performance than the latest GPUs."
He further emphasized, "This technology has demonstrated the feasibility of high-performance, low-power infrastructure specialized for generative AI, and it is expected to play a key role not only in AI cloud data centers but also in the AI transformation (AX) environment represented by dynamic, executable AI such as 'Agentic AI'."
This research was presented by Ph.D. student Minsu Kim and Dr. Seongmin Hong of HyperAccel Inc. as co-first authors at the 2025 International Symposium on Computer Architecture (ISCA), held in Tokyo, Japan, from June 21 to June 25. ISCA, a globally renowned academic conference, received 570 paper submissions this year, of which only 127 were accepted (an acceptance rate of 22.3%).
※Paper Title: Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization
※DOI: https://doi.org/10.1145/3695053.3731019
Meanwhile, this research was supported by the National Research Foundation of Korea's Excellent Young Researcher Program, the Institute for Information & Communications Technology Planning & Evaluation (IITP), and the AI Semiconductor Graduate School Support Project.