
Consortiums participating in South Korea's "Dokpamo" (Indigenous AI Foundation Model) project are moving to adopt neural processing units (NPUs) made by Korean startups as the initiative enters its second round. The shift comes as the government continues to emphasize AI technology sovereignty, pushing developers to use domestic AI semiconductors for model development. However, concerns remain over the practical viability of Korean-made NPUs.
According to industry sources on the 8th, the LG AI Research-led consortium competing in the Dokpamo project is discussing plans to deploy FuriosaAI's second-generation NPU, Renegade (RNGD), in developing "K-Exaone." LG CNS, a consortium member, will be responsible for verifying RNGD's practical viability before it is deployed in the project. On the 30th of last month, LG CNS signed a business agreement with FuriosaAI to run its AI services on RNGD. Through this work, LG CNS plans to validate how RNGD can be used for efficient AI model development.
The Trillion Labs consortium, which has expressed its intent to re-enter the Dokpamo project, has reached an internal consensus to use domestic NPUs if it formally joins. Trillion Labs, the consortium's lead company, has prior experience developing AI models on Rebellions' NPUs, and it aims to apply the experience and know-how accumulated from that work to its renewed Dokpamo bid.
The consortiums are actively exploring domestic NPU adoption to demonstrate alignment with government policy objectives. The Ministry of Science and ICT, which oversees the Dokpamo project, has consistently emphasized AI technology sovereignty since the project's inception, pursuing localization not only of AI model design methodologies but also of the infrastructure used to develop the models.
However, concerns persist about whether domestic NPUs are suitable for Dokpamo development. Critics point out that deploying NPUs, which are not well suited to large-scale training workloads, in a project aimed at building foundation models could actually reduce development efficiency. A startup CEO participating in the Dokpamo project, speaking on condition of anonymity, said: "Rather than fixating on localization, the priority should be fully utilizing the government-supported graphics processing units (GPUs) to produce high-performing AI models."
Another hurdle for NPUs is their thin ecosystem of optimization operators. In this context, operators are software components that automatically allocate and manage AI semiconductor resources, helping server users extract maximum performance from AI chips. Most operator software is written for Nvidia GPUs. As a result, even users who want to take advantage of NPUs' superior power efficiency cannot reach optimal performance because operator support is inadequate.
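To see why operator support determines whether a chip can be used at full performance, consider a minimal, self-contained Python sketch of the per-operator dispatch pattern common in AI frameworks. The registry, backend names, and matmul kernels below are illustrative assumptions for this article, not any vendor's actual software stack: the point is that a backend with no tuned kernel for an operation silently falls back to a generic implementation, forfeiting the hardware's advantage.

```python
# Conceptual sketch (hypothetical, not any vendor's actual stack): frameworks
# keep a table of (operation, backend) -> kernel, and an accelerator is only
# as fast as the kernels that have been written and tuned for it.

KERNELS = {}  # maps (op_name, backend) to a callable kernel

def register(op_name, backend):
    """Decorator that registers a kernel for one operation on one backend."""
    def wrap(fn):
        KERNELS[(op_name, backend)] = fn
        return fn
    return wrap

def dispatch(op_name, backend, *args):
    # Prefer a kernel tuned for the requested backend; otherwise fall back
    # to a generic CPU implementation, losing the accelerator's advantage.
    kernel = KERNELS.get((op_name, backend)) or KERNELS.get((op_name, "cpu"))
    if kernel is None:
        raise NotImplementedError(f"No kernel available for {op_name}")
    return kernel(*args)

@register("matmul", "cpu")
def matmul_cpu(a, b):
    # Naive reference implementation: always available, never fast.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

@register("matmul", "cuda")
def matmul_cuda(a, b):
    # Stand-in for a heavily tuned GPU kernel; Nvidia hardware benefits
    # from years of exactly this kind of per-operator optimization.
    return matmul_cpu(a, b)

# A hypothetical "npu" backend with no registered matmul kernel silently
# falls back to the slow CPU path, no matter how efficient the silicon is.
result = dispatch("matmul", "npu", [[1, 2]], [[3], [4]])
print(result)  # [[11]]
```

In a real framework the fallback is not this visible, which is why weak operator coverage shows up as mysteriously poor benchmark numbers rather than explicit errors.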
A cloud service company official noted: "In single-user server environments, NPU power efficiency exceeds that of GPUs, but in large-scale infrastructure environments, GPUs with the latest architecture have superior power efficiency."
