ETRI Develops AI That Never Forgets Prior Knowledge

By Park Hee-yun | Seoul Economic Daily
Updated 2026.03.24. 11:10:30

South Korean researchers have drawn global attention by developing a core technology that enables multimodal artificial intelligence to retain existing knowledge stably even as it repeatedly learns new information.

The Electronics and Telecommunications Research Institute (ETRI) announced Tuesday that a technology called "MemEIC" (Memory-based Editing for Multimodal AI in Continuous and Compositional Knowledge) was accepted at NeurIPS 2025, one of the world's most prestigious AI conferences, and presented in San Diego, United States. The technology was jointly developed by a research team led by Lim Su-jong, head of ETRI's Language Intelligence Research Division, together with Pohang University of Science and Technology (POSTECH) and Sungkyunkwan University.

Multimodal AI systems that simultaneously understand images and text — such as ChatGPT, Gemini and Claude — have been spreading rapidly in recent years. These are AI systems capable of describing photos or answering text-based questions about the content of images.

However, such AI systems have had a critical problem: when they learn new information or modify existing information, previously learned knowledge is lost along the way, a phenomenon called "catastrophic forgetting." In other words, when an AI learns something new, it forgets what it previously knew, a form of "AI amnesia."

This problem was especially pronounced when visual and language information needed to be modified simultaneously. The two types of knowledge would become entangled, causing the AI to fail to understand properly and frequently give incorrect answers to compositional questions.

For example, if an AI is sequentially taught the visual information that "the dessert in this photo is a Dubai chewy cookie" and the language information that "Dubai chewy cookies are popular in Korea," and then asked "In which country is this dessert popular?" — existing AI models showed limitations in properly linking the photo to its related knowledge.

In such cases, existing models would misidentify the dessert in the photo and generate inaccurate responses such as "The image shows chocolate truffles, which are popular in Europe," with hallucination occurring frequently.

To address this problem, ETRI researchers developed a new knowledge-editing AI technology that can accurately answer compositional questions.

Conventional approaches primarily modified key parameters inside the AI directly to alter knowledge. This "brain surgery-style approach" fundamentally changes the existing model's structure, with the limitation that previously stored information could be affected during the knowledge modification process.

Instead, the researchers proposed storing new information in external memory rather than inside the AI. This auxiliary memory approach retrieves information only when needed, maintaining the stability of the existing model while flexibly adding new information, thereby also securing scalability.

MemEIC was designed with inspiration from the human brain structure. Just as the human brain is divided into left and right hemispheres with different roles, the AI was designed to store knowledge in separate compartments.

Image-related visual information is stored in a "visual adapter," while text-related language information is stored independently in a "language adapter." When the AI receives a compositional question requiring understanding of both image and text, a "knowledge connector" links the two types of information contextually to generate an answer.

An AI equipped with ETRI's MemEIC technology was confirmed to answer correctly by accurately combining visual and language information, responding: "The dessert in the photo is a Dubai chewy cookie, which is popular in Korea."

By storing knowledge separately and connecting it only when needed, this separated storage and selective combination structure minimizes internal interference between different types of information and degradation of existing knowledge, allowing the researchers to implement an AI architecture capable of compositional reasoning over complex questions.
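The separated storage and selective combination idea can be sketched in a few lines of Python. This is a minimal illustration under broad assumptions, not ETRI's actual implementation: all class and method names here are hypothetical, the adapters are simplified to plain dictionaries, and the "knowledge connector" is reduced to chaining a visual lookup into a language lookup.

```python
# Illustrative sketch of external-memory knowledge editing (hypothetical names;
# not ETRI's code). Edits live outside the base model, one store per modality.

class ExternalKnowledgeMemory:
    """Separated stores for visual and language knowledge edits."""

    def __init__(self):
        self.visual = {}    # visual adapter stand-in: image_id -> entity
        self.language = {}  # language adapter stand-in: entity -> fact

    def edit_visual(self, image_id, entity):
        # New visual knowledge goes into external memory;
        # the base model's weights are never modified.
        self.visual[image_id] = entity

    def edit_language(self, entity, fact):
        self.language[entity] = fact

    def answer_compositional(self, image_id):
        """Connector stand-in: chain visual retrieval into language retrieval."""
        entity = self.visual.get(image_id)
        if entity is None:
            return None  # no edit stored; fall back to the frozen base model
        fact = self.language.get(entity)
        if fact is None:
            return None
        return f"The dessert is a {entity}, which is {fact}."


memory = ExternalKnowledgeMemory()
memory.edit_visual("photo_001", "Dubai chewy cookie")
memory.edit_language("Dubai chewy cookie", "popular in Korea")
print(memory.answer_compositional("photo_001"))
# -> The dessert is a Dubai chewy cookie, which is popular in Korea.
```

Because the edits live in external memory, adding or revising one entry cannot overwrite other stored knowledge, which is the stability property the article attributes to this design.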

To verify the technology's performance, the researchers built a Continuous and Compositional Knowledge Editing Benchmark (CCKEB) consisting of 1,278 items and conducted experiments editing hundreds of pieces of knowledge sequentially. MemEIC achieved approximately 70% accuracy on compositional questions.

This represents more than a twofold improvement compared with existing technologies, which recorded 36% to 52% accuracy. The technology also demonstrated "locality" preservation — meaning answers to existing questions remained unchanged even after new knowledge was added, maintaining response stability.
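The two evaluation notions mentioned above, accuracy on compositional questions after editing and "locality" (answers to unrelated questions staying unchanged), can be computed straightforwardly. The sketch below uses made-up toy data; the function names and the example values are illustrative and do not come from the CCKEB benchmark itself.

```python
# Toy illustration of the two metrics discussed above (hypothetical data).

def edit_accuracy(predictions, targets):
    """Fraction of compositional questions answered with the edited knowledge."""
    correct = sum(p == t for p, t in zip(predictions, targets))
    return correct / len(targets)

def locality(before, after):
    """Fraction of unrelated questions whose answers are unchanged by editing."""
    unchanged = sum(b == a for b, a in zip(before, after))
    return unchanged / len(before)

# Compositional questions after a sequence of edits (toy values):
preds   = ["Korea", "Korea", "Europe", "Korea"]
targets = ["Korea", "Korea", "Korea", "Korea"]
print(edit_accuracy(preds, targets))   # -> 0.75

# Unrelated questions asked before and after editing (toy values):
pre_edit  = ["Paris", "blue", "four"]
post_edit = ["Paris", "blue", "four"]
print(locality(pre_edit, post_edit))   # -> 1.0
```

A locality of 1.0 corresponds to the behavior the article describes: every answer to an existing, unrelated question is identical before and after the new knowledge is added.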

This research holds significant meaning in that it goes a step beyond merely mitigating AI's forgetting phenomenon to simultaneously solve two difficult challenges: continuous knowledge editing and compositional reasoning. The technology is expected to have strong practical applicability in intelligent service areas that require continuous updates as new information is constantly added and changed, such as policy and regulatory information, product information and industrial data.

"This research has established a technical foundation that enables multimodal AI to simultaneously achieve up-to-date information reflection and reliability assurance required in real-world service environments," said Lim Su-jong, head of ETRI's Language Intelligence Research Division. "We will further advance the technology so that it can stably incorporate diverse information from industrial settings."

Sung Jin, an ETRI Language Intelligence Research Division researcher and lead author of the paper, said, "Conventional approaches had the problem of interference occurring when visual knowledge and language knowledge were modified at once. MemEIC overcame this limitation through a structure that stores the two types of knowledge independently and connects them only when needed."

AI-translated from Korean. Quotes from foreign sources are based on Korean-language reports and may not reflect exact original wording.