LG Unveils EXAONE 4.5 Multimodal AI, Claims Victory Over OpenAI and Google

Multimodal AI for Text and Image Reasoning · Dominates GPT-5 Mini and Claude Sonnet Across 13 Visual Benchmarks · Outperforms Gemma in Coding · Physical Intelligence as Ultimate Goal

By Seo Jong-gap
Seoul Economic Daily Finance News from South Korea

LG's (003550.KS) next-generation multimodal artificial intelligence model EXAONE 4.5 has outperformed the latest models from global big tech companies including OpenAI and Google. Industry observers say the model demonstrates the potential of Korean AI through its overwhelming visual intelligence capabilities.

LG AI Research unveiled EXAONE 4.5 on Monday, a vision language model (VLM) that combines its proprietary Vision Encoder with a large language model (LLM) in a unified architecture.

The model's most distinctive feature is its powerful visual capability. It can instantly read and reason through complex documents found in industrial settings, including contracts, technical drawings, financial statements, and infographics. In global benchmark evaluations, EXAONE 4.5 delivered dominant results over competitors: in average scores across 13 visual capability metrics, it surpassed OpenAI's GPT-5 mini, Anthropic's Claude Sonnet 4.5, and Alibaba's Qwen3-VL.

The model achieved top-tier performance with an average score of 77.3 points across five metrics measuring science, technology, engineering, and mathematics (STEM) capabilities. In LiveCodeBench, a leading coding evaluation benchmark, it scored 81.4 points, beating Google's latest model Gemma 4, which scored 80 points. "It goes beyond simply recognizing text in images to analyzing complex charts, understanding context, and deriving answers independently with the highest level of comprehension," LG AI Research said.

While boosting performance, the model significantly reduced its size. EXAONE 4.5 has 33 billion parameters, only one-seventh the size of the previous large model K-EXAONE. The company maximized efficiency by applying a proprietary hybrid attention architecture and high-speed inference technology, enabling top-tier text comprehension and reasoning capabilities with fewer computing resources. Language support has also expanded to six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. Following its predecessor EXAONE 3.0, LG plans to release EXAONE 4.5 as open weights for research and academic use on the global open-source platform Hugging Face, aiming to lead the expansion of the global AI ecosystem.

LG AI Research is now setting its sights on physical intelligence—AI that can perceive and act in the physical world beyond virtual environments. "EXAONE 4.5 signals our full entry into the multimodal era, embracing visual information beyond text," said Lee Jin-sik, head of the EXAONE Lab at LG AI Research. "Going forward, we will expand our understanding to include voice, video, and even physical environments, completing practical AI that can judge and act independently in industrial settings."

The company is also strengthening its identity as a homegrown AI that best understands Korea's unique context. It is currently collaborating with the Northeast Asian History Foundation and other organizations to intensively train on Korean historical and cultural data. "Based on our proprietary AI risk classification system (K-AUT), we will evolve into the most trustworthy AI that deeply understands history and culture," said Kim Myung-shin, head of the Trust and Safety Office at LG AI Research.



AI-translated from Korean. Quotes from foreign sources are based on Korean-language reports and may not reflect exact original wording.