
SK Telecom is transforming resource connectivity in artificial intelligence data centers with Compute Express Link (CXL), a next-generation interconnect technology. The company aims to improve GPU cost efficiency by raising AI processing performance without unnecessary hardware expansion.
SK Telecom announced on the 4th that it had signed an MOU with the Korean company Panesia at MWC 2026 in Barcelona, Spain, for the joint development of a next-generation AI data center (AIDC) architecture. CXL is a data interconnect standard that enables ultra-high-speed, low-latency processing by tightly linking CPUs, GPUs, and memory. With CXL, computing resources that were previously bound to individual servers can be flexibly expanded and shared.
Panesia, SK Telecom's new partner, is a startup with world-class technology in the CXL field. For building efficient AI data centers, the company supplies a range of link semiconductors (communication chips that optimize data movement), including fabric link switches that manage data flow by connecting multiple devices and link controllers that handle efficient data transmission between devices.
In a conventional AIDC, CPUs and GPUs are fixed at the server level, so surplus resources in one server cannot easily be used by another. As a result, a memory shortage on one server could force operators to add GPUs they did not otherwise need. To address this, SK Telecom is applying Panesia's CXL-based technology to extend resource connectivity from individual servers to rack-level units spanning multiple servers, so that needed resources can be drawn on selectively.
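The resource-pooling argument can be sketched numerically. The toy model below uses invented server capacities and a hypothetical function name; it is not SK Telecom's or Panesia's actual design, only an illustration of how a memory-heavy job forces extra GPU servers when memory is server-bound but not when it is pooled at rack level:

```python
import math

# Assumed, illustrative server capacities (not real product figures).
GPUS_PER_SERVER = 8
MEM_PER_SERVER_GB = 2048

def gpus_provisioned(need_gpus, need_mem_gb, pooled):
    """Return how many GPUs must be provisioned to satisfy a job."""
    compute_servers = math.ceil(need_gpus / GPUS_PER_SERVER)
    if pooled:
        # Rack-level CXL-style pool: memory can come from any server
        # in the rack, so only compute demand sets the server count.
        servers = compute_servers
    else:
        # Server-bound memory: a memory-heavy job drags in extra
        # servers, along with their idle GPUs.
        memory_servers = math.ceil(need_mem_gb / MEM_PER_SERVER_GB)
        servers = max(compute_servers, memory_servers)
    return servers * GPUS_PER_SERVER

# A job needing only 8 GPUs of compute but 6 TB of memory:
print(gpus_provisioned(8, 6144, pooled=False))  # 24 GPUs provisioned
print(gpus_provisioned(8, 6144, pooled=True))   # 8 GPUs suffice
```

In the server-bound case the job occupies three servers purely for their memory, stranding 16 GPUs; with a rack-level pool, one server's GPUs plus borrowed memory are enough.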
In addition, while conventional AIDCs exchange data over general-purpose networks such as Ethernet, the partnership enables more direct resource connections. Because CXL links resources at high speed without traversing the network stack, it simplifies the data transmission path and improves computational efficiency.
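The difference between the two data paths can be illustrated with a toy step count. The latency figures below are invented placeholders, not measurements from either company; the point is only that a network path adds serialization and NIC hops that a direct memory-semantic link avoids:

```python
# Illustrative per-step latencies in microseconds (assumed values).
ETHERNET_PATH = {
    "serialize": 2.0,     # pack data into network messages
    "nic_tx": 1.5,        # hand off to the sending NIC
    "switch_hops": 3.0,   # traverse Ethernet switches
    "nic_rx": 1.5,        # receive on the remote NIC
    "deserialize": 2.0,   # unpack into usable memory
}
CXL_PATH = {
    "load_store": 0.5,    # direct memory-semantic access over the link
}

def path_latency(path):
    """Total latency of a data path as the sum of its step latencies."""
    return sum(path.values())

print(f"Ethernet path: {path_latency(ETHERNET_PATH):.1f} us")
print(f"CXL path:      {path_latency(CXL_PATH):.1f} us")
```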
The two companies plan to unveil the next-generation AIDC architecture by year-end, after comprehensively verifying GPU and memory utilization, latency, and throughput while running real AI models. They will then pursue commercialization following validation in large-scale AIDC environments.
