Samsung Electronics will ship HBM4, the sixth-generation high-bandwidth memory critical to next-generation artificial intelligence accelerators, to Nvidia after the Lunar New Year holiday—becoming the world's first supplier to deliver the advanced chip.

The shipment marks Samsung's return to "first supplier" status with Nvidia after a three-year gap, positioning the Korean tech giant to lead the custom AI semiconductor (ASIC) market.
According to industry sources on February 8, Samsung plans to ship mass-produced HBM4 to Nvidia as early as the third week of this month. This makes Samsung the first among the world's three major memory makers—Samsung, SK Hynix, and Micron—to complete final delivery of HBM4 to Nvidia, their largest customer.
Samsung signaled the imminent delivery during its Q4 2025 earnings call on January 29, stating that "qualification tests with major customers have progressed smoothly and are now entering the completion phase." The company passed final testing earlier this month, received purchase orders, and confirmed shipment schedules.
Nvidia is expected to unveil its next-generation AI accelerator "Vera Rubin" featuring Samsung's HBM4 at its GTC 2026 technology conference next month.
Three-Year Comeback
Samsung's first-to-market HBM4 shipment represents a dramatic turnaround. The company, once the primary supplier of second-generation HBM (HBM2), lost its preferred supplier status to SK Hynix for fourth-generation HBM3 in 2023. Subsequent delays in fifth-generation HBM3E deliveries due to heat dissipation issues hammered Samsung's revenue and cost it the top position in the memory market.
The breakthrough came through what industry observers call a "bold process choice." Samsung applied its latest sixth-generation 1c-class (11-nanometer) DRAM technology to HBM4's core components—a more advanced node than competitors. The company also designed the logic die connecting AI accelerators to HBM at 4 nanometers, compared to competitors using fifth-generation 1b DRAM and 12-nanometer logic dies.
The strategy paid off. Samsung's HBM4 achieved an industry-leading operating speed of 11.7 gigabits per second, outperforming rival products.
Vice Chairman Young-hyun Jeon, who returned as head of the Device Solutions division in May 2024, issued what amounted to a public apology after Q3 results in October. "We are sorry to have caused concerns about our fundamental technological competitiveness and the company's future with results that fell short of market expectations," he said, before declaring a comprehensive organizational overhaul.
Market Impact
Analysts project Samsung's HBM revenue will nearly triple from last year to approximately 25 trillion won ($17.6 billion) this year.
Micron has been excluded from the initial HBM4 supplier group after failing to meet Nvidia's performance requirements, setting up a direct competition between Samsung and market leader SK Hynix, which dominated the HBM3 and HBM3E segments.
Samsung has already completed mass production preparations for 16-layer HBM4, following the 12-layer version shipping this month. Last October, the company also announced an industry-first HBM4E speed target of 13 gigabits per second, 30% faster than HBM4.
Strategic Advantage in ASIC Market

Samsung's HBM4 gains extend beyond memory chips. Unlike previous generations, which focused primarily on data processing, HBM4 requires custom-designed base dies (logic dies), making integrated design, manufacturing, and foundry capabilities essential. Samsung is the only company globally capable of providing this "one-stop solution."
"Unlike competitors who must send manufactured HBM to Taiwan for assembly, Samsung can handle the entire process in-house," an industry official said. "The performance and supply volume of Samsung's HBM4 will determine next-generation AI accelerator capabilities."
Samsung also reclaimed the top position in the DRAM market in Q4, according to Counterpoint Research, after delivering HBM3E to Nvidia in the latter half of last year—its first number-one ranking in a year.
