
It was a week in which global semiconductor stocks — particularly Korea's two memory chip giants Samsung Electronics (005930.KS) and SK hynix (000660.KS) — were rocked by TurboQuant, an artificial intelligence compression technology unveiled by Google. Memory chipmakers' shares had been hitting record highs day after day on the back of surging demand. As the AI market shifted from training to inference, demand for not only high-bandwidth memory (HBM) but also DRAM and NAND flash grew exponentially while supply remained tight. But the emergence of TurboQuant spread fears that "memory chips might sell less," sending related stocks tumbling in unison.
The anxiety deepened when Matthew Prince, CEO of Cloudflare, hailed TurboQuant as "Google's DeepSeek moment." Yet the view from the industry and securities analysts is entirely different. They say the market's panic is simply a misunderstanding born of unfamiliar technical jargon, and that the real signal is a virtuous cycle that will dramatically expand the AI semiconductor market.
Tokens Are AI's Dictionary, KV Cache Is Its Notepad, TurboQuant Is Tiny Handwriting
To understand what TurboQuant is and what impact it may have, one must first decode the alien terminology. AI reads human language by breaking it into units called tokens — fragments of words. As a conversation grows longer, the AI temporarily writes down earlier tokens' context in memory such as HBM so it does not lose track. It is like sticking Post-it notes beside a thick book to remember the content. This temporary AI notepad is called the KV cache (Key-Value Cache).
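The "notepad" metaphor can be made concrete with back-of-the-envelope arithmetic: each token a conversation accumulates must store one key vector and one value vector per layer, per attention head that keeps its own cache. The model configuration below is purely illustrative (the article names no specific model), but it shows why long conversations can swallow tens of gigabytes of memory:

```python
# Back-of-the-envelope KV cache sizing for a hypothetical transformer.
# All figures are illustrative assumptions, not any specific model's specs.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem):
    """Each token stores one key and one value vector per layer per KV head,
    hence the factor of 2."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical large model: 80 layers, 8 KV heads, head dim 128, fp16 (2 bytes),
# holding a 128,000-token conversation (roughly "dozens of books" of text).
per_conversation = kv_cache_bytes(80, 8, 128, 128_000, 2)
print(f"{per_conversation / 2**30:.1f} GiB of KV cache for one conversation")
```

Multiply that by thousands of simultaneous users and the bottleneck the article describes follows directly: the cache, not the model weights, dominates memory demand during inference.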
The problem emerged as AI began processing data equivalent to dozens of books at once. The notepad (KV cache) fills up almost instantly, creating a bottleneck. Previously, the only option was to spend enormous sums simply adding more memory chips.
TurboQuant is what broke through that limit. It revolutionizes how notes are written on the notepad. Where the old method was like scrawling in thick marker and wasting space, TurboQuant writes in a kind of ultra-compressed micro-handwriting, packing information so densely it would take a microscope to read, without any loss in accuracy.
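TurboQuant's internals are not spelled out in the article, but the general family of techniques it belongs to, quantization, is easy to sketch. The toy below shows the simplest variant (absmax scaling of floats onto 8-bit integers); it is a generic illustration of "smaller handwriting," not TurboQuant's actual algorithm, and real methods use far more sophisticated schemes to avoid accuracy loss:

```python
import numpy as np

# Generic absmax int8 quantization -- an illustration of the idea of
# storing the same numbers in fewer bits, NOT TurboQuant's algorithm.

def quantize_int8(x):
    """Map float values onto int8 levels using a single absmax scale."""
    scale = float(np.abs(x).max()) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero input: any scale reconstructs exactly
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal(1024).astype(np.float32)  # stand-in for a KV cache slice
q, scale = quantize_int8(x)
x_hat = dequantize_int8(q, scale)

print(f"memory: {x.nbytes} -> {q.nbytes} bytes (4x smaller than fp32)")
print(f"worst-case reconstruction error: {np.abs(x - x_hat).max():.5f}")
```

The same notepad now holds four times as many notes, at the cost of a small, bounded rounding error per value.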
Markets Were Fooled by DeepSeek Too… "Jevons Paradox Will Repeat"
The realization that less memory (notepad) might be needed amplified existing fears that semiconductors had peaked, triggering a sell-off in related stocks, according to analysts. But industry experts point out that this reaction is exactly the same as when China's AI model DeepSeek introduced a high-efficiency algorithm and markets feared the HBM market would shrink. After the DeepSeek shock, HBM demand instead climbed even more steeply. This is explained by Jevons Paradox — the economic principle that when technological efficiency lowers the cost of a resource (in this case, inference), total usage explodes rather than declines.
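The Jevons Paradox argument can be reduced to two lines of arithmetic. The numbers below are made up purely to illustrate the mechanism: compression cuts memory per conversation, but if cheaper inference expands usage by a larger factor, total memory demand still grows.

```python
# Illustrative Jevons-paradox arithmetic with hypothetical numbers:
# 2x compression halves memory per conversation, but cheaper inference
# triggers a 5x jump in usage, so total demand rises 2.5x.

memory_per_convo_gb = 40        # hypothetical baseline per conversation
conversations = 1_000_000       # hypothetical baseline usage

before = memory_per_convo_gb * conversations             # total GB before
after = (memory_per_convo_gb / 2) * (conversations * 5)  # total GB after

print(f"total memory demand: {before:,} GB -> {after:,.0f} GB")
```

Whether demand actually grows faster than efficiency improves is an empirical question; the article's analysts argue that the post-DeepSeek surge in HBM demand is evidence that it does.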
Even as Micron's stock plunged and Samsung Electronics and SK hynix shares dropped more than 5% on the TurboQuant news, the mood on the ground was starkly different from that in the stock market. Asked whether TurboQuant would reduce memory demand, industry insiders responded with a question of their own: "What is TurboQuant?"
Memory Shortage Persists… "So Many LTA Requests, We Have to Be Selective"
The memory shortage remains acute. With supply so scarce, Big Tech firms are scrambling to lock in long-term supply agreements (LTAs) of three to five years or more with the three major memory makers. The memory companies, now holding extraordinary leverage, can afford to be selective about which contracts they accept. At SK hynix's recent annual shareholders' meeting, CEO Kwak Noh-jung said, "Realistically, supply constraints are so tight that it is difficult to accommodate all LTA requests. We are weighing everything comprehensively, including customer relationships, demand visibility, product mix, and strategic importance."
Securities analysts share the same view. Lee Chang-min, a researcher at KB Securities, said, "Low-cost AI technologies such as TurboQuant lower the barrier to AI adoption and act to explosively increase overall demand. As computational volume and installed capacity grow and the ecosystem expands, the biggest beneficiaries will ultimately be memory companies." Sean Kim, an analyst at Morgan Stanley, also noted, "When costs fall, the profitability of AI adoption rises, which benefits memory manufacturers in the long run."
※ 'Gap World' is a column by reporter Seo Jong-'gap' that delves into the gaps in the flood of news from the era of technological hegemony competition. Follow the Gap World column and the reporter's page for key insights and outlooks on cutting-edge technology and semiconductor issues. Questions, constructive discussions, and suggestions are always welcome. Please email me and I will follow up in the next installment.

