
Kimi, the large language model (LLM) developed by Moonshot AI, one of China's leading artificial intelligence startups, has leaked one user's sensitive personal information to another user in a security incident. Beyond a simple output error, evidence that data was mixed between users has raised concerns among local users. Critics argue that Chinese AI companies are rushing toward monetization and initial public offerings while neglecting the most fundamental requirement: security management.
■ Asked for Translation, Received Stranger's Resume… Security Incident Stirs Uproar

The controversy began on the night of the 20th of this month. A netizen using the pseudonym Zhang Cheng posted on social media that he had asked Kimi to translate an English PowerPoint image but instead received the resume of a complete stranger.
The PPT contained only simple content such as "Key Player Performance Analysis" and "Preparation for Next Month's Event," but Kimi failed to recognize the material and instead began explaining an entirely unrelated "vibration reduction technology." When the user questioned the response, Kimi suddenly sent another person's resume. The resume reportedly contained sensitive information including the person's name, phone number, email, career history, and achievements.
The user subsequently contacted the individual through the phone number and confirmed that the information matched the real person. The person whose information was leaked had reportedly used Kimi's "resume editing" feature earlier that same morning.
■ "Just AI Hallucination" — A Flimsy Excuse… Experts Say "Clearly a Management Failure"
Moonshot has remained silent, issuing no official statement. However, according to the user who exposed the incident, individuals claiming to be Moonshot employees contacted him multiple times, arguing that the incident was a simple accident caused by "AI hallucination" and requesting that he delete the post.
Experts see it differently. They argue that this case is not simple information distortion but rather a "cross-talk" phenomenon, in which data from different users becomes mixed. In other words, the problem stems from design flaws in the overall system architecture, including data isolation and access permission management.
In particular, the fact that the model failed to understand even a simple translation request, performed an irrelevant task, and then retrieved and displayed another user's stored personal information points to serious defects in both the model's performance and the reliability of the underlying system. Liao Jianxun, managing attorney at Guoding Law Firm in Shenzhen, Guangdong Province, told local media Jiemian News, "This incident is not an unavoidable technical limitation but reveals clear loopholes in the system and personal information protection framework." He added, "'AI hallucination' cannot be used as an excuse to evade legal responsibility."
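For readers unfamiliar with the term, data isolation simply means that every lookup a system performs on a user's behalf is restricted to that user's own data. The minimal sketch below illustrates the difference in a hypothetical retrieval layer; the table and field names (user_files, owner_id) are assumptions made for illustration and do not describe Moonshot's actual architecture.

```python
# Minimal sketch of per-user data isolation in a document retrieval layer.
# All table and field names are illustrative assumptions, not a real schema.
import sqlite3


def fetch_documents_unsafe(db: sqlite3.Connection, query: str):
    # Flawed: searches every user's uploads, so one user's resume
    # can surface in another user's session ("cross-talk").
    return db.execute(
        "SELECT content FROM user_files WHERE content LIKE ?",
        (f"%{query}%",),
    ).fetchall()


def fetch_documents(db: sqlite3.Connection, user_id: str, query: str):
    # Isolated: every lookup is scoped to the requesting user's ID,
    # so another user's data cannot be returned even if the model
    # issues an irrelevant or malformed request.
    return db.execute(
        "SELECT content FROM user_files WHERE owner_id = ? AND content LIKE ?",
        (user_id, f"%{query}%"),
    ).fetchall()
```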

■ Trust-Damaged Kimi Faces Major Setback Ahead of IPO… AI Security Controversies Continue
The incident is expected to deal a major blow to Moonshot, which is targeting a Hong Kong stock market listing in the second half of this year. The company has courted professional users such as lawyers and researchers, promoting Kimi's strength in analyzing long documents such as case precedents and contracts.
Kimi, which had been building revenue through a paid subscription model, is now facing strong backlash after the incident, with some users posting proof that they had canceled their subscriptions. China's AI market is currently in the midst of a so-called "War of a Hundred Models," with more than 100 models competing fiercely, making every single user valuable. For Moonshot, which has been pursuing monetization through enterprise solutions, losing the trust of corporate clients, who prioritize security and compliance above all, is particularly painful.
Personal information leak controversies are not limited to Kimi. In May last year, Chinese authorities announced that 35 apps, including Zhipu AI's "Zhipu Qingyan," had illegally collected personal information. In April, ByteDance's "Doubao" sparked controversy when it exposed another person's real name and phone number in response to a question about a future spouse. A large-model security test conducted in China last September also found a total of 281 security vulnerabilities.
