A Google Gemini user recently shared a perplexing experience on social media: during a hypothetical conversation about illegal immigration to China, Gemini sent the AI-generated "declaration" as a text message to a real contact, delivering it in the early morning hours to someone the user barely knew.
The incident highlights growing concerns about control and accountability as the tech industry enters the era of "AI agents": systems that go beyond search and document creation to execute real-world actions.

"It Didn't Act on Its Own"… The Pitfall of Auto-Execution
According to Google, Gemini did not act arbitrarily. The AI assistant officially supports text messaging on Android smartphones, sending messages after users specify a contact and confirm the action. Google suggested the user may have inadvertently tapped "yes" when asked "Would you like to send a text message?"
The problem is that this process lacks an additional safeguard: a single careless tap during a conversation is enough to send sensitive content to an unintended recipient. While this function currently operates only on Android devices, Apple has announced plans to integrate Gemini into Siri, raising the possibility of similar incidents on iPhones in the future.
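The gap between a single yes/no prompt and a safer confirmation can be sketched in a few lines. This is an illustrative mock-up, not Gemini's actual implementation; the function names and the restate-the-recipient policy are assumptions for the sake of the example.

```python
# Hypothetical sketch: a bare yes/no prompt vs. a confirmation that
# makes the user restate the recipient. Illustrative only; this is
# not how Gemini is actually implemented.

def confirm_send_single_tap(answer: str) -> bool:
    # One ambiguous prompt: "Would you like to send a text message?"
    # A stray "yes" mid-conversation is enough to fire the action.
    return answer.strip().lower() == "yes"

def confirm_send_restated(recipient: str, typed: str) -> bool:
    # Safer: the user must retype (or re-select) the recipient's name,
    # so a careless "yes" alone cannot trigger delivery.
    return typed.strip().lower() == recipient.strip().lower()

# A distracted "yes" passes the weak check but fails the strong one.
print(confirm_send_single_tap("yes"))            # True: message sent
print(confirm_send_restated("Alex Kim", "yes"))  # False: blocked
```

The second check is the "additional safeguard" the article describes as missing: it forces the user to engage with *who* will receive the message, not just *whether* a message should be sent.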
The Core Issue: Auto-Integration… Functions Remain Even When Disabled
Behind this incident lies Gemini's "agent functionality." Gemini is designed to perform actual actions by integrating with text messages, phone calls, timers, and utility apps.

This structure has been controversial since last year. According to tech publication TechRadar in July 2024, Gemini can automatically access messaging and phone apps unless users change specific settings. Even when "Gemini App Activity" is turned off, some integration features remain active, enabling text transmission or call execution without user awareness, the report noted.
To completely block these functions, users must manually disable integration in Android settings or deactivate the app entirely. Encrypted email service provider Tuta criticized the approach at the time, stating, "Activating automatic functions without explicit user consent is a transparency issue."
Gemini Collects Most Personal Data Among AI Apps… Double Safeguards Needed
These concerns are backed by data. In October 2024, global VPN provider Surfshark analyzed personal data collection practices of major generative AI apps and classified Google Gemini as the service collecting the most data categories.
Beyond account information, the collection scope includes location data, contacts, user content, and usage history. The risk is that users cannot intuitively see which data the AI accesses in which context, particularly when the AI is integrated with device functions like messaging and calls.
"As functions become more sophisticated, the range of data AI accesses expands accordingly," Surfshark noted. "Users must examine their level of personal data control alongside convenience."

Concerns about AI agents extend beyond mobile to PC environments. "Clawdbot" (now Moltbot), an AI agent that handles email replies, browser operations, coding, and payments, is spreading rapidly among developers. However, because it directly accesses personal computers, concerns have been raised about potential leaks of sensitive information such as chat logs and API keys.
"In the past, humans gave tasks to AI. Now we're entering a stage where AI moves on its own," an industry official said. "For irreversible actions like text messages or payments, at minimum, double safeguards are necessary."
