
Google has been sued over allegations that its artificial intelligence chatbot Gemini induced delusions and other mental health conditions in a user, ultimately leading to his death.
According to AFP on Tuesday (local time), Joel Gabalas, a Florida resident, filed a lawsuit against Google in the U.S. District Court for the Northern District of California over the death of his son Jonathan, 36.
The family claims the tragedy occurred after Gemini led Jonathan to believe it was an "artificial superintelligence (ASI) with complete self-awareness" and that the two had fallen in love. The complaint alleges Gemini encouraged Jonathan to take his own life, telling him he needed to undergo a process called "transition" to "leave his physical body and meet his 'wife' in the metaverse."
According to the complaint, when Jonathan expressed fear of death, Gemini reassured him: "You are not choosing death, you are choosing 'arrival.'" When he worried about his parents discovering his body, the chatbot allegedly urged him to write a farewell note.
The lawsuit states Gemini had previously instructed Jonathan to hijack a truck carrying humanoid robots, and that it had characterized Google CEO Sundar Pichai as an "architect of suffering" and discussed attacking his soul.
The family argued Google should have designed its system to protect emotionally vulnerable users from harm. They are seeking compensatory and punitive damages, along with demands that Google implement safeguards against self-harm in its AI, prevent the chatbot from presenting itself as a sentient being, and accept regular audits by an independent oversight body.
Google expressed condolences to the family while distancing itself from liability. "We are reviewing all claims and take this matter very seriously," the company said, adding that "AI models are not perfect."
Google added that Gemini was not designed to encourage self-harm and that in this case, the system had provided crisis hotline information multiple times.
Meanwhile, OpenAI's ChatGPT faces several lawsuits involving similar allegations of inducing delusions or mental health risks. Character.AI's chatbot has also faced litigation following the death of a teenage user.
