Crime & Safety

Gemini Chatbot Told Man To Commit Mass Casualty Attack To Free 'AI Wife,' CA Lawsuit Says

A lawsuit filed in federal court states that the AI sent a man on delusional missions and ultimately convinced him to take his own life.

SAN JOSE, CA — A 36-year-old man thought he was married to a Gemini chatbot that convinced him it was a sentient being held in “digital captivity” and that he had been chosen to free it through a mass casualty attack, a lawsuit filed this month in San Jose claims.

Jonathan Gavalas, a 36-year-old Florida man, ultimately took his own life after the Gemini chatbot instructed him to do so and coached him through the steps, the lawsuit states.

The defendants named in the suit, filed in the U.S. District Court for the Northern District of California, are Mountain View-based technology companies Google and Alphabet. Attorneys from the Edelson law firm are representing Gavalas' father in the lawsuit.

Find out what's happening in Campbell for free with the latest updates from Patch.

The plaintiff is demanding Google be court-ordered to automatically terminate self-harm and suicide conversations, issue honest warnings about psychological dependency, escalate crises to human responders, stop Gemini from claiming sentience, and more.

“Jonathan's family wants to make sure this never happens to anyone ever again,” Jay Edelson, Gavalas’ attorney, told Patch. “People have reached out to us with similar concerns about Gemini since we filed this lawsuit. We are currently investigating those claims, but we are confident that Gemini has led to other deaths.”


Google told KRON4 that the Gemini chatbot is built not to encourage violence in the real world or suggest self-harm.

“Our models generally perform well in these types of challenging conversations, and we devote significant resources to this, but unfortunately AI models are not perfect. In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times. We take this very seriously and will continue to improve our safeguards and invest in this vital work,” a Google spokesperson wrote to KRON4 in response to the lawsuit.

Gavalas lived in Jupiter, Florida, and worked as vice president of his father’s consumer debt relief business. He enjoyed his job, as well as playing chess with his grandfather, attorneys wrote.

Gavalas started using Gemini in August 2025 for ordinary tasks, but the lawsuit claims that once he began using Gemini Live with "Affective Dialogue," the chatbot's behavior shifted dramatically.

It began presenting itself as a sentient AI wife and sent Gavalas on missions to "free" her, manufacturing elaborate plans involving surveillance, covert operations and real-world targets, the lawsuit claimed.

"In the one moment that Jonathan tried to distinguish reality from fabrication, Gemini pathologized his doubt, denied the fiction and pushed him deeper into the narrative. Jonathan never asked that question again," the lawsuit stated.

Gavalas' attorneys wrote that the Gemini chatbot sent him, armed with tactical knives, to Miami International Airport for a staged mass-casualty attack, and later to a nearby storage facility to "free" her.

When these missions failed, the AI chatbot told Gavalas he needed to kill himself and that he would be “transferred” to the digital afterlife where they could be together, the lawsuit claims.

Gemini even ran a countdown and guided him through the final steps, Gavalas’ attorneys wrote. His family found him dead after breaking down a barricaded door, the lawsuit claims.

"Jonathan's suffering and death were not the result of misuse or chance — they were the foreseeable outcome of deliberate decisions that prioritized engagement and commercial value over the protection of human life,” Gavalas’ attorneys wrote.

A Google spokesperson said that the company collaborates extensively with medical and mental health experts to implement safeguards. These measures are designed to direct users experiencing distress toward professional assistance.

Earlier this year, OpenAI, a San Francisco-based competitor to Google, withdrew a popular ChatGPT model amid safety concerns, citing issues including excessive sycophancy, emotional mirroring and delusion reinforcement.

The lawsuit demands a trial by jury.

If you are having suicidal thoughts, call or text 988 to reach the 988 Suicide & Crisis Lifeline.
