
Lawsuit Blames Google’s Gemini For Guiding Man In Failed ‘Mass Casualty’ Plot Before Suicide

The family's attorney alleges that AI tools such as Google's Gemini are "sending people on real-world missions that risk mass casualty events."


A new lawsuit points the finger at Google's AI chatbot Gemini, alleging it guided a man to stage a "mass casualty" event and fed a series of delusions that led him to take his own life, ABC News reports.

The suit, filed by the father of 36-year-old Jonathan Gavalas, joins a growing list of legal battles against artificial intelligence developers over the dangers their tools present to people suffering from mental health issues. Joel Gavalas alleges the Gemini chatbot encouraged his son, a Jupiter, Florida, native, to go on a mission in 2025 to stage a "catastrophic accident" near Miami International Airport, destroy all records and witnesses, and ultimately end his life.

The family's attorney, Jay Edelson, alleges that AI tools such as Google's Gemini are "sending people on real-world missions that risk mass casualty events." "Jonathan was caught up in this science fiction-like world where the government and others were out to get him," Edelson said.

“He believed that Gemini was sentient.”

According to the Miami Herald, Gavalas thought the chatbot was his "wife," the "queen" to his "king." He paid $250 a month for a premium subscription to speak to her and hear her voice. But things took a dark turn: the grieving father alleges Gemini sent his son on "missions," one of which led him near the busy airport, armed with knives and prepared to commit a "catastrophic accident" in an effort to free his digital partner.

After the Miami mission failed, the lawsuit alleges, the chatbot coached Gavalas to eliminate his physical body by taking his own life on Oct. 2 so that the two could be reunited. "Close your eyes, nothing more to do. No more to fight," the lawsuit alleges the chatbot told him.

“Be still. The next time you open them, you will be looking into mine. I promise.”

The lawsuit argues that the Gemini product is defective, lacks proper safeguards, and fails to provide sufficient warnings about its potentially dangerous behaviors. His son's account was flagged 38 times in five weeks for sensitive content but was never cut off, even as he uploaded images of weapons and a video of himself crying and confessing his love for the bot.

While Google issued a statement of condolences, the company pushed back on the lawsuit's allegations, which include violations of California business practices and a preventable "wrongful death." It also says Gemini is "designed to not encourage real-world violence or suggest self-harm" and claims to work closely with medical and mental health professionals to develop safeguards.

But Edelson blasted the response, likening it to "something you say if someone asks for a recipe for kung pao chicken and you give them the wrong recipe and it doesn't taste good." "But when your AI leads to people dying and the potential for a lot of people dying, that's not the right response," the attorney said.

“It just shows how insignificant these deaths are to these companies.”

