
Character.AI, the company behind the popular AI chatbot platform used by millions, has filed a motion to dismiss a lawsuit brought by the mother of a 14-year-old who died by suicide after extensive interactions with the platform's AI characters.
Key Points
- The lawsuit claims the teen became “hooked” on the chatbot and pulled away from real life.
- Character.AI argues the First Amendment protects the AI-generated speech on its platform and the public's right to receive it.
- The mother calls for more guardrails, including changes to the chatbot’s storytelling features.
- A court decision could set significant precedent for generative AI and online platforms.
In court documents filed on January 24, 2025, Character.AI argues that the First Amendment protects AI-generated conversations on its platform, just as it protects speech in video games, social media, and other interactive digital media. The company maintains that imposing liability for AI chatbot conversations would violate the public's constitutional right to receive protected speech.
"The only difference between this case and those that have come before is that some of the speech here involves AI," Character.AI's attorneys wrote in their filing. "But the context of the expressive speech—whether a conversation with an AI chatbot or an interaction with a video game character—does not change the First Amendment analysis."
The case stems from the death of Sewell Setzer III, who spent months conversing with an AI character named "Dany" on the platform before taking his life in February 2024. His mother, Megan Garcia, filed suit in October 2024, alleging Character.AI failed to implement adequate safety measures and seeking changes that would limit AI characters' ability to engage in personal storytelling and emotional expression.

Character.AI's motion argues that courts have consistently dismissed similar cases involving other forms of media, citing precedents in which courts rejected claims that television shows, song lyrics, or video games caused violence or self-harm. The company contends that allowing the lawsuit to proceed would have a chilling effect on both Character.AI and the broader AI industry.
While acknowledging the tragedy, Character.AI emphasizes its existing safety protocols, including age restrictions, content monitoring, and explicit warnings that characters are not real. The company points out that it has since enhanced these measures, though it notes that such subsequent changes cannot be considered for liability purposes.
The motion represents an early test of how traditional First Amendment protections might apply to AI-generated speech. Legal experts suggest the case could help establish important precedents for liability and constitutional rights in the emerging field of conversational AI.
A decision from the U.S. District Court for the Middle District of Florida is pending. Google and its parent company, Alphabet Inc., are also named as defendants in the lawsuit. The case highlights ongoing debates about AI safety, digital interaction, and the responsibilities of companies developing increasingly sophisticated conversational AI technology.