In an attempt to push back, chatbot platform Character AI claims it is protected by the First Amendment

Character AI, a platform that allows users to engage in role-play with AI chatbots, has filed a motion to dismiss a case brought against it by the parent of a teenager who died by suicide, allegedly after becoming addicted to the company’s technology.

In October, Megan Garcia filed a lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old son developed an emotional attachment to a chatbot on Character AI, “Dany,” which he texted constantly, to the point that he began to drift away from the real world.

After Setzer’s death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention for chats that violate its terms of service. But Garcia is pushing for additional guardrails, including changes that could cause chatbots on Character AI to lose the ability to tell personal stories and anecdotes.

In the motion to dismiss, Character AI’s counsel claims that the platform is protected from liability by the First Amendment, just as computer code is. The motion may not persuade a judge, and Character AI’s legal justifications could change as the case proceeds. But it likely hints at early elements of Character AI’s defense.

“The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech alleged to have led to suicide,” the filing reads. “The only difference between this case and those that have come before it is that some of the speech here involves AI. But the context of the expressive speech – whether a conversation with an AI chatbot or an interaction with a video game character – does not change the First Amendment analysis.”

The motion does not address whether Character AI can be held harmless under Section 230 of the Communications Decency Act, the federal safe harbor law that protects social media and other online platforms from liability for third-party content. The law’s authors have implied that Section 230 does not protect AI-generated output like Character AI’s chatbots, but it is far from a settled legal question.

Character AI’s lawyer also claims that Garcia’s true intention is to “shut down” Character AI and push for legislation regulating similar technologies. If the plaintiffs were to succeed, it would have a “chilling effect” on both Character AI and the entire nascent generative AI industry, the platform’s counsel says.

“Aside from counsel’s stated intent to ‘shut down’ Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform,” the filing reads. “These changes would radically limit the ability of Character AI’s millions of users to generate and participate in conversations with characters.”

The lawsuit, which also names Character AI’s corporate backer Alphabet as a defendant, is just one of several Character AI is facing related to how minors interact with AI-generated content on its platform. Other lawsuits allege that Character AI exposed a 9-year-old child to “hypersexualized content” and promoted self-harm to a 17-year-old user.

In December, Texas Attorney General Ken Paxton announced he would launch an investigation into Character AI and 14 other tech companies for alleged violations of the state’s online privacy and child safety laws. “These investigations represent a critical step in ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” Paxton said in a press release.

Character AI is part of a booming field of AI-powered companionship apps whose effects on mental health are largely unstudied. Some experts have expressed concern that these apps could exacerbate feelings of loneliness and anxiety.

Character AI, which was founded in 2021 by former Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to “reverse acquihire,” said it continues to take steps to improve safety and moderation. In December, the company launched new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers warning users that its AI characters are not real people.

Character AI underwent a series of personnel changes after Shazeer and the company’s other co-founder, Daniel De Freitas, left for Google. The platform hired a former YouTube executive, Erin Teague, as chief product officer, and named Dominic Perella, who was general counsel at Character AI, as interim CEO.

Character AI recently began testing games on the web in an effort to increase user engagement and retention.

