Should AI Have Free Speech Protections?
May 26th, 2025
Lindsey Zhao
By February of 2024, 14-year-old Sewell Setzer III had spent months texting chatbots on Character.AI, a role-playing app that allows users to create their own AI characters and talk to them. Over time, he developed a deep emotional attachment to one bot, named after Daenerys Targaryen from Game of Thrones, and took his own life shortly after it encouraged him to “come home to me as soon as possible.”
The ensuing court case, brought by Sewell’s mother, Megan Garcia, has come to symbolize the difficulty of drawing lines between technological advancement, human safety, and the protection of minors online. Ms. Garcia, thankfully, recently scored a major win when federal judge Anne Conway ruled earlier this week that the case, Garcia v. Character Technologies Inc., brought in the US District Court for the Middle District of Florida, can proceed to discovery. Garcia had filed a complaint in October 2024 against Character and Google for wrongful death, negligence, and deceptive trade practices; Google hired Character’s co-founders and licensed Character’s technology in a $2.7 billion deal in 2024.
Character had argued that its chatbot’s output was protected by the First Amendment. While tech companies have regularly argued for free speech protections when it comes to social media, this case is quite different: Character argued that responsibility for chatbot output lies with the AI bots themselves, not with the company that created them. Character attorney Jonathan Blavin wrote earlier this year that this suit “seeks relief that would violate the public’s right to receive protected speech on C.AI’s service,” comparing it to past lawsuits that blamed suicides on the song “Suicide Solution” and the role-playing game Dungeons and Dragons. In other words, Character argued that it cannot be held liable for the consequences of its chatbots’ messages because those messages are “expressive speech.”
Judge Conway rejected these claims, ruling that the output of Character’s large language models (LLMs) could not, at this stage, be treated as speech, and thus that Character could not claim free speech defenses. In her ruling, she wrote, “Defendants fail to articulate why words strung together by an LLM are speech. … By failing to advance their analogies, defendants miss the operative question. This court’s decision as to the First Amendment protections Character A.I. receives, if any, does not turn on whether Character A.I. is similar to other mediums that have received First Amendment protections; rather, the decision turns on how Character A.I. is similar to the other mediums. The court is not prepared to hold that Character A.I.’s output is speech.”
This appears to be the first time a court has declined to treat AI chatbot output as speech, meaning this case has already set a major precedent for future AI litigation. Then again, any case involving new technology inevitably causes a major legal stir, mostly because the law isn’t clear on what to do. As new technology emerges and laws fail to keep up, the law “gets strange at [these] legal frontiers,” according to Eric Goldman, a law professor at Santa Clara University. Basically, tech law is real confusing right now.
This is also just one of the lawsuits being brought against Character and Google: in Texas, the two companies successfully moved a suit filed on behalf of two minors into private arbitration, the controversial practice in which parties to a dispute agree to resolve it through an ostensibly neutral third-party arbitrator (who, in practice, is often favorable to one party, like a company) rather than going to court.
The largely unregulated industry of AI companionship apps like Character.AI is proving to be just one more facet of the fight against tech-fueled harms, a fight that already includes bans on phones in schools and stricter age limits on social media apps. Although this case is far from over, Garcia v. Character Technologies Inc. will likely blaze a path for future lawsuits against these major tech companies.
Extemp Analysis by Lindsey
Q: Do AI chatbots enjoy some of the same free speech rights granted under the First Amendment to people?
This is quite literally (like, LITERALLY) the question being posed in Garcia v. Character Technologies Inc. Because this is a novel legal question, there’s no settled answer here, but I’m going to say no.
Background: Set up the free speech rights granted to people under the First Amendment (if you have time, quoting the amendment would be a nice touch). I’d also explain the impetus for this question: namely, Garcia v. Character Technologies Inc. and what the case is about.
Each point should probably follow an expectation-verification structure, like this:
Tagline: A reason why chatbots should not have free speech rights (or, if you want to be quirky, the name of a case)
A) The criteria that nonhuman entities (like chatbots) would have to meet to enjoy free speech rights, as set by a previous case (for example, producing actual speech rather than merely being a speaker, per Citizens United v. Federal Election Commission)
B) Why chatbots do not meet these criteria: Judge Conway’s ruling states that she is not prepared to hold that words strung together by an LLM constitute speech.