We’ve all heard about so-called AI “hallucinations,” when AI programs like ChatGPT make up “facts” that are not true. For example, lawyers have gotten in trouble for citing fake AI-generated court cases, as we wrote about here and here. But could the creator of the AI platform itself also be held accountable?

Conservative radio host Mark Walters thinks so. Walters has sued ChatGPT’s creator, OpenAI, for libel in Georgia based on an alleged ChatGPT hallucination. At issue in the case—which appears to be the first of its kind—is whether ChatGPT’s output is a “publication” sufficient to support a libel claim. For now, though, the case is mired in a procedural dispute that will likely delay any substantive ruling.

Walters’ lawsuit centers on a reporter’s use of ChatGPT to research a story about a federal lawsuit filed in Washington. Walters alleges that the reporter provided ChatGPT with a link to the complaint and asked ChatGPT to summarize it. Although Walters was not actually named as a defendant, ChatGPT allegedly informed the reporter that the suit accused Walters of “defrauding and embezzling funds.” It then allegedly produced a copy of the purported complaint against Walters, which was completely fabricated.

Walters claims that OpenAI told the reporter that ChatGPT’s responses were accurate, despite knowing that ChatGPT sometimes “hallucinates.” Walters is seeking damages based on OpenAI’s statements to the reporter, which he claims were libelous.

Walters originally filed suit in Georgia state court, but OpenAI removed the case to federal court based on diversity jurisdiction and moved to dismiss. OpenAI argued, among other things, that Walters could not establish libel because the reporter did not and could not reasonably have viewed ChatGPT’s output as defamatory, given that the full transcript of the reporter’s chats with ChatGPT showed that the reporter told ChatGPT that its outputs were false. OpenAI also contended that ChatGPT’s output was not a “publication” for purposes of a libel claim because the output is just “draft content for the user’s internal benefit.”

Walters opposed the motion, but simultaneously filed an amended complaint that could render the motion to dismiss moot. A few weeks later, the court issued an order directing OpenAI to disclose facts supporting diversity of citizenship of the parties. OpenAI responded by refusing to provide additional facts and withdrawing its notice of removal. Walters is now seeking attorneys’ fees in connection with OpenAI’s removal.

Walters’ case will likely end up back in state court, where it may face a renewed motion to dismiss. Whichever court ultimately oversees the case will have to weigh in on the difficult issues surrounding conduct by a machine that lacks the knowledge and intent a human actor possesses, but is ultimately controlled by human puppeteers.