Artificial intelligence programs can “hallucinate”—make things up. We’ve seen that when lawyers have had AI write their legal briefs and the resulting documents cited totally fictitious precedents. Because of that, I’m wary of the veracity of AI-generated content.
Still, it surprised me when AI hallucinated something about me. A Google Alert concerning news about my law firm told me I’d been quoted in an article about a celebrity libel case. That’s not improbable; I’ve handled many libel cases and written a lot about libel. Even so, this turned out to be really weird.
The article quoted me as saying something I’d never said. In fact, it totally misstated a key issue in libel law—specifically, what facts developed in a journalistic investigation require further checking by the publisher. I won’t give that fake quote the dignity of being republished. Suffice it to say it wasn’t me, and it wasn’t correct.
But the article prefaced the false quote with this flattering preamble: “As Mark Sableman, a partner at Thompson Coburn LLP and a nationally recognized media attorney has explained ….” And it followed the false quote with a citation: “Mark Sableman, Thompson Coburn LLP (from ‘Truth or Consequences: Avoiding Defamation in the Digital Age’).” That title was underlined and presented in light blue type, as is typical of a hyperlink.
So while I didn’t recall writing the cited piece, I wondered for a moment if there could be some basis for this. I took up the article’s implicit invitation and clicked on the apparent link. The underlining then disappeared, and no linked page appeared. Then I checked the webpage’s HTML source code. There was no hyperlink.
By now there was no question that the quotation was totally fake. The article title, referring to a celebrity libel case, and the plethora of ads on the page all corroborated that the article had been posted solely as clickbait, meant to attract users who would then click on one or more of the ads, generating revenue for the page host.
But I kept on checking, moving on to the published bio of the article’s alleged author. His name was given as “George Daniel,” yet the bio explained that “Richard’s reporting focused on how the law shapes everyday life ….” The byline and the bio couldn’t even agree on the author’s first name.
Enough said. Clearly, an AI program had been used to create the article, for clickbait purposes. The training data for the large language model behind the program probably included legitimate published articles of mine, so the AI program decided to cite me as a libel law expert. But the theme of the article (perhaps set by the prompts) was that libel defendants must laboriously verify every assertion in their publications, something a lay person may believe but which is inconsistent with constitutional standards in public figure cases. So the AI program made up (“hallucinated”) a quotation consistent with its erroneous theme, and attributed it to me.
This is not just a temporary quirk of AI’s early days. Recent research has demonstrated that hallucinations are inevitable in AI. OpenAI researchers have acknowledged this, and have even admitted that current evaluation methods make the problem worse. As long as we use AI content generation, we will get hallucinations in the responses.
Is this just good fun, something that comes with AI and that we should brush off?
Not at all. This example illustrates the serious damage being done by this kind of AI content generation. A totally false story was created. The theme was wrong, the legal explanations were wrong, and the quotations were fake. Anyone who read the article would have been misled and misinformed. (Nor would their lives have been improved if they had clicked on the ads for cheap goods or on links to other clickbait stories.)
For me, and for other legitimate authors whose names are misused in AI-generated stories, the erroneous quotation could have caused real harm. Particularly if the story stayed posted, worked its way into search engines, and possibly fed into even more AI-generated content, it could have hurt me and my firm. You’re not going to have a good reputation as a media lawyer if you are quoted on the web misstating basic libel law.
If you are a published writer, you now face a new concern. It’s not just that AI may be learning from and copying your content without authorization, likely in violation of your copyright. It may also be scraping your name from the databases on which it was trained, and using your name in connection with the stories it creates and the content and quotations it hallucinates. Your credibility and your reputation will be threatened.
My firm has resources and stands up for its rights, so my partners Matt Braunel and Nathan Fonda tracked down the malefactor and got the hallucinated quote removed. (The remainder of the misleading article remains online.) But you may have a problem in these situations if you don’t immediately catch the misuse of your name or content, if you can’t respond so forcefully, or if you face a spiral of misuse that multiplies beyond reasonable control.
The clickbait site continues to operate and has even attributed a second hallucinated quotation to me, albeit one closer to my actual writings. It has refused to take down this quote, despite our notices, thereby creating a record, useful for future litigants, that it keeps hallucinated quotes posted even when it knows they are hallucinated.
“Artificial Intelligence”? I think that is a strange name for a statistical word-prediction tool that creates “hallucinated” (i.e., false and fraudulent) content. “Intelligence” doesn’t misuse reputable writers’ names, or misrepresent what they’ve written. “Intelligence” doesn’t lure readers to a page full of clickbait.
In these cases, rather than call this AI, let’s call it what it is: a Financially Remunerative Artificial Utterance Developer (FRAUD).

