In a recent article for Missouri Lawyers Media, Thompson Coburn partner Mike Nepple explored how generative artificial intelligence is reshaping workplaces while introducing a new legal challenge: defamation by AI.
“Everyone knows this thing hallucinates. Everyone knows that there’s a warning sign on there … It’s up to the person who receives the information to verify it,” said Mike. “They’re [OpenAI] warning people. There’s a pop-up saying, hey, this thing may not give you the truthful answer. You need to verify it before you use it.”
One unresolved issue is whether AI companies can claim the same immunity that shields online platforms under Section 230 of the Communications Decency Act. Section 230 generally protects internet platforms from being treated as publishers of third-party content, meaning they aren’t liable for most defamatory posts or comments made by users.
“As a general rule, Section 230 provides platforms with a defense for third-party content, not their own content,” said Mike. “The only time platforms get in trouble is when they create the content.”
Read the full article here (registration required).