AI in Defamation Lawsuit Cases


Defamation by AI Chatbots

AI is increasingly a component in defamation cases. As an expert witness in defamation, I have seen AI play a part in multiple defamation cases at this point.

A number of the top generative AI companies have modified and trained their AI bots to reduce the likelihood of their committing defamation, and to reduce their exposure to defamation liability. They could reduce it even further, but doing so might also limit the machines’ effectiveness, since it would reduce how much information they could relay from data sources.

In the last year, I was contracted by attorneys representing an organization and an individual, both of whom had been allegedly defamed by Meta’s Llama AI. However, Meta apparently modified their system to reduce or eliminate the defamatory imputations after hearing from my clients’ attorney. Even so, some of the answers can still convey defamatory claims made by others.

Simply submitting a person’s or a company’s name to AI chatbots generally does not surface defamatory content. But asking the more pointed questions that people are likely to ask will produce responses that reiterate negative, damaging, and defamatory content from the internet. For instance, people are very likely to ask “Does so-and-so have a good reputation?”; “Does so-and-so have any reputation problems?”; and “Are there any concerns about so-and-so?”

Increasingly, the AI chatbots, such as ChatGPT, Gemini, Claude, Grok, and others, will use qualifying language when repeating gossip or reputation-damaging content. Language I am seeing includes phrases like “some have said…,” “there are allegations of…,” “someone with that name has been alleged to…,” etc.

For those being harmed by what generative AI is saying about them, the question is likely to be: “Who is responsible?”

In a number of my cases, I have found that the allegedly defamatory statements originated with people who posted them online. I do not know the full legal theories about which party is responsible or partially responsible, but I would imagine that the large tech companies will invoke Section 230 of the Communications Decency Act, which has largely immunized internet companies from liability for what other parties say on the internet. Under the Section 230 theory, Facebook is not responsible for what people post there, and Google is not responsible for the webpage content it reflects in search results.

On one hand, if chatbots transmit allegedly defamatory material more or less verbatim, or paraphrased from others who originated it, practical sense suggests they should be protected under Section 230. However, I have seen instances where chatbots make defamatory statements about subjects absent the context of the source materials — such as transmitting a defamatory claim from a news article without noting that the defamer had lost a lawsuit over that very defamation. In other words, the generative AIs can make mistakes with some frequency in how they represent people and companies, and there is an argument that the companies producing the AI bot agents should be responsible for defamation.

One of the worst aspects is that the generative AI models are not truly “intelligent”: if an individual has widely and repeatedly spread a defamatory claim, the AIs can interpret the volume of damaging claims as something like a community consensus, lending the disparaging assertions greater weight than they deserve.

Another concern is determining how long inaccurate and defamatory data may “stick” inside the AI models. While one hopes that deleting defamation from source materials would quickly lead to its removal from the chatbots, there is no standardized policy on this, and consumers really need some formalized protections.

One of the most difficult aspects of defamation cases involving AI is determining how many people may have been exposed to the false and defamatory assertions about a victim. The AI companies are not sharing much data about the volumes and types of prompts submitted to them, and this could affect how much weight a court should put on what AI says about a reputation. As it currently stands, AI use has increased extraordinarily in the last few years, and the AI overviews appearing in search engine results are prominent in affecting online reputations.

If you are seeking an expert witness involving slander or libel transmitted via AI chatbots, contact The Defamation Expert Witness, Chris Silver Smith. You need someone with a background in technology, search engines, online reputation management, analytics, and programming.


