While ChatGPT quickly provides users with answers to virtually any question, its well-documented tendency to produce erroneous information is troubling. So it is not surprising that a defamation claim has been launched against the artificial intelligence (AI) platform in Georgia.
Georgia radio host Mark Walters claims that a ChatGPT response cited a legal complaint accusing him of embezzling money from a gun rights group. The problem, Walters says, is that he has never been accused of embezzlement and has never worked for the group in question.
This is the first case in the United States in which an AI platform has faced legal action for publishing defamatory content. In the U.S., the platform’s owner, OpenAI, may argue that Section 230 of the Communications Decency Act protects it from liability. That section shields American internet platforms from legal liability for content created by others that they disseminate. In any event, this defence would not apply in Canada.
As anyone who has used the platform knows, ChatGPT delivers its responses, and its lies, with confidence. It offers no footnotes or source material that a reader can double-check. It simply presents its output as truth.
Reports of alleged “hallucinations,” the industry’s term for fabricated facts generated by ChatGPT, have emerged from around the globe.
In Australia, a mayor said in April that he was preparing to sue OpenAI after ChatGPT falsely claimed he had been convicted of and imprisoned for bribery.
In the United States, a federal judge imposed US$5,000 fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.
“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the judge wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
In Manitoba, a chief justice issued a practice direction requiring lawyers to disclose whether they used AI to craft their submissions.
“While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence,” the direction states.
From a defamation standpoint, the really interesting question is whether falsehoods promulgated by an AI in response to an inquiry will constitute a publication that can then lead to liability.
While an answer to a ChatGPT query is given to only one person, the people who designed the AI platform know, or ought to know, that its responses could gain wide circulation, which opens up the prospect of significant damages.
If an AI platform publishes or creates content that is false, significant harm can be inflicted. In those cases, it is likely courts will be inclined to fashion a remedy to provide compensation.
One mitigating factor for OpenAI is that it expressly admits that the product can produce some false information.
Under the heading “Limitations,” the ChatGPT website notes, “Despite its capabilities, GPT-4 … is not fully reliable (it “hallucinates” facts and makes reasoning errors). Great care should be taken when using language model outputs.”
Whether this disclosure is sufficiently brought to the attention of users has not been tested.
In the Georgia case and in other lawsuits to follow, it is possible that a simple disclaimer buried in the terms of use will not be sufficient to protect AI platforms from liability. Yet if OpenAI prominently and widely acknowledges that many ChatGPT responses are fabricated or “hallucinations,” that will undermine the platform’s usefulness and credibility.
Time will tell whether future versions of the software will be more reliable. In the meantime, when it comes to AI responses, be careful what you repeat.
Howard Winkler is the founder and principal of Winkler Law. For over 35 years, his areas of practice have included media law, defamation and reputation management.