Large Libel Models
Lawyer's Affidavit in the Colorado AI-Hallucinated Precedent Case
"Overwhelmingly impressed by the technology, I excitedly used it to find case law that supports my client's position, or so I thought."
Colorado Lawyer "Says ChatGPT Created Fake Cases He Cited in Court Documents"
"I felt ... my efficiency ... could be exponentially augmented to the benefit of my clients by expediting the time-intensive research portion of drafting."
First (?) Libel-by-AI (ChatGPT) Lawsuit Filed
"Every statement of fact in the summary [provided by ChatGPT] pertaining to [plaintiff] Walters is false."
Communications Can Be Defamatory Even If Readers Realize There's a Considerable Risk of Error
And AI programs' "tendency [to, among other things, produce untruthful content] can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users. Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity."
Large Libel Models: An AI Company's Noting That Its Output "May [Be] Erroneous" Doesn't Preclude Libel Liability
[An excerpt from my forthcoming article on "Large Libel Models? Liability for AI Outputs."]
Correction re: ChatGPT-4 Erroneously Reporting Supposed Crimes and Misconduct, Complete with Made-Up Quotes?
My Friday post erroneously stated that I got the bogus results from ChatGPT-4; it turns out they were from ChatGPT-3.5, but ChatGPT-4 also yields similarly made-up results.
Large Libel Models: ChatGPT-3.5 Erroneously Reporting Supposed Felony Pleas, Complete with Made-Up Media Quotes?
[UPDATE: This article originally said this was ChatGPT-4 doing this, which was my error. But, as I note below in an UPDATE, ChatGPT-4 also erroneously reports supposed criminal convictions and sentences, complete with made-up quotes.]