A Lawsuit Against Perplexity Calls Out Fake News Hallucinations

Perplexity did not respond to a request for comment.

In a statement emailed to WIRED, News Corp CEO Robert Thomson compared Perplexity unfavorably to OpenAI. “We celebrate principled companies like OpenAI that understand that integrity and creativity are essential to realizing the potential of artificial intelligence,” the statement said. “Perplexity is not the only AI company abusing intellectual property, nor is it the only AI company we will pursue with vigor and rigor. We have made it clear that we would rather persuade than litigate. But for the sake of journalists, writers, and companies, we must fight back against content theft.”

However, OpenAI faces its own accusations of trademark dilution. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat attributed fabricated quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed the Times had called red wine (in moderation) a “heart-healthy” food, when in fact it had not. The Times argues that its actual reporting has debunked claims that moderate drinking is good for health.

“As we made clear in our letter to Perplexity and in our lawsuit against Microsoft and OpenAI, it is unlawful to copy news articles to operate substitutive commercial generative AI products,” said NYT director of external communications Charlie Stadtländer. “We applaud this lawsuit by Dow Jones and the New York Post. This is an important step toward ensuring that publishers’ content is protected from this type of abuse.”

Matthew Sag, a professor of law and artificial intelligence at Emory University, says AI companies could face “immense hardship” if publishers prevail in their claim that hallucinations can violate trademark law.

“It’s absolutely impossible to guarantee that a language model won’t hallucinate,” Sag says. In his view, the way language models work, by predicting which words will sound correct in response to a prompt, is always a kind of hallucination; in some cases the output simply sounds more plausible than in others.

“We only call it a hallucination if it doesn’t match reality, but the process is exactly the same whether we like the output or not.”
