Major U.S. Newspapers Sue OpenAI, Microsoft Over Alleged Unauthorized Use of Articles for AI Training
December 23, 2024

On April 30, 2024, a coalition of eight major U.S. newspapers filed suit against OpenAI and Microsoft, alleging that the companies used their articles without authorization to train AI models. The action follows the copyright-infringement lawsuit over ChatGPT that the New York Times filed against OpenAI on December 27, 2023.

The newspapers argue that they are entitled to compensation for the unlicensed use of their content, which they say threatens their subscription-based revenue model. The New York Times also emphasizes that AI-generated misinformation could damage its reputation and erode its readers' trust. The lawsuits underscore growing concern that AI chatbots can spread misinformation by generating false news articles falsely attributed to credible sources: ChatGPT has, for example, produced fabricated product recommendations and news articles under the bylines of Guardian reporters who did not write them.

OpenAI has characterized the lawsuit as meritless and accused the New York Times of incomplete reporting on the regurgitated content, contending that any regurgitation stems from older articles hosted on third-party sites and that it is actively improving its models to mitigate the issue.
OpenAI CEO Sam Altman has publicly stated that the New York Times is on the 'wrong side of history' in this ongoing legal battle.
Both sides present strong arguments, and the outcomes could set important precedents for future disputes over AI training data and copyright. The results may also shape how AI companies develop their models and raise critical questions about accountability for AI-generated misinformation.