OpenAI Faces Potential Legal Showdown with the New York Times over Copyright Violations
OpenAI might soon find itself at the center of a landmark legal battle with The New York Times. Following a recent update to the Times’ terms of service that prohibits AI firms from using its articles and images for AI training, reports indicate that the media giant may take legal action against OpenAI. If a court were to rule against OpenAI, the company could be forced to rebuild ChatGPT’s dataset from scratch and pay steep fines of up to $150,000 per infringing piece of content.
Sources familiar with the matter informed NPR that the Times is exploring legal avenues to safeguard its intellectual property rights, prompting concerns over the future of AI training models like ChatGPT.
The potential lawsuit comes on the heels of another controversy, in which prominent authors, including Sarah Silverman, accused OpenAI of copyright infringement.
While ChatGPT has been a focal point, other AI tools like Stable Diffusion have also faced scrutiny over copyright concerns. However, the primary issue for the Times is not just about protecting its intellectual property. There are growing concerns that AI tools, drawing from the Times’ rich content, might emerge as direct competitors, generating answers based on the original content produced by the Times’ team.
Earlier this month, the Times explicitly forbade the use of its content for software development, including AI training. Speculation is rife that this move is intended to strengthen its position in ongoing negotiations with OpenAI. An initial licensing agreement that would have had OpenAI pay for using NYT content now appears to be on shaky ground.
For OpenAI to come out unscathed, it would have to mount a “fair use” defense, arguing that its AI models do not compete with original content providers like the Times. Drawing a parallel, experts have cited the Google Books case of 2015, in which Google prevailed because the excerpts it displayed did not threaten the market for the books themselves. The Times’ position, however, is that AI tools like ChatGPT could reduce its online readership.
The Times leadership has previously expressed concerns over AI tools undermining their intellectual property rights. In a memo circulated in June, high-ranking officials of the Times highlighted the challenge of ensuring AI companies respect their intellectual rights and branding.
The Times isn’t the only media house treading cautiously in the AI domain. Last month, the Associated Press entered into a licensing agreement with OpenAI, although details remain under wraps. The Associated Press, along with other media houses, is working towards framing standards for AI usage in journalism, reflecting a broader industry sentiment.
Earlier this year, the News Media Alliance rolled out principles for AI, emphasizing negotiations between AI firms and publishers for content usage. The outcome of a potential OpenAI-NYT legal battle could set a precedent for how AI tools and media entities coexist in the future.
Author
Steve King
Managing Director, CyberEd
King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.