As mentioned in yesterday’s editorial, there’s a significant copyright case brewing over artificial intelligence (AI) models.
Venerable 172-year-old news outlet The New York Times is taking court action against OpenAI, the maker of the AI chatbot ChatGPT, and Microsoft, which has invested more than $13 billion in OpenAI.
The Times claims that “millions” of its articles have been used without its permission to train automated chatbots that now compete with it as “reliable sources of information”.
The lawsuit alleges that when ChatGPT is asked about current events, the AI model will sometimes generate “verbatim excerpts” from Times articles that cannot be accessed without paying for a subscription – with the result that the Times is losing subscription revenue as well as advertising clicks on its website.
As such, it’s claimed that the two tech companies are effectively attempting to “free-ride on the Times’ massive investment in its journalism”.
“There is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for the Times and steal audiences away from it.”
The term “transformative” is important here: under US copyright law, a use that adds “something new, with a further purpose or character” is “more likely to be considered fair”.
This is the first time a major news publisher has sued AI creators, although there are a number of other copyright cases on the go, including Getty Images pursuing AI image generator vendor Stability AI for copying and processing millions of its images.
Although the lawsuit doesn’t seek a specific amount of damages, the Times estimates they’re in the “billions of dollars”.
Observers suggest the fact that OpenAI is valued at more than $128 billion is what prompted the Times to pull the pin on negotiations with the tech company in favour of the legal route.
They also note that a ruling in this case would be hugely significant in terms of clarifying copyright protection and fair use of AI technology and data, but it’s more likely the tech companies will seek to settle, because you don’t entertain questions you might not like the answer to.
The smart money appears to be on the Times being keen to settle too – presumably on the basis that the horse has bolted and if you can’t beat ’em, you’d best join ’em. That way it ensures it gets an ongoing slice of AI-related revenue, following Getty Images’ lead by partnering with a major AI vendor to use its journalistic content to train up models or build an AI application.
While that’s understandable from a bottom-line-driven corporate perspective, when it comes to the rise and rise of AI it’s hard not to think there’s a great deal more at stake than profit-sharing.
That’s certainly the view of singer, songwriter, and novelist Nick Cave, who recently railed against ChatGPT as a threat to what it means to be human.
“ChatGPT is fast-tracking the commodification of the human spirit by mechanising the imagination. It renders our participation in the act of creation as valueless and unnecessary,” Cave wrote in response to a fan’s question about using AI models as a creative shortcut.
“It is our striving that becomes the very essence of meaning. This impulse – the creative dance – that is now being so cynically undermined, must be defended at all costs, and just as we would fight any existential evil, we should fight it tooth and nail, for we are fighting for the very soul of the world.”