This year marks three years since The New York Times sued OpenAI and Microsoft for copyright infringement. Although the outcome could be a milestone in clarifying whether AI vendors can train models on large amounts of creative content without creators' permission, the case is still pending.
The New York Times’ lawsuit highlights the difficulties and challenges of adjudicating AI lawsuits and provides a backdrop for what 2026 will look like in the battle between creatives and AI vendors.
In the suit, filed on Dec. 27, 2023, the Times accused generative AI vendor OpenAI and its then-main backer, Microsoft, of using millions of its copyrighted articles without permission to train their models. The Times, the largest newspaper and news website in the U.S., claimed that OpenAI’s generative AI tools competed directly with its publishing business. In response, OpenAI claimed that the news publisher was stifling innovation.
The Times’ suit was one of a host of AI lawsuits accusing AI companies of “stealing” creative works, including music and artworks, to train AI models. In 2023, several companies, including Getty Images, sued imaging model vendor Stability AI for copyright and trademark infringement. Author and comedian Sarah Silverman also sued OpenAI for using her books to train its large language models (LLMs). Other authors sued generative AI vendors, including Anthropic, decrying, once again, the use of their creative works as training material for AI models, all without permission.
Each lawsuit created ripple effects. The key motivation for authors, publishers and musicians was to secure permission, and even some form of compensation, before their work is used to train an AI model that could make them irrelevant or, in the case of The New York Times, threaten their core business.
More than two years after the Times sued OpenAI and Microsoft, and three years after Getty Images sued Stability AI, there is still no definitive conclusion. It is unlikely these cases, which are still being litigated, will be resolved this year. It is probable, however, that the publishing alliances that began to form between creatives and model makers will continue to grow.
More Emphasis on How LLMs Transform Material
The question of what constitutes fair use when AI model makers train their models will depend on the transformative power of the technology. Fair use is the legal doctrine that allows one party to use copyrighted material without the owner’s permission for purposes such as news reporting, research or other uses that serve the public interest. The related concept of transformation holds that if AI transforms the original training data into something entirely new or different, the use is legitimate. While no precedent was set, the judge in the Anthropic case ruled that fair use could be upheld because Anthropic transformed the data.
“In 2026, courts are likely to clarify how they distinguish ‘transformative’ training from substitutive uses, especially when models are general-purpose rather than direct competitors,” said Kashyap Kompella, CEO and founder of RPA2AI Research.
The judicial system was already considering the transformative power of AI technology in 2025. For example, in June, U.S. District Judge William Alsup ruled that, due to the transformative power of LLMs, fair use was a plausible argument in the case of a group of authors versus Anthropic. The judge, however, decided to let the case go to trial because of Anthropic’s use of pirated books. In September, Anthropic agreed to pay the authors $1.5 billion, an amount that is considered one of the largest copyright settlements in U.S. history.
The settlement shows that while fair use will remain an important argument in these AI lawsuits, in 2026 courts will focus more on how training data is gathered, such as whether it was pirated or whether its use violated contractual agreements, Kompella said.
More Legal Settlements
The Anthropic settlement also indicates that more settlements could be coming in 2026.
“A single large settlement resets expectations across the plaintiff bar and litigation-finance ecosystem, increasing pressure to resolve cases once core facts are established,” Kompella said.
Model makers are not the only ones considering whether to settle; many publishers and authors will also use the Anthropic settlement as a criterion for whether to go to trial.
The key reason most are pushing for settlements is that a trial has major implications not only for the case at hand but also for the entire industry.
“A trial is a risk for everyone, and the risk is that you could set a bad precedent for yourself and for the rest of the parties that are aligned with you,” said Michael McCready, owner of McCready Law in Chicago.
The challenge is that if creatives win, it could mean financial distress or even bankruptcy for some AI companies, particularly those without the strong financial backing of big AI vendors like Anthropic. And if AI vendors win, creatives such as publishers, musicians, authors and artists get nothing.
“There really is so much at stake here,” McCready said. “It is worthwhile for both sides to come to a negotiated agreement.”
However, not all cases will settle. “At some point, someone is going to take this all the way, and we will have the first definitive decision on how these issues will be addressed in the future,” he continued.
One publisher that will likely settle in 2026 is The New York Times in its suit against OpenAI and Microsoft, said Michael Bennett, associate vice chancellor for data science and AI strategy at the University of Illinois Chicago.
“My notion there is that so many pieces of journalistic content are alleged to have been infringed by OpenAI and Microsoft, and it’s likely, there’s a decent chance, at least, that’ll be an incentive, against the backdrop of the Anthropic settlement,” Bennett said. “That’ll be a huge incentive for OpenAI to accept settlement terms that work for both parties.”
He added that many vendors are considering not only the legal battles they face in AI lawsuits but also their reputation.
“Large AI companies, specifically, need to be concerned with the potential legal risk, the intellectual property-based risks of these suits, but they also need to be concerned with the potential tarnishment of their brands when they are alleged to have pirated, stolen, or used without permission or compensation others’ creative works for the purposes of training their systems,” he said. But a settlement will depend on what a vendor can financially afford, Kompella said.
More Licensing Deals, But No Collective Industry Standard
This means that in 2026, there could be more partnerships and AI licensing deals. The number of licensing deals has been growing, including The New York Times’ deal with Amazon last May, reportedly worth $20 million to $25 million. Another major agreement is Google’s deal with Reddit to use Reddit’s user-generated content to train its Gemini models.
Other AI vendors are creating programs that group publishers together. For example, the Perplexity AI Publisher Program includes partners like The Los Angeles Times and a revenue-sharing model that pays publishers when the AI search vendor’s chatbot uses their content in an AI-generated response.
New business opportunities created, ironically, by these legal disputes could well continue to arise this year.
“Certain companies that claim to have been infringed by large AI companies and their training efforts have gone on, in several instances with AI companies, to create new business enterprises,” Bennett noted. One notable example is Getty Images. After suing Stability AI, the stock image provider created its own AI product, Generative AI by Getty Images.
Despite the increase in licensing deals, it is unlikely that there will be a collective agreement between AI vendors and creatives on a scale similar to the compensation model that currently supports the music industry.
In 2001, the music-sharing site Napster agreed to pay $26 million to settle lawsuits over illegal music sharing. The deal later failed after Napster filed for bankruptcy and a judge blocked an acquisition deal with Bertelsmann, the Germany-based multimedia conglomerate, but it laid the groundwork for the compensation model that protects the music industry today.
The AI industry is not ready for that kind of model, and it will be challenging for creatives, Bennett said.
“I would be surprised if we saw something of that scale, that ambition, in large part because of the wide range of materials that have very likely been sampled and/or simply straight-up appropriated in order to train the models,” he said. He added that AI model makers have used many types of written texts to train the models, and not all those texts receive the same level of protection. For example, journalistic writing receives less protection than works of fiction.
“Those differences would make it difficult for a very large group of content creators to get together,” Bennett continued. “I would not expect millions of people, or tens of millions of people, or anything like that. But you could imagine something smaller, thousands of people.”
On the other hand, there could be a consensus in the creative world around “enforceable dataset transparency, scalable licensing for high-value corpora such as publishers, music catalogs, and stock libraries, and output-side guardrails like provenance tools, watermarking, and restrictions on artists,” Kompella said.
Emphasis on Bigger Issues
It is also likely that 2026 will be the year when intellectual property lawsuits recede somewhat as the tech world and government regulators turn their attention to other, bigger problems that the use of AI is creating for society, Bennett said.
These include generative AI’s impacts on employment, education and energy production.
Another type of lawsuit that could arise is over algorithmic bias. One current suit is Mobley v. Workday, in which Derek Mobley claims that Workday’s screening tool disadvantaged his job applications. Another case, which exposed a different type of bias, involved two Black women in Massachusetts, Mary Louis and Monica Douglas, who sued SafeRent Solutions, a tenant screening company, in 2022 over the alleged bias in its algorithm against Black renters. SafeRent later settled for $2.275 million.
“Many bias cases end through operational remedies — audit constraints, monitoring, usage limits — rather than a sweeping ‘AI is unlawful’ holding,” Kompella said.
Regardless of the type of AI lawsuit, what is evident is that some clarity should emerge about AI technology that uses creative works, said James Cooper, a professor at California Western School of Law.
Currently, local and regional jurisdictions are being forced to decide what is allowable in most of these cases. While many are waiting to see how the lawsuits play out, many say the legislative branch of government needs to get involved to create effective regulatory frameworks. “AI is moving fast, and it is time that our regulators do their jobs rather than our respective societies having to rely on the judiciary to deal with this rapidly evolving technology,” Cooper said.
He added that while the courts have been shouldering most of the work in sorting out complex issues of IP ownership, lawmakers and regulators need to do more to provide binding guidance that all vendors and creatives can follow.
However, politics could impede any strong moves by Congress, McCready said.
“There’s a lot of interests at stake here, and to find consensus on anything these days is nearly impossible, so I don’t see Congress touching this with a 10-foot pole,” he said.

