News

In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn’t break the law by training its chatbot Claude on millions of copyrighted books.
Anthropic told the court that it made fair use of the books and that U.S. copyright law "not only allows, but encourages" its AI training because it promotes human creativity.
Judge William Alsup determined that Anthropic's training of its AI models on purchased copies of books is fair use.
Anthropic used millions of books to train its AI, enraging authors, but a judge recently ruled in favor of the tech company, ...
A federal judge has ruled that Anthropic's AI training on copyrighted books qualifies as fair use, a significant win for the AI industry. However, the ...
To train its AI models, Anthropic stripped the pages out of millions of physical books, scanned them, and then discarded the originals.
This week has seen two high-profile rulings in legal cases involving AI training and copyright. Both went the way of the AI companies ...
An Anthropic spokesperson said the company was pleased that the court recognized its AI training was "transformative" and "consistent with copyright's purpose in enabling creativity and fostering ...