Anthropic Defends AI Training as Fair Use, Challenges Music Publishers' Claims of Irreparable Harm
Anthropic has responded to music publishers' demands for an injunction against its AI model Claude, arguing that using copyrighted lyrics for AI training falls under fair use and has not caused irreparable harm to publishers.
Key points from Anthropic's response:
- The company argues publishers haven't demonstrated "irreparable harm," which is required for an injunction
- Any harm the publishers suffer could be remedied with monetary damages rather than injunctive relief
- An injunction would significantly impair the development of its AI models
- The use of copyrighted works for AI training serves a transformative purpose, qualifying as fair use
- Anthropic emphasizes the public interest in allowing AI development to continue
The dispute began when Concord Music Group and other publishers filed a lawsuit alleging copyright infringement in the training of Anthropic's Claude LLM. Anthropic, for its part, has emphasized transparency by publicly releasing Claude's system prompts and has positioned itself as an ethical AI company.
This legal battle extends beyond music publishers: authors have also filed a class-action lawsuit against Anthropic for allegedly misusing their work in AI training.
Alex Albert, Anthropic's head of developer relations, has indicated the company will continue to disclose information about system prompts as they evolve.