Anthropic Claims Fair Use Protection for AI Training Data When Used as 'Intermediate Step'
Anthropic, which is currently facing a copyright lawsuit from the music industry, has submitted comments to the U.S. Copyright Office defending its AI training practices as potentially constituting fair use under specific circumstances.
The AI company argues that its assistant Claude was trained in a "quintessentially lawful" manner, specifically stating that copying protected works as an intermediate step to create non-infringing output can qualify as fair use. Anthropic explains that its training process involves copying information for statistical analysis, which it considers a non-expressive use of copyrighted material.
Key points from Anthropic's position:
- Training process makes copies solely for statistical analysis
- Copying serves as an intermediate step to extract unprotectable elements
- The use is transformative and unrelated to the original work's expressive purpose
However, this stance faces strong opposition. ASCAP directly counters this view, stating that:
- No scenario exists where unauthorized AI training on copyrighted works constitutes fair use
- Such use is not transformative
- Commercial purpose drives each unauthorized use
The debate occurs amid broader developments in AI regulation:
- EU negotiations on the AI Act include calls for mandatory training disclosures
- Recent launches of AI-powered features on major platforms like Spotify
- Multiple ongoing lawsuits against AI companies over training data usage
Universal Music, Concord, and ABKCO are currently suing Anthropic for alleged widespread copyright infringement, a case that highlights the growing tension between AI development and the protection of creative rights.