Anthropic Defends AI Training Methods, Cites Robust Safeguards Against Copyright Infringement in Music Publisher Case

By Marcus Bennett

December 24, 2024 at 11:15 PM

Anthropic is opposing a preliminary injunction requested by major music publishers over alleged infringement of copyrighted musical works by its AI chatbot, Claude.

The publishers seek two main forms of injunctive relief:

  • Remove protected works from Claude's training data
  • Block protected lyrics from appearing in Claude's outputs

Key points from Anthropic's opposition:

  1. Fair Use Defense
  • Using copyrighted works to train LLMs constitutes fair use
  • Training data usage is "transformative" under fair use doctrine
  • Any potential damages could be compensated monetarily
  2. Technical Context
  • Claude learns from "trillions of tiny textual data points"
  • Training data likely includes some copyrighted works
  • The research behind Claude predated its commercial release by nearly a year
  3. Protective Measures
  • Implemented "broad array of safeguards" to prevent copyright violations
  • No reasonable expectation of continued infringement
  • Disputes claims of ongoing market and licensing harm

Supporting the opposition, Anthropic co-founder Jared Kaplan provided detailed testimony about Claude's training process and technical specifications.

The case (5:24-cv-03811) remains ongoing, with reports suggesting that a significant portion of the claims may be dismissed in the near future. The outcome could set important precedents for AI training data usage and copyright law.
