Having already been accused of using pirated books to train its artificial intelligence models, Apple now faces yet another class-action lawsuit. In this latest case, Susana Martinez-Conde and Stephen Macknik, both neuroscience professors at the State University of New York Downstate Health Sciences University, allege that Apple unlawfully used their copyrighted works. The lawsuit claims that Apple employed so-called “shadow libraries” and web-crawling software to access pirated materials — including the professors’ own publications — to train its AI models.
This new lawsuit follows another class action filed just a month earlier by a separate group of authors, who similarly accused Apple of using published literary works without permission to train its Apple Intelligence AI system.
Apple is not alone in facing such allegations. Other major technology firms, including OpenAI, have encountered comparable lawsuits; The New York Times, for example, has sued OpenAI over alleged copyright infringement. Although the AI industry remains in its early developmental phase, these legal battles are already shaping what could become defining precedents. Earlier this year, Anthropic agreed to a $1.5 billion settlement with a class of authors, resolving claims over the unauthorized use of roughly 500,000 copyrighted works in its AI training datasets.
Apple’s latest legal challenge underscores a persistent issue at the heart of AI innovation — the legitimacy of training data acquisition. As content creators and publishers increasingly assert control over how their works are used in AI development, technology companies are being forced to reassess the legality and ethics of their data practices.
Given the scale of Anthropic’s settlement, it is clear that such lawsuits carry massive financial risk and could reshape the strategic direction of AI development across the industry.
Looking ahead, more tech companies are likely to prioritize compliance in how they source training data, entering into licensing agreements with authors, publishers, and content owners to obtain materials legally and mitigate future legal risk.
Yet this shift may also drive a substantial rise in the cost of AI development, raising questions about how increased training expenses might affect the pace and accessibility of future AI innovation, an issue the market will be watching closely.