
Generative AI vs. Copyright: The Fight for the Future of Creativity

The next frontier for copyright skirmishes is here, pitting tech behemoths against intellectual property stalwarts over an irksome question: When a machine conjures a melody reminiscent of a chart-topper, who, precisely, collects the royalties?

On one side stands Horacio Gutierrez, Disney’s chief legal officer and formerly Spotify’s general counsel. He sounds a clarion call against generative AI’s encroachment on human artistry, suggesting that the AI boom threatens to flatten the livelihoods of genuine creators in the name of innovation. The implication is that Silicon Valley’s appetite for consuming culture whole may cause a creativity vacuum.

In the opposing corner are AI proponents such as Mark Lemley, whose article, “How Generative AI Turns Copyright Upside Down,” published in The Columbia Science & Technology Law Review, suggests that labeling AI training on copyrighted works as thievery misreads fair use. AI, in his view, is not copying but learning, not stealing but remixing. It is, in essence, a legal shrug.

Lemley posits that copyright itself needs a rethink given how generative AI upends it. To this, Gutierrez and a chorus of working creatives might retort that the law isn’t upside down; it’s the logic that permits the wholesale ingestion of books, paintings and songs for commercial gain without a nod to their creators that is askew.

The U.S. Copyright Office, navigating these turbulent waters, has offered its own measured commentary. It affirms that existing copyright law is sufficiently flexible to accommodate new technologies, but critically, human authorship remains “essential” for copyright protection.

Works generated by AI can be copyrighted only if a human author has exercised sufficient control over their expressive elements. This includes cases where a human-authored work is perceptible within an AI output, or where a human contributes creative arrangements or modifications to an output. But providing prompts alone is not considered sufficient human control to confer authorship.

The office also weighs in on the training of AI models, acknowledging that while it often involves reproducing copyrighted works, the legality hinges on the doctrine of fair use. This is a fact-specific inquiry. It considers factors such as whether the use is transformative, the nature of the work, how much of it is used, and the effect on the market for or value of the original.

The core tension: protection vs. progress

The chasm between Gutierrez and Lemley boils down to a fundamental tension: Does copyright law primarily exist to protect creators and their economic interests, or should it adapt aggressively to facilitate technological progress and new forms of creativity?

Lemley’s analysis, with its academic edge, suggests that generative AI strains copyright’s foundational doctrines – the idea-expression dichotomy and the substantial similarity test. He argues that creativity increasingly resides in posing the right questions, not necessarily in generating the answers. While asking questions can be creative, much of the work traditionally rewarded by copyright is performed by the AI tool, and that output may not be protected.

This inversion means that traditional tests for infringement may need to be cast aside or applied in fundamentally different ways. For Lemley, clinging to old legal frameworks in the face of such a shift risks stifling innovation and delaying the inevitable. He implies that the disruptive-technology argument has historically won out, and AI should be no different; the challenge, he suggests, is for copyright to catch up to reality, not for reality to bend to copyright’s existing confines.

Gutierrez, meanwhile, contends: Why not uphold the existing framework? Copyright already possesses rules, not least among them the injunction against unauthorized copying. He argues that the music industry, through years of litigation and adaptation, has learned that protecting creators is paramount to a thriving ecosystem. He warns of risking a “body count” of human artists if AI exploits their work without compensation.

Gutierrez suggests that we should enforce what we have before dismantling the entire edifice. Rather than issuing a blank check to AI companies, a licensing regime – one where creators retain the power to assent to or refuse having their work absorbed into the machine – might be a more equitable starting point. For Gutierrez, the focus isn’t merely the technological marvel, but the social and economic consequences that will follow from unbridled technological advancement. He sees a clear threat to the very incentive structure that underpins human creativity.

The difference in their perspectives highlights distinct philosophies: Lemley, from the standpoint of technological evolution and economic efficiency, seeks a legal framework that accommodates and even encourages AI’s expanding capabilities. Gutierrez, from the vantage point of the creative industries and individual artists’ rights, emphasizes enforcing current protections, adapted to ensure fairness and prevent exploitation.

Beyond the legal technicalities

Lurking beyond the legalistic dance between these two perspectives lies a more profound cultural question: Will the future be built on consent and compensation, or on the convenient fiction that internet scraping equates to originality?

Gutierrez’s warning, one that some in the tech industry and academia may prefer not to hear, is stark: Unchecked exploitation risks a creative vacuum. If AI is to augment creativity rather than supplant it, then we must protect the wellspring of that creativity – human ingenuity.

Lemley’s insights are undeniable. But the unspoken truth, articulated by Gutierrez, is that allowing AI to steamroll copyright isn’t just a legal technicality; it is a deliberate choice, one that now serves the architects of the algorithms more handsomely than those who have painstakingly built our cultural landscape.

The tension between their views underscores the broader societal debate: Is innovation a license to disregard established rights, or can it flourish within a framework that respects and rewards the human element of creation?

A sensible middle ground: A proposal inspired by the EU

I was fortunate to have moderated a Q&A discussion with Gutierrez at a recent event after he made remarks about confronting AI with the obligation to protect creative industries’ rights. That discussion sparked my interest in weaving these legal and cultural threads together.

My thoughts here draw from Gutierrez’s call for an opt-out mechanism. The structural solution for copyright law should be to mandate AI companies to disclose the training data and empower regulators to implement and enforce opt-out rights. These changes would move beyond abstract principles and create real protection for original work.

The European Union offers a compelling model. The EU AI Act and related directives require developers to disclose their training data and respect copyright’s boundaries. Although the EU contemplates a consent-based approach in addition to an opt-out mechanism, I believe using that standard sets too high a hurdle for AI implementation.

A consent-first regime risks stalling development and driving up compliance costs. A more balanced approach may be to offer copyright holders a defined, enforceable right to opt out – putting the responsibility on rights holders to protect their work rather than forcing AI developers to obtain permission.

This framework preserves innovation while restoring creators’ control. It rejects the notion that progress must come at culture’s expense or rely on the tech industry’s goodwill. By grounding AI in a system of accountability and transparency, we would ensure that copyright evolves without losing its purpose – to protect the people behind the culture AI now mines.

Why some would choose to stay in

While my proposal calls for enforceable opt-out rights and mandatory disclosures, it’s important to acknowledge that not every copyright holder will – or should – choose to opt out. For some, remaining in the training pool offers strategic advantages.

Some creators may welcome the opportunity for greater exposure. When AI models generate content inspired by existing work, it can renew interest in the original material – driving traffic, sales or cultural relevance, much like musical sampling has done for decades. Others may see long-term value in allowing their work to shape the direction and quality of AI-generated creativity, ensuring that future tools reflect their style, voice or values.

There’s also a practical angle. For copyright holders with large catalogs or less commercially vulnerable materials – such as technical manuals, academic work or niche content – the risk of economic harm is probably low. By participating, these authors could find future licensing opportunities or simply broaden their footprint in an evolving digital ecosystem.

The point of an opt-out framework isn’t to shut the door on AI – it puts creators back in control of whether, when and how their work is used. A system that respects choice doesn’t preclude collaboration; it invites collaboration on fairer terms.

Some may oppose the opt-out framework

Critics of an opt-out framework argue that it presumes AI training infringes copyright and undermines fair use. They point to precedents like Google Books and the interoperability code at issue in Google v. Oracle as examples of lawful transformative use. But AI training operates on a vastly different scale.

These systems absorb millions of expressive works – not to index or interoperate, but to generate derivative content. Courts have not yet decided whether that’s fair. Until courts draw clearer lines, creators should have the right to opt out. This isn’t about presuming infringement – it’s about protecting autonomy while the law catches up.

Others say opt-out is too complicated to implement. Who runs the registry? How do you handle co-authors or disputed ownership? But we already manage complex rights infrastructures: think Digital Millennium Copyright Act (DMCA) takedown databases, Creative Commons licenses or domain name registries. These aren’t perfect, but they function – and tech companies with billions in resources are well-positioned to adapt. Complexity is no excuse for inaction when creators’ rights are on the line.

A third argument claims opt-outs will weaken AI by limiting access to data, especially in high-stakes fields like medicine or science. But this fear is overstated. Developers can still access public domain content, licensed datasets and works whose owners don’t opt out. If models break without access to copyrighted material, that’s a design flaw – not a failure of the copyright regime or the opt-out. And in sensitive domains, it’s more important – not less – to ensure transparency and lawful sourcing.

Finally, some warn that regulation will push AI training offshore, leading to regulatory arbitrage. But abandoning minimal standards isn’t a solution – it’s surrender. Opt-out gives creators a modest tool to control how their work is used. And many creators would benefit from training deals that flow from voluntary participation, rather than from AI silently exploiting their copyrights without anyone knowing it’s happening.

An opt-out system doesn’t stifle innovation – it creates a baseline of respect. It empowers creators without shutting down developers. And it lets the market evolve around disclosure and choice, not coercion. That’s not a threat to AI – it’s a foundation for sustainable progress.

Toward a copyright framework for the AI age

The generative AI revolution demands a choice: Will we let technology reshape the creative economy on its own terms, or will we guide innovation to serve human flourishing? The disclosure and opt-out framework offers a balanced path. It recognizes AI’s transformative power and ensures that creators maintain meaningful control over their contributions.

This approach doesn’t freeze copyright law or hand AI companies unchecked access to cultural works. Instead, it builds a system where innovation and creative rights move forward together. We must get this right. By committing to transparency, accountability and creator choice, we can turn the AI era into a cultural renaissance, grounded in respect for the human imagination that fuels it.
