
Agentic AI Poses New Legal Risks Beyond Copyright

TLDR

  • The shift from generative AI to agentic AI introduces new liability questions around autonomous actions like signing contracts or executing trades, requiring legal frameworks beyond current IP and compliance concerns.
  • 2026 marks a critical inflection point as major AI regulations move from discussion to enforcement, with the EU AI Act’s high-risk provisions and Colorado’s comprehensive AI law taking effect.
  • Key fair use cases to watch include the consolidated New York Times v. OpenAI proceedings and the Thomson Reuters v. Ross Intelligence appeal, which could set circuit-level precedent on AI training practices.

As the AI industry pivots from tools that assist humans to autonomous agents that execute multi-step tasks, legal experts warn that existing frameworks focused on copyright and compliance won’t address the emerging risks.

“The focus has been a lot on generative AI,” said Rohith George, partner at Mayer Brown and co-author of the law firm’s comprehensive guide, “Technology Transactions: Getting IP Right.” “Everybody’s talking about it, but the next phase is more agentic AI where it’s the AI tool actually executing steps.”

Agentic AI poses a different set of risks. “It’s not just about hallucinations,” George explained in an interview with The AI Innovator. “When an AI agent signs a contract autonomously, or makes a trade, or causes a data breach in some way, who’s liable? Is it the developer? Is it the deployer who puts it into practice? Is it the company who’s supposed to be supervising it? … How do you measure how well it’s doing its job, if it truly is an autonomous agent, and what is the risk or liability?”

The scope of risk expands dramatically as companies begin outsourcing core business functions to AI agents. George explained that as organizations delegate critical operations such as payroll to agentic AI solutions, “it’s less about IP (Intellectual Property). It’s more like, how do we allocate risk if … things just stop working, or if funds go where they shouldn’t have gone.” George believes the risks will be pervasive.

From theory to practice: The 2026 imperative

While agentic AI represents the frontier of legal uncertainty, 2026 will serve as a watershed moment for existing AI regulations as they transition from policy discussion to practical enforcement.

“From a compliance standpoint, we’ve been talking about these various AI laws for a year or two,” George noted. He frames the legal trend as “going from talking about it to actually making sure … our clients are implementing all the appropriate requirements to ensure compliance, because they do come with various potential heavy fines for not complying.”

The EU AI Act’s high-risk provisions take effect in August 2026, requiring companies to implement risk management systems, maintain technical documentation, keep records, ensure human oversight and conduct conformity assessments. Critically, these obligations extend beyond EU borders to any multinational company using AI that affects EU users.

Colorado’s AI Act (C.R.S. § 6-1-1701 et seq.), which George describes as “the first comprehensive set of guardrails” in the U.S., takes effect in June 2026. The Act imposes a duty of reasonable care on developers and deployers to protect consumers from algorithmic discrimination, requiring impact assessments, risk management programs aligned with NIST frameworks, and transparency notices before consequential decisions affecting employment, housing, lending, healthcare, and other critical services.

Enforced exclusively by the Colorado Attorney General under C.R.S. § 6-1-1706, violations constitute unfair trade practices carrying civil penalties of up to $20,000 per violation under the Colorado Consumer Protection Act.

George notes that it could set a template for other states, much like California’s privacy law did.

The state of AI and copyright law

On the intellectual property front, some principles have crystallized while others remain contested. Human authorship remains the standard under U.S. copyright law, meaning purely AI-generated output cannot be copyrighted, according to the U.S. Copyright Office. Protection requires “some meaningful human creative control.”

This creates practical implications for technology contracts. “If AI-generated outputs and content aren’t necessarily copyrightable under U.S. federal copyright laws, then it becomes increasingly important to sign and allocate ownership of any outputs through contract, essentially,” George explained.

The fair use question, whether training AI models on publicly available data qualifies as fair use, remains unresolved, with several key cases advancing through the courts. George highlighted two to watch in 2026.

First, the consolidated Southern District of New York proceedings (including The New York Times Co. v. Microsoft Corp., No. 1:23-cv-11195-SHS, along with related cases involving Daily News LP and the Center for Investigative Reporting) represent “probably the most comprehensive case on text model specifically, text model training versus like image training.” While a jury trial in 2026 remains uncertain, summary judgment rulings could emerge.

Second is Thomson Reuters Enter. Ctr. GmbH v. ROSS Intelligence Inc., No. 1:20-cv-00613-SB (D. Del. Feb. 11, 2025), which carries particular weight as questions have been certified for Third Circuit review, “which would be … precedent-setting in a way that the district courts might not have been,” George noted.

Data hygiene gains new urgency

The emergence of AI as an independent consumer of data has fundamentally changed how companies think about data rights. “I think many companies are realizing that their data has value, because now there’s an independent purchaser interested in buying that data, and so there’s like a valuation associated with it that arguably, you can find,” George said.

This shift is driving renewed focus on what George calls “data hygiene”: the provisions under which companies license data both inbound and outbound. Companies are scrutinizing whether they’re granting overly broad rights to their data through service providers and partners, while also ensuring they have sufficient rights to data from customers and third parties.

George explained that even companies not directly selling their data to LLM providers (as Reddit has done) still benefit from strong data rights. “It’s a competitive advantage, in many cases, the type of data that you’re able to collect and the rights you’ve gained to use that data,” he said.

Hurdles to revenue-sharing at scale

While some content creators have successfully negotiated direct licensing deals with AI companies, broader revenue-sharing models face significant practical hurdles. The challenge lies in attribution: determining which training data contributed to any given AI output.

“The model is certainly feasible. I mean, it’s certainly interesting in theory, but I think figuring out how to implement it would be pretty challenging,” George said. The complexity differs depending on how the AI uses the data.

“It’s a little bit easier when it’s just retrieving data,” he explained, because retrieval creates a clearer connection between source and output. “But using it to train is much, much, much more difficult to kind of allocate” because the model synthesizes massive datasets in ways that obscure individual contributions.

As the legal framework for AI continues to evolve, George emphasizes that established practices for human accountability don’t yet exist for autonomous AI systems. “We’ve been handling this for decades in terms of when a person does something for you, like we’ve figured out how to allocate responsibility, how to allocate liability, you know, how to measure performance through SLAs. We just haven’t gotten there with agentic AI, and I think that’s a really interesting area where the market’s going to develop norms and trends on how to actually handle this type of scenario.”

For organizations navigating this landscape, the message is clear: 2026 represents a transition from theoretical preparation to practical implementation, with new risks emerging even as old questions remain unresolved.

Author

  • Melissa Winblood

    Melissa Winblood, an attorney with 34 years of legal experience, is the founder and CEO of Counsel and Code, a consultancy providing practical AI implementation guidance to attorneys and law firms. A recent graduate of the University of Texas McCombs School of Business's intensive AI and Machine Learning program (4.19 GPA), she learned to code, build LLMs and deploy models. With this rare combination of deep legal expertise and hands-on technical skills, she translates complex AI capabilities into practical applications that help legal professionals work more efficiently and stay ahead of the curve.