AI Copyright: Rules, Risks & What Marketers Actually Need to Know

AI copyright law is shifting fast. Here's who owns AI-generated content, what the lawsuits mean for your business, and how to protect yourself in 2026.

LoudScale Growth Team · 13 min read

AI Copyright: Rules, Risks, and What Your Business Actually Needs to Do About It

TL;DR

  • The U.S. Copyright Office confirmed in January 2025 that AI-generated content can only receive copyright protection when a human author determines sufficient expressive elements, meaning purely AI-generated work is unprotectable under current law.
  • AI copyright lawsuits more than doubled in 2025, jumping from roughly 30 cases to over 70 active federal lawsuits, with the $1.5 billion Bartz v. Anthropic settlement signaling that infringement claims carry real financial weight.
  • The biggest practical risk for businesses using AI tools isn’t getting sued for infringing someone else’s copyright. It’s producing content you can’t legally protect, leaving your marketing assets, copy, and creative work exposed to competitors who can copy it freely.
  • Marketers and content creators should build a documented human-authorship workflow for any AI-assisted content they want to own, and this article provides a risk matrix for deciding how much human involvement different content types actually need.

I spent the first three months of 2025 assuming my team’s AI-assisted blog posts were “ours.” We’d prompt, we’d edit, we’d publish. Ownership felt obvious. Then a competitor republished two of our articles almost word-for-word, and when I called our attorney, she asked a question that genuinely rattled me: “Can you prove a human made the creative decisions in those pieces?”

I couldn’t. Not in any way that would hold up.

That experience sent me down a rabbit hole that’s now consumed the better part of a year. And here’s what I’ve learned: most of the advice out there about AI and copyright is written by lawyers, for lawyers. It’s accurate. It’s also almost useless if you’re a marketer, a founder, or a content creator trying to figure out what you can and can’t do with AI tools right now. The number of pending AI copyright infringement cases nearly reached 70 by the end of 2025, and the stakes are only climbing.

So this isn’t a legal textbook. This is the practical framework I wish someone had handed me 12 months ago. You’ll walk away knowing which AI-generated content is protectable (and which isn’t), where the real litigation risk sits, and how to structure your workflow so your content actually belongs to you.

The One Rule That Governs Everything Else

Here’s the rule that matters more than any other: copyright requires a human author. Full stop.

In March 2025, the D.C. Circuit Court of Appeals affirmed this principle in Thaler v. Perlmutter, ruling that a work created entirely by AI cannot receive copyright registration. Dr. Stephen Thaler had tried to register an image produced solely by his AI system called the “Creativity Machine.” The court said no. Not because the image lacked creativity, but because no human being made the expressive choices.

Then in January 2025, the U.S. Copyright Office doubled down with Part 2 of its AI Report, laying out where the line sits. Shira Perlmutter, Register of Copyrights, put it plainly:

“Where that creativity is expressed through the use of AI systems, it continues to enjoy protection. Extending protection to material whose expressive elements are determined by a machine, however, would undermine rather than further the constitutional goals of copyright.”

— Shira Perlmutter, Register of Copyrights, U.S. Copyright Office

Think of it like a power tool analogy. A nail gun doesn’t make you less of a carpenter. But if you set up a robot arm to swing hammers all day with no human directing the design, you didn’t build that house. The Copyright Office sees AI the same way: use it as a tool, and you’re fine. Let it run unsupervised, and you own nothing.

Human authorship in copyright law means that a real person made the creative and expressive decisions that shaped the final work, not just that a person pressed “generate.”

Pro Tip: Simply writing a prompt does not qualify as “human authorship” under U.S. Copyright Office guidance. To protect AI-assisted work, you need documented evidence of human creative decisions: editing, selecting, arranging, or substantially modifying the AI output.

Where the Lawsuits Actually Stand (And Why It Matters for You)

You might think the lawsuit wave is just a problem for OpenAI and Anthropic. It’s not. The outcomes of these cases will directly shape what your business can and can’t do with AI-generated content for the next decade.

Here’s a quick snapshot of where things stood as of early 2026:

| Development | Date | Why It Matters |
| --- | --- | --- |
| Bartz v. Anthropic: $1.5 billion settlement | September 2025 | Largest copyright settlement in U.S. history. Anthropic paid ~$3,000 per pirated book used for training. |
| Kadrey v. Meta: fair use ruling | June 2025 | Court sided with Meta but warned AI training often won’t qualify as fair use. Judge flagged “existential threat” to creative markets. |
| Disney + Universal v. Midjourney: output-focused case | June 2025 | First major film studio AI lawsuit. Targets infringing outputs, not just training data. |
| Disney $1B investment in OpenAI: licensing deal for Sora | December 2025 | Shows that licensing, not litigation, may be the long-term model. 200+ characters licensed. |
| Warner Music settles with Udio and Suno | November 2025 | Opt-in licensing models emerging for music AI. Artists keep control. |

What pattern do you see? The smart money is moving toward licensing. But the lawsuits aren’t slowing down. Morrison Foerster noted in February 2026 that cases are shifting from disputes over training data to disputes over AI-generated outputs themselves. That shift is what matters for anyone publishing AI content. It means the question is no longer just “was the model trained legally?” It’s “does this specific output infringe on something?”

And here’s where it gets tricky for users: who’s liable when an AI tool produces something that looks a lot like an existing copyrighted work? Is it the developer who trained the model? The company that built the product on top of it? Or you, the person who typed the prompt? Right now, nobody knows for certain. Most AI platform terms of service quietly shift that liability to you, the end user.

The Risk You’re Probably Not Thinking About

Everyone worries about the same thing: “Will I get sued for using AI content?” But I think that’s the wrong fear for most businesses.

The more common and more damaging risk? You publish AI-generated content, a competitor copies it, and you have zero legal recourse. Because you never established that a human was the author.

This isn’t hypothetical. It happened to me. And it’s baked into the law right now. The Copyright Office has made it clear that prompts alone don’t provide sufficient human control over AI outputs to qualify for copyright protection. If your content workflow is “prompt, light edit, publish,” you may be producing assets that belong to nobody. Including you.

The U.S. Copyright Office’s Part 3 report, released in May 2025, drives this point even further. The Office recommended that licensing markets be permitted to develop freely rather than through government mandates. But it also made a point that retrieval-augmented generation (RAG) systems, which are the backbone of tools like Perplexity AI, present serious fair use problems because their outputs frequently contain material that’s too similar to the original source. If your AI workflow relies on RAG-based tools to generate content, the infringement risk is higher than you might expect.

So you’ve got risk on both sides. On one hand, your AI-generated content might not be protectable. On the other, AI tools might be producing outputs that infringe someone else’s work. That’s a copyright sandwich, and your business is the filling.

I couldn’t find a practical decision tool for this anywhere, so I built one. I call it the AI Copyright Risk Matrix. It maps two variables: how much human involvement your content has, and what type of AI tool you’re using.

| Content Scenario | Human Involvement | Copyright Protection Likelihood | Infringement Risk | Recommended Action |
| --- | --- | --- | --- | --- |
| AI writes the full draft, you publish as-is | None/minimal | Very low (likely unprotectable) | Moderate | Don’t do this for anything you need to own |
| AI generates a draft, you substantially rewrite | High | Strong (human expression dominates) | Low | Document your edits. Save before/after versions |
| AI generates images with generic prompts | None | Very low | Low-moderate | Fine for social media, risky for brand assets |
| AI generates images mimicking a specific style/character | None | Very low | High | Avoid entirely. See Disney v. Midjourney |
| AI assists with research/outline, human writes | High | Strong | Low | Best practice for protectable content |
| RAG-based tool summarizes external sources | Low | Very low | High | Cross-check for originality. Rewrite substantially |

The core principle: the more human creative decision-making you can document, the more likely your content is both protectable AND defensible. Think of documentation as your receipt. Without it, you can claim you built the thing, but you can’t prove it.

  1. Save your drafts. Keep the raw AI output alongside your edited version. Version control isn’t just for developers anymore.
  2. Log your creative choices. A simple internal note: “I restructured sections 2-4, rewrote the intro, added the case study from our Q3 campaign, and cut 40% of the AI draft.” That’s evidence of human authorship.
  3. Check your tool’s terms of service. Some platforms grant you commercial rights to outputs. Others don’t. And some, like Microsoft’s Copilot Copyright Commitment and Adobe’s Firefly indemnification, offer intellectual property indemnification for enterprise customers. That’s worth paying attention to when choosing your tools.
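
If you want a lightweight way to act on steps 1 and 2, a short script can do both at once: snapshot the raw AI draft next to your final version and record how far the text diverged, with a note about what you changed. This is a sketch, not legal advice; the file names and log format are hypothetical, and the similarity ratio is just a rough signal of how much human rewriting happened.

```python
# Sketch of a minimal authorship log, assuming you keep the raw AI draft
# and your edited final as plain-text files. File names and log format
# are illustrative, not a standard.
import difflib
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_authorship(ai_draft_path: str, final_path: str, notes: str,
                   log_path: str = "authorship_log.jsonl") -> dict:
    """Append a timestamped record showing how far the final text
    diverged from the raw AI draft, plus your edit notes."""
    draft = Path(ai_draft_path).read_text(encoding="utf-8")
    final = Path(final_path).read_text(encoding="utf-8")
    # Ratio of 1.0 means identical text; lower means more human rewriting.
    similarity = difflib.SequenceMatcher(None, draft, final).ratio()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_draft_sha256": hashlib.sha256(draft.encode()).hexdigest(),
        "final_sha256": hashlib.sha256(final.encode()).hexdigest(),
        "similarity_to_ai_draft": round(similarity, 3),
        # e.g. "restructured sections 2-4, rewrote intro, cut 40% of draft"
        "human_edit_notes": notes,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

The hashes pin down exactly which versions you're describing, and the append-only log gives you a dated trail of human creative decisions if a dispute ever surfaces.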

Watch Out: If your AI tool’s terms of service don’t explicitly grant you commercial usage rights to generated outputs, you may be building marketing campaigns on content you don’t legally control. Read the ToS. Yes, actually read it.

What the EU Is Doing (And Why U.S. Businesses Should Care)

If you only sell in the U.S., you might think European regulations don’t affect you. That assumption is getting riskier by the month.

The EU AI Act’s obligations for general-purpose AI models took effect in August 2025. Among them: transparency requirements around training data and copyright compliance. Then, in February 2026, the European Parliament pushed further, proposing a licensing regime that would require GenAI providers to obtain licenses for any copyright-protected works used in training, complete with an itemized list of every work used.

Here’s the part that should make U.S. businesses sit up: the European Parliament proposed that EU copyright law should apply to any GenAI model placed on the EU market, regardless of where the training happened. If adopted, a model trained in the U.S. on works protected by EU copyright could face liability in Europe the moment it’s available to European users. The Parliament’s compromise amendments even suggest that non-compliant providers “should be barred from operating within the Union.”

What does this mean for a marketer in Chicago or a SaaS founder in Austin? If the AI tools you use get restricted or restructured to comply with EU rules, your workflows change whether you sell in Europe or not. The tools themselves will change.

How to Protect Your Content Right Now

You don’t need to wait for the courts to finish fighting. There are things you can do today that protect you regardless of how the legal battles shake out.

Funny enough, the best protection strategy isn’t complicated. It’s just intentional. Most teams I talk to are already doing some version of this; they’re just not documenting it.

  1. Treat AI as your research assistant, not your ghostwriter. Use AI for outlines, drafts, data gathering, and brainstorming. Then write the final version yourself. The Copyright Office is clear: using AI to assist creation doesn’t bar copyrightability. But the human has to make the expressive choices.
  2. Create an internal AI use policy. Specify which tools are approved, what content types can be AI-assisted, and what documentation is required. This also protects you if an employee accidentally generates infringing output.
  3. Run originality checks on AI outputs before publishing. AI doesn’t “know” when it’s reproducing copyrighted material. It just predicts the next token. A plagiarism check won’t catch everything, but it catches the obvious stuff.
  4. Choose tools with IP indemnification when possible. Enterprise plans from Adobe Firefly and Microsoft Copilot offer copyright indemnification for commercial outputs. Free-tier tools almost never do. That difference matters.
  5. Watch the output-liability cases. The Disney v. Midjourney case and similar output-focused lawsuits will tell us who bears responsibility when AI produces infringing work. Those rulings, expected as early as summer 2026, will directly affect how much risk users carry.
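
For step 3, even a crude screen beats nothing. The sketch below checks what fraction of a draft's five-word phrases also appear in a known source passage, which is useful when a RAG-based tool hands you text that may track its sources too closely. The function name and the 0.2 threshold are my own illustrative choices, and this is no substitute for a real plagiarism service.

```python
# Crude originality screen: fraction of the output's word n-grams that
# also appear verbatim in a source passage. Illustrative only; a real
# plagiarism check should still run before publishing.
def ngram_overlap(output: str, source: str, n: int = 5) -> float:
    """Return the share of the output's word n-grams found in the source."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    out_grams = ngrams(output)
    if not out_grams:
        return 0.0  # output too short to form any n-gram
    return len(out_grams & ngrams(source)) / len(out_grams)
```

A workable habit: flag anything scoring above roughly 0.2 against its source material for substantial rewriting before it ships.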

The teams that build documentation habits now will be in a far stronger position than those scrambling to reconstruct workflows after a dispute. Trust me on this one.

If setting up these workflows feels overwhelming, or you’d rather have a team handle it, LoudScale works with brands to build AI-assisted content programs that are both effective and legally defensible.

Frequently Asked Questions

Can AI-generated content be copyrighted?

Not if it’s purely AI-generated with no meaningful human creative input. The U.S. Copyright Office’s Part 2 AI Report from January 2025 confirmed that copyright protection requires a human author to have determined the expressive elements of the work. Using AI as a tool is fine, but writing prompts alone doesn’t qualify as sufficient human authorship. If a human substantially edits, arranges, or modifies AI output, those human contributions can be copyrighted.

Who is liable when an AI tool produces infringing output?

The answer is genuinely unsettled right now. Most AI platform terms of service place liability on the end user for ensuring outputs don’t infringe existing copyrights. But lawsuits like Disney v. Midjourney are testing whether the AI developer bears responsibility for models that readily produce infringing outputs. Morrison Foerster observed in February 2026 that this question of output liability, involving the developer, the product company, and the user, is one of the most complex unresolved issues in AI copyright law.

What was the Bartz v. Anthropic settlement and why does it matter?

In September 2025, Anthropic agreed to a $1.5 billion settlement in Bartz v. Anthropic, the largest copyright settlement in U.S. history. Anthropic paid approximately $3,000 for each of the 482,460 books it downloaded from pirate libraries to use as AI training data. The settlement matters because it proved that copyright infringement claims against AI companies carry massive financial consequences, and it set a benchmark for how much training data is “worth” in legal terms.
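
The headline number is easy to sanity-check from the per-book figure reported above:

```python
# Rough sanity check on the settlement math: ~$3,000 per book across
# the 482,460 books cited in the case.
books = 482_460
per_book = 3_000  # approximate payment per pirated book
total = books * per_book
print(f"${total:,}")  # prints $1,447,380,000 — consistent with the reported ~$1.5B
```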

Does the EU AI Act affect U.S. businesses using AI tools?

Yes, potentially. The EU AI Act’s general-purpose AI obligations took effect in August 2025, and the European Parliament is proposing even stricter transparency and licensing requirements for GenAI providers. The proposed rules would apply to any AI model available on the EU market, regardless of where training occurred. If the tools you rely on must comply, their capabilities, data sources, and pricing could change, affecting your workflows even if you never sell to a European customer.

Should businesses stop using AI for content creation?

No. That would be an overreaction. The Copyright Office has explicitly stated that using AI to assist in the creative process doesn’t bar copyrightability. The key is keeping a human in the loop for creative decisions and documenting that involvement. Businesses that treat AI as a drafting tool (rather than a finished-product machine) and follow basic documentation practices are well-positioned under current law.


Ready to Accelerate Your Growth?

Book a free strategy call and learn how we can help.
