You Can’t Copyright AI-Generated Content. Here’s What Publishers and Indie Authors Need to Know.
- Catherine Forrest
- Apr 15
- 4 min read
Updated: Apr 16
The law is clear. The courts have ruled. The Supreme Court just declined to hear the appeal. If you’re publishing AI content without significant human work, you’re building on sand.
The Post That Blew Up
I like including tables with my LinkedIn posts. I think they give readers a little extra reason to stop scrolling. Last week, I posted a simple one. Four rows. A few hundred words. Nothing special—just a rundown of where copyright law and AI intersect.

It’s been seen over 50,000 times.
Most of the response has been thoughtful. Publishers asking good questions. Folks admitting they didn’t know. People in other fields adding nuance.
And then there are comments from people who really, really want me to be wrong.
I’m not wrong. And getting this wrong has real consequences—for publishing houses with big budgets and for indie authors who can’t afford to lose what they’ve created.
So let me say this once, clearly, for both audiences.
What the Law Actually Says (No Spin, No Fear)
The U.S. Copyright Office has issued multiple guidance statements. Its March 2023 registration guidance says this:
Where the AI determines the expressive elements of its output, the generated material is not protected by copyright.
The courts have agreed. In Thaler v. Perlmutter (2023), Judge Howell wrote that copyright law “has only ever protected works of human creation.”
Last month, in March 2026, the Supreme Court declined to hear the appeal.
This is not a gray area. This is not “let’s wait and see.” This is the law in the United States, right now, today.
What This Means for Publishing Companies
You’re probably using AI to speed up content production. Reports. White papers. Market summaries. Maybe even books or journal articles.
Here’s the question your leaders aren’t asking and don’t want to think about: Are these assets actually ours?
If you publish AI-generated text without significant human modification, that text has no copyright protection. Anyone can take it. Republish it. Compete with you using your own content. And you have no legal recourse.
That user agreement your lawyer wrote? It can’t create IP that doesn’t exist. You can charge for a license to use the output. You can’t stop others from copying it.
An agreement only binds the person who agreed to it. Copyright law protects your IP. Those are two different things. If your customer—innocently or otherwise—emails your work to a friend, your agreement doesn’t bind that third party. They never agreed to it. They’ve received a public-domain work, and if they choose to sell it, they have every right to do so.
How do you make sure your intellectual property is secure?
Use AI as a tool, not an author.
Keep humans in the loop making creative decisions.
And if you want to keep your data—and your copyrights—truly secure, consider local-first AI that never sends your content to the cloud.
I build tools like that, by the way. Ask me how.
What This Means for Authors
You're writing a book. Maybe you’ve used AI to brainstorm, outline, or even draft chapters. That’s fine—AI can be a tool. It's not a monster.
But here’s what the AI bros won’t tell you: If you publish AI-generated text without heavy human rewriting or editing, you’re not just risking quality. You’re risking your rights.
Imagine spending months on a novel. You publish it. A week later, someone else republishes your exact words under their name. You try to file a DMCA takedown. They respond: “This is AI-generated content. The original author has no copyright claim.”
They might be right. And you might have no legal standing to stop them.
How can you make sure your manuscript is protected?
Use AI to get unstuck, not to write for you.
Revise and edit everything.
Make the creative choices yourself.
Keep records of your creative process.
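One low-tech way to keep those records is to log a timestamped, hashed snapshot of your manuscript every time you revise it, along with a note about what you changed. Here’s a minimal sketch in Python (the file names, the JSONL log format, and the `log_draft` helper are all illustrative, not a product I’m describing above):

```python
import hashlib
import json
import time
from pathlib import Path

def log_draft(manuscript: Path, log: Path, note: str) -> dict:
    """Append a timestamped, hashed snapshot entry for a draft to a JSONL log."""
    data = manuscript.read_bytes()
    entry = {
        "file": manuscript.name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "bytes": len(data),
        "note": note,  # describe, in your own words, the creative choices you made
        "logged_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: log two revisions of a chapter
draft = Path("chapter1.txt")
log = Path("provenance.jsonl")

draft.write_text("It was a dark and stormy night.")
log_draft(draft, log, "first human draft")

draft.write_text("It was a dark and stormy night; the rain fell in torrents.")
log_draft(draft, log, "rewrote the opening line myself")
```

The hash proves which version of the text existed at which time, and the notes document the human decisions behind each revision—exactly the kind of evidence that supports a human-authorship claim.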
And if you’re confused about what you can and can’t use AI for, ask someone who understands both the tech and the law.
It’s me. I’m that someone. Let’s talk.
Why I Walked Away From My Publishing Job Over This
I was the highest-ranking publishing executive at the research firm where I worked. Good title. Good pay. Amazing team.
When leadership proposed a policy encouraging AI-generated content without meaningful human review, I explained the copyright risk. I walked them through the Copyright Office guidance. I showed them the court cases.
We disagreed on the level of risk.
I respect that they made a different call than I would have. They have a business to run, and I have principles I won’t compromise.
So we parted ways. Amicably. Professionally. No drama. No bridges burnt.
But I walked away with a clear mission: Help publishers and authors learn to use AI safely and responsibly without losing their rights.
The Bottom Line
Here’s that table again in case you missed it on LinkedIn.
There are people who will tell you that any resistance to or criticism of AI makes you a Luddite. That you’re scared of the technology. That you’re fear-mongering because the future is a scary place.
And there are people who will tell you this tech is the root of all evil. It’s building data centers, driving up energy prices, and training on stolen data.
You don’t have to listen to any of them.
What I’m Building
I’m building local-first AI tools for publishing. They run on your hardware, not in someone else’s cloud. Your data stays yours. Inference costs are tiny, and there are no token costs at all. You get the efficiency of AI without the legal exposure.
I’m a publishing professional who knows the law, builds the tools, and thinks you deserve better than a flimsy EULA.
The polymaths out there can tell you it’s fear-mongering, but the law doesn’t care about their feelings. Your work deserves real protection.
[Table image: where copyright law and AI intersect]
