It’s time to reframe the copyleft threat posed by coding agents from low-quality PR spam to copyright concerns. A recent analysis lays out the argument that maintainers are merging code that, under the logic in play today, may not be copyrightable at all. If the code isn’t copyrightable, the GPL can’t bite.
The US Copyright Office’s current position is that LLM outputs are uncopyrightable because prompts alone don’t provide sufficient human control over the result. That turns AI-assisted commits into public-domain islands inside copyleft repos. Over time, those islands can connect. Then your “GPL project” becomes, in practice, a pile of code that competitors can reuse without reciprocity.
That should change how practitioners think about using agents in open source. Instead of asking "did we review the code?" maintainers need to ask "what, exactly, are we licensing?"
Copyleft works because it forces sharing on derivative works through existing copyright mechanisms. Stallman’s trick was to use the law to create a safe zone of cooperation where companies can contribute without getting undercut by a closed fork. The source material makes a persuasive case that AI outputs punch holes in that zone. If a chunk of your codebase is public domain, it’s not governed by your copyleft terms, so downstream users can lift it and ship it closed.
Your usual engineering instincts don’t save you here. A maintainer might say: “We reviewed it, we tested it, we merged it under a human name. It’s ours.” That doesn’t match the Copyright Office’s framing. The monkey selfie case offers the precedent. David Slater set up the camera, framed the shot, and chose the best photo, but because a macaque pressed the shutter, he couldn’t claim copyright. Prompting an LLM, reviewing its output, and committing under your name follows the same logic: you directed the process, but that’s not authorship. Testing and reviewing aren’t creative input. And modifying 10 lines in a 1,000-line changeset gives you copyright only over those 10 lines, not the other 990. That’s a hard pill for software people because we’re used to “works” being validated by function. Copyright doesn’t care if it compiles.
This changes what maintainer responsibility means. Accepting AI-generated contributions is becoming more than a quality and maintenance decision; it's a governance decision with license integrity on the line. If you maintain a GPL/LGPL/MPL project and you're not tracking the provenance of large AI-generated patches, you're not merely being permissive; you may be diluting the one enforcement tool your license depends on.
I’m sympathetic to maintainers here. The incentive gradient is brutal: AI code is fast, contributors are happy, and the backlog shrinks. But if the source material is right, that short-term win is a long-term loss. Copyleft projects won because they created predictable rules for cooperation, not because they shipped the most pristine code. Code that’s public domain by default undermines that predictability.
The implications get worse for new projects. The source material focuses on AI code eroding existing copyleft repos, but follow the logic one step further: a project written entirely by AI has no copyrightable material at all. Every open source license is a copyright license. No copyright, no license. The LICENSE file in the repo becomes decorative. This isn’t just a copyleft problem. An MIT-licensed project written entirely by AI may not be able to enforce its attribution requirements either. The practical difference is that permissive licenses lose little (you could already do almost anything with the code), while copyleft licenses lose everything. As more projects are bootstrapped by agents, we’ll see an increasing number of repos where the license is a social signal rather than a legal instrument.
So how do you change your thinking around this?
First, treat agent-generated code like a new input class, not “just another contributor.” At minimum, projects should require contributors to disclose when substantial portions are LLM-generated, because the legal status is now part of the artifact.
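Disclosure can piggyback on conventions projects already use, like git commit trailers. As a minimal sketch (the `Assisted-by:` trailer name is an assumption, not an established standard; projects would pick their own convention), a review tool could flag commits that declare substantial LLM assistance:

```python
# Sketch: detect a hypothetical "Assisted-by:" trailer in a commit message,
# so review tooling can flag AI-assisted patches for provenance tracking.

def has_ai_disclosure(commit_message: str) -> bool:
    """Return True if any trailer line declares LLM assistance."""
    for line in commit_message.splitlines():
        # Git trailers are "Key: value" lines at the end of the message,
        # like the familiar "Signed-off-by:" trailer.
        if line.strip().lower().startswith("assisted-by:"):
            return True
    return False

msg = """Add retry logic to the fetcher

Assisted-by: LLM (substantial portions generated)
Signed-off-by: Jane Dev <jane@example.com>"""

print(has_ai_disclosure(msg))  # True
```

A CI job could run a check like this over each pull request's commits and require human review sign-off, or a provenance label, whenever the trailer appears.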
Second, stop pretending this is only a legal department problem. If your project’s value proposition includes “you can’t take this private,” then provenance belongs in your engineering workflow: contribution templates, review checklists, and maintainership norms.
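As one hypothetical starting point (the exact wording is illustrative, not a recommended legal form), a pull-request template could surface the provenance question at review time:

```markdown
## Provenance

- [ ] No LLM assistance was used for this change
- [ ] An LLM generated part of this change (describe which files/sections below)
- [ ] An LLM generated substantially all of this change

If LLM-generated: which tool, and what human modifications were made?
```

The point is not that a checkbox settles the legal question; it's that the answer gets recorded where maintainers make the merge decision.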
We’ve been tracking how the agent supply chain keeps surprising teams in ways that existing processes don’t catch. Model provenance was the first gap; code provenance is the next one. If you lead a team that depends on copyleft software, don’t assume the copyleft moat stays deep while you automate contribution. When the ecosystem starts merging uncopyrightable code at scale, the strategic difference between copyleft and permissive licensing narrows in the worst possible way: not by policy choice, but by entropy.
Coding agents don’t just change how we write code. They change what open source is, unless maintainers make license integrity an explicit part of how they accept contributions.