# Graydon Hoare's Dark Timeline Shows What We're Not Pricing In
*April 12, 2026*


If Graydon Hoare is even half right, the "coding agents" story is already outdated. Writing code was the story of 2025; 2026 is about offense. Vulnerability discovery is now cheaper than maintenance. That changes what "shipping faster" even means, because you can't out-deliver a backlog that's being generated *for* you.

In [his journal entry](https://graydon2.dreamwidth.org/322732.html), Hoare claims that LLMs have become better at *breaking* software than at writing it, and that the transition felt sudden. "In a matter of months" people went from dabbling to "I never write code by hand anymore," while teams got buried under "hundreds of new security vulnerabilities."

That's not a tooling upgrade. It's a unit-economics inversion.

For practitioners building with agents, the takeaway is this: your throughput is no longer gated by how fast you can implement features; it's gated by how fast you can *absorb adversarial findings*. If exploit discovery becomes a game where attackers only need to be right sometimes, while defenders need to be right always, then your agent stack can make you more productive and still make you lose.

The agent workflow of 2026 isn't an 8-hour autonomous coding session. It's closing the loop between agent-generated diffs and security reality: triage, reproduction, prioritization, patching, and verification under sustained load. Hoare describes himself writing less "main logic" and more "LLM-supervisory bits," plus responding to vulnerabilities. That maps to what many teams are quietly discovering: the work migrates upward into specification, review, and incident response, not into leisurely high-level design.

It also explains the social fallout he reports (issue trackers closed due to "slop," maintainers quitting, forks, dependency rollbacks, licensing fights). When inbound changes and inbound vuln reports become unbounded, "open" stops meaning "anyone can contribute" and starts meaning "we need a firewall." Projects don't fracture because people got emotional; they fracture because maintainership becomes an always-on security operations job, and most open source projects were not staffed for that.

We wrote yesterday about how [AI-generated code creates legal backdoors in GPL projects](https://aeshift.com/posts/2026-04-11-ai-code-is-hollowing-out-open-source-and-maintainers-are-looking-the-other-way/) by diluting copyrightable material. Hoare's account shows the operational side of that same pressure: maintainers aren't just facing a licensing integrity problem. They're facing unbounded inbound volume that turns maintainership into a full-time security operations role.

So here's the take I'm willing to defend from his evidence: if you're adopting coding agents and you don't redesign your security intake and maintenance capacity, you are creating a larger blast radius for the exact same team size.

What to do differently:

1. **Treat vulnerability throughput as a first-class KPI.** Not "number of CVEs" (a vanity metric), but time-to-reproduce, time-to-patch, and patch correctness under regression. If you track incident-response SLAs, apply the same discipline here: dedicated rotation, structured intake, and a revert rate you actually watch. If those aren't improving, more agent-written code is just more surface area.

2. **Build a slop filter.** Hoare's example of closing issue trackers is extreme, but the need is real. You can set up the basics today: require repro cases via issue templates, rate-limit submissions from new accounts, and run deduplication against open issues before anything hits a maintainer's queue. If you can't cheaply reject garbage, you'll drown before the real findings arrive.

3. **Move agent usage toward constrained edits, not unconstrained generation.** The more you can force changes into small, reviewable diffs with explicit invariants, the less you pay downstream when the vulnerability hunters come calling, human or AI.
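The intake basics in point 2 can be sketched mechanically. A minimal example, assuming a GitHub-style issue template with a "## Steps to reproduce" heading; the class name, rate-limit window, and similarity threshold are all illustrative assumptions, not anyone's production config:

```python
import time
from difflib import SequenceMatcher


def is_duplicate(new_body: str, open_issues: list[str], threshold: float = 0.85) -> bool:
    """Fuzzy-match a new report against the bodies of open issues."""
    return any(
        SequenceMatcher(None, new_body, existing).ratio() >= threshold
        for existing in open_issues
    )


class IntakeFilter:
    """Reject reports that lack a repro section, exceed a per-account
    rate limit, or near-duplicate an open issue."""

    def __init__(self, max_per_hour: int = 3):
        self.max_per_hour = max_per_hour
        self.submissions: dict[str, list[float]] = {}

    def accept(self, account: str, body: str, open_issues: list[str]) -> bool:
        now = time.time()
        # Keep only this account's accepted submissions from the last hour.
        recent = [t for t in self.submissions.get(account, []) if now - t < 3600]
        self.submissions[account] = recent
        if len(recent) >= self.max_per_hour:
            return False  # rate-limited
        if "## Steps to reproduce" not in body:
            return False  # template's repro section is missing
        if is_duplicate(body, open_issues):
            return False  # near-duplicate of an open issue
        self.submissions[account].append(now)
        return True
```

None of this is sophisticated, and that's the point: cheap rejection before a human reads anything is what keeps the maintainer's queue bounded.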
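Point 3 can also be enforced mechanically rather than by reviewer willpower. A sketch of a pre-merge gate over unified diffs; the line budget and the list of protected paths are hypothetical illustrations, not values from Hoare's post:

```python
MAX_CHANGED_LINES = 80                   # hypothetical per-diff budget
PROTECTED_PATHS = ("auth/", "crypto/")   # hypothetical invariant-bearing modules


def gate(diff: str) -> tuple[bool, str]:
    """Reject agent-generated diffs that are too large to review
    or that touch modules requiring explicit human sign-off."""
    # Count added/removed lines, skipping the +++/--- file headers.
    changed = sum(
        1
        for line in diff.splitlines()
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))
    )
    if changed > MAX_CHANGED_LINES:
        return False, f"{changed} changed lines exceeds budget of {MAX_CHANGED_LINES}"

    # Collect target paths from the "+++ b/..." headers.
    touched = [line.split()[-1] for line in diff.splitlines() if line.startswith("+++ ")]
    for path in touched:
        if any(p in path for p in PROTECTED_PATHS):
            return False, f"{path} requires human review"
    return True, "ok"
```

A gate like this runs in CI before any human looks at the change; the agent either resubmits a smaller diff or escalates, which is exactly the "constrained edits" posture.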

Hoare ends in confusion and grief, but we can extract an operational takeaway: the winning teams in "LLM time" won't be the ones who write 100x more code. They'll be the ones who can survive 100x more pressure on correctness.

