# Your Coding Agent Has a Supply Chain Problem
*March 23, 2026*


The problem isn't that Cursor built on Kimi. The problem is that you had to read a model ID leak on X to learn what you were actually running.

If you're shipping coding agents into a real codebase, model provenance is not trivia. It's a dependency. And dependencies need changelogs, constraints, and clear ownership.

Cursor launched Composer 2, promoting it as ["frontier-level coding intelligence"](https://techcrunch.com/2026/03/22/cursor-admits-its-new-coding-model-was-built-on-top-of-moonshot-ais-kimi/), without mentioning that the model was built on Moonshot AI's open-source Kimi 2.5. An X user noticed identifiers pointing to Kimi in the code. Cursor's VP Lee Robinson then confirmed the base model, stating that only about a quarter of the compute spent on the final model came from the base, with the rest from Cursor's own training. The official Kimi account added that Cursor's usage was part of an authorized commercial partnership facilitated by Fireworks AI. Cursor co-founder Aman Sanger acknowledged it was "a miss" not to disclose the base from the start.

That admission matters for practitioners: you can't evaluate reliability, risk, or fit if you don't know what the system is.

"Built on top of" is doing a lot of work here. Cursor is asking teams to accept two claims at once:

1) *The base doesn't matter much* (only a quarter of the compute; benchmarks are "very different").
2) *The base matters enough to start there* (otherwise, why do it?).

Both can be true. But if they're true, you disclose the base model anyway, because the base affects the shape of failures: multilingual behavior, refusal patterns, memorization risk, and the long tail of weirdness that shows up only after a tool is embedded in CI and developer workflow.

TechCrunch flags the geopolitics of a U.S. company building on a Chinese model. But the practitioner concern is more immediate: enterprise review and supply-chain policy. If your internal approval process includes vendor questionnaires, data-handling requirements, or country-of-origin scrutiny for critical components, "we'll fix that for the next model" is not an answer you can pass to procurement.

Cursor says its Kimi usage aligns with the license and notes the commercial partnership via Fireworks AI. Licensing compliance is table stakes. The issue is operational transparency: when the foundation is undisclosed, you can't map where policy decisions need to happen. Is your security team evaluating Cursor, Fireworks, Moonshot, or all three? What changes if the base model changes? What if a future release swaps the foundation entirely and you only learn about it from another X post?

Think of it as the model equivalent of a software bill of materials. Teams should start demanding a basic standard from coding-agent vendors:

- Name the base model at launch. Don't bury it in a retroactive tweet.
- Publish a model lineage note per major release: base, training deltas, and what materially changed.
- Document what's contractually stable, including model availability, region, and deprecation timelines.
- Provide a consistent evaluation mode so benchmark comparisons across releases mean something.

Cursor says they'll "fix that for the next model." They should fix it for this one too.

We've been tracking how the agent supply chain keeps surprising teams in new ways. Two weeks ago, the [Pentagon's blacklisting of Claude](https://aeshift.com/posts/2026-03-10-anthropic-sues-pentagon-over-alleged-ai-blacklist-on-claude/) showed that government action can vaporize model access overnight. Last week, [OpenAI's acquisition of Astral](https://aeshift.com/posts/2026-03-20-thoughts-on-openai-acquiring-astral-and-uvruffty/) showed how toolchain coupling creates lock-in through soft mechanisms rather than licensing changes. Today's episode is a third facet: you can't manage what you can't see. Political risk and toolchain coupling are hard enough when you know what you're running. When the foundation is undisclosed, you're not even playing the right game.

Coding agents are becoming production infrastructure. Provenance is part of the reliability story. If a vendor won't tell you what it's built on, you're not buying intelligence. You're buying surprise.

