Cursor and SpaceX: In search of a complete loop

Cursor and SpaceX have entered an agreement to co-develop coding and knowledge-agent models, with SpaceX holding the right to acquire Cursor this year for $60B or pay them $10B instead.

In coding–which is perhaps the path to general agents–being a top lab requires owning both the compute to train new models and the product to recursively inform that process.

Both believe they are falling out of orbit. Combined they can complete the loop. Alone neither can. With this structure they can cut the Gordian knot.

The new meta

What does it take to be in the pantheon of top companies in this AI wave? Not just for a moment in time, but in the fullness of time.

There’s often debate about whether companies should be at the model or product layer. But putting aside ideal starting conditions, the growing belief is that both are eventually required. Pressure drives convergence, and this is the new meta.

A company is just a process that hopefully compounds as it scales and improves in its ability to serve customers. A company is not any specific model or product it launches. It is the machine that builds the machine. The best AI labs must establish confidence that they understand how to repeatedly build models and compound on that process. And doing so requires both product and model in tandem.

Achieving the best performance requires co-designing the two. The model is trained on the harness. And only by owning the product and the end customer relationships can the harness be designed to fill in the gaps in the models and then, over time, train those learnings back into the model.

The model is the product. But the harness is required to iterate the model to where it needs to go. As the model improves, the frontier of what it can be used for when supplemented by product expands too.

Being a top lab requires a dynamic equilibrium of the best models to drive your products and the best products to inform your models.

And coding is where this is playing out first.

In search of a loop

It was Anthropic that hit upon this meta with coding models and Claude Code, and every other lab has been running to catch up.

Coding (along with chat) is one of the only categories we have found that generates more than $10B in revenue a year. And it shows no signs of stopping.

But it is not just its revenue that makes coding important. It is that coding revealed this compounding agentic loop between model and product. What today takes a combination of many model calls, tools, and other work is exactly what's needed to teach tomorrow's models to do simply. There is no end in sight to how far this process can be taken in coding. And increasingly labs believe the path to general agents (itself a massive market, or AGI, depending on who you ask) runs through coding models and this same process.

Being able to build state-of-the-art coding models and products is table stakes for competing in this arena. And every lab is waking up to that.

Cursor was early to the coding market, the first to hit traction. But since then Anthropic has released Claude Code, which has taken off, and OpenAI has found its footing with Codex. Both now understand it’s essential to own the product side of coding. And the other labs are following suit, with Google and xAI both reorganizing around coding models and the need to own the product surface area.

Cursor is in an interesting position. On absolute metrics they are crushing it; growth continues unabated at a ~$2B run rate. And yet Claude Code and Codex have both overtaken them, and it is increasingly clear that competing in this market requires them to build their own models. It's rare to both be doing amazingly well and have everyone wonder if you are the walking dead.

Cursor must train its own models. And they've done this, starting with post-training an open-source model for Composer 1, then extending pre-training and post-training for Composer 2, and now beginning to pre-train their own models from scratch. But it is one thing to build budget models with better margins and another to compete head to head on the state of the art. The compute expense is in the billions–if you can even get the compute.

If Cursor believes it can compete at the highest levels but will see its position degrade without matching the AI labs on compute and model training, then Elon and SpaceX are their perfect complement.

Elon merged xAI into SpaceX. Since then, its research leadership has been entirely hollowed out. A morbid joke is that it's been like Iranian leadership: every day a new head of research is battlefield-promoted, and the next day they are gone.

In recent months Elon has become convinced of the importance of coding models, moving from a small team working on them to making them the entire lab's priority. But a coding model is hard to bootstrap from scratch without any of the data or the harness. Even more so without research or product leadership.

xAI has tremendous compute capacity, with plans to scale it as much as or more than any of the other labs. Everything datacenter-related says xAI should get stronger every year, yet it is clearly underperforming that compute capacity. It has the cheapest cost of compute perhaps not just because it is so good at building datacenters but also because no one is using them. It has been a lab with neither product/research direction nor the heads to set one. And it is not converging on its competitors–it is falling behind.

Cursor is running out of time. xAI is in a race against time. Together they solve each other’s problems.

Cursor gives SpaceX research and product leadership that has shown it knows how to build in this space. It's not a sure thing, but no team outside of the labs or China has done more. And the product immediately solves xAI's cold-start problems around data and harness. Meanwhile, SpaceX gives Cursor the compute to compete long-term on both model training and inference scaling.

Between them they have the complete loop. Neither does alone.

Why this structure

A simple compute deal could never work. Elon's top goal is to bootstrap a state-of-the-art coding agent. Renting out xAI's compute for Cursor to do exactly that, but getting only money and no model out of it, would make no sense.

It could have been structured like the Midjourney/Facebook or OpenAI/Microsoft partnerships: stay separate, but share rights to whatever models and weights get developed. Midjourney is especially illustrative because Facebook found itself in a similar position to Elon–needing to buy acceleration. Facebook will build its own image and video models eventually, but paying Midjourney acquisition-level money as a licensing fee gets them immediate access for their own products and a base to accelerate their own training.

This doesn’t work for SpaceX and Cursor. You can’t separate the model and the product. Even if they shared a license to the model, they can’t share the product once the term is up. And there will be far more to do in expanding the product surface area and pushing those learnings back into the model. They have to move as one, which they can’t do if they’re in conflict over building competing products. If Grok and Cursor were going to occupy totally separate markets this might work — but they know the two will become highly convergent.

So they have to be aligned on working together, which requires an acquisition, or at least an option on one.

Cursor might not prefer to be acquired, but they do care about being a true top lab. That’s possible with SpaceX and unlikely on any path of staying fully independent.

Then there’s the question of why use this call-option structure instead of a straight acquisition.

The simplest explanation is IPO timing. A contractual acquisition would have to be folded into SpaceX’s IPO process, which is well underway. Structuring it as a call option lets SpaceX commit enough to lock Cursor in without forcing the acquisition into the IPO registration. Everything else this structure does is a bonus.

But this structure has other advantages as well.

The structure lets both sides get credit for the upside they believe in.

SpaceX believes its valuation will be much higher once it IPOs. They’ve priced it at double the current valuation and it could go higher. Cursor believes it should be valued like a company that can train SOTA models and compete long-term with the top AI labs, especially drawing on SpaceX’s resources.

The acquisition trigger being later in the year allows both of them to vindicate their beliefs and get credit for them. If they did a deal today, SpaceX might have balked at paying $60B, and whatever price SpaceX would pay, Cursor would not accept. But the option to pay $60B later lets them derisk it over the year, and according to their calculations the equivalent equity value today would be $30B or less. By the end of the year we’ll have seen both Composer 3 and the SpaceX IPO.
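The back-of-the-envelope logic can be sketched as a toy expected-value calculation. The $60B strike and $10B fallback come from the deal terms above; the exercise probability and discount factor below are invented purely for illustration, not reported figures:

```python
def option_equiv_value(p_exercise: float, strike: float = 60.0,
                       fallback: float = 10.0, discount: float = 0.9) -> float:
    """Rough present value (in $B) of SpaceX's commitment: with probability
    p_exercise it pays the $60B strike, otherwise the $10B fallback payment."""
    expected_payout = p_exercise * strike + (1 - p_exercise) * fallback
    return discount * expected_payout

# A coin-flip chance of exercise plus modest discounting lands near the
# "~$30B or less" equivalent-equity figure: 0.9 * (0.5*60 + 0.5*10) = 31.5
print(option_equiv_value(0.5))
```

The point of the sketch is only directional: deferring the $60B decision behind uncertainty makes the commitment worth roughly half its face value today, which is why both sides can sign now at numbers they would reject in a spot deal.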

The structure also solves a problem that keeps coming up in AI acquisitions. Companies have traditionally acquired startups for their existing products. In AI, labs increasingly acquire for the founders and team. In a market that moves this fast, the product becomes obsolete quickly. The team is what matters, and the product is mostly proof the team can build.

HALOs emerged for exactly this. If you only want the team, not the product, you structure the deal to just take the team. But HALOs don’t solve the bigger problem: teams that were hungry on the outside often aren’t once they’re liquid employees. That didn’t matter when you were acquiring products and user bases. It matters a lot when you need the team to keep shipping.

The call option structure solves this. SpaceX and Cursor can work together fully while Cursor still has to run like an independent company–because it may very well remain one. SpaceX gets time to build internal capacity and plan for integration and transition rather than discovering post-close that key people are coasting or gone. It’s a structure we’ll likely see more of.

The structure also dominates both sides’ alternatives.

Cursor’s alternative was raising $2B at a $50B valuation to train their own models. $2B would let them train more, but would still leave them far behind OpenAI and Anthropic on capital and compute. They wouldn’t be peer-level competitors. Even if the SpaceX deal falls through, Cursor will have gotten the capital and compute to train at a scale they couldn’t have otherwise, plus $10B dilution-free.

SpaceX’s alternative was bootstrapping a coding model and product from scratch. Beyond the uncertainty of recruiting and organizing a team to do it (which xAI has so far failed to demonstrate it can do), they’d already be spending $10B between founder-level equity packages and compute. If Elon likes the Cursor team for that function, then spending that $10B on Cursor with the right to buy them outright is a better use of the same money.

This is the first deal where two sub-frontier labs plausibly combine into a frontier contender. It probably won’t be the last.

End notes

  1. Until recently, I would have expected OpenAI and Cursor to be the natural fit. OpenAI was under-indexed on coding product for a long time, and Cursor had the same need on the model and compute side. They even share a core investor in Thrive. In the end the timing never lined up. OpenAI didn’t appreciate the importance of coding agents as a product and as a path to general agents in time to pay a price Cursor would take. And now they certainly feel set with the pillar Tibo and Codex have built.
  2. Elon is not the most reliable counterparty. I imagine many lawyers spent a lot of hours in rooms tightening the words in this deal — to prevent even more lawyers from spending even more hours in rooms six months from now.
  3. Andrew Milich and Jason Ginsberg joined xAI from Cursor a month before this deal was announced, to build up xAI’s coding efforts. One day I’d love to hear the exact timeline. Reminds me of CAA and their pursuit of Greta Gerwig. Perhaps Backwards Deployed Engineering is more important than FDE.