Narrative Distillation

The shifting sands of tech IPOs

If the IPO process is like a debutante ball, the top investment banks are akin to a finishing school. They help gussy up companies, teach them proper manners like how to do GAAP accounting, and bring them around to call on prospective investors and eventually debut to society.

This is the role investment banks have played for decades, but in recent years this dynamic has begun to break down. Not in all sectors—in most sectors investment banks still occupy the same role—but in tech, the importance of investment banks has shrunk and their role has become commoditized.

Historically, raising capital was difficult and public market investors had little awareness of the companies going public. Investment firms were not focused on tech companies, and retail investors had no exposure to or familiarity with startups, especially enterprise ones. In this environment, it was the idiosyncrasies of the capital markets that mattered, not the uniqueness of each company. In this model, the company itself is not that special. What matters is the standardized process for making the company fit the mold investors expect from an investment asset. Companies may have potential, but they don’t know how to introduce themselves to the investor community in the public markets.

In every marketplace, one’s power is proportional to the value one adds to the transaction. While investment banks view themselves as playing an important role in vetting the quality and rigor of the companies ready to IPO, that seems less true today.1 Banks used to be gatekeepers because markets needed to be told which companies were good. However, discovery is no longer the constraint. Companies are better known, and thus the relative leverage and importance of the sell side is falling. And as investment banks increasingly manage only the logistics of the IPO process, they become less important in dictating its terms.

The best founders have figured out that owning their narrative gives them meaningful leverage. Founders and companies can increasingly communicate their narrative in a direct and compounding way to investors. And the roadshow is a progressively smaller component of investors’ views on the company.2 What makes SPACs and Direct Listings notable is not their cost structure, but that they allow companies to much more directly market their IPOs.3

i honestly wrote this entire piece only as a setup for this joke. and i have no regrets.

It has never been easier for companies and public market investors to connect directly. Many of the tech equity funds now do both public market and private pre-IPO investing (and increasingly even earlier stage). Firms like Tiger, Coatue, Durable, and D1—or even T Rowe Price or Fidelity—don’t need bankers to introduce them to companies. They have already been tracking companies for years and want to know the founders and their companies directly. Founders can build a direct relationship with the investors that will anchor their IPO. In fact, letting that relationship be primarily mediated by the investment banking process adds friction and is less effective.

Most tech startups are also way better known by the time they go public. This is especially true for consumer startups. Companies like Airbnb, Roblox, Robinhood and Coinbase are widely used and known. When your product is used on a regular basis by investors (or their families) and has been covered extensively by the media, the incremental investor roadshow meeting is less important. Direct daily experience using the product is better marketing than any roadshow presentation. Even non-consumer companies are increasingly well known. By the time companies go public today, they have orders of magnitude more traction than companies going public decades ago did. But also, investors just care about tech more. With companies able to IPO at $50B rather than $500M, they are more important and better known.

Another characteristic of the current landscape is that the revenue multiples companies can get in the public market have a very wide range. Look at the multiples we’re seeing: everything from 2x to 2000x. This is true both for small SPACs and large direct listings. What separates these companies is how effective they are at conveying a compelling narrative to the public markets.

When the multiple range is so wide, the difference between an alright and an amazing IPO is a function of people’s belief and confidence in the future potential of the business. It can be speculation. It can be based on traction. But it can also be based on the founder’s competence in explaining how to think about the business today and why that sets it up not just for consistency and predictability, but for continual compounding.

As a founder in a world where capital is easy to get, what matters is how well you can explain yourself, distill the company, and get the public markets to understand you in the right way.

Narrative already drives venture fundraising

If this description of the public market sounds familiar, it should.

Where the public markets are heading should be no surprise—it’s what is already done in the early stage venture ecosystem.

In startup fundraising over the last decade, rounds have grown in size, and more importantly, bifurcated into different classes of companies. There are orders of magnitude across the range of valuations for companies with the same level of revenue. Some companies raise at 5x while others can raise at 300x+ ARR multiples. Among pre-revenue companies, the spread of valuations is even greater.

More crucially, in the venture ecosystem, there is universal acknowledgement that founders should drive their fundraising process and pitch directly to investors. A half century ago, companies used to hire bankers to help them raise capital. Today, no VC would take a company doing that seriously.

Being able to best convey the progress and promise of a startup is the job of the CEO. No one has better context on and ability to change the business. And no one is more responsible for conveying that not just during fundraising, but every day—to everyone.

There are three types of fundraising pitches: narrative, inflection, and traction:

  • Narrative pitches are driven by a compelling story of what could be
  • Inflection pitches are driven by secrets discovered. The company has hit some inflection point that, if investors were astute enough to understand, would make them realize now is the ideal risk-adjusted time to invest
  • Traction pitches are driven by the results of what’s already been done. The company could be a black box and investors would invest solely off the metrics
the original ngmi. see i am hip with the kidz terminology

The dirty secret is that there is no such thing as traction pitches anymore. Because as every company knows—our best days are always yet to come.

Ironically, the companies with the best traction want to be given credit for their future potential the most. No one wants to rest on the laurels of the past. And achieving the highest multiples requires having a narrative of why even more is to come.

Fundraising and IPOs are natural loci for companies to take a step back and shape their narrative. They are moments when companies often mired in the day-to-day must think hard about their business from a multi-year standpoint. However, while fundraising is a good prompt for companies to think about their narrative,4 the importance of carefully distilling a company’s narrative is increasingly ubiquitous.

there’s a lot going on in this chart. I was told to cut down graphics so i just jammed them all into one gif. malicious compliance.

Narrative leverage in an anomalous world

The more static and predictable the world is, the less narrative matters. Historically most businesses followed clear precedents and fit neatly into paths. If starting a restaurant, the potential range of revenue and costs is known. There are few surprises, so it’s easier to look at the current state of the business and know what it will look like in a few years. There is little volatility.

Tech startups radically break this mold. By definition they will be unrecognizable in five years, whether that’s because they are a unicorn or because they are extinct.

One essence of the tech industry relative to others is the ability of tech companies to precisely select their atomic units and where they sit within their ecosystems, leading to hugely different outcomes.

Empirically, one way to see this is in the widening spread of valuation multiples. Every week, we hear of another company raising at a high valuation and, more importantly, a wild revenue multiple. At the series A and B rounds, we’ve seen multiples of 100x+ or even 200-300x+ ARR become regular occurrences. They’re not common; most companies still raise at nowhere near these multiples. But they do exist: there is a subset of companies that are able to raise at orders of magnitude higher revenue multiples. The spread in revenue multiples is widening. Companies with the same amount of revenue increasingly get wildly different valuations. This is the power of narrative in contextualizing the snapshot of a company’s performance.

There are a number of reasons narrative is becoming more important:

Large dynamic range of outcomes. Startups have a huge range of outcomes. They can be worth nothing and a decade later be worth billions. And this spread is expanding. The largest tech companies are now worth trillions. More topically, over the last decade the distribution of outcomes for most tech IPOs has increased by an order of magnitude. Enterprise investors, for example, used to assume that beyond the once-in-a-generation company, most enterprise IPOs were hard capped at single digit billions in market cap. The entire business model of enterprise venture investing was built on this assumption. And it is no longer true.5 When the potential range of outcomes spans many orders of magnitude, one’s confidence level in the probability distribution of outcomes becomes incredibly important.
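
To make this concrete, here is a rough back-of-the-envelope sketch (the probabilities and outcome sizes below are invented purely for illustration): two companies can have the same median outcome and still have wildly different expected values once a small probability of a very large outcome enters the distribution.

```python
# Illustrative only: invented probabilities and exit values.
# Both companies have the same median outcome; B has a fatter right tail.
outcomes_a = [(0.50, 0), (0.45, 200e6), (0.05, 2e9)]    # (probability, exit value in $)
outcomes_b = [(0.50, 0), (0.45, 200e6), (0.05, 50e9)]   # same shape, larger tail outcome

def expected_value(distribution):
    return sum(p * v for p, v in distribution)

print(f"EV of A: ${expected_value(outcomes_a) / 1e9:.2f}B")  # ~$0.19B
print(f"EV of B: ${expected_value(outcomes_b) / 1e9:.2f}B")  # ~$2.59B
```

A small change in belief about the tail moves the expected value by more than 10x, which is why confidence in the shape of the distribution matters so much more than any snapshot of the business.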

Back-weighted LTV. Increasingly, tech companies don’t make the majority of their revenue from the first interaction with customers. Each sector does this via a different approach, but fundamentally SaaS, freemium, Open Source, etc. are all examples of this. This is one of the largest trends in tech over the last few decades and is worth further exploration*. Revenue being a lagging metric isn’t bad, but it means that understanding the value of a company requires a rigorous understanding of the leading metrics that will drive future revenue. This again means that a company’s ability to explain to others how to think about the business, and why they are so confident in the inevitability of future revenue based on current product metrics, is crucial.
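
A minimal sketch of why this matters (the retention and revenue numbers are invented): if a customer cohort expands its spend every year, the first year of revenue dramatically understates the value that the leading metric already implies.

```python
# Illustrative only: invented cohort economics.
# A cohort that starts at $1M ARR and has 120% net revenue retention per year.
first_year_revenue = 1_000_000
net_revenue_retention = 1.20   # the leading metric: expansion outpaces churn
years = 8

cohort_revenue = [first_year_revenue * net_revenue_retention ** t for t in range(years)]
lifetime_revenue = sum(cohort_revenue)

print(f"Year 1 revenue:        ${first_year_revenue:,.0f}")      # $1,000,000
print(f"{years}-year cohort revenue: ${lifetime_revenue:,.0f}")  # ~$16,500,000
```

Today’s revenue is the smallest slice of the cohort’s value; the retention number carries most of the information, which is exactly what the company has to teach investors to look at.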

Sequencing to Multi-Product / Platform. With enough scale, there is a truth about modern public tech companies: you either die a single product company, or live long enough to be multi-product or a platform. This is the inevitable path for almost every successful company, but the likelihood of success is hard to assess while the company has a single product and has never tried to make the leap to multi-product or platform. The best companies get valuation multiples that give them credit in advance for future business model sequencing. This again puts the onus on the company to explain why they can become multi-product and what it will mean.

Compounding loops. Finally, we are still very early in our understanding of how to quantify and predict the returns of compounding loops. Network effects, economies of scale, and still unnamed types of loops all can have a very disproportionate impact on long term value of a company—but there is no simple way to infer it from an early snapshot of performance. Companies must work on explaining the compounding loops they are building, what leading metrics to look at to see their potential, and why they will be so powerful.

Self-fulfilling prophecies: When is narrative justified

Narrative leverage in tech is most commonly understood in the sense of Steve Jobs’s Reality Distortion Field.

Personally, I like to think of this narrative leverage as a form of PE ratio (Price to Earnings ratio). I sometimes call it the PR ratio (Perception to Reality ratio).

How do we know if narrative leverage is good? It’s easy to think of many companies where the reality of the company didn’t live up to the hype. Some of these were fraudulent and illegal, like Theranos. But tougher are the ones that lie somewhere on the spectrum of overpromising. There’s a blurry line between the need to project confidence in order to gather all the resources to make the hype a reality and lying about things that will never happen. And while there’s no perfect way to separate these, how should one think about that spectrum and about the appropriate amount of narrative leverage a company should have?

It’s also important to note that narratives are not just stories founders make up, isolated from reality. At their best, they distill for outsiders truths already known internally. That can mean explaining the leading cohort metrics or the early signs of TAM expansion potential that give the company its confidence.

Here is where I think the analogy to PE ratios is useful.

In the public markets, the PE ratio of a stock is the ratio between its market cap and earnings. This is a reflection of how highly investors value a company relative to its current earnings.

If a company’s value is the net present value of its future cash flows, its PE ratio is, in a rough sense, a proxy for how confident investors are of high future cash flows. This is a simplification, but you can generally view a company with a much higher PE ratio than another company with the same earnings as one investors think will have higher growth and future cash flows. Over time, earnings growth will catch up to those expectations, bringing the ratio down—or investors will continue to believe, keeping the ratio high.

When we talk about revenue multiples of startup funding rounds, this is the private market equivalent of PE ratio.
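
As a rough sketch of this equivalence (simplified numbers, no discounting or dilution), the same revenue today can justify wildly different multiples depending on how confident investors are in the growth rate:

```python
# Illustrative only: simplified, ignores discounting, dilution, and margins.
def implied_multiple(current_arr, growth_rate, years, terminal_multiple):
    """Value a company off where ARR lands in `years` at a terminal multiple,
    then express that value as a multiple of today's ARR."""
    future_arr = current_arr * (1 + growth_rate) ** years
    value_today = future_arr * terminal_multiple
    return value_today / current_arr

arr = 10_000_000  # both companies have $10M ARR today

print(implied_multiple(arr, growth_rate=0.30, years=5, terminal_multiple=10))  # ~37x ARR
print(implied_multiple(arr, growth_rate=1.00, years=5, terminal_multiple=10))  # 320x ARR
```

In this framing, the spread in revenue multiples is mostly a spread in believed growth rates, and narrative is how a founder earns that belief.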

What is the benefit of a high PE ratio? It is cheap cost of capital. It allows companies to raise capital with the benefit of getting credit for what they will be in the future. It is a loan pulled forward from the future.

Is a high PE ratio good? The answer is not a simple yes.

A high PE ratio is good if the company generates an appropriately high ROIC (return on invested capital) with the capital it raises on the back of it. If getting a loan on a future promise allows you to deploy it effectively to better live up to those aspirations, then it was not just worth it—the aspirational perception helped catalyze and make its own prediction inevitable.

That is some kind of magic, creating something from nothing.6 An ouroboros eating only itself yet somehow growing. Perhaps modern time travel is our ability to take a loan out from our future success to ensure we achieve it.

PE ratios are a promise continually renewed—and they can be warranted or misplaced. When a company’s earnings cannot keep pace and live up to these expectations, the price and PE ratio eventually fall to reflect this. This is even more true where the company doesn’t take advantage of its high PE ratio and the lofty expectations behind it to improve the business now.

Outright frauds like Theranos cannot live up to their valuations. Their PE ratios are bad since with or without the benefit of a high multiple, they can never grow into their valuation.

More complicated are companies like Tesla. For years there were vicious debates over whether Tesla was over- or undervalued, with vitriolic takes from both sides. What made the question hard to answer was that they were both right. Elon takes on industries where new approaches become viable as the industry cost curve improves, but which require massive and cheap access to capital for extended durations to work. Elon is not unique in this dynamic. It can also be seen in fields like AI research.

Thus Tesla’s PE ratio is in many ways self-fulfilling. If Tesla could get people to extend it the capital it needed for long enough, it would succeed. If it could not, it would have collapsed. Ironically, this means that far from Elon’s antics being distracting, his ability to maintain these high PE ratios might be the most important driver of the company’s ability to succeed.

But this general concept is not unique to the public markets, or to money. In some sense, even people have a social capital PE ratio. PE ratios in the public markets are just one instance of a more general concept of having some view of what something can become—and giving them today the benefit of that future tomorrow accordingly.

Narrative leverage is the PE ratio of a company. Not just for cheaper cost of future capital, but also for everything else companies care about too. Cheaper cost of recruiting, customer development, and perhaps most importantly—internal coordination.

Venture as social staking: the future is founders

Modern venture itself is not just about money and the cost of capital. Most founders would give a discount in pricing to the top VC firms. What the top VC firms are selling today isn’t money—they’re lending their own brand to startups. Having Sequoia invest will lend the stored PE ratio of Sequoia to the portfolio company. This will help give them a cheaper cost of capital, recruiting, and customer acquisition.

LPs may care that a firm has good returns, but that’s not intrinsically relevant to founders. It only matters insofar as those returns translate to stronger brand value or direct relationships that can be used on behalf of the founder’s company. This is why venture today exhibits power laws with the top firms attracting disproportionate returns.

In today’s ecosystem, however, companies are increasingly able to have as much, if not more, narrative leverage than VC firms. The top companies—and especially their founders—are better known than their VCs. At the extreme end, Patrick Collison has much more ability to attract investment, customers, and hires than any of the VCs on his cap table. Every year there’s an increasing number of founders at each stage who have higher brand leverage than what VCs can bring to the table. This is due both to the growing primacy of founders and to the relative stagnation of venture beyond brand network effects.

employees is on this chart. oh you can’t see it. oh…how weird.

And founders have so much more surface area to compound this brand leverage than their investors can. Founders can uniquely refine their narrative hand in hand with building the business to make it best resonate with all their prospective investors, hires, or customers. They also have the full resources of their company to bring that narrative to bear.

Narrative distillation is a core part of company building

The largest trend in every function within companies is that they’re being pulled internal. Engineering was the first. The birth of modern software companies began when companies first understood that engineering wasn’t a back office job to outsource, but a core part of the primary job of a company. Core, not commodity.

Endogenous compounding is increasingly the foundation of all modern successful companies. In a world where it is hard to be successful and unknown, all external channels increasingly get arbitraged. Companies that discover some novel market or promising acquisition channel quickly find themselves joined by many competitors. And the outsized returns they briefly got fall back down to earth under the weight of competition. It is internally compounding advantages that fight the gravity of this reversion to the mean. This is why we talk so often about network effects & economies of scale. Because like any polynomial equation, as scale rises & approaches infinity, only the highest order bit matters. And it is the aspects of the company that are internal to its organization or ecosystem that can most compound unimpeded by the outside world.
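
A minimal sketch of why the compounding term eventually dominates (the rates are invented for illustration): compare a purely external channel that adds a fixed number of users each period with an internal loop where existing users also bring in new ones.

```python
# Illustrative only: invented acquisition numbers.
periods = 24
base_acquisition = 10_000   # users added per period from an external channel
loop_rate = 0.15            # each period, existing users bring in 15% more users

# External channel only: linear growth.
paid_only = [base_acquisition * (t + 1) for t in range(periods)]

# Same external spend plus an internal compounding loop.
with_loop = [base_acquisition]
for _ in range(periods - 1):
    with_loop.append(with_loop[-1] * (1 + loop_rate) + base_acquisition)

print(f"Period 3:  paid-only {paid_only[2]:>9,.0f} vs with loop {with_loop[2]:>9,.0f}")
print(f"Period 24: paid-only {paid_only[-1]:>9,.0f} vs with loop {with_loop[-1]:>9,.0f}")
```

Early on the two curves are nearly indistinguishable, which is exactly why a compounding loop has to be explained rather than read off a snapshot of the metrics.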

While engineering was first, it is not unique. Every function whose returns on iteration are high and non-commodity will follow the same path.

The transition from marketing to growth was this exact same process. Traditionally marketing was something done after the work on the product was already complete. Companies would finish the product and throw it over the fence to the marketing team. The easiest way to know if a function is core or commodity is 1) whether the function is identical at other companies, or unique to the particulars of their company and 2) whether it has feedback loops in the company or purely uses external channels.

Modern growth teams are impossible to remove from the core flow of their companies. In fact, they are fused to core product and engineering. How can you do growth without it being inextricably tied to the core flows of the product?

Brand marketing is still important, but on a relative basis it is increasingly shrinking compared to paid acquisition and more importantly core product driven distribution. If you think about bottoms up, product driven SaaS companies or viral social networks, they are examples of how impossible it is for traditional marketing to compete with the product itself. The best companies understand that distribution is a first party concern when thinking about a product, not some checkbox to finish after.

In “Why Figma Wins” I wrote about how design is undergoing this exact same transition. Design at the best companies cannot be relegated to artists told what to make after all the decisions have been made. They must be part of the core decision making throughout the entire process and all its iterations. This doesn’t just fall on the companies. It also means designers must accept more responsibility. The best designers want to be at the table. And they understand that they must not just think at a creative level, but also in how their design process and output shapes the core business. The best designers not only do this, they relish it.

The same is happening to the narrative of companies. Increasingly, narrative isn’t primarily about external framing. It’s not something done after the work has been completed.

Adobe has continually shown over the last few decades how core managing the narrative is to getting the support and coordination of investors and employees as the company makes fundamental shifts to their business model. Whether that be in adding new products, transitioning to the faster internal cadence of a SaaS company, refactoring into a cloud-first infrastructure and pricing model, or the myriad other endeavors Adobe has undergone from building printing software to the full expanse it is now.

Those shaping the narrative must intimately understand how employees, investors, and customers think about the company. Refining and expanding the narrative is entwined with the company’s progress. Narrative is shaped by each iteration of a company’s processes and products. And in turn a company’s evolving narrative shapes how it focuses its processes and builds its products.

Founders are responsible for holistic narrative distillation

Too often we focus on how much money a company can raise. But money is rarely an end in itself. Instead, it’s a resource to spend to functionally derisk the company. Historically, capital was the scarcest resource. Venture capital as an industry was built and structured around capital scarcity as the most important blocker on company success.

But increasingly it isn’t scarce anymore. And it certainly isn’t the main blocker for many of the top companies. Talk to top tech companies today and raising capital is ironically one of the easier aspects of building and derisking the company. Hiring and retaining a talented team is far harder. Acquiring and retaining customers is harder. Understanding and getting the team coordinated on what to build is harder. Oh, and did I mention that hiring and retaining a talented team is far harder?

This is the CEO’s job: to raise and allocate the capital needed, but also to build a team capable of building the product needed and getting distribution. All while understanding what the company needs to build and helping the team understand and orient around it.

Narrative leverage is not just an advantage on the cost of financial capital. And not just a PE ratio with the financial markets. It exists in the leverage with all stakeholders both external and internal. It’s what makes prospective employees excited to apply and work for your company—despite all the tech companies fighting for talent. It’s what gives your customer confidence you will not only not go under—you’ll be focused on building a product that’ll continue to blow them away.

And perhaps most importantly, it’s what makes the team understand not just what the company looks like today—but what it could look like in five years. And makes employees able to see beyond their role, to how they fit into the larger picture of the company’s strategy.

Who’s in charge of that narrative? The answer is complicated and different depending on the audience.

From the employees’ perspective, it’s internal comms. From the customers’ perspective, it’s marketing—or perhaps the product itself. From the investors’ perspective, it’s investor relations.

i buy loops in bulk now.

But at most companies, these are primarily teams who manage how the narrative is distributed and shared. They are rarely the ones shaping and iterating on it, especially where it must bend the direction of the company itself.

There is no team that owns the narrative of a company. No team that determines its atomic concepts. This is why we often see large disagreements within a company on how to think about itself.

Some of these differences are natural. After all, customers care about different aspects of a company than its investors. And employees in different roles may have good reason to be focused on different timescales. But too often, the disagreement is unintended and harmful.

At most companies, only the CEO or founders can shape and reshape narrative.

Top companies already recognize the primacy of founder-led narrative

Suggesting CEOs should prioritize this narrative distillation and go direct to their audiences isn’t idle prognosticating. The top CEOs already do care, and spend significant time on it. 

The company that was first and best at building their brand is Stripe. There may be no company with higher narrative leverage than Stripe and the Collison brothers. From its earliest days, Stripe has excelled at this. 

Stripe has long since grown into much more. But in its early days, a significant amount of its value was simply in its ability to get great engineers to work on payment integrations and internationalization. Today, working at developer-first API companies may be sought after, but that was not always the case. If a company tried to get its best engineers to work on internationalization of payments, they’d just refuse. Or quit. Stripe was able to get great engineers to work on these distinctly not high profile areas. And that alone is worth a lot.

It’s not that the Collison brothers set out with some deliberate master plan to build a brand, culture, or personal reputation that would attract developers to work on Stripe. More likely, they filled a structural hole around payments. Payments needed a company that was developer first and engineering driven, and only founders with the predispositions of the Collisons could attract and build that kind of team in a space those engineers would have otherwise dismissed.7

KK Note: Even today, the ability to get strong engineers to work on a problem engineers normally don’t want to work on remains a very strong formula for returns.

The Collison brothers may not have started with narrative in mind. But they have been quick to understand and capitalize on it. Stripe’s brand leverage among prospective employees in tech is incredibly high.

Stripe’s slogan, “Increase the GDP of the internet,” points at a far loftier vision and more ambitious goal than the mundanity of payment processing. And this is reinforced both in many of Patrick Collison’s projects outside of Stripe and in initiatives like Stripe Press.

And if Twitter is an increasingly strong channel for hiring and for acquiring tech startup customers, Stripe is the most dominant brand among tech Twitter. So much so that there appears to be an entire genre of Twitter content consisting of new Stripe employees tweeting about their onboarding experience.

You can increasingly see other top companies shifting to invest more in their company and founder brands. Shopify and its CEO, Tobi Lutke, are a good example of this.

In the last few years Tobi has become much more visible publicly. He goes on podcasts, hangs out in Clubhouse, does AMAs while streaming videogames on the Internet, and much more. If allocation of attention is the best proxy for prioritization, Tobi has strongly signaled his view of the importance of building personal brand and shaping Shopify’s narrative.

Increasing brand awareness and expanding the CEO’s reach is real leverage. Reinforcing that Shopify is a tech company doesn’t hurt their multiple in the public markets, but cheap cost of capital is likely not what limits Shopify. Like all tech companies of this scale and success, their ever-present constraint is recruiting. Having a strong brand and easy access to capital helps, but all their competitors have that as well. The Red Queen race for talent is unceasing, especially as Shopify has expanded its executive hiring outside of Canada to the US, where it is relatively less known.

Founders have a unique ability to build brand for their companies. But of course companies must build it beyond them. Shopify has many other initiatives, like a studio producing TV shows and movies on entrepreneurship and starting an esports team. 

Shopify is not alone. Spotify is now making podcasts about how they build their product, and Daniel Ek is doing interviews on podcasts and blogs. Twilio is launching a magazine for their customers. And of course Elon is…being Elon.

It’s an advantage today. And will be table stakes tomorrow.

Final thoughts

Narrative is the other side of the coin of functional derisking. If a company is a series of functional derisking loops, then narrative is the leading edge of what is to come.

my editor told me no one would understand this chart. so it’s for me, not you.

Founders want credit not just for what they have already done, but what they are going to do: launching new product lines, changing their business model, becoming a platform. They want to pull forward credit for these future developments to the present to help make them inevitable.

Even more so, they want their team to have synchronicity around what is most important for the company’s future and how to prioritize and make tradeoffs.

What’s the difference between future investors and potential hires thinking a company is distracted and unfocused versus inevitable and defining? It’s in the coherence of the company’s logic for each sequencing of steps and how legible that narrative is made to them.

And in today’s market it is increasingly the founders who are able to distill and manage the overall narrative. This is only becoming more true as companies undergo significant business model changes as they scale and as the capital markets treat startups more as a fungible asset class.

Product market fit is just narrative distillation for customers. It only makes sense that this same process is as crucial for investors and employees, too. And just as we have spent so many years reinforcing the primacy of founders focusing on product market fit—and the process of how companies converge on it—so too must founders take distilling their narratives for all audiences equally seriously.

Appendix: Making companies that matter

Recently8 I tweeted that I was glad to see Discord hadn’t sold and that there’s some list of companies I hope never sell. While I do think it would have been underpriced, this isn’t the reason I don’t want them to sell.

Companies like Discord are not important because of the returns they may have. There is no shortage of companies that can drive returns. There are far fewer that can change their industries. And Discord is not alone—there is an entire rising generation of companies this applies to. Companies like Figma, Canva, Flexport, Benchling, and others are all at the cusp of getting to meaningful scale within their industries. (KK note: I don’t think I need to worry about any of them selling. But yes if you are a founder of any of these companies please don’t sell).

In prior essays I wrote on how we should judge venture firms not on their returns but on the value they added above replacement to companies. This is true for companies as well. Companies should be evaluated on the value they add above replacement.

Many companies simply occupy a structural hole in the market. In a world where they did not exist, some other company would simply have occupied their slot, without loss of generality. These companies may have significant profits, but they don’t matter. In some sense the profits were going to be realized regardless, and should not be attributed to their contribution. Great companies pull forward the future. They introduce solutions or business models that would otherwise take many more years to come about.

And the most defining companies change their industries’ trajectories and hurtle their ecosystems into shapes that otherwise wouldn’t have been seen at all.

There are only a few dozen companies at a time that have line of sight to being defining companies in their industry. While correlated to profitability, this isn’t about their ability to generate money. It’s about the gravitational force they will exert, re-orienting their industry into a new structure and alignment and proving out new business models whose structure will be replicated by all companies to follow.

This is the most compelling narrative that a founder can create around their company: that they have bigger ambitions than just succeeding as a business, that they have a chance to change the nature of business itself. For a select few, it’s not just a pipe dream—it’s the truth.

Endnotes

[1]: The current public markets are like a bar, and the investment banks are the bartender trying to regulate how many IPOs get served. But investors don’t want to be held back from more IPOs, whether they have had too many or too few drinks. And at this point investment banks have given up on trying to regulate this. Whether investors have had one drink, or one too many drinks, is an exercise left to the reader. Or rather how leveraged long tech beta the reader is.

[2]: If we are mutuals and you are planning to IPO soon. First, congrats. Second, please let me convince you to not let the investment banks run your IPO process the traditional way.

[3]: Direct listings have financial benefits relative to traditional IPOs, but I think these are secondary. And for example, modern DPOs seem to shift who gets the preferential pricing from the investment bank’s clients to a friendly hybrid fund that gets the pre-IPO floor setting round. This is a positive shift, but more incremental than transformational. It is the shift to companies owning their own narrative and marketing that is overlooked but most important.

[4]: Ironically, the hotter and more founder friendly the fundraising environment is, the less it is a fitness function forcing narrative clarity from founders. For many founders lucky enough to have VCs throwing money at them, there is no longer anyone but themselves who can force them to really refine their narrative.

[5]: Over a long period of time, the form of the venture industry is dictated by the scale, expected value, and risk distribution of the ecosystem of startups. If that distribution shifts, so too will the venture industry.

[6]: No, seriously. That is true magic.

[7]: Paul Graham has a great essay about schlep blindness. In it he posits that the reason we see founders avoid building companies addressing schlep problems is an unconscious avoidance of unpleasant work. This may be true, but I suspect the larger reason is that we don’t value solving these schleps appropriately. Historically the social capital in working on these areas was far less than on public facing products. Which made it hard not just to get excited about personally working on them, but also to hire teams and bring on investors. The best solution then is not ignorance as Paul Graham suggests, but rather a collective reappraisal of schlep industries. Which seems like what has happened over the last decade, as areas like b2b SaaS have become desirable. Also suspect the bigger macro driver is demand side fragmentation and growth. But shhh.

[8]: Given how long ago I started this essay “recently” is no longer accurate.

Acknowledgements

Many thanks to Keila Fong for all the help editing this piece.

Additionally, thanks to Kane Hsieh for advanced gif manufacturing assistance.

All graphics in this piece were created with Procreate and Figma. An integration between these two might have a target audience of only me. But I would love it. Many many thanks to Rogie for creating this amazing Procreate Import for Figma plugin. I’ve been very excited to take it for a spin, and it is life changing. Being able to import all the layers of a procreate file into Figma allows for so many more powerful combinations of the two tools. If you read my blog you don’t need me to repeat my love for Figma and what its plugin ecosystem enables. But this really is a perfect example to me of the power of Figma’s plugins—and even more so its community. Also, the power of twitter. heh.

Edit: Thanks to Nitesh for keeping me accountable on holding the line against title case hyperinflation

* Further pieces to be written on these subjects**

**I’m probably lying about this. To you, but mostly to myself.

Reforge Fall 21 Applications Now Open – Apply Today

If you are reading this blog post (or email) you definitely already know about Reforge. For the past couple of years, I’ve worked closely with Casey Winters (CPO at Eventbrite), Fareed Mosavat (ex-Dir Product Growth at Slack), and the Reforge team on a cohort-based program called Advanced Growth Strategy. Applications for the Fall cohort are now open.

If you’ve read any of my writing, you know that I like loops. One of the most powerful forces in building tech companies is the use of compounding Growth Loops. Advanced Growth Strategy dives deep on Growth Loops and how to use them to build and execute a compounding growth strategy.

Have I mentioned that I like loops.


You must be a Reforge Member to participate. Reforge Membership includes participation in a cohort-based program and on-demand access to content from all programs, weekly releases, deep-dive sessions, and a community of vetted peers to help you execute what you learn.

[Apply to Join Reforge]

I’ve included more information below. You must apply prior to September 10th to participate in this cohort. If you have questions, email [email protected].

Also, if you do sign up for the AGS course, please let me know.

Kevin Kwok

Advanced Growth Strategy

  • Week One: Understanding Different Types of Growth Loops – You will learn the 20+ different types of growth loops, their variations, key execution factors, and how to identify them for all different types of products through modern examples and cases.
  • Week Two: Building Your Growth Model – You will answer the question – How does our product grow? – by building a detailed qualitative growth model that pieces together your current and future growth loops.
  • Week Three: Identifying Your Growth Constraints – Once you have a qualitative growth model, you will identify the point in the model that is constraining growth. You will build a quantitative growth model and use it to analyze your different growth loops, scenarios, and goals.
  • Week Four: Improving Your Growth Loops – Learn methods to diagnose problems early at various steps in your loop and unlock the most common constraints. Explore the step-by-step processes used by elite growth leaders to optimize the different levers of growth loops via loop expansion and loop sequencing.
  • Week Five: Implementing Your Growth Strategy – Step out of your silo and connect your growth strategy to the larger scope of the business. Learn the best tactics for setting expectations and aligning stakeholders. Gain insights from top leaders on how to build presentations and future qualitative models.
  • Week Six: Evolving Your Growth Strategy – As your customer base and business change, your strategy will need adjusting. But how do you know when to make the change? Learn systems and best practices for evaluating change and identifying constraints, leverage points, and ceilings to your growth.

Fall Programs Begin October 4th

In the fall cohort there will be 14 total programs across Product, Growth, and Marketing all built and led by experienced executives. To join the Fall cohort, you must apply before September 10th. Once accepted, seats are first-come, first-serve. Here is the full slate:

Reforge Membership: What You Get

Reforge is the first-ever career development membership focused on high-growth tech practitioners, and is built entirely around the idea of driving impact for your business and for your career. The Reforge membership combines cohort-based programs with a year-round experience to help you learn, grow, and drive meaningful impact in your career.

Live Cohort-Based Programs

Real problems require real depth. Membership includes participation in two cohort-based programs per year where you will go deep on how to solve a key problem in a 4-6 week, part-time, virtual format, guided by an executive.

Each week in a program, you can expect:

  • Around 3 hrs of deep content covering actionable frameworks and systems.
  • An expert-led case study with a featured guest to apply what you learned to a real-life situation.
  • Connecting with vetted peers solving similar problems through discussions and feedback.
  • Hands-on guidance by an Executive-In-Residence who has done the work before.

All Year Access to Tech’s Best Content & Curated Community

Your personal growth doesn’t stop with the program, it just begins. As a Reforge member you can:

  • Access all content across every Reforge program (over 14+ programs!)
  • Complete step-by-step projects to help you implement and execute what you learn.
  • Attend weekly workshops and deep-dive sessions by Reforge Partners.
  • Receive weekly releases of projects, examples, and cases.
  • Connect with vetted peers solving similar problems.

[Apply to Join Reforge]

Members Love Reforge

You’ll be joining other top operators from companies like Airbnb, Spotify, Stripe, Dropbox, Zoom, HubSpot, Reddit, and LinkedIn who consider Reforge one of the best professional investments they’ve made.

  • “Even for experienced practitioners, Reforge crystalizes a practical and easily explained series of handy frameworks and immediately gives you a way to approach almost any product in any industry.” – Scott Worthington, Dir of Product Marketing at Peloton
  • “Reforge has the best content of any educational resource that I’ve encountered. It’s the best way to accelerate your career in tech.” – Bangaly Kaba, Former Head of Growth at Instagram, Instacart
  • More reviews here…

Who Do I Learn From?

In addition to the experienced executives that build and lead our programs, you’ll hear examples from the industry’s best. Past featured guests have included:

  • Bangaly Kaba, ex-Head of Growth @ Instagram, Instacart
  • Dun Wang, CPO at Calm
  • Melissa Tan, ex-VP Product at Ro, ex-Head of Growth at Dropbox
  • Darius Contractor, Head of Growth at Airtable
  • Jiaona Zhang, VP Product at Webflow
  • Kelly Mayes, Sr Dir Product at Roblox
  • Adam Fishman, CPO at Imperfect Foods, ex-Patreon and Lyft
  • Crystal Widjaja, ex-SVP Growth at GoJek
  • Vaibhav Sahgal, VP Consumer Product at Reddit
  • Justin Bauer, EVP Product at Amplitude
  • Bela Stepanova, VP Product at Iterable
  • Angel Steger, Dir Product Design at Facebook, ex-Dropbox and Pinterest
  • David Bakey, ex-VP Consumer at Harry’s
  • Zindzi McCormick, Head of Activation at Slack
  • Kieran Flanagan, VP Growth/Marketing at HubSpot
  • And many more…

[Apply to Join Reforge]

New video series, Closing the Loop

If you follow my blog, then you likely follow Eugene Wei on twitter. Eugene and I have started a new video series, Closing the Loop.

You can find it here on Youtube.

Some of the topics include:

  • Graph Design and the Limits of Social Graphs
  • Unconstraining Emergent Creativity
  • Decentralization, Creators, and Platforms
  • The Power of the Internet and China
  • Social Platforms
  • Algorithmic Design, Tinder, and Platform Governance
  • And a bunch of other random stuff

I’d pretend we’re doing it because people say our posts are too long. But I checked the word count of the transcript, and it was 15k words which is significantly longer than most of my posts. So I guess you won’t really be saving much time.

We’re hoping to go into more follow up discussions on many of the topics we write about. As well as have some guests on.

Fear not, I’m not shifting away from writing. In fact, I have been editing a new post I’m hoping to publish soon. It just turns out this is coming first because…video is a lot easier than writing. At least the way we are doing it with no second takes. We’ll see when I can publish. The process of making videos has also been fascinating. Whether here or on Twitter, I’m guessing I will have some post-mortems on many parts of that process to share.

How to Eat an Elephant, One Atomic Concept at a Time

How Figma and Canva are taking on Adobe—and winning

In 2010, Photoshop was ubiquitous. Whether you were editing a photo, making a poster, or designing a website, it happened in Photoshop. 

Today, Adobe looks incredibly strong. They’ve had spectacular stock performance, thanks to clear-eyed management who’ve made bold bets that have paid off. Their transition to SaaS has been seamless, for which the public markets have rewarded them handsomely. And they’re historically one of the best companies at M&A; their product lineup is a testament to their ability to acquire new product lines and integrate them well into their multi-product ecosystem. Perhaps most importantly and least appreciated, they have dramatically sped up the cadence of their internal product development process and feedback loop. Like Microsoft, they have successfully shifted from a legacy company operating on an annual (or longer) release schedule to a truly cloud company shipping updates at a sub-weekly pace.

Nevertheless, there are a few segments of design where they’re no longer the market leader. Companies like Figma, Sketch, and Canva have been able to become top products despite Adobe’s ubiquity in all things design. Figma showed up in Adobe’s annual report for the first time in 2019. It reprised that role in 2020, and I suspect it will continue to be in it going forward.

How should we understand these market transitions and why these young companies are able to thrive, even against a strong incumbent like Adobe?

These companies have distinct atomic concepts from Adobe. The primitives that their products are built around are fundamentally different from those of Adobe’s product lineup. It’s these different fundamental atomic concepts that turn Adobe’s advantage of an established product and existing userbase into a weakness that hinders their ability to counter these upstarts. The opportunity for these new atomic concepts to thrive is driven by the new use cases and types of users unearthed during market transitions.

Understanding the phases of market transition and what drives them is a universal process worth examining.

New use cases: designing for digital

For most markets, there are advantages to being an incumbent. Markets converge as companies arrive at the preference frontier of customers. This leaves little potential energy for new startups to take advantage of.

Market entropy is good for new entrants.

It’s not impossible to break into a market by brute force, but it’s hard. Very hard. Most successful companies, especially startups, have found tailwinds to harness that help pull them forward.

Changing customer needs are the largest source of entropy in markets. When customer needs rapidly change, there is less advantage in being an incumbent. Instead, legacy companies are left with all the overhead and a product that no longer is what customers want.

There are many causes of changing customer needs. Often there are new and growing segments of customers with different use cases. Existing products may work for them, but they aren’t ideal. The features they care about and how they value them are very different from the customers the legacy company is used to. Companies resist changing core parts of their product for every new use case since it’s costly in work, money, and attention. But every once in a while, what was once a small use case grows into one large enough to support its own company.

Other times the scale or dynamics of a market shift enough to make a product no longer work despite having been a great fit. Companies are often caught flat-footed by these situations because what they have done successfully for years suddenly starts to falter—and they aren’t sure why. Ebay is a good example of this. Their decentralized auction model was very good in a nascent internet economy when there was a scarcity of items being sold online. Once ecommerce became commonplace, price and speed became much more important factors and Ebay’s decentralized model was at a disadvantage. Amazon was much better at building economies of scale in this post-liquidity ecosystem.

Another source is when the customers themselves change. Often the function of a tool remains the same, but the type of user changes. These new types of customers often have different things they care about and resulting product needs.

The internet drove entirely new design use cases. Photoshop was built for editing photos and images. It’s a powerful tool that operates at the pixel level. However, many of these new uses weren’t about image manipulation. Images were a component—not the essence—of the job users were trying to accomplish.

For some users, this was designing digital products. Designers at software companies, or any company with a website, wanted to create the websites and software products they worked on. This is less about image manipulation and more about designing the UI and UX of these digital products. Vectors are more important than raster graphics. The complexity and process of designing these high-value designs also became increasingly sophisticated. These designers worked with teams of other designers and non-designers. Their designs are part of a larger product development process, and what mattered wasn’t just making a design, but how the entire process could be improved to make collaboration easier and handoff of designs better. Iteratively.

The designs and the components in the resulting code became more complex, too. The need for their tools to have a higher-level understanding of the components and variants became more important. It’s increasingly useful for designs to understand the same concepts and abstraction levels as the HTML and CSS in the resulting end product.

For some users, this was designing content for social platforms, digital ads, or even wedding invitations. These were often made in Photoshop, but again, pixels are the wrong abstraction level. Images are not the sole component; they are just part of a larger design that includes graphics, text, and more. Similarly, the customers are very different. Many of the people now doing what is, in essence, design work don’t think of themselves as designers. They just have a very specific thing they want to create, with the least friction possible.

The internet dramatically scales up the volume and type of new use cases for design. In many ways, this helps Adobe. With platforms like Instagram, the number of people editing photos has expanded by many orders of magnitude. And while much of that editing may happen on the platforms themselves, Adobe has been a huge beneficiary of the internet and the shift to cloud—and their stock price is a testament to this.

[KK Note: Platforms like Instagram strapping editors onto their social platforms and eating into Lightroom from the bottom up is well worth its own discussion. And perhaps someone will convince Mike Krieger to do the definitive piece on that.]

Software may be eating the world. But it’s also building new worlds? I’m going to need a refresher on remembering the Andreessen Horowitz talking points

This is even more true in video. There are orders of magnitude more video creators as the ability to record video has become ubiquitous and the platforms where video is the default format have grown. Even more striking, many of the dominant video platforms—like Youtube—are purely distribution focused. They don’t even have any editing capabilities. Instead, companies like Adobe end up being large beneficiaries of this need.

[KK Note: Platforms like Youtube still having not built any semblance of an editor into their platform is *also* well worth its own discussion. I’d say we’ll never know what could be, but then I look at TikTok and all is right with the world.]

But Adobe hasn’t captured it all. And in many of these new emergent use cases and customer types, Adobe has lost the lead to new startups.

Tapping into the right level of abstraction

The best products map to how customers think about their workflow. They match the abstraction level of their customers: not so high that it’s unusable, but not so low that it’s hard to use easily or extend in more complex ways.

They choose the right atomic concepts.

These are the core concepts around which the entire product is built. They not only align with how customers think of their workflow, but often crystallize for customers how they ought to. Great atomic concepts are honed and then extended and built upon in more complex compounds that…well for lack of a better word…compound.

Similar companies often have slightly different atomic concepts that end up making them meaningfully distinct. Photoshop is focused on pixels and images. Its focus is on editing images and pictures. And its functions operate by transforming them on a pixel level.

Illustrator is similar, but it operates on vectors, not pixels. This is a higher level abstraction. Neither is better or worse; they are just suited to different use cases. Photoshop is better for modifying images, while Illustrator is built for designs where scale-free vectors are best.

Sketch, like Illustrator, is vector based. But it is designed for building digital products, which means things like operating at a project level. It is not about individual designs, but about crafting entire products and user interfaces—and the needs for repeatability and consistency inherent to that.

Figma builds on Sketch’s approach, but also includes a greater focus on not just projects but the entire collaborative process as the relevant scope. Similarly, it also treats higher level abstractions like plugins, community, and more as equally important concepts.

Canva is similar to Photoshop and Illustrator, but its users aren’t designers who care about low level tools. Instead Canva’s core atomic concepts are around the different templates and components to help them easily accomplish the job they are doing. And the designs they are working on are not quite at the project level of making a digital product. They are canvases that include images and design.

There are many more axes, but they don’t fit in this stupid 2D chart

Atomic concepts are fundamentally linked to the core loops of a company. Expanding or changing these loops often involves adding to a company’s vocabulary of atomic concepts or adding them together in more complex ways.

Emergent use cases and new customer types lead to new ideal atomic concepts. These new workflows and different customers have different priorities than existing customers. How they think about their problems and weight possible solutions is different, even if often the end output has similarities. Of course, astute readers will pick up that causality is reversed here. New types of customers are a good proxy for where to pay attention. But it is actually the changed atomic concepts that are what make startups a compelling contender against incumbents in the space.

Customers don’t care about your technical architecture or internal org structure. When these no longer align with the job they are trying to do, all the sprawl of the company becomes harmful, not helpful. These are the bedrock of a company, and much more difficult to change mid-flight. Everything that makes an established company strong is built on top of this foundation and will fight back against changing it. Take Blockbuster and its reliance on physical stores and late fees. People often fall into the easy narrative that incumbents are asleep at the wheel. That they are too stupid to see the coming threat. This can be true, but it isn’t the most common reason. Contrary to popular belief, many execs at Blockbuster not only saw the threat Netflix posed, but also the opportunity for Blockbuster to claim the mantle Netflix now holds. They even spun up a team to take Netflix head on. But what made retail stores and late fees so powerful and profitable for Blockbuster is also what made them so hard to displace. Every move to prepare Blockbuster’s core for a digital future was resisted by the execs who generated that revenue, store operators who were livid at being cut out, and Wall Street investors uncomfortable with turning a consistent business into a high risk venture.

Rare is the company that can change its core atomic concepts. It’s why companies like Amazon are so impressive and so daunting. Startups thrive by finding asymmetric angles that incumbents are unable to follow. What is safe from a company with no sacred cows?

The core abstraction levels of a company are hard to discern from a distance. Which is why looking for emergent customer types with different needs is a useful substitute.

Figma bet on collaborative product design

Sketch was the first company to understand the market opportunity in designing digital products. Launched in 2010, Sketch was built entirely for designing the UI and UX of these products. Its atomic concepts were those best for digital products: vectors and projects. These were also what made it hard for Adobe to compete with its pre-existing product line.

In a classic innovator’s dilemma, Sketch’s best feature against Adobe was that it dropped everything that wasn’t best for making digital products. This allowed it to focus only on creating the best experience for vector-based digital design. Unlike Photoshop, it was vector based. And unlike Illustrator it was built with larger complex projects as the focus rather than specific isolated designs.

In retrospect, Sketch stopped at a half measure. Designers creating digital products did need vector-based design tools. And Sketch also understood that they were working on more complex projects, rather than one-off designs, and needed better project-first features. But these designers were also often working on teams—both with other designers and, more importantly, with non-designers. They weren’t designing in isolation, but as part of a larger process.

Sketch, like Adobe before it, fell short in this area. Everything from Sketch’s technical architecture and desktop-based product to its pricing model and platform structure was a poor fit for this collaboration. The demand for these features could be seen in the messy ways companies hacked together solutions and in the many products that sprang up to fill these holes. Companies like Zeplin, Sympli, and Invision grew out of designers’ needs for better ways to coordinate with the other designers, PMs, and engineers they worked with. Sketch’s plugin system, like Adobe’s, felt more bolted on than core to the platform.

When Figma first started, it was more directly a Photoshop competitor. Over its first two years, though, as the team talked to more potential users, they shifted their focus specifically to designers working on the UI and UX of digital products. Building out the product to uniquely enable collaboration was key for these designers. Doing this was non-trivial. The technical challenges were very hard, though Figma was well set up thanks to Evan Wallace’s technical prowess and specific knowledge of new technologies like WebGL. Building for collaboration to its fullest extent has led Figma to rethink almost all of the company—leading to new pricing models, distribution models, and sharing form factors.

For those interested in reading more on Figma, I have a prior post that can be found here, so I will avoid rehashing many of the same observations. Figma’s success came as it honed in on this growing use case of complex digital products built by larger teams of designers and non-designers—and in finding the atomic concepts that were uniquely needed for this new segment of users.

As discussed in Why Figma Wins, over the last few years this is most visible in their expansion into larger enterprise customers. Large companies have the same (if not greater) need for design tools built for collaboration in their org as small startups or the smaller teams within them. However, the set of features and tools they need around this looks very different from what a small team needs. When Figma started, it found its fit first with small teams, but as entire large companies started to look at it seriously, it needed to think about collaboration and building a design tool not just at a team level—but at the scale of an entire company.

Canva bet on marketing design by non-designers

With the rise of digital platforms like Facebook, Instagram, and Youtube, marketing and advertising have increasingly shifted online. Online advertising has many differences from traditional advertising. Most notably, it is much faster paced—and often more targeted. Companies now do many small variations on the same campaign: testing which versions do best, making personalized versions for different customer cohorts, and adjusting them to the different required form factors of each ad platform. The traditional process of having a few large campaigns each year looks increasingly archaic. The cadence was a function of the primary channels being areas like TV and print, where campaigns are costly so only a few large campaigns can be run a year. As the channels shift, the campaigns, tools, and teams adjust to match the new dynamics.

The fast and the furious

Increasingly, marketing teams don’t need whole design teams working on each campaign. Rather, they want tools that make it easy for them to adjust their marketing designs in small ways—like formatting the same design for both their Instagram ad and their Youtube banner. The background of the person needed to do this changes, too. Instead of hiring design agencies, companies bring this work in house, both because more of the work can be done by non-designers and because the pace of iteration makes working with an external agency too slow.

Once again I am asking you to be impressed by my multimedia use of graphics, drawings, and logos

Marketers and people posting on Instagram don’t think of the design work they want to do in terms of pixels. It’s the wrong abstraction level. They aren’t trying to directly edit the photos themselves. The photos are just an aspect of the specific goal they have in mind. They think of it in terms of the aesthetics and purpose of the design—not just the images but also the text and graphics and more.

Photoshop can do everything they want, but it is too low level. Photoshop’s atomic concepts are images and pixels. Editing at the pixel level is perfect for photos and image manipulation. Canva operates at a higher abstraction level—the one its users care about. Canva designs start with their purpose in mind, whether that’s designing a pitch deck, an Instagram post, or a wedding invitation. Canva has templates and layouts built for that specific purpose, while making it easy for users to add their own creativity, whether by putting in their own photos or using any of the many graphics and components made by the community.

This need is felt even more by SMBs and teams who can’t have a full design team work on every project. Canva’s lightweight editing, easy templates, and process for making many small changes—like formatting for different social platforms—make it ideal for these customers.

This also allows Canva to extend its platform around these molecular levels. Canva’s distribution is driven in large part by SEO. Unsurprisingly, the use cases people come to Canva for are the same ones people search Google for when looking for design tools. With its product and templates built around these use cases, it’s easy for Canva to expose them externally and have plenty of templates and examples ready to go for potential new users looking to do a specific design. Everything about their user acquisition and onboarding is built around the specific use cases people have and Canva’s atomic concepts. They are built around the functional workflows people have, whether that’s making a Twitter background photo, a wedding invite, or a keynote presentation. And Canva is committed to making that as easy as possible.

There is something very illuminati about this pyramid and sun. You heard it here first

Defensibility through becoming a platform

As they’ve grown, Canva has expanded their ecosystem by creating marketplaces and communities around templates, layouts, fonts, and more. Most users don’t want to build from scratch. With Canva’s marketplaces there is an entire ecosystem of pre-built components they can use, both free and paid.

Canva having this strong ecosystem of add-ons is very powerful. Add-ons allow Canva to address the huge scale and varied needs of all its customers, far more than one company could ever do on its own. This makes it possible for each customer to use Canva in a way that will be personalized for exactly the use case and aesthetic they care about.

I did not repurpose the first chart. No one will believe you. shhhhh
There is nothing sadder than the fact that no one will build a Procreate x Figma integration JUST. FOR. ME.

Free and paid add-ons have long been a staple of most design tools. However, they haven’t been tightly integrated into the product, adding friction for users. In contrast, Canva builds add-ons seamlessly into the product, making it easy for users to access them and leading to higher usage. Treating these marketplaces as first party has a number of additional benefits. Beyond increasing the value of the product, it also cements platform network effects for Canva. A growing community of creators monetizes by selling add-ons for Canva; this reinforces Canva as the tool to use with the most robust ecosystem.

There is an entire category of ecosystem loops that no one seems to talk about. Ecosystem loops deserve love too

This is just one example of how companies can use platform network effects to extend and defend their beachhead. There are few sources of defensibility stronger than the cross-side network effects of platforms. It makes it hard for any new competitor to get traction. Without a large enough user base, a new platform can’t attract developers to build on top of it. As a result, new competitors also lack the ecosystem of add-ons needed to meet users’ needs and attract them. This is why platforms are so enduring. They allow companies to scale the needs they meet beyond what’s possible for a single company and they create chicken and egg problems for any competitor hoping to follow.

Extending this playbook to other spaces

Design isn’t unique among fields. The same factors driving large new use cases in design are arriving in most fields, especially in all forms of digital content. It’s inevitable we will see many of these same changes happen to video as they have in design and photography, though the specific use cases and needs that emerge will look different.

The most active area obviously undergoing this market transition right now is the broader productivity space. Over the last few years, many of these new companies (Airtable, Notion, Coda, Roam, Retool, Webflow, and Loom, to name a few) have seen remarkable early traction. But it’s also hard to delineate what the exact spaces are within productivity and collaboration and which companies cluster together in which buckets. Many of the companies have lots of product roadmap overlap as they each navigate the amorphous high-dimensional space of customer types and needs.

Even for those companies with early success, many have yet to crisply define the atomic concepts they’re betting on and to position themselves accordingly. Which are competitors with which? Who are their customers and which use cases will be the most important workflows to build around? What factors will determine which companies succeed and centralize their markets?

Companies have trouble navigating these questions because customers themselves don’t think precisely about what they really want. These companies have the opportunity to change how customers think about their own workflows. The best companies introduce better atomic concepts and help push their customers forward. Strong enough products will have ecosystems around them whether or not the companies actively manage it. The best companies don’t just benefit from these ecosystems, they build their platforms to enable and direct these ecosystems in ways that empower their customers more.

Figma is beginning to expand its scope with new initiatives like plugins and communities. These are not the only ones I expect we’ll see (and there’s one that I’m particularly excited to see how they tackle) but they are core ones. As discussed more in Why Figma Wins, if these work they help expand the ecosystems around Figma, enabling users with new abilities and ways to engage with each other. An ecosystem also creates both defensibility and extensibility for Figma.

Beyond design and productivity, many companies today are right at the crux of these decisions. Getting a product’s core loop to work is a tremendous effort and very rare. For those who do, they are then faced with the question of what comes next.

These companies can (and have) comfortably gotten to single digit billions in valuation on their core products. If they want to go public or be acquired, they can do that. But they are also at the point where they can catch their breath, take a step back, and think about what the next decade of their trajectory looks like and what would be next in their roadmap’s sequencing if they were ambitious. For most of them, it will involve fundamental expansions of their atomic concepts. Going multi-product or becoming a platform is the key to compounding into significantly more meaningful companies.

For all the discussion on strategy, running an actual startup is often more a test of tactics and execution than strategy. One of the few exceptions to this is when companies are making new additions to their most core loops. Pre-product market fit is the most common of these moments. But the transition from a single product to a platform (or multi-product) is another common one that most successful companies experience.

Figma and Canva are examples of companies going through this expansion, but they are far from alone. Across the industry you can see a cohort of tech companies at this stage. Companies like Notion, Airtable, and Flexport are all beginning their explorations of the next major expansion of their products and platforms. While not done, they have been successful in building out their core product. As they think about their ambitions for the next decade, they will have to extend their product in fundamental ways.

Final thoughts

Often the smell test of a company is how easily it can be dimensionally reduced. It’s like some variant of Kolmogorov complexity. How few core elements can maximally explain it? People fairly push back that companies are intrinsically messy and cannot be compressed in this way. It is often true that VCs and outsiders simplify their view of companies in ways that are easier to remember but useless in practice. The flaws in this dimensionality reduction aren’t reasons to ignore it—they are the reason it is important.

As a founder, nobody is going to understand the full nuance of your company like you will. Everyone else does see a simplified, compressed, and sadly imperfect shadow of your company. Founders repeatedly underestimate the degree to which their products are complex and opaque to outsiders, because they have it fully loaded in cache. They have seen every iteration and revision and imagined in painful detail all the alternate lives their product could have lived.

Most users never talk to someone at a company. Even if they do, the vast majority of their interactions with a company are with the product. Your users know nothing about how your company operates. They don’t see all the late night whiteboarding sessions and careful deliberations that led to the specifics of each feature they use or the many iterations that were tested and rolled back and refined. They often only understand half of how your product can be used, much less your vision for how it should be used as it matures. And your future potential users don’t even know you exist.

As product becomes the driver of most interactions with a company, external gatekeepers and proselytizers like journalists and bankers become less important. Instead, it’s the clarity of a company’s product—and its product- and founder-driven distribution—that becomes most key. We’re still early on in companies internalizing this.

This clarity is not just for users. It’s even more important for employees. They are the people who build complex compounds around these atomic concepts, and their misunderstandings are the root of future deviations and issues that arise. Founders get advice to repeat what matters more regularly than they think they need to. Repetition may help employees remember what’s important, but it pales in comparison to the clarity that comes from having strong atomic concepts to begin with. Like memes, simplicity is what makes them so transmissible.

One exercise I’ve often found useful for CEOs to do with their co-founders and team is to ask an important question about the company—and see how much everyone’s answers differ. People are always shocked at how much they differ from even their co-founder. It’s natural to have differences and that doesn’t even mean either person is wrong. But these unexpected differences in how to think about the company are the underlying faultlines that make it difficult to synchronize as a company on what matters and to have a common framework by which to discuss and debate important decisions.

All of this shouldn’t be misinterpreted. Very few companies come out of the womb with crisp atomic concepts. The nature of building a company is messy and complicated. Critics are right to say that many analyses over-simplify and give post hoc explanations of how to think about companies (yours truly included).

But the process of examining that complexity and finding the most lossless ways to dimensionally reduce it is not the province of armchair analysts alone. It’s essential for founders and companies themselves to regularly do this refactoring. Just as companies build up technical debt, so too do they build up narrative debt.

Typically fundraising is a natural fitness function for doing this refactoring. For top companies this is increasingly no longer true—but the importance of this clean up has not shrunk. Whether for the sake of their users and employees—or so they can expand into becoming more complex platforms—companies must grapple with who they truly are, before they can go after who they want to be.

Appendix: Figma’s ecosystem and open source

There is a lot more that can be discussed on the platform ecosystem chart that is out of scope of this essay. This is a highly simplified chart, but it is one that comes to mind often when talking with founders of companies that are beginning to think through sequencing from single product companies to platforms, and who are seeking a framework to think about their ecosystems (or analyze others’) in a more structured way.

These charts can look very distinct for different companies. And even for the same company, the chart shifts over time as the user base changes and the company shapes its ecosystem. Companies make intentional choices that have large impacts on what their platforms look like.

Figma is a good example of this. Unlike many platforms, Figma’s plugins and community initiatives put a large focus on being accessible to individual designers building out solutions to their own problems, whether just for themselves or to share freely with others. This focus is at odds with many other platforms that are mainly meant to be used by third party companies building products to be sold to users on top of the platform.

One impact of this is a bet on the importance of the long tail of niche use cases in Figma, as seen below. Many use cases are too niche to be supported as purchasable products and so never get addressed on most platforms. But by making it easy for individuals or companies to build their own plugins, Figma hopes to see even these addressed—and then shared with the community, in the way we often see in the open source developer ecosystem.

Perfectly balanced, as all things should be

Acknowledgements

Many thanks to Keila Fong and Eugene Wei for the many discussions about this topic and help with this piece.

Additionally, thanks to Casey Winters for the many discussions about Figma and Canva. And our discussions for many years on these very topics.

Thanks also to Fareed Mosavat and Brian Balfour at Reforge. The Advanced Growth Strategy course was the origin of many conversations about Figma’s loops. And I still teach the Figma case study every semester. If you’re interested in many of the areas in this piece, Reforge is the best place to learn them—and from people who’ve spent far more time than me actually putting them into practice at companies.

All graphics in this piece were created with Procreate and Figma. Procreate is a fantastic drawing app for iPad. If you have made it all the way through this essay and don’t know what Figma is then I don’t know what to tell you. Once again will put out into the world how much I want an integration between these two. What is the point of Figma’s platform solving for long tail niche use cases, if not to solve primarily for my long tail niche use cases.

The Mike Speiser Incubation Playbook

In Formula 1 racing, you can win a world championship as a driver with one team but then not even make the top 10 without that team’s car and infrastructure. Venture can often feel like this, too. Many top performing VCs would struggle if they weren’t on their firm’s platform. And similarly, a far greater number of VCs might be able to do well if they were just at a firm with a strong enough brand. Most special are those that are the source of their own success.

In Making Uncommon Knowledge Common, I wrote about Rich Barton because he’s one of the rare founders (or investors) with the demonstrated ability to create multiple billion dollar companies. Unpacking and learning from the few who have shown repeatable and internally compounding approaches to building companies is important.

Unlike consumer, traditional enterprise markets lend themselves more naturally to deterministic and repeatable success. There’s a small handful of VCs who have clearly shown they can succeed repeatedly and whose approaches and playbooks are legible enough to imply it’s not a fluke. Speiser is one of them.

Speiser’s portfolio includes companies like Pure Storage and Snowflake Computing. It’s worth noting that Snowflake not only IPO’d and is now at a market cap of over $60B but Speiser and Sutter Hill Ventures owned more than 20% of the company leading up to the IPO. When Pure Storage went public, Sutter Hill held more than 25%. Speiser may have the highest percentage of portfolio companies that have become multi-billion dollar companies—and that trend looks to continue with his newer companies.

But impressive returns are not solely what matters for the industry. It’s tempting to evaluate firms by their returns, and from the LP perspective that may be the correct metric. But another, and more important way to judge VC firms is by the value they add above replacement to their portfolio companies. How much do they help their portfolio companies increase their likelihood and magnitude of success? Firms do this most notably by providing capital, but also by other methods like lending their brand or directly helping with operations.

For founders, this value added is what matters. The returns of a VC firm only matter to a startup insofar as they translate into improved brand, network, or access to capital for the startup. A firm’s financial performance is a reasonable signal that they may add real value and be worth partnering with, especially since some aspects like brand strength for recruiting, future financing, and customer development are a function of perceived firm success. But to prospective portfolio companies, a fund’s returns are important only as a means, not an end.

What makes Speiser intriguing is how distinct his approach is from other VCs. The tantalizing clues suggest that he has figured something out that nobody else has: the formula for creating successful companies from scratch.

you didn’t really think there wasn’t going to be a drawing of a loop did you?

The Speiser playbook

At the core of Speiser’s approach is incubating companies, or “originating companies” in Sutter Hill nomenclature. Instead of investing in existing companies, Speiser stays solely focused on one thing: starting and building companies. Even among others who have been very successful at incubations, he is the most singularly focused on this.

bespoke artisanal charts as a service

Every year Speiser incubates around one company. The core of his model is to find 2-3 co-founders and be the founding investor. Often he takes on the interim CEO role himself for the first year or two. This has many advantages. The biggest is that it reshapes the ideal founding team profile. He can focus on getting the right top technical co-founders that will have strong views on what to build and the ability to build it—even if they are people who don’t generally view themselves as having a natural inclination to be founders. This is a significant talent arbitrage.

A better package for founders

There has been a dearth of coverage of Snowflake’s three cofounders, Benoit Dageville, Thierry Cruanes, and Marcin Zukowski, in both the news and social media. Partially this is because they have not sought the spotlight. But partially, it is due to our veneration of a certain type of founder: those who seek the limelight of public presence and being in control of every aspect of the company.

Snowflake’s founders are cut from a different cloth. As Benoit Dageville put it “We never thought of it as building a company. We just wanted to build a cloud product. The company was an afterthought.” Yet, their product and technical decisions have been prescient in threading the narrow path to taking on Amazon and Google in the most important core markets of cloud computing.

There are people who don’t want to be CEO, or even to start a company. They are driven by their conviction of what the future should look like, as well as their frustration with the internal dynamics they confront at legacy incumbents that prevent them from creating that reality. But they are still unlikely to start a company due to all the inertial cruft that comes with founding one—and especially with being CEO. They want to build what matters, not set up a new corporate structure, manage fundraising, or build a sales team.

Eric Yuan, the founder and CEO of Zoom, has explained this feeling of being held back at Webex. He knew what should be built and that the Webex team could do it, but given Webex’s position as a subsidiary of Cisco he was unable to get the political capital to do it. And he’s proven himself right by leaving with his Webex teammates and building Zoom. That he was VP of Engineering at Webex and still unable to build what he thought was important should be a very discomfiting reality check for large companies about the very real economic harm the malaise of their internal processes causes. However, for every Eric Yuan, there are countless others who never leave to start a company. The inertial barriers are too high. They can stay at their company and struggle to work on what they know should be built. Or leave and take on more uncertainty and risk than they want.

Speiser introduces a third model that breaks through this Scylla and Charybdis dilemma. Start a company with Speiser and stay focused on what you want: deciding what to build, hiring the team you need, and building it. Speiser will handle fundraising, general operations, and setting up the sales motion and machine. Founders get much of the autonomy and upside of starting a new company while also getting the support and guardrails so they can stay focused, confident that the business is being built well.

Speiser doesn’t just take on these roles because founders don’t want to do it. There are actually aspects of company building where he should be better than the founders. Sutter Hill Ventures has the capital already, so it’s easy for them to take on responsibility for fundraising and remove that as a blocker. Instead of having to burn a lot of cycles fundraising, Sutter Hill can provide the capital. And they often do, leading multiple rounds into their companies. Or they can bring in outside investors, with the confidence that Sutter Hill can lead the entire round as a backstop if the process becomes too much of a hassle. Also, like any VC firm, Sutter Hill builds a brand that compounds their companies’ ability to raise follow on funding. At this point there are multiple firms that have made their bread and butter following on after Sutter Hill, to great success.

Similarly, Speiser is likely to have more experience in setting up companies and the initial customer development process than the founders will. Perhaps most importantly, he has relationships with customers and an established reputation that can be used to bootstrap the initial pilot conversations, which may be the point of highest leverage for these new startups.

These advantages all compound with every incremental company Speiser originates, and not just because of the typical brand network effects that venture has broadly. In many tangible ways, the spread between Speiser’s process knowledge relative to a new founder should widen with every new company.

As an industry we seem to often want to see machismo and martyrdom in founders. A decade ago it was wanting founders to be willing to mortgage their house and their kids’ college fund. Now it is wanting founders to be in charge of every aspect of their companies. The thought is that if founders aren’t willing to put everything on the line, their companies will be worse. The data doesn’t appear to bear this out. Everything we have done to expand opportunity and decrease the friction to more people becoming founders has led to huge benefits for the industry.

Just as Eric Yuan should be a massive shot across the bow for all large tech incumbents, Snowflake’s founders should be a wakeup call to venture that we have much further to go to enable and support even more brilliant people who don’t think of themselves as CEOs to bring their vision of the world into existence.

A better package for CEOs

But this isn’t the only talent arbitrage Speiser’s playbook benefits from. The interim CEO model allows another one too. As the startup does well and figures out its product market fit, Speiser eventually rolls off as CEO and finds a full-time replacement to take on the role as he takes a step back into being solely a board member.

His companies are very advantaged in finding great CEOs to take the mantle. To understand why, look at it from the perspective of an executive looking to become the CEO of a company. Like the potential founders, these executives have their own Scylla and Charybdis dilemma. They want to be CEO of a company, but they also want to join a company that has already derisked product-market fit instead of founding a new startup with all the attendant risks. However, there is significant adverse selection among Series A and B companies that are looking for an external CEO.

Now that most founders want to stay on and scale up as CEO of their companies, it often indicates the company is struggling if the board is looking to replace the founder. Even if turning the company around is doable, the internal and board dynamics are likely to be very acrimonious—with a hostile deposed founder.

There are a few exceptions, like Linkedin and Hashicorp, where the companies were doing very well and the founders wanted to bring on a CEO. That can be extremely effective, with the founders and CEO partnering to great effect. There is a lot to emulate in the dynamic and shared responsibility between Reid Hoffman and Jeff Weiner at Linkedin or David McJannet, Armon Dadgar, and Mitchell Hashimoto at Hashicorp.

But founders wanting to bring in a CEO are the exception, not the norm; increasingly, companies only look for a new CEO when they are pushing out the founder.

Speiser can offer a much more attractive package.

When Speiser talks to a potential CEO he can say he’s showing them his strongest companies, because that’s true. His model is built on him leaving his role as interim CEO once they are working—and moving on to do it over again at a new incubation. Finding a new CEO is a feature of success, not a cry for help.

Many of my friends who are executives are bombarded by VCs trying to trick them into taking on their worst performing companies. After they do some light diligence and referencing, they realize that the company is months away from failure or the founder will be hoping for them to fail from day one. Executives grow to realize they should be default skeptical of any companies that VCs try to recruit them for as CEO.

Speiser is one of the few VCs who will really be pushing you towards his best companies. This gives him a huge advantage in building his relationships with the best executives, because they know he will actually be helping them find companies they’d want to be running.

And the results support this. Take Snowflake Computing as an example. After Speiser stepped down, Snowflake brought on Bob Muglia as CEO. Muglia, who was previously EVP of Software at Juniper Networks and before that President of Servers and Tools at Microsoft, was an astounding get as CEO for a two year old startup. And then last year, Snowflake brought in Frank Slootman—formerly CEO of ServiceNow and Data Domain, and perhaps the greatest enterprise CEO of his generation—to run Snowflake.

Potential downsides of the model

However, Speiser’s model is not without tradeoffs.

For prospective founders, equity is one example of this. While VCs like incubations because they are able to command higher ownership percentages, this comes at the cost of founders ending up with less ownership. For founders who would start a company no matter what and don’t feel like they need much support, an incubation model like Speiser’s will leave them with significantly less equity than they could get otherwise. Benoit Dageville, the Snowflake co-founder with the most equity, had 3.4% at IPO—less than founders typically hold at that point.

For many founders, this tradeoff will be worth it after weighing how much Speiser and Sutter Hill increase the probability and scale of potential success. After all, Benoit’s stake is currently worth over two billion dollars. But for many, it may not be worthwhile.

Another tradeoff is autonomy. Just as founders and later CEOs may want Speiser’s experience and hands-on help building the company, they may regret his involvement where they differ in viewpoint. And the tradeoff of a VC being intimately involved in the business…is that they are intimately involved in the business. This is especially true where company and individual incentives may differ. Bringing Frank Slootman in as CEO, for example, could only come with the departure of Bob Muglia. I’m sure Muglia would have preferred to stay CEO, but the board decided they couldn’t pass up the opportunity to bring in Slootman. With two billion dollars in equity, Muglia may do it all again even knowing that. But there is a tradeoff in autonomy that comes with incubating a company with an investor so closely involved.

There are real tradeoffs for investors in the model as well. Incubating a company, especially as interim CEO, is significantly more work than only investing in a company. This means an investor like Speiser can make very few investments, so the cost if any of them don’t work out is higher. By only doing incubations, an investor also misses out on investing in promising companies and teams that are already formed. Finally, primarily incubating companies can create some misalignment with existing founders, who may be more reticent to meet since the investor is unlikely to invest in them but may incubate a competitor.

Supported by structure and process

Speiser’s incubation model breaks many conventions of and assumptions about venture. It even defies the conventions of how most firms do incubations. Spending two days a week with companies consumes more time and focus than most VCs give. Speiser makes fewer investments, which only works because of an implicit assumption that his companies have a much higher likelihood of success. Speiser currently has a roughly 20% hit rate of his companies achieving multi-billion dollar valuations. Comparatively, top VC firms are typically at single digit percentages. Other top firms’ ratios are lower partly because they make more investments, but Speiser’s hit rate is nevertheless exceptional. His structure and process are integral to this. And these success rates are what allow him to double down on his model. But how does he choose which spaces and companies to focus on with his scarce slots?

Underlying Speiser’s approach is a belief that ideas matter—and that success can be made far more deterministic than most believe.

Technical risks and secular shifts

The recurring core of Speiser’s approach is finding a market undergoing a massive secular shift—and betting all-in on the full transition. He favors companies where demand will be high if it works—but where the technical risk is high and most don’t yet believe it can be done. All investors like benefitting from market tailwinds, but it is a very different thing to bet on one well before it has manifested, to the exclusion of more conventional and existing approaches.

When Snowflake Computing was started in 2012, most investors and companies were convinced that in order to sell into large companies you had to support on-premise workloads. Hell, most customers were convinced that to sell to them you had to handle on-premise. Betting all-in reminds me of something Reid Hoffman once mentioned about principles: principles are only principles if you’d hold them even when they are costly. A bet is only all-in when it comes at a real cost to the business—for example, the customers that can’t be served because they need a hybrid solution.

Ghost Locomotion, another of Speiser’s incubations, is another example of this. It’s a purely computer vision based approach to autonomous vehicles (AV).

In AV most companies take a hybrid computer vision and LiDAR approach. Google’s Self-Driving Car team and the many teams started by Google SDC alums (Aurora, Nuro, Argo, Otto) use this hybrid approach. Though LiDAR is expensive, the thesis is that for a use case as high stakes as autonomous driving it’s important to have heterogeneous sensors that can make up for each other’s shortcomings. Historically these sensor fusion approaches have outperformed those focused solely on computer vision. Of course, general consensus is that on a long enough time scale computer vision alone will work for autonomous driving. After all you and I both use a computer vision approach and are able to drive. The key question is which approach will be first to market with a system that is accurate enough.

The bet for most autonomous vehicle companies is that progressing along the current hybrid approach will work, and that teams that have worked in autonomy will have an advantage—having seen the ceilings on performance that need to be overcome.

Ghost Locomotion, like Tesla, has a very different approach (it should be noted that Tesla also is a coupled bet on the feedback loop of large scale car telemetry data being key). They only use computer vision and machine learning—a purely software approach, rather than a hybrid hardware and software one. It is a bet that machine learning will improve at a fast enough rate to surpass hybrid approaches, or perhaps that the status quo approach cannot reach sufficient accuracy at all. It’s an aggressive view that prior domain expertise in autonomous vehicles is less important than expertise in AI and software engineering.

What approach will win in autonomous vehicles is entirely out of scope of this essay, but I point this out to say that going all-in on secular transitions is hard. It involves very real tradeoffs that will feel like they are wrong for a while and are often just wrong.

Going all-in on a market transition actually requires a more precise viewpoint on timing. Companies that do so rarely die because the market transition never happens—they die because the secular shift doesn’t happen fast enough. Understanding when the market is ready and how to help catalyze the transition is key.

Speiser explicitly seeks these secular shifts in the companies he incubates. Taking on technical risk over distribution risk. It’s the most publicly prominent feature of his investments. And you’ll see it throughout his thoughts wherever they are documented.

Snowflake Computing is all-in on cloud data warehouses. Not hybrid or on-premise data warehouses.

Pure Storage is all-in on flash storage. Not hybrid or disk based storage.

Ghost Locomotion is all-in on computer vision driven autonomous driving. Not hybrid or primarily LiDAR based autonomous driving.

Speiser’s companies take an aggressive view on transitions in the market, seeking out shifts that would create obvious and differentiated value that incumbents can’t provide, but that many don’t think will come yet. And Speiser’s companies go all in.

They are betting that the shifts will come sooner rather than later. Or perhaps more importantly, that they can be the catalyst that accelerates the market flipping abruptly towards the future.

Speiser is willing to underwrite the technical risk for years. Like many firms, Sutter Hill Ventures has enough capital to support a new company for years. But where most firms would balk, Speiser will continue to invest on that bet that the market will catch up to his view. And he has demonstrated repeatedly a willingness to continue to plow millions more into follow-on rounds of his incubations, even while they are pre-traction. Sutter Hill led the first two rounds in Snowflake, and continued to invest more money into later rounds.

Built-in rigor

As interim CEO, Speiser can bring his own rigor to the entire process from idea conception through finding product-market fit and being set up for scaling. This rigor can be seen even before a company or approach is solidified. My understanding is that Speiser meets with hundreds of potential founders and customers before deciding who to incubate a company with. This is orders of magnitude more than most VCs who meet with a handful of candidates when deciding on an incubation.

The rigor continues once the company is formed. Speiser leads the customer development process as CEO. This is key because in the early days of a startup, sales isn’t about revenue. It’s about product and market discovery, so a tight feedback loop is needed between sales and product. How to run this initial enterprise sales motion is very different from normal sales—and something few have experience in. But Speiser has lots of experience since this is the stage he repeatedly focuses on. These initial customer pilots are also much easier with a strong brand and pre-existing connections to potential customers. Through prior investments, Speiser has more relationships with potential customers to call upon and a more refined playbook for the steps of honing in on the ideal customer profile, how to structure the pilots, and more.

Most VCs fall into an uncanny valley. They won’t do work directly, so founders can’t offload work (and the cognitive load that goes with it) off their plate to their investors. But they don’t have enough context on a frequent enough basis to be able to really help shape the meta-process towards success. Speiser has both as interim CEO.

Considerations for the venture industry

Focus

Fittingly, Speiser’s approach to venture is the same as his approach to other markets: never go hybrid.

We see this in his decisions about structuring his work. Besides his investment in SumoLogic’s Series B, it appears he hasn’t made any investments beyond his incubations. There are other VCs in Speiser’s cohort with similarly impressive track records with incubations, such as Jim Goetz at Sequoia or Asheem Chandna at Greylock. But Speiser is rare in now only doing incubations.

At a first glance this appears odd—after all, there are many great companies that he didn’t incubate that his reputation could help him get access to investing in. Isn’t he sub-optimizing returns by not doing traditional investing?

The question is whether the loops of incubation and venture investing are additive to each other. In broad strokes they clearly are. After all, many of the same people who could be good founders to incubate companies with would be good founders to invest in separately, too. Much of the understanding of markets, company progress, and more are generalizable between the two approaches as well.

However, on deeper inspection they are actually not that aligned in many ways. Companies are a sequence of de-risking functional loops. Incubations are focused on the very earliest stages of these. By the time a traditional VC firm invests, companies are already set on many of the aspects that someone incubating companies must be good at.

While there is overlap in networks, in many ways there is also a tradeoff between what’s best for incubations vs. investing. By focusing solely on incubations, Speiser does not need to keep pace with the torrent of potential investments that occupies most investors’ schedules. This frees up significant amounts of time and focus.

Like a gas, the normal flow of investing can expand to fill an entire schedule. There is always a company of the week that must urgently be pursued. It’s very hard to carve out real time for incubations amid this.

While other investors incubate companies, there is little differentiation between how they handle their incubations and their traditional venture investments. Each part of Speiser’s process is tuned specifically towards incubations: deciding which parts of the company he’ll run for the first few years, the talent arbitrage, and his process for finding founders and spaces to incubate companies around. And from changes he’s made to the incubation process over the last few years, it’s clear that he continues to refine it. Going forward it will be interesting to see 1) whether he can scale the number of companies he can incubate and remove himself as the bottleneck, and 2) whether he can create more sources of value for incubated companies that compound further with every incubation he does.

Optimal firm structure is downstream of the expected value distribution of portfolio companies. If incubations have a different likelihood and magnitude of success than traditional venture portfolios, it’s inevitable the firm structures that best support them will also be different.

Visibility and brand strategy

Speiser is one of the top current VCs by returns, and after the Snowflake IPO he’ll likely have returned more than $12B to Sutter Hill. That will bring the share of billion-dollar-plus outcomes in his portfolio to around 20%, a figure likely to rise as more companies like Sigma Computing, Clumio, and Ghost Locomotion approach that valuation.

More important, he is one of the few with a unique and clear strategy that has clear compounding loops. Why then is he virtually unknown in Silicon Valley at large, especially compared to other investors with similar track records?

Many firms and investors rely on broad top-of-funnel brand network effects. Andreessen Horowitz’s model grows stronger as more people are familiar with Andreessen Horowitz. It’s their brand network effect that drives dealflow, name recognition, and improves their portfolios’ cost of recruiting or customer acquisition.

However, Speiser’s model doesn’t rely on these brand network effects optimized for inbound, so he doesn’t have similar pressure to broadcast his strategy and success. In fact, he avoids it.

Firms in this quadrant are the most interesting to study. Firms with strategies that require mainstream awareness are much better understood—because their approach is out in the open. Firms that are successful without relying on a broad brand network effect are much less understood—so there is greater potential that they have discovered an un-arbitraged source of compounding returns.

This is graphic I made for myself. You can tell because it’s worse than my normal graphics. Bet you didn’t know that was possible

That’s not to say he doesn’t have brand network effects at all. Within the network of people he’d want to start companies with, he is well known—but there’s no benefit to him in being known more widely. The percentage of his meetings that are outbound is likely significantly higher than most VCs, so what matters is that if he reaches out to people they view him as credible.

Speiser’s model is far more interesting than most firms’, because it bucks the current strain of conventional venture thought. It throws our assumptions of how companies must be started into disarray by abolishing the strict Chinese firewall typically held between VCs and the operations of their portfolio companies. It believes venture firms can increase the likelihood of company success much more than people think, which runs counter to the current view of picking and access as the primary frontier of competition.

Most importantly, it understands that the function of venture is more important than its form. And it adapts its structure to best serve its purpose—improving the fundamental derisking of companies.

Future of incubation

For the most part, the venture industry is skeptical of its ability to incubate successes with any consistency. This pessimism can be seen implicitly in discussions of venture success, which revolve primarily around knowing about and getting access to the top fundraises. And it can be seen in the structure of most firms’ portfolios and follow-on investments.

Speiser’s success incubating, along with a few others, is important because it is an existence proof of venture’s ability to meaningfully impact companies. Over time industries tend to get overly narrow, focusing on approaches that are known to work well. This is natural, but often means that without outliers that take new approaches and perturb the equilibrium, industries can get stuck on local maxima—with no one wanting to be the first to try unproven strategies.

Speiser’s success ratio is far beyond the normal distribution of venture outcomes, and the fact that they are incubated with him as interim CEO indisputably proves his involvement is effective.

Will the magnitude of success of Snowflake’s IPO trigger a re-examination of incubations? It should. With caveats.

Speiser’s model is in stark contrast to modern norms around the optimal level of investor engagement with companies. The Chinese firewall between investors and company operations is primarily a function of fear of investors harming the companies—whether maliciously or, more often, unintentionally. This should mean there are high returns where there is enough trust between founders and investors to support greater context and engagement by investors.

And as a market, it’s important to have high returns on increased trust, otherwise trust becomes purely an aesthetic attribute.

More importantly, the power dynamic in tech is shifting very strongly away from investors and towards founders. As this trend continues and founders are increasingly less afraid of their investors, the field will bifurcate. Investors will either be less engaged and more passive (primarily contributing capital and brand) or they will engage more closely with companies but be held to a higher bar of helping push progress. Over time, as leverage moves towards founders and the fear of investors being able to damage companies continues to fall, there will be more openness to new formats of investor and company engagement.

Incubations have far more degrees of freedom than investing, since investors are more closely involved in many more aspects of the company—especially at the proto-formation stage before many core decisions have calcified. If you look across the current landscape, there are very different strategies across the taxonomy of firms that incubate. Some of the investors that incubate include Asheem Chandna (Greylock), Aneel Bhusri (Greylock), Jim Goetz (Sequoia), Kevin Ryan, SciFi VC (Max Levchin’s fund), Thrive Capital, BoldStart VC, 8VC, Unusual Ventures, and many others. Just looking at this list, there are many axes on which they differentiate: their approach, what they think should be centralized by the incubating firm, and where they think value is generated. And the types of companies they incubate and the dynamic range of their outcomes are equally varied.

People often bucket all incubations together as a category. This is wrong, as there are more variants of incubations than of traditional venture. VCs and LPs trying to do incubations without a clear viewpoint on how to shape the form and structure of their approach will be disappointed.

In the coming year we are likely to see many moves to incubate companies. This will not be due to a change in the efficacy of incubating, but rather the shifts in market valuations—and the sharp rise in demand and valuations for earlier stage companies in certain segments.

When companies at $1M ARR can regularly raise in the hundreds of millions of dollars, the pull to invest earlier grows. Competition to invest in the companies attracting these multiples has grown so fast that investors are moving aggressively earlier and earlier in their lifecycle. In some ways this is rational as the industry has improved at inferring forward revenue predictability of companies, but in large part investors are moving to invest before all the data points on go-to-market have become clear. But as seed investing eventually also becomes very competitive and investments in new startups get marked up at high valuations, investors will increasingly look towards incubating new startups.

At some stage the constraint becomes the number of companies being founded in these areas. Incubating new companies will be one of the few ways for investors to generate proprietary deal flow.

Final thoughts

As a community, it’s easy for tech to become reflexively fixated on returns. Especially in this environment where every day brings a new company raising at astronomical valuations or going public at $75B, it’s easy to get lost in the allure of it. But lost in all of the breathless discussion is what is actually being built, what value is being added, and whether our ecosystem is improving.

Returns untethered from value creation are a temporal anomaly. Over a long enough time period, returns should accrue to where value is created. And we should be most worried when returns don’t have any tie to value created—an ecosystem with no fitness function cascades to the worst disasters. Just as companies are judged on their net present value of future cash flows, so too should we judge organizations on the net present value of their future value added.

Have you ever seen a sadder portrait of the rise and fall of empires than this drawing?

It’s easy to judge VC firms on returns. But value created above replacement is the much more interesting and leading metric, especially from a founder’s perspective. If the Ghosts of Christmas removed a VC firm from existence, how would all of its now former portfolio companies fare? Would they have raised from another VC firm and been in the exact same place? Or would they not even have existed?

At an ecosystem level this same principle applies. Companies and firms push progress by taking novel approaches that others can learn from. Increasingly, firms have converged on structure and strategy—a local maximum. Few firms perturb this equilibrium, and those that do often fail. But it's the pursuit of new approaches to every aspect of company building that pushes progress. It's these aspects of companies that create positive externalities for everyone.

Speiser's approach breaks from venture convention. And with Snowflake's recent IPO, it's getting attention that's long overdue.

In many ways, Speiser's playbook reminds me of TikTok. The most beautiful aspect of TikTok's business model is how every aspect of it is aligned with its network effect. Its short-form video format decreases friction and maximizes the volume feeding its network effect. Its algorithms favor shares and full watch-throughs, core metrics for engagement and distribution. Its duet and reply videos turn every interaction towards the creation of new videos. Its sounds, memes, and dances are all user-generated social capital marketplaces. It's glorious to see a company willing to rethink everything to best make its form fit its function.

Speiser’s incubation model shares that intentional design and craft. Over time, each aspect of his model has been shaped to better serve originating companies—rather than just repurposing the traditional venture model for a new function. 

And it’s not lost on me that similar to the talent arbitrage I discussed Union Square Hospitality Group having in Aligning Business Models to Markets, Speiser also has one in attracting EIRs and CEOs.

Among my friends who admire Speiser’s model, one question hangs over us all. Is Speiser’s model repeatable, or is Speiser unique? Can his playbook be improved and scaled, or is it a craft that is fundamentally artisanal? Have we found a new frontier of Coasean logic to company formation that will become mainstream with time, or a single bloom that will vanish with him?

There's reason to be optimistic. While Speiser feels like an outlier in venture, there are a number of individual partners who have incubated companies, to great success, with models that feel very similar. It is the intentional refinement of structure and process that seems unique—rather than the entire thesis.

Over the coming years we’ll hopefully see many investors evolve their own perspectives on Speiser’s approach and execute on it with their own modifications. There are many iterations that I’m particularly interested in seeing executed.

Of course, what excites me most about Speiser is that he is not done. In recent years, it's clear he's continued to push experiments on how to further refine the incubation model: how to scale it up while centralizing and compounding the value the firm provides, and how to use the unique advantages of his model and financial returns to provide a more compelling package to founders while building in defensibility. And, of course, failed experiments in where the model can be expanded.

I love what Speiser has done and continues to work toward. The true next dream in my opinion is scaling and systematizing his playbook beyond being constrained primarily by his individual effort. This will be hard—perhaps even impossible if the core of his success turns out to be his connections or some ineffable sense of taste.

But if it's doable, it would be one of the most important developments in Silicon Valley and tech. His model is a better abstraction layer and structure for creating successful companies, and it's a process whose effectiveness compounds with every company. It's ultimately a better way to truly drive innovation faster, rather than being merely a tollbooth on innovation.

Acknowledgements

Many thanks to Keila Fong, for the many discussions about this topic and help with this piece.

Additionally, thanks to Max Bulger, Michael Dempsey, Casey Winters, Saam Motamedi, and Zach Brock, for their discussions, edits, and help with this piece.

Underutilized Fixed Assets

Every marketplace is unique. But every successful marketplace is unique in the same ways. Historically, it has been very hard to find a successful marketplace that wasn't built on an underutilized fixed asset.

Airbnb is a canonical example of this. Someone has a guest bedroom. It’s always there, a fixed asset. But it’s unused and they make no money from it. Until Airbnb, they may not even have considered that they could make money from it. And then suddenly, with Airbnb it generates hundreds of dollars for them. It’s like an ATM they didn’t know existed in their guest room.

What Exactly are Underutilized Fixed Assets?

To understand underutilized fixed assets (UFA) is to understand each component of the term.

Fixed vs Variable Assets

Fixed assets are those whose cost is constant and independent of usage. If you buy a pan, it is a fixed asset. You have already paid for the pan; whether you use it once or a hundred times, the total cost does not change. Gasoline for your car, on the other hand, is a variable asset. If you don't drive at all, you will not spend any money on gasoline. But the more you drive, the more you will have to refill your tank and, by extension, pay for gas.

This difference between fixed and variable assets is flipped when you look at their costs on a per-usage basis. Variable assets are relatively constant in their cost per use, while fixed assets get cheaper per use the more they are used. The cost of a fixed asset is amortized across all of its usage, so it has a natural economy of scale.

There's a third and more important way to view fixed assets: they can be viewed as assets that are already paid off. Any incremental use of them is functionally free, unlike variable assets, where incremental usage always carries a cost.
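
To make the arithmetic concrete, here is a minimal sketch in Python, with made-up numbers purely for illustration, of how per-use cost behaves for a fixed asset versus a variable one:

```python
# Hypothetical numbers purely for illustration.
PAN_COST = 40.0          # fixed asset: paid once, up front
GAS_COST_PER_TRIP = 5.0  # variable asset: paid on every use

def fixed_cost_per_use(uses: int) -> float:
    # The pan's total cost never changes; it just amortizes over more uses.
    return PAN_COST / uses

def variable_cost_per_use(uses: int) -> float:
    # Gas costs roughly the same every trip, so per-use cost stays flat.
    return (GAS_COST_PER_TRIP * uses) / uses

for n in (1, 10, 100):
    print(n, round(fixed_cost_per_use(n), 2), variable_cost_per_use(n))
# 1    40.0  5.0
# 10    4.0  5.0
# 100   0.4  5.0
```

The fixed asset's per-use cost keeps falling with usage; the variable asset's never does. And once the pan is bought, every incremental use is effectively free.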

Underutilized vs Fully Utilized Assets

Assets have a maximum amount they can be used: both the frequency with which they can be used and the total number of times they can be used. There can be significant variance between two similar assets in their maximum utilization, or even disagreement about what a reasonable benchmark for max utilization is, but those are in-the-weeds details. Underutilized assets are those that are not used very much, while fully utilized assets are those whose usage is close to the maximum possible.

Why are Underutilized Fixed Assets important?

Underutilized fixed assets are things with fixed costs that are not being used as much as they could be. They are important because they *can* be used more, and from their owner’s perspective all additional usage is free.

Unlocking early markets

The cross-side network effect of marketplaces is incredibly strong, but equally difficult to create. Convincing both sides of a market to join on the promise of the other side being there is a constant struggle. And simultaneously building both sides is significantly more difficult than being able to focus on just one side of the market.

In the early days, many marketplaces have found an underutilized fixed asset to be an incredible boost to expedite building the supply side of their market.

The best way to think of underutilized fixed assets is as pure potential energy sitting in people's homes, cars, and random tchotchkes. Marketplaces take a tremendous amount of energy to get their flywheel spinning. But it's easier when there is an external supply of potential energy that can be put to work.

Preferred Pricing

Unlike businesses, most people with underutilized fixed assets baseline their value at zero. Any money these assets turn out to be worth is money they found lying on the floor that they’re happy to receive.

This lets a marketplace bring on supply at a much lower cost of acquisition than expected. And these savings can be passed along to consumers as well.

Latent supply

Once a new underutilized fixed asset is identified, a startup can grow rapidly because there is so much latent supply of the asset initially sitting unused.

This supply tends to be mostly retail: a more fragmented source of supply, from people who are less price sensitive since it's not a business to them.

One downside of more fragmented supply is the complexity of bringing it online, making it legible, and handling logistics. This tends to work in marketplaces' favor, however, as it's the exact type of complexity that software is particularly good at handling, and it allows them to unlock the underutilized fixed asset.

One way to look at marketplaces is as a series of supply and demand acquisition elasticity curves. Finding a good underutilized fixed asset is a surefire way to bend the acquisition elasticity curve for a while.

Burn the Bridges

One under-appreciated advantage of underutilized fixed assets is that because they are a finite and arbitragable source of supply, they are hard for new competitors to replicate once they've been discovered and tapped. It's hard for a new Airbnb to emerge because people with empty bedrooms are no longer unsure what to do with them–they use Airbnb. So a new competitor can try to go after these same potential listers, but they have to compete against Airbnb, rather than against hosts doing nothing with the bedroom.

Case Studies

For many marketplaces the early patterns are the same.

Airbnb

Before Airbnb, hotels were the primary option available to consumers. Yes, there were some short term house rentals or communities like Couchsurfing, but these alternatives had low liquidity and trust.

This wasn't due to a lack of empty bedrooms in cities. Every day, thousands upon thousands of empty bedrooms and homes go unused by their owners. But travelers and homeowners never used to connect. There was no way for them to find each other, and even if they did, they would not trust each other.

Airbnb bridged this divide. They brought both sides online. They built trust into the platform by handling payment, customer service, and discovery, as well as by building up reviews and photos for each listing.

All of this made it possible to list people's empty bedrooms and other underutilized fixed assets like them. These listings were often cheaper than hotel rooms since listers had already assumed the costs of their empty bedroom. They were also often better positioned, neighborhood-wise, than hotel rooms.

Uber

Before Uber and Lyft, taxis existed to varying degrees in each city. Mobile ordering and dispatch made UberBlack originally possible. But it wasn't until Lyft that driving was opened up to anyone with a car. Lyft's main insight was that mapping and routing apps like Google Maps and Waze had commoditized drivers' local knowledge. This made it possible for anyone to become a driver, massively expanding the pool of potential drivers.

Ebay

Ebay was the first of the online marketplaces, and perhaps the most successful. They raised a total of $7M before going public as an already profitable company. Before Ebay, if you had random stuff in your house, you either tried to sell it locally, perhaps at a garage sale, or just let it pile up. By making it easy to list, review, and discover items online, Ebay brought liquidity to their marketplace and made it possible for people to sell all the random unused things in their house.

New Startups

Every marketplace, in its early days, should be thinking about potential underutilized fixed assets that make sense for its platform.

For example, Hipcamp helps consumers find campsites to rent. Hipcamp adds supply by getting private landowners to list their land, and for many of these people their land was literally sitting fallow before Hipcamp made it possible for them to share it with others.

The Sequencing of Marketplace Supply

There's something beautiful about retail marketplaces: two-sided markets where both sides are just random average users.

They never last. So there’s a wistfulness to watching them, frozen for just a moment.

Look at all marketplaces as they scale, and you eventually find that the percentage of their revenue that comes from retail users on the supply side falls. Power sellers dominate Ebay. Uber is increasingly made up of drivers doing it as their full-time job. Airbnb is increasingly full of individuals, and now companies, that buy properties and convert them into ideal Airbnb listings.

I’ve learned to draw area charts. This changes everything.

A constant debate persists around whether to embrace the professionalization of one’s platform or to stave it off as long as possible. There are deliberate product choices that can tilt the platform in either direction. These decisions are a subset of a burgeoning understanding of the life cycles of a platform, which I’ll cover more in a future essay. But few marketplaces are able to avoid this transition.

Things can only be underutilized for so long

Empty houses, random stuff in your house, and idle cars. 

There is a natural invisible asymptote to these. Eventually, the growth of marketplaces built on top of these underutilized assets slows. They are a great playbook to enter a market, but there is a finite amount of unused assets–and eventually there are no more idle assets to utilize.

Profit drives new behavior

At the beginning, people with underutilized fixed assets dominate the supply side of a marketplace. These people will always have their place on a platform, because they have such a cost advantage. But eventually, others, or even some of the same people, realize that using the marketplace is so profitable that it’s worth it even if they have to bring new supply online.

There are natural economies of scale to this. Someone creating supply on a platform full-time will be better at many things like:

  • Understanding the intricacies of how the platform works
  • Understanding how to best make money on the platform
  • Knowing how to best get listed, seen, and discovered on the platform
  • Knowing the most attractive ways to invest capital for a return on the platform

Increasingly, the best hosts, sellers, and drivers on the platform professionalize. They are already making the majority of their income on the platform. And whether or not they are the majority of sellers, given their professional usage they are very likely to account for the majority of transactions on the platform.

Scaling is a business

Scaling beyond a certain rate is difficult using underutilized fixed assets. Besides there being a finite amount of them, there is a natural limit to the rate at which they come onto your platform.

Professionalized sellers are very different. As long as it’s profitable for them, they’ll do everything they can to expand to meet demand. Underutilized fixed assets are nice because they have a low effective cost basis.

Professionalized sellers are nice because they are variable and will scale as far as can be supported. Large marketplaces gradually shift to professional sellers as they try to maintain their pace of growth.

The consequences of sequencing

Companies consciously or unconsciously make decisions that embrace or reject this professionalization of sellers to differing degrees and speeds. These choices have large impacts on the growth rate and unit economics of the business.

They also have large impacts on the platform's relationship with its supply and on government regulation. For example, if you think of Airbnb's regulatory hurdles at a local level, local landowners renting out their bedrooms are far more attractive politically than fast-growing companies buying up for-sale properties to convert them into short-term hotel rooms.

Appendix A: Crypto

Crypto is no different. When Satoshi created Bitcoin, they identified an underutilized fixed asset that could be used to bootstrap the cryptocurrency's security. Bitcoin was set up so that individuals could have their CPUs, which were often left idle, mine Bitcoin when not being used. For early adopters, this was great. They hadn't realized their unused CPU cycles were valuable–but now it was like they had found free money they hadn't even known about. And in today's dollars, those who had their CPUs mining made substantial fortunes. And for Bitcoin, the deal was even better. Security requires something computationally hard. The Proof of Work system got others to provide the compute needed to scale security for Bitcoin, without Satoshi needing to personally pay significant sums of money for their own compute. These underutilized CPUs were distributed among a very fragmented and large set of early adopters, diminishing fears of miner centralization and making the very users the source of security as well.

However, Bitcoin is also a good example of the limits of underutilized fixed assets. Though underutilized CPUs may be the cheapest source of compute (free), Bitcoin mining is a zero-sum game. One's return is roughly proportional to one's share of total mining power. This is ideal for Bitcoin, because it creates a competitive red queen equilibrium, where miners must keep working to bring more compute to bear at as low a cost as they can in order to keep pace, much less gain ground on their competitors. What matters is not having the cheapest compute, but having the most scalable compute that is still profitable. Underutilized CPUs were cheap, but they weren't scalable.
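
As a rough sketch of why mining is zero sum (simplified numbers, ignoring luck, fees, and difficulty adjustments), a miner's expected share of block rewards is just their share of total hashrate:

```python
# Simplified illustration: expected daily reward is proportional to hashrate share.
BLOCKS_PER_DAY = 144       # roughly one block every 10 minutes
REWARD_PER_BLOCK = 6.25    # illustrative block subsidy; use the current value

def expected_daily_btc(my_hashrate: float, network_hashrate: float) -> float:
    share = my_hashrate / network_hashrate
    return share * BLOCKS_PER_DAY * REWARD_PER_BLOCK

# If the network's total hashrate doubles and mine doesn't, my expected reward halves:
print(expected_daily_btc(1.0, 1_000.0))   # 0.9 BTC/day
print(expected_daily_btc(1.0, 2_000.0))   # 0.45 BTC/day
```

Your payoff depends only on your relative share, so a fixed pool of idle CPUs loses ground the moment anyone else brings more scalable compute online.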

Enterprising miners soon realized that they could utilize their GPUs for Bitcoin mining. And machine learning startups renting GPUs from AWS soon learned about this the hard way, as miners realized there was a perfect arbitrage in renting GPUs from AWS to mine Bitcoin–and quickly tied up all of Amazon's GPUs until prices rose enough to make it unprofitable. Though Satoshi had intended for a distributed user base mining with their personal computers' CPUs, there were better places along the SLA-price curve.

Since then, mining has become even more specialized and driven by economies of scale, with mining hashrate centralized around a small number of large companies that specialize in mining. These companies create their own ASICs, which are purpose-built for mining Bitcoin and allow for significantly more mining power at lower cost than even GPUs. This has made CPU and GPU mining prohibitively expensive and, even for those with spare CPU cycles, not very profitable.

Appendix B: Food Delivery

Underutilized fixed assets are the topic of this essay. But all the other variants, such as underutilized variable assets, are important to understand as well. Food delivery is a great example of this. Many people express disbelief that food delivery startups have been able to get as many restaurants to sign up while charging large take rates (sometimes north of 30% now) from the merchants. “How do these restaurants afford it?” these skeptics ask. What these skeptics fail to understand is that restaurants do not view deliveries the same way they view customers dining in. There are many factors restaurants are constrained on, including ingredients, labor, kitchen capacity, and dining space.

For walk-in diners, the primary constraint is dining space. There is an immovable cap on how many tables a restaurant has, and thus how many turns they can do a night [1]. This real estate is a fully utilized fixed asset. So a startup bringing new diners to a restaurant is entering a zero-sum game, especially during peak hours when a restaurant knows it can likely fill all its tables. If a restaurant accepts a diner from a startup and pays them a take rate, this replaces a diner that would have walked in for free. This is why restaurants often don't list their prime hours on sites like OpenTable. They know they can fill their limited number of tables, so why pay OpenTable a fee for it?

But delivery is different. Real estate is not relevant to delivery orders. Instead, the two main constraints for restaurant delivery are labor and kitchen capacity. Kitchen capacity is an underutilized fixed asset. Most kitchens can handle more orders than they actually receive each day, but never need to because there's not enough space in the restaurant for more diners. Like all underutilized fixed assets, restaurant owners are very happy to have their kitchens handle more orders if it makes sense.

The other constraint is labor. Restaurants may have some underutilized labor, depending on how busy they are. However, if they take on any significant number of delivery orders, they likely need to have their workers do more shifts, or hire new workers. So labor is an underutilized fixed asset up to some point, but beyond that point is primarily a variable asset for restaurants.

So when a startup brings new delivery orders to a restaurant, the main question is whether the delivery will be profitable net of the restaurant's variable costs like ingredients and labor. Other factors like real estate costs are already fixed and so are not factored in by restaurants. If these delivery orders are profitable, restaurants are happy to take any and all incremental orders–and will happily pay a higher take rate in return for being brought the customer. And if the startup brings more customers than they have workers to handle, they're overjoyed to hire new workers, as long as the economics make sense [2].
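
A minimal sketch of that incremental-order math, with entirely hypothetical numbers (the values here are illustrative, not from any real marketplace):

```python
# Hypothetical unit economics for one incremental delivery order.
MENU_PRICE = 20.00       # what the customer pays for the food
TAKE_RATE = 0.30         # marketplace's cut of the order
INGREDIENTS = 6.00       # variable cost per order
MARGINAL_LABOR = 4.00    # extra labor per order (0 if the kitchen is idle anyway)

def incremental_profit(menu_price, take_rate, ingredients, marginal_labor):
    # Rent and equipment are sunk fixed costs, so they don't appear here.
    return menu_price * (1 - take_rate) - ingredients - marginal_labor

print(incremental_profit(MENU_PRICE, TAKE_RATE, INGREDIENTS, MARGINAL_LABOR))  # 4.0
# As long as this number stays positive, the restaurant takes every incremental order.
```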

Variable assets are great because they can scale well. However, they are far less desirable than underutilized fixed assets for a number of reasons. Their primary weakness is that they can be copied by competitors. Underutilized fixed assets, when discovered, have a huge amount of stored value. The first company to properly use them can increase their value significantly. However, after they've burned through this arbitrage, future competitors must find a new way to get advantaged distribution fast. This isn't true for variable assets, as we can see in food delivery: the field is crowded, with Grubhub, Uber Eats, and Doordash all competing in increasingly costly battles.

Endnotes

[1] In the US, most restaurants seem to view the number of turns possible per night as fixed. Many Chinese restaurants, however, do not. Instead, they view it as one part of the restaurant trilemma: food quality, price, and service/speed are all in tradeoff with each other. Chinese restaurants often optimize for great food at low prices, opting to make it up by turning over tables as fast as possible. That's often why these places will take your order before you're seated, give you your check before you've finished eating, and rush you out the door the second you finish the last bite of food. While Westerners often complain about this bad service, it's the ultimate service: all to maintain quality food at low prices.

It's surprising that this is not common in the US. Perhaps it's due to social norms around dining. However, this may be beginning to change. One of the few reliably profitable areas of the restaurant industry, Quick Service Restaurants (QSRs), are all about improving how much throughput each location can handle.

This chart is the only reason I published this entire essay

[2] This is why delivery companies are not winner-take-all on supply. Restaurants are rarely exclusive with delivery companies, because they want any and all incremental demand they can get. In fact, it's not enough to charge a lower take rate than your competitors, because restaurants will still accept all incremental orders they'll make money off of–since they're almost never over capacity. This is why delivery companies must compete to corner the market on demand if they want to be winner-take-all.

Acknowledgements

Thanks to Keila Fong for making sure that Kwokchain: Year 2 is not an entirely empty book.

The Arc of Collaboration

The arc of collaboration is long and it bends in the direction of functional workflows.

Why Slack is an Else Statement, there is no distinction between productivity and collaboration, and why the Slack of Gaming may be Discord but the Discord for Enterprise is not Slack.

Disclaimer: I currently use every product mentioned in this post, and love all of them.* I also used to work at Greylock and helped with the investments in Discord and Figma. There’s lots of opinions I have on both of them as well as their general spaces. But really you should talk to Dylan Field and Jason Citron. And John Lilly and Josh Elman, who led the investments in both. Because all four have shaped my thinking on productivity and collaboration significantly. And compared to the world they are still living decades in the future on how both are merging and where they are going.

*Except Salesforce, because I am not successful enough to need a Salesforce instance for my personal life.

When Slack first started growing, there were many debates over which company would own collaboration, Slack or Dropbox. Dropbox proponents argued that Dropbox already managed all the actual records of a company, and so would be the center of gravity. Slack partisans argued that Dropbox was a transitory product, and eventually companies would stop caring about individual files, and messaging would be the more important live heartbeat of a company.

Messaging, it turned out, appears to be a better center of gravity than documents. And while Dropbox (barring significant traction in its new products) seems to be fading in its centrality, what’s striking is that Slack’s victory seems hollow as well. If anything we’ve seen even *more* new companies building towards owning parts of these workflows and getting traction.

That’s not a statement on its prospects as a company, or its accomplishments. Slack, even with recent dips in its stock, is a $15B company with very impressive underlying metrics. But there’s this feeling that’s hard to shake.

If Slack won the war, and owns collaboration, why doesn’t it feel like the war is over?

Slack was supposed to be the app that became the OS, the end of the cycle on productivity. But that hasn't happened. How should we understand what's happening?

Slack is ubiquitous at most companies in tech (and in many other industries as well), but it doesn’t feel like it is becoming the central nervous system undergirding all the apps and workflows of its customers.

A new generation of functional apps has risen, with messaging and collaboration built directly into them as first parties. And with them it becomes increasingly clear that Slack isn't air traffic control for every app, it's 911 for when they fail.

Slack is the 911 for whatever isn't possible natively in a company's productivity apps. And though it's improving, there are still many structural cracks. Slack is the current best solution for filling these cracks. But it doesn't fix the cracks themselves; improved processes and productivity apps are needed for that.

As the ecosystem of specialized SaaS apps and workflows continues to mature, messaging becomes a place of last resort. When things are running smoothly, work happens in the apps built to produce it, and collaboration happens within them. Going to Slack is increasingly a channel of last resort, for when there's no established workflow for what to do. And as these functional apps evolve, there are fewer and fewer exceptions that need Slack. In fact, a sign of a maturing company is one that progressively removes the need to use Slack in more and more situations.

What drives these changes in collaboration? And is there room for one app to focus entirely on collaboration–and if so, what should it look like?

To understand this is to understand that there is no distinction between productivity and collaboration. But we’re only now fully appreciating it.

Separated at Birth: Productivity and Collaboration

Productivity and Collaboration are two sides of the same coin for any team with more than one person. Work is just the iterated output of individuals creating and coordinating together.

But the two have been distinct and isolated segments historically, due to how long the feedback loops of both were.

Post-software and Pre-cloud. Collaboration is external to productivity

Our modern era of thinking about collaboration began with the shift to software. Digital work has significantly faster feedback loops for productivity. Software, quite simply, can produce and iterate new things on a daily, if not hourly or minute-by-minute, basis.

Suddenly, the constraint on work became much more about the speed and lossiness of collaboration, which remained remarkably analog. The friction of getting people your document, much less keeping versioning correct, was non-trivial.

Even with the introduction of email, people could send each other files—but still had huge coordination costs around versioning.

We might as well have been using carrier pigeons

Cloud – Dropbox and Box

As the industry began to transition to the cloud, companies like Dropbox and Box rose. Instead of everyone keeping their own local copies of documents, what if everyone had them pooled in the cloud? Then parts of collaboration like versioning and permissioning could be done across the entire team.

Employees could make changes directly in a document and trust it would propagate to their co-workers. In practice, there were still versioning issues to handle. But it was a significant improvement.

syncing files for designers was hilarious chaos

However, this model looks transitory in retrospect. In a pure cloud world, the document as atomic unit seems increasingly archaic; documents are more a constraint of a pre-cloud world. And once you assume storing them online is table stakes, the question becomes where actual collaboration happens, and how it then leads people to wherever they need to do work.

And core Dropbox is not a solution to this. People store their documents in it. But they have to use email and other messaging apps to tell their co-workers which document to check out and what they need help with.

Dropbox understands this concern. It's what's driven their numerous forays into owning the workflows and communication channels themselves, with Carousel, Mailbox, and their new desktop apps all working to own that. However, there are constraints to owning the workflow when your fundamental atomic unit is documents. And they never quite owned the communication channels.

Slack

Slack became the place you messaged your coworkers and sent them links to the work you wanted them to check out. They began to displace Dropbox as the center of gravity for companies.

#corgi_meme should be a default channel when you create a new corporate slack

The dream of Slack is that they become the central nervous system for all of a company’s employees and apps. This is the view of a clean *separation* of productivity and collaboration. Have all your apps for productivity and then have a single app for coordinating everyone, with your apps also feeding notifications into this system.

In this way, Slack would become a star, with every app revolving around it. Employees would work out of Slack, periodically moving to whichever app they were needed in, before returning to Slack.

But productivity *isn’t* separate from collaboration. They are the two parts of the same loop of producing work. And if anything collaboration is in *service* of team productivity.

What is Slack, really?

There has been much pushback against Slack in recent years, often centered around this feeling that Slack is distracting and not productive. Much of it is the gripes that come with any app successful enough to become a significant part of your working life. But there's an underlying current to these critiques that I think is real, even if people struggle to pin it down precisely.

It’s not that Slack is too distracting and killing individual productivity. It’s that your company’s processes are so dysfunctional you need Slack to be distracting and killing individual productivity.

Slack is not air traffic control that coordinates everything. It’s 911 for when everything falls apart.

Every Slack message about a new document your feedback is wanted on, or coordinating what a design should look like, is a failing of process or tools. Slack is exception handling. When there's no other way to make sure someone sees an update, or knows context, Slack is the 911 that can be used.
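
To make the exception-handling framing literal, here's a toy sketch (entirely hypothetical routing logic, not anyone's real architecture) of how work gets routed when functional apps have native collaboration and Slack catches whatever falls through:

```python
# Toy illustration of Slack as exception handling, not a real integration.
FUNCTIONAL_APPS = {
    "design_review": "Figma",      # collaboration handled natively in the tool
    "code_review": "Github",
    "deal_update": "Salesforce",
}

def route(task_type: str) -> str:
    if task_type in FUNCTIONAL_APPS:
        return FUNCTIONAL_APPS[task_type]  # work and collaboration stay in the app
    else:
        return "Slack"  # the else statement: no established workflow, so escalate

print(route("design_review"))   # Figma
print(route("who owns this?"))  # Slack
```

As more task types land in the first branch, fewer fall through to the else.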

Slack serves three functions:

  • Else statement. Slack is the exception handler for when specific productivity apps don't have a way to handle something. This should decrease in usefulness as apps build in handling of these use cases and companies build up internal processes.
  • Watercooler. Slack is a social hub for co-workers. This is very important, and full of gifs.
  • Meta-coordination. Slack is the best place for meta-levels of strategy and coordination that don’t have specific productivity apps. This is really a type of ‘else statement’, but one that could persist for a while in unstructured format.

There is an entire separate essay to be written about meta-coordination, which I think can have very different outcomes from functional workflows. We may be very far from formalizing meta-coordination and less concrete strategy planning, which means unstructured text, meetings, and video calls could be the best current workflows for them for a while. But for the purposes of this essay we'll put that out of scope.

As a company's processes mature and the apps they use get more sophisticated, we should expect the need to go to Slack for exception handling to *decrease* over time. (Though of course, the complexity of the overall company may increase at a faster pace than this maturation, leading to a net increase in Slack messages.)

These three functions are incredibly important. From the perspective of owning the process of doing work, they point at an interesting relationship.

Slack’s importance is inversely tethered to the rate at which functional workflows within companies become legible and systematized. Both at an operational level, and long term at the meta-strategic level.

And this makes sense. The platonic flow of productivity should minimize time spent not being productive, with collaboration as aligned with and unblocking of that flow as possible. By definition, any app that requires you to switch out of your productivity app to collaborate is blocking and cannot be maximally aligned. It's fine to leave your productivity app for exceptions and breaks. But it's not ideal when you're working (and not hitting issues).

Functional workflows rule everything around me

Slack, ironically, is more similar to Dropbox than expected. The more time goes by, the more it looks like ubiquitously needed exception handling is a transitory product, much like documents, that we will switch off of. After all, like Dropbox, Slack makes the most sense as a global communication channel when the workflows themselves don't have communication and collaboration baked in natively. For documents this is true, but increasingly for modern apps it is false.

As it becomes clearer what the specific functional jobs to be done are, we see more specialized apps closely aligned with solving that specific loop. And increasingly, collaboration is built natively into them. In fact, natively built-in collaboration may be one of the main driving forces behind the venture interest and success in these spaces.

As these apps proliferate, there is less and less need to turn to Slack. And Slack becomes more and more about the edge cases that aren’t yet built in.

Github is a great example of this for the engineering side. Salesforce for Sales. Out of scope of this essay, but there’s lots to write about this and I’d generalize Shopify as being part of this as well.

But for our purposes, let’s use an example, Figma.

yo dawg, I heard you like chat. So I put chat in every app.

Figma

Figma is a collaborative design tool. Unlike Sketch or Photoshop, Figma has collaboration built in natively as a first party. This means the ability to comment on designs. But it means much more too. It means the ability to design together at the same time. To be able to send a live demo to someone frictionlessly and then be able to make live changes as you talk to them. It means being able to build design systems that are reusable and plugins that are shareable.

Figma shows what collaboration means when you understand that collaboration is *intimately* part of productivity. And always has been.

If you are working on a design, Figma handles all communication. There's no more need to send an updated file on Slack. Or type in feedback on Slack. Or make a change and let someone know on Slack. And as Figma increases the scope of their app and adds more team and enterprise features, the need for external communication falls, even for sharing with non-designers on the team.

And as Figma expands into plugins, the ecosystem will continue to solve for more and more of the needs and exceptions.

Over time, our workflows align with our functional flows. And collaboration is no exception.

And Figma is not alone. More and more apps in all categories understand that collaboration should and must be built in as a first party if they want to best serve their customers. Notion, Airtable, etc all understand this. The feedback loops of collaboration get so short that they become part of the productivity loop.

The future increasingly looks like one where companies use very specific apps to solve their jobs to be done. And collaboration is right where we work. And that makes sense, of course. Collaboration *should* be where you work.

Meta-coordination

It should be noted that meta-coordination adds nuance to this. Just as we increasingly productize the functional workflows, we can start to get better at meta-coordination on longer timeframes, which could have standalone functional apps that specialize in these slower-cadence coordination problems. Slack and Zoom are both possible answers in this regard. As are apps working on todos, project management, etc.

The efficient frontier of meta-coordination is fascinating. Over time we see productivity apps eat up the stack. Google Docs is a good example of the abstraction layers of coordination.

Google Docs is good at line-level commenting, so for this low level of coordination it excels. When sending Word documents was the state of the art, this felt advanced. But it increasingly feels limited for higher abstraction levels of collaboration, as apps like Figma build in deeper collaboration.

Can there be a meta-layer?

This isn't to say that there cannot be a horizontal collaboration app that is core to the productivity workflows. But it likely cannot be blocking to productivity. It can't be a standalone, peer-level app. Instead it must work across and within each productivity app.

Standalone messaging is not what ties all apps together. It is a peer level product that’s used where the others fall through.

However, there is a need for a layer across all the applications: a layer for things that should be shared across the apps, as well as collaborative functionality across them.

Slack in its current form cannot be this. If you have to switch out of a product to use Slack, then it is not the layer tying them all together. Instead, this layer needs to sit a level above. If everything were in the browser, it'd be a browser extension. But since most apps are not, it needs to be at the OS layer.

There is some mix of presence, collaboration, coordination, and identity that should be ubiquitous across whatever apps are being used. A layer more attached to the people doing work and what they’re trying to accomplish—than which specific app they’re in.

Perhaps one of the closest to this we’ve seen was Screenhero. After all, the idea of screen sharing is inherently about collaboration while working within productivity apps.

But it made the decision to be downstream of Slack, not upstream. It assumed Slack would be the central nervous system for people at work, and people would switch over to Screenhero from Slack. It traded scope for distribution. And got neither.

KK note: It was acquired by Slack in an all equity acquisition. So to be clear it was hugely successful

But there *is* a non-enterprise example of what this layer might look like.

That company is Discord.

Discord

Discord is the best analog for what should exist. For a while, Slack and Discord were compared to each other as competitors. As Discord has focused squarely on gaming, and Slack on companies, this comparison has been made less and less.

But this misses the main distinction between Slack and Discord.

Discord is actually two products bundled into one. It *is* a messaging app that looks akin to Slack. But it is *also* a meta-layer that runs across all games.

This is the Slack for Gaming for Enterprise. The new startup meme should be X for Y for X

Beyond its Slack-like functionality, Discord has functionality like a social graph, seeing what games your friends are playing, voice chat, etc. These have been misunderstood by the market. They aren’t random small features. They are the backbone of a central nervous system.

Active users of Discord have it on all the time, even when they are not playing games. It's a passive way to have presence with your friends. And when your friends start playing games, it makes it easy to join them in the game with one click, bringing your actual social graph across all games. Finally, voice chat makes it possible to talk with your friends across all games, even while you are in the game. Like when working in a Google Doc, having to switch out of your game to message is a negative experience. Instead, Discord adds functionality to your games even while you are focused solely on them.

We will see more companies understand and begin to work on this area.

Final Thoughts

Abstracting out of productivity and collaboration apps into the processes themselves, there's something beautiful about how much we've improved, and continue to improve, at the process of working together with other humans.

In Making Uncommon Knowledge Common I said “One way the tech industry can be viewed, is a process by which we collectively push forward our understanding of industries and new business models.”

And perhaps a company is just a process that hopefully compounds and improves in its ability to serve its customers.

But underlying both of these is the most beautiful loop of them all. Progress is a process by which humans compound and improve on our ability to work together better for the things we care about.

Like distributed computing, it has turned out that for most of human history coordinating among humans has been a slow, intractable, Sisyphean effort. In the last few decades we have seen tremendous technical breakthroughs in the latencies and tooling possible to remove these constraints. Across the world, whether in productivity apps or in national governance, there will be a transition period as our norms and processes adapt to this tightening of the collaboration feedback loop. But I remain incredibly bullish on what it means for our alignment and output as we increasingly systematize and make sense of these.

Our ability to compound together at compounding together is our most beautiful trait.

Appendix

Appendix: Distribution

Of course, an approach like Discord for enterprise will need novel acquisition loops. This type of collaboration has strong intra-company network effects at scale. But it lacks trivially obvious inter-company network effects or pre-liquidity loops.

That is out of scope for this essay, and I don't quite want to get into the tactics I think would be effective here. That said, I will note a general framework.

If you look at most collaboration companies’ loops there are a few dimensions to categorize most of the tactics and loop sequences by:

  • Single player vs multiplayer
  • Intra-company vs Inter-company
  • App required vs no app required
  • Synchronous vs Asynchronous
  • Personal capital vs Social capital driven

A company working in this space has significant surface area for novel growth loops at each combinatorial set of these.
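
These dimensions compose. A purely illustrative sketch (the dimensions are taken from the list above; the enumeration itself is just for illustration) of how large that combinatorial space is:

```python
from itertools import product

# Dimensions taken from the list above.
dimensions = [
    ("single player", "multiplayer"),
    ("intra-company", "inter-company"),
    ("app required", "no app required"),
    ("synchronous", "asynchronous"),
    ("personal capital", "social capital"),
]

combinations = list(product(*dimensions))
print(len(combinations))  # 32 possible loop archetypes to design tactics around
print(combinations[0])    # ('single player', 'intra-company', 'app required', ...)
```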

Appendix: Fortnite and Epic

Discord is also useful for understanding what comes after this stage. If you look at Discord, one potential TAM constraint is if gaming becomes 1) a power law with low ecosystem churn and 2) not monetized via purchase.

Fortnite and Epic are the best example of this potential. And Epic's playbook in launching their app store against Steam is a case study in how a dominant enough app can move up the stack if it has enough sway over end-consumer attention.

There are similar lessons for companies selling to other companies. And we’ve already seen examples of these in specific industries. So always, something to watch for.

Credits

Many thanks to Keila Fong, Saam Motamedi, Dave Petersen, and Eugene Wei for the many discussions about this topic, their help with this post, and their unceasing pressure to publish it.

Furthermore, even more thanks to Keila without whom these super professional quality stick figure drawings would not have been possible.

Though they didn't pre-read this post, I also want in particular to thank John Lilly, Josh Elman, Dylan Field, and Jason Citron, all of whom have heavily influenced my thoughts on productivity and collaboration. And compared to the world, they are still living decades in the future on how both are merging and where they are going.

Making Uncommon Knowledge Common

The Rich Barton Playbook for winning markets through Data Content Loops

Preface: This is part of a longer private memo analyzing Zillow and its recent shift towards Opendoor's model. I may publish the rest of the memo at some later point. But I wanted to share the first part, on Rich Barton and Zillow's initial rise.

I've had many recent conversations with people in tech who didn't know who Rich Barton is. So I wanted to share this both as a primer on him and on the cornerstone of his repeated successes.

When Michael Jordan returned to basketball from retirement—the first time, in his prime, not the second time of which we do not speak—the whole world watched in awe. Meanwhile, the tech world just saw the return of arguably the GOAT of consumer tech, the founder of three household names in Expedia, Glassdoor, and Zillow. And hardly anyone, even inside Silicon Valley itself, paid it any mind.

Rich Barton is hardly a household name. Perhaps this is because he's not based here, and makes relatively few investments. However, while there are more visible founders (like Bezos and Zuckerberg) who've built bigger businesses, market cap and notoriety aren't the only measures of a founder. And Barton is a strong contender for the title of best consumer tech founder because of his repeated success. He's founded three consumer companies each worth over a billion dollars: Expedia ($18.6B), Zillow ($8.8B), and Glassdoor (said to have been acquired for $1.6B).

And he’s back, having returned to the helm of Zillow as it pivots to respond to a new wave of fast rising competitors like Opendoor.

Repeatable success is key, especially in Consumer tech which is one of the hardest areas to succeed in. Companies that sell to large Enterprise customers are relatively well understood now, and even our understanding of SaaS metrics and business model decisions has matured a lot over the last decade. The Consumer tech sector, however, remains dark magic. The playbooks are far less developed—and no one’s playbook has demonstrated the repeatability of Rich Barton’s.

There are a few consumer investors who have multiple multi-billion dollar wins. But it’s hard to name people who have founded three consumer companies worth over a billion dollars each.

To reliably invest successfully in consumer is a rare feat; to repeatedly found successful consumer companies is virtually unheard of. Doing so suggests a founder has hit upon an underlying structural playbook that isn't yet commonly known or successfully replicated. And while some of Rich Barton's techniques are commonly understood, his core strategy for catalyzing his compounding loops is not.

So What’s the Playbook?

If you’re reading this, you’ve likely used Zillow, Glassdoor, and Expedia before. It’s hard to look on the internet for anything related to real estate, jobs, or travel and NOT see one of Rich Barton’s companies. Their ubiquity is stunning.

But it’s not coincidental.

Rich Barton’s companies all became household names by following a common playbook.

The Rich Barton Playbook is building Data Content Loops to disintermediate incumbents and dominate Search. And then using this traction to own demand in their industries.

Or as he puts it “Power to the People”

Much of what Rich Barton pioneered has now become mainstream. SEO/search is well saturated, and the importance of owning demand has been popularized by Ben Thompson's many essays on (Demand) Aggregation Theory. But the cornerstone of Rich Barton's playbook, Data Content Loops, is still underappreciated and rarely used.

Owning demand gives companies a compounding advantage, but needs to be bootstrapped. When a company is just starting out, it not only doesn’t own demand, it has all the disadvantages of competing against others that do.

In order to grow their demand enough to become a beneficial flywheel, Barton's companies use a Data Content Loop to bootstrap their demand, creating unique content and indexing an industry online (homes for Zillow, hotels and flights for Expedia, companies for Glassdoor).

  • Expedia: Prices for flights and hotels that you'd previously have to get from a travel agent
  • Zillow: A Zestimate of what your house is likely worth, which you'd previously have to get from a broker
  • Glassdoor: Reviews from employees about what a company is like, which you'd previously have to get from a recruiter or the company itself

These Data Content Loops help the companies reach the scale where other loops like SEO, brand, and network effects can kick in.

Barton’s companies then use this content to own search for their market. This gives them a durable and strong source of free user acquisition, which enables them to own demand.

Power to the People: Disintermediating Industries with Data Content Loops

Barton's career can be summed up by his mantra “Power to the People”. His companies take power from incumbents and give it to consumers. Instead of trying to hoard information, they are on the side of consumers, giving them more data transparency.

Glassdoor revealed how employees really felt about companies. Zillow shed light on what any house was worth. Expedia let people see the prices and availability of flights and hotels without talking to an agent. These were knowable things that people have always talked about with each other. There are few topics adults love gossiping about more than work, real estate, or travel. And few categories as core to their net worth.

Rich Barton took these whispered conversations and made them public for everyone to see. Afterwards, everyone wondered why they were ever private.

Part of the reason was that companies benefited from credibility through obscurity. Real estate brokers have access to significantly more data about specific houses and the general market via a set of data sources called the MLS. Historically, only brokers had access to MLS data, which gave them leverage over their customers and entrenched their importance as market makers. Similarly, lack of visibility into companies allowed bad ones to put on a good face until prospective employees had already joined. And only large companies could pay for data from compensation research providers, giving them an advantage over the potential hires they negotiated with. Many incumbents are able to intermediate their markets and unfairly gain an edge from people's lack of knowledge. And it's scary to be the first to buck this trend on your own.

Plus it is logistically difficult. Job applicants are unlikely to know a current employee at companies they are considering joining. And even if they did, it’s unlikely they could trust them to tell them the unvarnished truth. Employees have little incentive to say negative things about their employer, unless very close with the person asking.

This sparse commons is a classic case of natural market failure. While some incumbents take advantage of the information asymmetry, most benefit from a third party that will handle logistics like:

  • Verifying legitimacy of information being shared
  • Maintaining privacy of participants
  • Aligning incentives to get people to participate in contributing to the commons
  • Finding, ingesting, and curating third party data into the commons

Rich Barton's companies became public Schelling Points. They create common knowledge in their industries from information only middlemen had access to before, from public-but-hard-to-aggregate data, or from information collected from users themselves. The intermediaries, whether brokers or travel agents, were misaligned: they controlled what information was shared with the public, but had an interest in withholding it. Instead of pushing ever more and higher quality information to the public, they maintained the status quo.

Creating common knowledge creates a network effect. All companies in Silicon Valley want to build network effects, but few have followed Barton’s path despite its effectiveness. The more people use and trust Glassdoor, the more companies must take it seriously. And as users see more people contributing to Glassdoor, they can be more confident they’ll stay anonymous when they add their review. There are virtuous loops in common knowledge.

Demand Rules Everything Around Me

Search

All of Rich Barton’s companies have primarily used Search (and word of mouth) as their acquisition channel. Search is a great channel, since it can drive significant demand at low cost. Few companies can generate enough high quality web pages about their industries to fully capitalize on it, however.

The Data Content Loops of Barton’s companies let them be the authoritative public source on a subject at scale and low cost. By having super relevant information about every hotel, home, or company someone might be interested in, Barton’s companies become the ideal destination for consumers.

Over the years, he's refined this model. Expedia aggregated all the various hotel and travel options, but others had done that as well. However, Expedia and Booking.com were among the most aggressive in acting on the importance of search. If you had the top spot in search, the next best thing was to acquire more sites so you owned the next top result, and so on. Use Travelocity, Orbitz, CheapTickets, or Hotels.com? All of them are owned by Expedia. And any site not owned by Expedia is probably owned by Booking.com. This approach, coupled with aggressive paid acquisition, helped them dominate.

With Zillow and Glassdoor, Barton took this a step further.

Before Zillow and Glassdoor, if you wanted to look up information about a specific home or company, there wasn’t a webpage for it. Barton’s companies created the definitive page for each house and company. Using a combination of data from authoritative sources (like all the various MLS systems) and user-generated data, they created high quality content unique to each company or listing. Being among the first to do this let them do a huge SEO land grab, which has been hard to displace since.

If you look at the sources of traffic for Barton’s companies, the vast majority of their traffic comes from search or direct. This makes their user acquisition far cheaper than any company that relies primarily on paid acquisition. It’s this ability to get free acquisition at scale that made it possible to build companies in these otherwise difficult, low-frequency markets.

Becoming a Trusted Brand

The ultimate purpose of the “Data Content Loops + SEO” strategy of Barton’s companies is to own the demand side of an industry. Expedia wants to be the first place you go when you travel. Glassdoor wants to be the destination when you’re thinking about companies to work for. And Zillow wants to be the place you go to look at real estate.

Barton’s companies take industries that are low frequency and use their Data Content Loops and SEO to acquire users for free and engage them more frequently. While most companies in real estate have super high customer acquisition costs, Zillow is able to get potential sellers even before they are ready to sell, so Zillow is already there when the sellers are ready.

Owning demand ultimately becomes its own compounding loop, since becoming a trusted brand builds its own network effects. Consistently building this reputation increases people’s trust in these companies and makes them a go-to destination.

Saturation and Sequencing

The Rich Barton playbook was particularly strong because it both understood how to find a wedge into a new market and how to transition that to a durable long term advantage at scale.

Data content loops are surprisingly underutilized by tech companies compared to how effective they’ve been. They have a natural invisible asymptote–and often diminishing returns on more data over time. Like Underutilized Fixed Assets for marketplaces, they can be used as kindling to catalyze demand and hit the minimum viable scope of more scalable demand loops.

Zillow as Case Study of the Barton Playbook

Zillow is a perfect example of the Barton Playbook. The data for estimating the price of houses had existed, and many brokers used the MLS systems to estimate it, but nobody had made that available to the masses.

Zillow changed that with their Zestimate.

By combining data from various MLS systems and other sources with their pricing algorithms, suddenly everyone could look up the value of their home. Even better, they could look up the value of their friends’ homes. Within the first day of launching, Zillow had a million people trying to check out the Zestimate. That’s an incredible feat that even today few have matched.

Envy is the best rocket fuel.

This data content loop lets them estimate the value of 100M+ houses, driving anyone interested in the price of their home (or a home they’re thinking of buying) to Zillow. And those users continued to come back. Most might not be selling their home, but they could all check the prices of their homes, and any home they saw. But the Zestimate didn’t just drive users; it gave Zillow something far more durable.

The Zestimate became the kernel that Zillow used to create a webpage for every house. Zillow used its data content loop to become dominant at SEO for real estate. Try searching for your house on Google. I bet the first result is Zillow. And if not, it’s certainly in the top 5.

Nobody had yet indexed all the homes in the US and brought them online. While sites like Apartments.com had started to do so for rentals, it wasn’t until Zillow (and Trulia) that this was done for homes. There was fertile search real estate to grab and Zillow rushed out to claim it all using the Zestimate as its spearhead.

The Zestimate also had the network effect of becoming public common knowledge. It gave power to the people, and offered leverage against brokers. Armed with the Zestimate, sellers could push back on brokers who tried to pressure them to lower their prices. The Zestimate wasn’t a binding price backed by anything, but it forced brokers to justify why the pricing they suggested deviated from it. In many ways, Zillow became for homes what the Kelley Blue Book is for cars. And the more people used Zillow, the more powerful it became as an anchor in conversations with brokers. If you told your broker that a friend said your house should be worth $1 million, your broker would laugh it off. But if tens of millions of people are using Zillow and it says your house is worth $1 million, the broker may still disagree but they have to take it seriously. Thus this data content loop has a demand side network effect that strengthens with scale.

Zillow used these advantages to own the demand side of real estate. Even before they decided to buy or sell, consumers went to Zillow. And when they were ready to become buyers or sellers, Zillow was there to help direct them to brokers.

Final Thoughts

One way to view the tech industry is as a process by which we collectively push forward our understanding of industries and new business models.

Consumer will eventually be understood in many of the ways we’ve come to understand other business models like Enterprise and SaaS. Until then, founders like Barton with repeated successes are an early sign of some of the patterns and contours that can lead to repeatability.

While many of Barton’s ideas–like owning demand–have become mainstream, his use of data content loops to catalyze demand for his companies is still underappreciated.

Core to building a scaled consumer business is the unpredictable path of bootstrapping initial demand. Data content loops are one of a few strategies we’ve seen work very well for this phase of companies. And as the world increasingly shifts from supply constrained to demand driven, strategies like data content loops that empower consumers are likely to continue to be very effective.

Data content loops and Power to the People are the focus of this essay, but they aren’t the only beliefs Barton has advocated for.

Barton has been an early and loud proponent of the importance of:

  1. Unconstraining talent in society
  2. Raising the bar on ambition in companies

Both of which are very core beliefs among many of the people I respect the most. And also warrant much more discussion.

Aside: Ben Thompson has an interview with Rich Barton, which you should totally go read. And in general you should go listen to Rich Barton whenever he’s giving a talk, being interviewed, or appearing on a podcast.

The End of History?

Of course, Zillow’s story didn’t end there. It’s now the incumbent with a new startup fast on its heels. To understand how this happened and why Zillow is moving aggressively to match them, we have to look at the strengths and weaknesses of the original Barton Playbook and how Opendoor and other new competitors map to them.

Acknowledgements

Special thanks to Keila Fong and Dennis Tang for help with editing this, without whom it would definitely not be ready for public consumption.

Also special thanks to Casey Winters for discussing through the loops of Zillow’s business model. I’m sure it gave him PTSD flashbacks to the many days we spent in a room discussing companies while building the Advanced Growth Strategy class for Reforge.
Also thanks to Sam Hinkie and Eugene Wei for discussing this topic and splitting it out for public sharing.

Aligning Business Models to Markets

Why Shake Shack and Super Duper have great Service. And what that means for tech.

Danny Meyer opened Union Square Cafe in 1985. Since then, his Union Square Hospitality Group (USHG) has expanded to nearly 20 restaurants, ranging from Michelin-starred fine dining at The Modern to casual barbecue at Blue Smoke. The USHG empire also spawned Shake Shack, now publicly traded with a market cap of almost two billion dollars and more than 200 locations. In his business memoir Setting the Table, Meyer attributes his outsized success to an uncompromising focus on employees that leads to differentiated service.

If Danny Meyer’s employee-first approach is so effective, why haven’t we seen more restaurant groups adopting it faster? Or even more service-first approaches?

This is an important question to ask, because it’s the question we keep asking ourselves in tech too. Why do some companies seem to run better than others, and why can’t others replicate them? Why do certain companies do so well in their industries, whether Fortnite in gaming or Airtable, Figma, and Notion in productivity? Perhaps answering this for restaurants will shed light on it in tech.

Structurally supporting service

It’s tempting to think USHG’s approach hasn’t become common because owners and employees don’t care about service. While this may be true of some, it seems unlikely across the service industry.

Instead, to riff on Hanlon’s Razor: “never attribute to stupidity or malice that which can be adequately explained by structural alignment of incentives.” Providing a high level of service is a choice that must be supported by your business model. Few can afford the investment. But for those that can, the dividends are significant.

Investing in your employees is expensive

[Thesis: turnover makes it hard to invest in training, which is a prerequisite for service]

The constraints of most restaurants’ businesses make it hard to replicate employee-first approaches. Most restaurants don’t have the employee retention or capital to improve their service significantly. The economics of their business don’t allow them to invest more in service.

When Danny Meyer says customer service is important, a prerequisite of this is being able to hire high-quality people and invest significantly in training them. And most restaurants simply can’t afford to do that.

Kenji Lopez-Alt (author of The Food Lab) has a great interview on the Freakonomics Podcast about the challenges of opening Wursthall, the restaurant he co-founded. In it he spoke about the difficulty of hiring, training, and retaining great talent:

“Finding good people is by far the hardest thing…finding great people is very hard. Even finding remotely reliable people. Even before we opened, when we were training staff, we must have lost probably 50%. 50% turnover over the course of a few weeks. Which is not abnormal.”

Imagine a startup with 50% churn. What could they even do? Forget how high the cost of recruiting might become. It would be impossible to invest in training employees, much less to maintain any standard of service. With an employee half-life of weeks, none of this is possible.

For many restaurants it’s prohibitively expensive to train employees for months. They don’t retain most employees long enough to justify the costs. According to an industry expert who’s run chains at both the fine dining and fast casual ends of the spectrum, employee churn rates at restaurants can range from 50% annually at fine dining restaurants to 70% at casual places and 110% at fast casual chains. At those attrition rates, employees are more likely than not to be gone by the time training is done. And since employees don’t view most restaurants or chains as a place for their long-term careers, the more you train employees and help them build experience and skills, the sooner they will leave.

Could any restaurant take the leap of faith and invest in their employees? Perhaps, but it’s a risky bet. It takes significant capital to do this, with no immediate payoff. And the costs run deeper than training: if you say customer service matters, then compensation and bonus structures must reflect that.

The farm club model of talent retention

[Thesis: providing advancement opportunities is one way of combating turnover]

USHG is a constellation of very different restaurants and chains. At one end it has Michelin-starred fine dining restaurants like The Modern and Gramercy Tavern; at the other it has the large chain Shake Shack. And there are many restaurants in between those two ends of the spectrum of pricing and scale.

Unlike at many restaurant groups, this variety means Union Square Hospitality Group can hire people early in their careers–and plan for them to advance their careers from within USHG.

You can start working at Shake Shack, and then move on to managing your own Shake Shack or working in one of USHG’s more upscale restaurants. This is true on both the business and chef sides of the business.

If you do well you could go on to run a restaurant in USHG’s portfolio. Or if you wanted to open your own restaurant, you could open one with Danny Meyer as part of USHG–or start your own restaurant and have USHG as an early investor. Another possibility is what the three-Michelin-star restaurant Eleven Madison Park did: it was a USHG restaurant that USHG sold to its general manager and head chef, who’d both worked at USHG for years.

By having a portfolio of restaurants at different scales and price points, employees are able to grow their careers while staying in the family. And USHG is able to have high retention and invest more in its employees.

Back of the House is another prominent restaurant group, founded in San Francisco by Adriano Paganini in 2009. Its portfolio includes ten casual to mid-level restaurants, ranging from Belgian brasseries to Argentinian steakhouses. And like Union Square Hospitality Group, it also has fast casual chains, including a successful burger chain, Super Duper. Perhaps most importantly, Back of the House has also taken a farm club approach to growing talent to fuel its expansion:

The last big piece of the pie for Paganini is the big working family he’s assembled over the past two decades. If you take stock of Back of the House’s upper management, you’ll find that many staffers here began as waiters and line cooks. Director of operations Jessica Spencer-Flores got her start as a server at Starbelly. Luis Flores, general manager of Uno Dos Tacos and his first own full-service restaurant Flores, was a manager at the very first location, in the Castro, of Super Duper Burger. Giovanni Joris, a former server at Lolinda, is now GM at A Mano. And Patricio “Pato” Duffoo, who started with Back of the House as the sous chef at Starbelly, is now the executive chef of Barvale.

“This is what excites me—I see them grow and get better and smarter,” says the boss. “We share a philosophy for providing customers with good value, and it’s easy to see consistency across all our restaurants because it’s what we all believe. It’s not manufactured or forced.”

By having all these avenues to accommodate the career growth of its employees, groups like USHG and Back of the House have lower employee churn. This allows them to invest more in training their employees because they know they will be able to reap the benefits of that investment over a longer period of time. Long term, the amount invested in employees is dictated by the return captured by the company — similar to LTV/CAC and payback periods in the realm of user acquisition. These restaurant groups have found a way to extend the period over which they capture that return.
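The LTV/CAC analogy can be made concrete with a back-of-the-envelope calculation. Here is a minimal sketch in Python; all the numbers (training cost, training length, monthly value added, churn rates) are hypothetical, and the point is only to show how retention changes the return on the same training investment.

```python
# A minimal sketch of the LTV/CAC-style framing applied to employee training.
# All numbers are made up for illustration.

def expected_training_return(training_cost: float,
                             training_months: float,
                             monthly_value_added: float,
                             monthly_churn: float) -> float:
    """Expected net return on training one employee, assuming a constant
    monthly probability of leaving (so expected tenure = 1 / churn)."""
    expected_tenure = 1.0 / monthly_churn
    productive_months = max(expected_tenure - training_months, 0.0)
    return monthly_value_added * productive_months - training_cost

# High-churn fast casual restaurant vs. a group that retains people for years.
print(expected_training_return(5_000, 3, 800, 0.25))  # ~ -4,200: training never pays back
print(expected_training_return(5_000, 3, 800, 0.02))  # ~ +32,600: training compounds
```

Same training spend, same employee value add; the only thing that changes is how long the company gets to capture the return.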

This business model has an advantage in attracting talent that compounds. As these restaurant groups are able to get higher returns from their employees and thus provide larger career growth for them, they can attract stronger prospective employees who might not have considered hospitality before due to the limited career growth opportunities. And this talent loop is particularly brutal on competitors, because talented employees at restaurants without similar growth opportunities have an incentive to leave and join one of these groups’ portfolio restaurants. In this way, those who do not have business models built for employee career growth will increasingly find it difficult to hire and retain great talent.

Why did the service focused model succeed now?

All of this explains why Danny Meyer’s model for USHG has advantages, but why did it particularly show up when it did?

While I wouldn’t go so far as to say a similar strategy *wasn’t* possible before, I think there are many trends that point to why we will increasingly see more restaurant groups converge on this approach. There are macro tailwinds that USHG rode. And they are identical to many of the tailwinds hitting tech as well.

The Internet has radically raised the bar for in-person service

The world is becoming increasingly demand driven. Consumers have more and better choices, and have become far more informed and educated about their options too.

There used to be a paucity of options, so just being in a neighborhood could drive demand. However, as the world urbanizes and transportation gets better, consumers have an abundance of options–and being a place consumers want to come back to becomes more important.

And due to the internet, consumers have more ways to decide which restaurants are best. While before there were only a few ways to hear about restaurants, now there is Yelp, many food blogs, and all of your friends on social media.

Before, many restaurants could expect one time customers from those who happened to be in close proximity and needed a place to eat. Like the classic Times Square restaurant that caters only to tourists in the area who will never be back again.

Now restaurants are increasingly competing for informed consumers who deliberately choose where to go (or go back to). They are aided by sites like Yelp, which aggregate ratings and reviews of prior diners–making the experience of each customer matter more. The bad experience of one customer can deter many future ones if they leave a bad review. And similarly, the small details that make a customer’s night can now spread through their reviews to many others.

These shifts to the demand side of the industry have made restaurants care more about quality of service. Customers are able to go anywhere they want, so great restaurants are better able to retain customers. But customers also hold these restaurants to a higher bar.

The market has reacted to this by rewarding restaurants that are focused on service. Danny Meyer and the USHG model are well suited to this shift in the hospitality market. Their business model is better aligned than traditional restaurants’ with a market that prioritizes higher quality of service and customer and employee retention.

And while it started with social, it won’t end there. With social media, it’s not just service that gets noticed. Instagram has driven the rise in importance of ambiance and aesthetic to restaurants. How well your decor and food photograph impacts how far they can spread socially. Restaurants used to hate diners taking photos of their food–now they realize there’s no better acquisition channel. There are many places that are almost entirely built around getting social distribution. In the early days, creations like Dominique Ansel’s Cronut would get discussed on social media, but now there are many places that not only make unique creations–but optimize them for Instagram distribution. Like black ice cream or rainbow grilled cheese.

As the internet reaches each aspect of restaurants, it makes each of those aspects matter more. And this will restructure what it takes to be a successful restaurant.

Shifts in market dynamics cause new business models to flourish

Changing market dynamics, business models, and the resulting features of these companies are all tied together. Business models like USHG’s enable restaurants to invest more than others in service, not just talk about it. Structural changes in the demand side of the restaurant industry ripple downstream into the flourishing of new business models.

This is identical to what we see all over tech today. As the structures of markets change, the optimal business models change with them. Business models are how we align and reconcile a market’s needs with the cost and human capital required to provide them. Alignment of markets and the costs to serve them is core. And as either side changes, so too do the business models that are dominant.

Case Study: Gaming (Fortnite)

Fortnite and the evolution of gaming is a good example of this. Why has Fortnite, a multiplayer-first battle royale game, risen to be the most successful game–and could this have been predicted?

Gaming, like all industries, is shaped by its structure. Over the last decade fast, consistent internet connections have become ubiquitous for all gamers. This change has swept through all aspects of gaming in ways most players don’t appreciate.

In a pre-internet gaming world, it was hard to update games over the air (OTA). This meant that all games needed to ship with the assumption that the company wouldn’t be able to improve them further, which made the process much less iterative than we’ve come to expect in tech.

This structure caused games to monetize via upfront purchases. Since companies couldn’t update or improve the game after it was bought, and often had no way to maintain contact with the customer, it was very hard to justify asking customers to make recurring payments. With ubiquitous internet connectivity, companies are able to keep working on their games. It’s more like how a tech company keeps iterating on its product than like making and releasing a movie. And as companies keep adding value to their games, the best games can start to charge users recurring subscriptions.

But over the last few years, a different aspect of ubiquitous internet connectivity has shifted the gaming industry. Gaming has shifted from primarily single-player to multiplayer-only games. Multiplayer-only games didn’t use to be possible; there weren’t enough players with good enough internet access. But that access has now become widely available. And as more games introduced multiplayer and started to understand its dynamics, the utility of the top games shifted from aspects like plot or the single-player campaign to the multiplayer experience.

But this shift to multiplayer driving the utility of games means that gaming has become fundamentally network effect driven. The utility of a game is driven by how robust its active user base is. With scale there are more games, better matching, higher likelihood of playing with your friends, etc.

So the business model of games has shifted to match this. If you’re optimizing for maximal active players and retention, then having people pay upfront, or even pay a monthly subscription, limits your user base considerably. Instead, it’s better to have your game be free to play, and monetize via optional in-game purchases. This wouldn’t even be possible in a world with physical distribution. But digital distribution has zero marginal cost and makes it viable. Fortnite used this business model change to great effect against PUBG, which charged an upfront purchase fee.

Fortnite is the culmination of these structural shifts. But the shifts to gaming haven’t ended. And the evolution will continue.

Case Study: Productivity Tools (Airtable, Figma, Notion)

These trends aren’t unique to gaming. These same second order impacts of ubiquitous internet connectivity are hitting other industries in exactly the same way.

Take the wave of productivity products that have raised at huge valuation multiples in the last year, like Airtable and Figma. And the more waiting in the wings, like Notion.

Their successes have much in common with Fortnite’s rise. As these tools become online-first, it allows them to be collaborative-first. Their utility is increasingly driven by the network effects of collaboration within your teams–outweighing any features they may lack compared to legacy products.

Similarly, by being browser based, distribution and onboarding become nearly frictionless. When I send you designs in Figma you can see them immediately. With software like Photoshop, the recipient would need to download, install, and sign up for Photoshop before they could see (much less interact with) the designs. And these tools use a freemium approach and pricing model that wasn’t possible before. This pricing model is also key because it builds the network effects of the product in a bottoms-up, product-driven way–which makes enterprise sales to those companies far more effective. And more importantly, it improves sales velocity significantly.

If there were a particular area of tech most similar to USHG, it’d be the rise in startups focusing on retention and increasing share of customer wallet. As customers become more cognizant of their options and switching costs go down, companies that provide the best service are able to better compete for customers and then absorb more of their spend.

As the cost of forming startups decreases and capital availability increases we see a proliferation of options for consumers in any given category. This market supply fragmentation provides users with more options–and shifts leverage in the market towards demand.

Similarly, this increased competition makes acquisition more efficient and competitive for startups. In order to maintain an edge, companies need a proprietary advantage such as better retention or monetization to be able to compete long term. For example, this is the trend we see in direct to consumer ecommerce. Originally, as new paid acquisition channels like Facebook expanded, companies could easily enter the market with little competition. But as it became easy for many companies to start and use these same channels, acquisition costs rose. Only companies with better retention or monetization are able to maintain their spend and even outspend and force out their competitors. This has driven the rise of new business model types like subscription ecommerce.

But this holds at a more general level too, whether it’s the shift from enterprise software to SaaS or the shift from listings to marketplaces to vertically integrated services. Our business models change as a function of the structure of their markets.

Summary

So why have restaurateurs like Danny Meyer and Adriano Paganini been able to succeed where others have struggled? Certainly one part is their personal conviction in customer service. It’s allowed them to bet on investing in customer service where others wouldn’t–even before it was proven to be correct. But adoption of this model by others has been relatively slow despite its success, so passion for service alone is unlikely to be sufficient.

Investing in employees is an asymmetric strategy for companies with the best retention and employee appeal.

Union Square Hospitality Group and Back of the House have not just realized that service is important and that it requires investing in and retaining employees. They’ve structured their business models around being able to invest in employees.

Instead of hoping they can out-execute, they’ve understood the changing underlying structures of their markets, and aligned incentives and business models to thrive in them.

This isn’t unique to restaurants. It’s in gaming and productivity. And every other industry.

When we understand the structural shifts in our industries we can understand the second order impacts of them that ripple down to the business models and companies that thrive. And many of them rhyme more than we think.

Credits

Thanks to Keila Fong, Michael Dempsey, Lauryn Isford, Eugene Wei, and Sam Hinkie for discussing this topic with me and their help with this essay.

Also thanks to Dan Romero for helping refine edits to this essay.

Endnote: The Second Order of Structural Systems

Finally, as a note: Danny Meyer and the Union Square Hospitality Group are a good example of how we often discuss the first order cause of things without understanding the structural systems shaping them. People reading Setting the Table often talk about being more customer focused. But they don’t understand that it’s not about trying harder. It’s about setting up their business model to align with prioritizing customer service. And identifying spaces where that can happen.

We see this problem too often in many areas. For example, US politicians have often spoken about spreading democracy around the world. But without helping build the institutions that make democracy functional first, we often see countries have ‘democracies’ that are even worse and more corrupt than their prior governments. We must understand the underlying structural alignments of any area in order to understand how to build the right improvements to them.

Similarly, I recently read a question by Jack Altman on advice. My personal view is that most people giving advice suffer from the same first order mistake. They say what they did that worked. But they don’t elaborate on the underlying structural features of their situation that would need to be true for their advice to be applicable in a new situation. Situations can be topologically equivalent–so as long as the structural dynamics are the same, the advice is useful to apply. But we often see people say advice isn’t useful, because nobody is discussing these structural alignments. It’s like watching someone try to transplant a plant to the desert–without paying attention to the soil, sun, and watering conditions it thrived in. And then be shocked it died.

When we try to understand systems that work, we need to talk more about the systems that support them. Both in markets and within companies.

Appendix A: Real Estate and Cost of Capital Advantages

This model also allows for another of USHG’s less appreciated advantages: real estate. Many of USHG’s original restaurants were opened in neighborhoods that were fast appreciating in value, like Union Square and Flatiron. These created a macro tailwind for the business. And while that cannot always be predicted, by building many restaurants under one group, they’ve been able to take advantage of their strong brand to get prime real estate. Many developer groups bring restaurant groups with demonstrated quality and customer awareness into their new developments at preferential terms. Some examples include the multiple restaurants USHG has in Battery Park City or in the Museum of Modern Art in New York City. Similarly, Back of the House often opens its locations in fast rising areas of SF, like Nopa.

This model is a company loop: as these restaurant groups train better employees who stay longer and are able to improve the business, they increase their profits and brand, and are able to reinvest further in their employees.

Another advantage of the USHG and Back of the House conglomerate model is cost of capital. Conglomeratization provides access to more and cheaper capital for new restaurants. Investing in the average restaurant has a poor expected return. Investing in proven restaurateurs improves the odds. Even better if they have an established brand and operations that can improve the likelihood of success. Being a larger conglomerate also gives the groups more sources of capital they can raise from.

By having a shared base of capital, these firms are able to pursue a portfolio diversification model, with established higher end restaurants providing a more stable capital base, strong consumer branding, and career growth opportunities, while experiments in fast casual business models are higher risk but can provide outsized returns when they work (Shake Shack).

Appendix B: Costco Case Study

As an example from another industry, consider Costco. Their median employee tenure is 4.8 years, an incredible outlier within retail. For comparison, this is much higher than Target (2.2), Walmart (3.3), Walgreens (2.8), or the abysmal Ross Stores (1.2). They have 5% annual attrition vs the 59% industry average. This is buttressed by their business model. Unlike other retailers, they sell everything roughly at cost, and make the majority of their profit from the annual membership fees customers pay to be able to shop at Costco. Customers love this model, with 90%+ renewal rates. And with their limited number of SKUs they are able to handle more throughput with fewer people, generating $600k+ in revenue per employee, around three times more than competitors like Walmart or Target.

With their subscription revenue, high customer retention, and high revenue per employee, they’re able to invest considerably in employee training and compensation. Costco employees make an average of $22 per hour, significantly more than the industry average of $12 per hour. The result is a very loyal employee base that stays for years, if not decades. This high employee retention allows them to have a very high quality of service and unusually knowledgeable staff.


Selection Bias in Poker

In venture we often talk about the selection bias (both positive and adverse) in many areas—whether it’s the companies in a specific incubator, the effect of offering blanket deal terms to every employee in a company, or any number of other scenarios. But because we rarely quantify them, it’s hard to know exactly what their impact is—or to have an intuitive feel for how to calibrate and adjust for them.

I was bored one night, thinking about other areas with selection bias that’d be fun to take a look at—preferably ones with quantifiable data. And poker came to mind. We all have an intuitive sense that hands that get played in poker are better than the average hand dealt out. But most casual players likely can’t estimate how much better one should expect hands not folded to be.

I looked at a dataset of ~7k hands of poker played, and focused on the hole cards (the two cards dealt to the player that only they can see). I wanted to see how the distribution of hands that players got differed from the distribution of hands where they stayed in at least until the next round of cards was dealt (versus those hands where they immediately folded). You can see the data source and methodology at the bottom of this essay. To be clear, there are lots of reasons this dataset shouldn’t be taken as generalizable or precise. But for my purposes, it’s illustrative enough.

In 26% of the ~7k hands, the player stayed in and didn’t immediately fold. The question is how the hands that are kept differ from those that are folded.

Below is the percentage occurrence of each card rank among all hands dealt, as well as among hands kept. As you would expect, all the ranks are dealt roughly equally. However, there is a wide range among hands kept—with higher value cards showing up significantly more than lower value cards. The Ace is kept over five times more often than the Two.
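For anyone who wants to reproduce this kind of analysis, here’s a rough sketch of how the keep rate and the rank distributions could be computed. The file name and column names (card1, card2, folded_preflop) are placeholders I made up, not the actual schema of the Kaggle dataset.

```python
import pandas as pd

# Hypothetical schema: one row per dealt hand, the player's two hole cards
# as rank+suit strings (e.g. "Ah", "Tc"), and whether they folded pre-flop.
hands = pd.read_csv("hole_cards.csv")
hands["kept"] = ~hands["folded_preflop"].astype(bool)

# Fraction of hands the player stayed in for (~26% in this essay's data).
print(hands["kept"].mean())

# Rank frequency among all dealt cards vs. among cards in kept hands.
all_ranks = pd.concat([hands["card1"], hands["card2"]]).str[:-1]
kept_ranks = pd.concat([hands.loc[hands["kept"], "card1"],
                        hands.loc[hands["kept"], "card2"]]).str[:-1]

print(all_ranks.value_counts(normalize=True).sort_index())
print(kept_ranks.value_counts(normalize=True).sort_index())
```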

 

This analysis is not particularly useful on its own, because we don’t think of each hole card in isolation. After all, having a matching pair of cards can be far more valuable than two higher but non-matching cards.

Instead, let’s look at some common types of desirable hands. Having face cards is great, as is having a matching pair of cards. Even better is having a pair of face cards. Of course there are other attractive hands, like a flush or straight draw—but for simplicity we’ll focus on face cards and pairs.

Below is the probability that if the player had one of these combinations of cards—they would then play it to at least the next round rather than fold.

Again these results are not surprising, but they help quantify our intuition. If a player has a pair of face cards, they are virtually guaranteed not to fold. In our data set a pair of face cards was dealt over a hundred times—and only once did the player fold them. Similarly, when the player had two face cards or a matching pair of cards, they played them over 75% of the time. On the other hand, if they didn’t have these, they were far less likely to keep their hand.
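A sketch of how these keep probabilities could be computed, using the same hypothetical schema as above. Note that counting the Ace alongside the face cards is my own assumption, not something specified by the dataset.

```python
import pandas as pd

FACE_RANKS = {"J", "Q", "K", "A"}   # assumption: Ace counted with the face cards

hands = pd.read_csv("hole_cards.csv")            # same placeholder schema as above
hands["kept"] = ~hands["folded_preflop"].astype(bool)

r1, r2 = hands["card1"].str[:-1], hands["card2"].str[:-1]
hands["pair"] = r1 == r2
hands["two_face"] = r1.isin(FACE_RANKS) & r2.isin(FACE_RANKS)
hands["face_pair"] = hands["pair"] & hands["two_face"]

# P(kept | hand type): how often each desirable hand type was actually played.
for hand_type in ["face_pair", "two_face", "pair"]:
    subset = hands[hands[hand_type]]
    print(hand_type, len(subset), subset["kept"].mean())
```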

 

Most interesting is looking at the distribution of each of these types of hands among all hands dealt—compared to the distribution among hands kept.

While there is only a 13% chance for a player to be dealt two face cards or a pair, among hands played there is a 40% chance it’s two face cards or a pair. Let that sink in. Even though there is a very low chance of someone drawing a pair or two face cards, there are almost even odds that anyone who doesn’t fold has one. And if you’re playing against more than one person who stays in, the chance that at least one of them has such a hand climbs above 50%.
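As a sanity check, these numbers fit together via Bayes’ rule. The sketch below just redoes the arithmetic, then shows how quickly the odds grow that at least one opponent holds such a hand—assuming each staying player’s hand is an independent draw from the same “kept” distribution, which is a simplification.

```python
# Consistency check on the numbers above via Bayes' rule.
p_type = 0.13             # P(dealt two face cards or a pair)
p_kept = 0.26             # P(a hand is kept rather than folded)
p_type_given_kept = 0.40  # share of kept hands that are two face cards or a pair

p_kept_given_type = p_type_given_kept * p_kept / p_type
print(p_kept_given_type)  # ~0.80, consistent with "played them over 75% of the time"

# Odds that at least one of n opponents who stayed in holds such a hand,
# assuming independence between players (a simplification).
for n in (1, 2, 3):
    print(n, 1 - (1 - p_type_given_kept) ** n)   # 2 opponents -> 0.64
```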

Again, nothing surprising. But interesting to be able to quantify the impact of the selection bias.

While it’s hard in the real world to quantify selection bias, there’s a lot more we could be doing to improve at this. And we should. It’s hard to adjust for selection bias when we don’t have a shared sense of exactly what impact it has.

 

Sidenote: Stack rank of hands by probability of being played

It turns out another useful output is a stack-ranked list of each hand and the probability of it being kept vs folded. This is a pretty useful list for new players getting used to figuring out how strong their hands are.
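Here’s a sketch of how such a stack rank could be produced from the same hypothetical hand-history table, labeling hands the way players usually do (pocket pairs, suited, offsuit). It assumes ranks are encoded with “T” for ten.

```python
import pandas as pd

hands = pd.read_csv("hole_cards.csv")            # same placeholder schema as above
hands["kept"] = ~hands["folded_preflop"].astype(bool)

RANK_ORDER = "23456789TJQKA"                     # assumes "T" is used for ten

def label(row) -> str:
    """Canonical poker label for a starting hand, e.g. 'QQ', 'AKs', 'AKo'."""
    r1, s1 = row["card1"][:-1], row["card1"][-1]
    r2, s2 = row["card2"][:-1], row["card2"][-1]
    hi, lo = sorted([r1, r2], key=RANK_ORDER.index, reverse=True)
    if hi == lo:
        return hi + lo                           # pocket pair
    return hi + lo + ("s" if s1 == s2 else "o")  # suited vs offsuit

hands["label"] = hands.apply(label, axis=1)

# Probability each starting hand was kept, ranked from most to least played.
stack_rank = (hands.groupby("label")["kept"]
                   .agg(["mean", "count"])
                   .sort_values("mean", ascending=False))
print(stack_rank.head(20))
```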

Methodology

Getting Texas Hold’em data is harder than I expected, which is surprising since scraping poker sites or videos of poker seems very doable. Apparently, people used to scrape and buy poker datasets in order to get a direct edge over their *specific* opponents, which may be part of the stigma around and crackdown on these datasets. And most free datasets have the actual hole cards obfuscated.

The dataset I used was 7k hands of poker from a Kaggle dataset. The data can be found here. Since I needed to know the hole cards, all my data is from one player—the dataset only discloses the hole cards of the player collecting the data.

Lots of reasons to not over generalize from this data. Besides the data being from one user, it also doesn’t factor in hands where the player was big blind or there were no bets. Excluding those likely skews the data even more towards only high value hands being kept.

But I think the general trends it shows are illustrative.

 

Further studies

  • How do these probability distributions differ depending on the number of players? We’d expect people to only play stronger hands the more players there are in a game.
  • How do these probability distributions differ depending on how many blinds the player can afford to play?
  • How do these probability distributions differ depending on whether the player is a pro vs an amateur?
  • Probably more important is how we can get better selection bias data in other areas that matter more.
  • Honestly, poker’s great—but I’d much rather have statistics like this collected for Avalon! ESPN for Avalon. Looking at you Eugene.