20VC: OpenAI's Sam Altman, Mistral's Arthur Mensch and more discuss: Will Foundation Models Be Commoditised | Which Startups Are Threatened vs Enabled by OpenAI | Is the Value in the Infrastructure or Application Layer?
Most important takeaway
If you are building on top of foundation models, ask yourself whether a 100x improvement in the underlying model would excite you or kill you. Builders whose products get better as models get better will ride the wave; thin wrappers that exist to patch current model limitations will be steamrolled. Enduring value sits with companies that own the end user, integrate deeply into an industry’s workflow and data, and sell outcomes (work product) rather than per-seat software.
Summary
This compilation episode gathers ten investors and operators (Sam Altman, Arthur Mensch, Tom Hulme of GV, Des Traynor of Intercom, Tomasz Tunguz, Emad Mostaque, Brad Lightcap, Sarah Tavel of Benchmark, Tom Blomfield of YC, and Miles Grimshaw of Thrive) to answer three intertwined questions: are foundation models commoditising, where will value accrue (infra vs. application), and which startups will OpenAI flatten vs. enable? Below are the actionable insights, career signals, and tech patterns to pull from the discussion.
Actionable insights for builders and operators:
- Run the “100x model” test on your own product. Brad Lightcap’s heuristic: if a 100x better model would supercharge your roadmap, you are aligned with the platform. If it would erase your product, you are a thin wrapper on borrowed time. Bake this question into every new feature decision.
- Choose the “thick wrapper” path. Des Traynor’s framing: solve a user’s problem end-to-end in a domain OpenAI will never staff five engineers against (wealth management, banking integrations, construction workflow, oncology tooling). Pick one vertical and nail the entire use case rather than building a science-fair demo across many.
- Plan for LLM-agnosticism as table stakes. Intercom torture-tests every model and keeps the ability to swap providers fast, because the next unlock could come from any lab. Architect for model portability from day one.
- Sell outcomes, not seats. Sarah Tavel’s reframing: AI lets you sell the work product itself, which looks like a services business priced like software. This is disruptive precisely because incumbents are locked into per-seat economics tied to headcount. Pricing innovation may be more defensible than feature innovation.
- Be orthogonal to incumbents, not a co-pilot. Miles Grimshaw argues co-pilot is an incumbent strategy: Microsoft, GitHub, and Salesforce already own distribution, data, UX, and the seat-based business model that co-pilots slot into. A startup co-piloting someone else’s product will likely be outcompeted; aim to do the job the user previously did, not just assist them.
- Build for the industry, not the model. Tom Blomfield notes most application-layer AI is 80-90% traditional software, 10% AI. The moat is deep integration with industry regulation, tooling, language, and existing systems (ProCore, Salesforce, Oracle), not the model itself.
- Own the end user. Sarah Tavel: whoever owns the user can layer in more value over time and capture it. Optimise for distribution, retention, and the right to ship more product into the same relationship.
- Treat memory and agency as the next defensibility frontiers. Tom Hulme flags that no foundation model has cracked persistent memory yet; a model that truly remembers and can take agency on your behalf would be genuinely defensible. Founders working on these orthogonal vectors have an opening.
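The model-portability point above can be sketched in code: keep the application dependent on one narrow interface and hide each lab's client behind it, so swapping providers is a configuration change rather than a rewrite. This is a minimal illustrative sketch; the provider classes and method signatures below are hypothetical stand-ins, not any lab's real API.

```python
from dataclasses import dataclass
from typing import Protocol


class TextModel(Protocol):
    """The only surface area application code may depend on.
    Everything provider-specific stays behind this seam."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class StubProviderA:
    """Hypothetical stand-in for one lab's API client."""
    def complete(self, prompt: str) -> str:
        return f"A:{prompt}"


@dataclass
class StubProviderB:
    """Hypothetical stand-in for a rival lab's client."""
    def complete(self, prompt: str) -> str:
        return f"B:{prompt}"


def summarise(model: TextModel, text: str) -> str:
    # Application logic sees only TextModel, so the next
    # unlock from any lab slots in without touching this code.
    return model.complete(f"Summarise: {text}")
```

In practice the seam would also normalise prompts, retries, and token limits per provider, but the defensive pattern is the same: one interface, many interchangeable backends.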
Tech patterns and market structure to internalise:
- Foundation models behave like power stations with rapidly depreciating assets. Hundreds of millions in training cost get written off in months as the next generation arrives. Investing fundamentals look weak even where momentum trading still pays.
- The end-state is utility-like. The consensus thesis: 5-6 foundation model players (Nvidia, Google, Microsoft/OpenAI, Meta, Apple, possibly Anthropic), eventually bundled into cloud providers (AWS, GCP, Azure) and offered as commoditised compute. Meta open-sourcing Llama accelerates this.
- Web2 analogy from Tomasz Tunguz: top-3 clouds are roughly $2.1T market cap; top-100 public cloud-era app companies are also roughly $2.1T. Equal value pools, but with 33x more winners at the application layer. As an investor or founder, the odds shift heavily toward apps.
- Two opposing forces (Arthur Mensch): better models thin the application layer (less custom work needed), while cheaper inference thins the model layer (price-per-intelligence collapses). Mistral’s bet is that the model layer is still big enough to support a platform; most others are betting the value moves up.
- Clay Christensen lens (Tom Hulme): Gen AI is largely a sustaining innovation sprinkled across existing businesses to cut costs and improve products, not the creative-destruction wave that the internet was. Calibrate expectations accordingly.
- Sam Altman’s long-term differentiator is personalisation: the model that has your life context and is integrated into your tools wins, not the raw base model.
Career-relevant signals:
- Every knowledge worker will likely have an AI co-pilot within 2-3 years (Blomfield). Engineers, lawyers, doctors, professors: get fluent in working alongside one now.
- Domain depth is the new moat. Generalist AI tinkering is commodity; the durable career bet is to combine AI skill with deep industry knowledge (regulation, workflows, tooling).
- For engineers picking employers: prefer companies whose product narrative gets stronger with each model upgrade, not weaker. The “100x test” works for job choice too.
- For founders: avoid building anything completable in a weekend hackathon. Pick a regulated, workflow-heavy industry and embed.
Chapter Summaries
- Cold open and framing — Harry Stebbings sets up the compilation around three questions: are foundation models commoditising, where does value accrue, and which startups get steamrolled by OpenAI.
- Sam Altman on commoditisation — Compares foundation models to early auto industry; expects a small number of expensive providers, with long-term differentiation in personalisation and life context rather than the base model.
- Arthur Mensch (Mistral) — Two opposing forces: better models thin the app layer; cheaper inference thins the model layer. Mistral bets the model platform layer is still large enough to be worth owning.
- Tom Hulme (GV) — Foundation models are like power stations depreciated over months; Meta's 350k H100s and open-source Llama make competition brutal. Gen AI looks like sustaining, not disruptive, innovation. GV invests in infra and apps, not foundation models.
- End state of model providers — Consensus that hyperscalers (AWS, Azure, GCP) absorb foundation model companies and treat them as utility cash cows.
- Des Traynor (Intercom) — Currently routes most value to OpenAI but torture-tests rivals; warns Amazon could bundle a model into EC2 and undercut OpenAI; would not invest in OpenAI at $90B.
- Tom Hulme on OpenAI at $90B — Would struggle; advantages look ephemeral, consumer revenue is not sticky. Defensibility would require memory, agency, or another orthogonal unlock.
- Tomasz Tunguz on Web2 market-cap analysis — Cloud infra and cloud apps each hold roughly $2.1T in market cap, but apps count ~100 winners vs. 3 in infra; the odds favour the application layer for investors and founders.
- Emad Mostaque — Predicts only 5-6 foundation model companies in 3-5 years: Nvidia, Google, Microsoft/OpenAI, Meta, Apple, with Anthropic at structural disadvantage versus Google’s $150B war chest.
- Sam Altman on build strategies — Two approaches: assume models stagnate (build patches) or assume OpenAI’s trajectory holds (build leverage). 95% should bet on the latter; the rest get steamrolled.
- Brad Lightcap’s 100x test — A clean delineator: founders excited about a 100x model improvement are platform allies; those who go quiet are vulnerable.
- Des Traynor on thick vs. thin wrappers — Pick a domain OpenAI will never staff against and solve it end-to-end; thin wrappers are coins on the train tracks.
- Sarah Tavel — Application layer captures the most value because owning the end user compounds; the underlying-model debate matters less than user ownership.
- Tom Blomfield (YC) — Sustainable value lives in deep industry integration; AI products are mostly traditional software with an AI sliver. Every knowledge worker gets an AI co-pilot in 2-3 years.
- Miles Grimshaw (Thrive) on co-pilot strategy — Co-pilot is an incumbent strategy aligned with seat-based distribution and pricing. Startups should be orthogonal, not assistive.
- Sarah Tavel on pricing — AI enables selling the work product itself, disrupting per-seat SaaS economics that incumbents are locked into.
- Outro — Stebbings asks for listener feedback on the compilation format and previews an upcoming Jason Lemkin episode.