20VC: Is More Compute the Answer to Model Performance | Why OpenAI Abandons Products, The Biggest Opportunities They Have Not Taken & Analysing Their Race for AGI | What Companies, AI Labs and Startups Get Wrong About AI with Ethan Mollick

20VC · Harry Stebbings — Ethan Mollick · July 31, 2024 · Original

Most important takeaway

The most valuable people in the AI era won’t be the model builders but the “skilled artisans” — domain experts who figure out how to convert raw LLM capability into usable work inside organizations. AI labs are obsessed with the race to AGI and ignore product, documentation, and use-case discovery; that gap is where careers, startups, and competitive advantage will be built.

Summary

Actionable insights and patterns from the conversation:

Career advice

  • Become a “skilled artisan” with AI. The Watt steam engine analogy: the money was made by artisans inside factories who figured out how to gear the engine’s reciprocating motion into useful machinery. Today the equivalent is people who can translate LLM capabilities into real workflows inside their organization — not the model builders.
  • Get to 10+ hours of hands-on use. Mollick’s rule of thumb: most people who claim to “use” ChatGPT have used it casually; you need at least ten serious hours before you start finding productive uses. Knowledge workers in a Danish study saved ~50% of time on roughly a third of tasks once they got past the onboarding gap.
  • Domain expertise + good theory of mind beats coding skill at prompting. Coders are often poor prompters because LLMs don’t behave like deterministic systems. Teachers, doctors, managers — anyone who is good at instructing humans — tend to write better prompts.
  • Don’t hide your AI use (“secret cyborgs”). Many employees use AI covertly because they fear being fired, losing credit, or causing layoffs. Push your employer for clear policy on rewards, ownership, and what counts as cheating vs. legitimate productivity.
  • If you’re building a startup, move to Silicon Valley. VCs invest, on average, within ~40 miles of their office. New direct flights from SFO to a city measurably raise VC investment in that city; Zoom doesn’t substitute for in-person monitoring.
  • Take a position on the future. Founders should be opinionated about how good AI gets, where the jaggedness will remain (organizational, interface, regulatory), and how adoption actually spreads inside organizations — not just hunt incremental product-market fit.

What AI labs get wrong

  • They are run by computer scientists chasing AGI; product is an afterthought. ChatGPT and the API are the only real products; great features like Code Interpreter get half-built and abandoned because top talent is redirected to scaling.
  • Almost all documentation is technical. There is no “LLMs for Dummies.” Use cases are being discovered by end users on Twitter, not by the labs.
  • Labs don’t understand the industries they’re trying to disrupt (e.g., education) because they have no domain experts on staff.

What companies get wrong

  • Only ~5–10% of people in any room (banks, conferences, even tech) have actually used these tools meaningfully. Adoption stalls at the blank-page problem.
  • Policies are vague or prohibitive. Many enterprises still block GPT-4 access; employees use it on their phones.
  • Treating AI purely as a cost-saving / headcount-cutting tool guarantees employees will never reveal their productivity gains. The industrial-revolution choice is: cut headcount for margin, or expand output and grow — only the latter unlocks employee disclosure.
  • The IT-procurement frame of “30% productivity gain = 30% layoffs” is the wrong model for a general-purpose technology.

Tech patterns and predictions

  • Four AI futures: (1) fizzle/stabilize; (2) linear improvement; (3) continued exponential; (4) AGI/superintelligence. Most public discussion focuses on 1 and 4; the middle scenarios (linear or steady exponential) are the most likely and least prepared for.
  • “Reverse salient” model of progress: technology advances until something becomes the bottleneck (data, compute, energy, batteries in the green economy), money and prestige flow to whoever solves that bottleneck, then a new bottleneck appears. Bet on the reverse salient.
  • Jagged frontier: AI is excellent at some tasks, surprisingly bad at others. Useful predictions come from asking concrete questions (“can it review a legal document with low enough hallucination rate?”) not vague heuristics like “100x better.”
  • Multimodal + voice + agency is the next interface, not chat. Chat is the temporary, awkward stage; voice with action-taking agents will leapfrog the prompt-engineering era for consumers.
  • Open source models: will be jailbroken quickly. Real near-term risks are spear-phishing at scale and persuasive social-engineering, not bio-weapons. Need fast-follow regulation (Joshua Gans model), not pre-regulation.
  • Every startup is implicitly betting against AGI. If the investors funding you truly believe AGI is 5 years away, “thin wrapper on Llama” companies don’t survive that world. Pick a coherent AGI thesis and build for it.
  • Compute as the currency of the future is plausible only if intelligence-on-demand is real. If so, energy (likely nuclear) becomes the next reverse salient — and a huge investment opportunity.
  • AI is hyper-persuasive (81.7% more likely to shift views than a human in controlled tests) — expect marketing and politics to change before voting systems do. Dystopias arrive slowly because human systems are sticky.
  • Education pattern: flipped classrooms with AI tutors outside class and active learning inside class. One-on-one tutoring historically produces ~2-sigma gains; AI tutoring won’t replicate that automatically — it requires scaffolding, probing questions, and resisting the urge to let students “feel” like they’re learning when they’re not.
  • Meaning-of-work crisis is undercovered: the threat isn’t being replaced, it’s semi-replacing yourself and realizing your work didn’t matter.

Chapter Summaries

  • Intro and Llama 3.1 reaction: Open-weights model catches up to GPT-4 class; Mollick is unsurprised and warns the closed labs have a lot more “ammunition” coming. Don’t overweight week-to-week leaderboard swings.
  • Four AI futures and the topping-out question: Fizzle, linear, exponential, or AGI. Most discourse fixates on the extremes; linear or steady exponential is the underprepared middle. Reverse-salient framing for where bottlenecks and money will move.
  • Picks-and-shovels vs. the steam engine: The real value of new general-purpose tech is captured by skilled artisans inside organizations who convert raw capability into usable work — not by tool sellers.
  • Why there’s no “AI for Dummies”: Labs are chasing AGI and won’t spend talent on docs or use-case discovery; documentation is “by rumor” on Twitter.
  • Open-source AI debate and EU regulation: Favors openness for healthcare/education upside but wants fast-follow monitoring of harms. Worries EU AI Act over-regulates pre-emptively; founders still need to be in the Valley.
  • What AI labs misunderstand about companies: No domain expertise on staff; products get abandoned (Code Interpreter); top users are end users and managers, not engineers.
  • What companies misunderstand about AI: Adoption is 5–10% even in “tech-forward” rooms; “secret cyborgs” hide their use; policy vacuum and cost-cutting framing kill productivity gains.
  • Knowledge distribution and the 1% gap: Risk that a Silicon Valley elite gets 10x leverage while the rest of the world barely uses chat. Ubiquity, voice, and non-coder advantage in prompting are the equalizers.
  • The future interface: Multimodal voice + agents will skip the awkward chat era for consumers.
  • Startups and VC in a radical-innovation regime: Lean methodology is a poor fit; need opinionated bets, deep-tech-style funding, and a coherent AGI thesis.
  • Heuristics for AGI claims: “100x better” is meaningless; ask concrete capability questions instead. Distrust founders whose funding depends on near-term AGI hype.
  • Energy, compute, and Sam Altman’s thesis: If intelligence-on-demand exists, demand is infinite; nuclear power and compute become the choke point and the money.
  • Politics, persuasion, and deepfakes: AI is hyper-persuasive at the individual level, but political systems change slowly. Real near-term threat is spear-phishing and influence, not voting algorithms.
  • Education with AI: Flipped classrooms, AI tutors outside class, active learning inside; early RCT in Turkey showed unscaffolded GPT-4 use hurt test scores even though homework looked better. Expertise and scaffolding required.
  • Quick-fire round: AI keeps getting better than people think; the biggest risk is losing human agency to AI-incorporating systems; meaning-of-work is the under-discussed crisis.