
Don't Fall For the Stock Market Hype. The $7,000 Raise AI Is Giving You (That Nobody Mentions)

AI News & Strategy Daily | Nate B Jones · February 26, 2026

Chapter Summaries

Chapter 1 — The Doom Narrative: Citrini Research’s “2028 Global Intelligence Crisis”

A viral Substack post by investment firm Citrini Research — written as a fictional macro memo from 2028 — triggered over $100B in market cap losses. The scenario: AI capabilities compound, companies rationally cut white-collar headcount, displaced workers spend less, which cascades through mortgages and private credit into a full financial crisis (S&P down 38%, unemployment at 10.2%). The mechanism is an “intelligence displacement spiral” — AI gets better, companies cut payroll, savings fund more AI, repeat. Separately, IBM dropped 13% — its worst day in 25 years — simply because Anthropic published a blog post about COBOL modernization, which markets interpreted as a threat to IBM’s consulting revenue. Nate points out that the doom narrative went viral due to negativity bias: threat-based headlines get 10-50x more engagement than equivalent positive headlines, distorting the information environment people use for career and investment decisions.

Chapter 2 — The Bull Case: Economic Counterarguments

University of Chicago Booth economist Alex Ermis modeled the Citrini scenario and found it requires extreme and simultaneous conditions — zero policy response, no consumption rebound after price declines, no increased spending by capital owners — all of which strain credulity. Historically, governments respond to economic crises even under divided rule. Separately, economist Michael Bloch argues AI agents will most powerfully compress the cost of services (travel booking, insurance, tax prep, mortgage services) — potentially returning $4,000–$7,000 per median household annually in real purchasing power gains, tax-free, with no legislation required. That money doesn't vanish; it recirculates into home renovations, furniture, vacations, and business formation. The Census Bureau reported 532,000 new business applications in January 2026 alone, up more than 7% from December, extending an acceleration that began in 2021.

Chapter 3 — The Missing Variable: The Capability-Dissipation Gap

The central insight of the episode. Both doom and boom narratives assume AI capabilities translate rapidly into economic impact. They don't, because of four compounding inertia forces:

  1. Regulatory inertia — financial services, healthcare, and government all face multi-year approval cycles; COBOL systems powering 95% of ATM transactions won't be migrated because a startup published a blog post.

  2. Organizational inertia — companies aren't rational actors; headcount decisions run through HR, employment law, union agreements, and management politics. The gap between "AI can technically do parts of this job" and "we've reorganized workflows and cut headcount" takes 18+ months in practice.

  3. Cultural inertia — even at Shopify, one of the most technically sophisticated companies in the world, CEO Tobi Lütke had to issue a company-wide mandate in April 2025 requiring AI usage and bake it into performance reviews just to get adoption. Most companies aren't Shopify.

  4. Trust inertia — enterprises need formal verification systems, audit trails, and oversight before deploying AI for high-leverage work; building that institutional trust takes years no model benchmark can compress.

Chapter 4 — The Career & Investment Opportunity

The capability-dissipation gap is wide and will stay wide for years. This creates asymmetric economic returns for early adopters. Large companies have capital, data, and distribution advantages but are slowed by organizational inertia — it can take 18 months from "this tool will save us $10M" to actually saving the money. Small firms and individuals lack capital but have speed, which is the most valuable advantage when the gap is large. Tobi Lütke's approach at Shopify is held up as the model: treat AI model evaluation as a personal discipline (he runs a structured personal "Tobi Eval" folder of prompts against every new model), require AI exploration in every project's prototype phase (to build evaluation muscle memory, not necessarily for production use), and think in hours and days, not weeks and quarters. The episode concludes with three actionable takeaways for listeners.


Summary

This episode’s central thesis: stop letting viral doom narratives and market gyrations drive your career and investment decisions. The real story is the capability-dissipation gap — AI capabilities are advancing on an exponential curve while actual societal and organizational adoption is advancing on a much flatter one. That gap is the source of the current confusion and the greatest generational economic opportunity for individuals.

Key actionable insights and career advice:

  1. Recontextualize the stock market noise. The AI-driven sell-offs (DoorDash, IBM, ServiceNow, Blackstone, etc.) are pricing disruption on a timeline of weeks — but disruption will actually unfold over years. Some of those companies will face real challenges, but not as fast as markets imply. Meanwhile, markets aren’t pricing the buy side at all: what happens to the $42B redirected from real estate commissions? What do companies do with 40% software cost savings? Those reinvestments don’t get headlines.

  2. Treat the doom narrative as policy warning, not career plan. The economic bear case is built on real forces, but requires an unrealistically rapid, simultaneous, and policy-free collapse. An economist at Chicago Booth has modeled this and found the scenario implausible at scale. Use the doom narrative to think about societal support systems; don’t use it to freeze your career decisions.

  3. Map the capability-dissipation gap in your specific domain — this is the most important career move you can make right now. Ask yourself: Am I operating at the AI capability frontier (testing new models regularly, integrating AI into real workflows, building evaluation frameworks) or am I at the dissipation rate (aware AI exists, use it occasionally, but working the same way as two years ago)? The economic value is concentrating at the frontier, and social inertia means the gap won’t close quickly — giving early movers a persistent, compounding advantage.

  4. The most valuable person in any organization right now is someone who can walk into a room of panicking executives and say with genuine authority: “I’ve tested this. Here’s what AI can actually do in our workflow. Here’s what it cannot do. Here’s the implementation plan, the budget, and the timeline.” That person bridges technical AI knowledge, business workflow understanding, and practical implementation skill — and almost nobody yet has all three.

  5. Build evaluation frameworks for your domain, not just general AI fluency. Following Tobi Lütke's example: maintain a personal folder of prompts you run against every new model release. Test AI on real tasks in your actual workflow. Even when AI fails, you've built an evaluation you can reuse when the next (better) model drops. This compounds — every model improvement lands on a foundation of practical understanding, making your AI fluency more valuable, not less, over time.

  6. Think in hours and days, not weeks and quarters. Speed is the primary competitive advantage for individuals and small firms in the capability-dissipation gap era. AI-native operators collapse the integration timeline that kills large-firm adoption.
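The "personal folder of prompts" habit in point 5 can be sketched as a tiny harness. This is a minimal illustration, not anything from the episode: `run_model` is a placeholder you would replace with your provider's actual API client, and the folder layout (one `.txt` file per prompt, one timestamped JSON file per run) is an assumed convention chosen for simplicity.

```python
import json
import time
from pathlib import Path


def run_model(model_name: str, prompt: str) -> str:
    """Placeholder for a real model call (swap in your provider's SDK).
    This stub just echoes so the harness is runnable end to end."""
    return f"[{model_name}] response to: {prompt[:40]}"


def run_eval_folder(prompts_dir: str, model_name: str,
                    out_dir: str = "eval_runs") -> Path:
    """Run every .txt prompt in prompts_dir against model_name and
    save the results as one timestamped JSON file for later comparison."""
    results = []
    for prompt_file in sorted(Path(prompts_dir).glob("*.txt")):
        prompt = prompt_file.read_text(encoding="utf-8")
        results.append({
            "prompt_name": prompt_file.stem,
            "model": model_name,
            "prompt": prompt,
            "output": run_model(model_name, prompt),
        })
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    out_file = out / f"{model_name}_{int(time.time())}.json"
    out_file.write_text(json.dumps(results, indent=2), encoding="utf-8")
    return out_file
```

Because each run is saved as its own file, rerunning the same prompt folder against each new model release leaves a growing archive you can diff by hand — which is the compounding evaluation asset the episode describes.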