Tesla, xAI, And Digital Optimus | The Brainstorm EP 123
Most Important Takeaway
xAI is currently in fourth place among top AI labs because it has failed to translate its compute advantage into productized, revenue-generating AI tools the way Anthropic and OpenAI have. Elon Musk’s strategy to close this gap centers on “Digital Optimus,” a lightweight model trained to run on Tesla’s AI4 chips already deployed in millions of vehicles, creating a distributed compute network that could become a massive competitive advantage in a compute-constrained future — but only if the smaller edge models can deliver real-world utility comparable to cloud-based frontier models.
Chapter Summaries
xAI’s Competitive Position (Opening)
The hosts discuss xAI’s current standing as the fourth-ranked AI lab behind OpenAI, Anthropic, and Google. Despite rapidly catching up in pre-training, xAI has fallen behind in productization and reinforcement learning research. High compute density per employee hasn’t translated into product traction because the industry has shifted from benchmark performance to real-world utility packaged into usable software.
Why Compute Alone Isn’t Enough
The discussion explores why xAI and Meta — both of which touted high compute-per-researcher ratios — have fallen behind. The key shift is that reinforcement learning and fine-tuning require real-world feedback loops and deep research experimentation, not just raw pre-training compute. The other labs’ cumulative investment in research has compounded over time.
Digital Optimus and Edge Computing Strategy
Frank explains xAI’s new direction: training smaller models to run on Tesla’s AI4 chip (shipping in cars since 2023, with 8GB RAM per die). The “Digital Optimus” concept is a lightweight edge model that handles routine tasks locally while offloading complex reasoning to cloud-based Grok. This mirrors the system-one/system-two approach Tesla already uses in autonomous driving.
The Tesla-SpaceX-xAI Compute Ecosystem
The hosts map out Elon’s cross-company strategy: Tesla manufactures custom chips deployed in vehicles and cybercabs; SpaceX plans orbital data centers with future chip generations; xAI provides the models and distribution through X. The key economic argument is avoiding Nvidia’s 70%+ gross margins — if you can do equivalent work on custom chips at a fraction of the cost, you gain an enormous advantage.
Business Model and Owner Compensation
Tesla owners could be compensated for lending compute — through FSD credits, supercharging discounts, robotaxi ride credits, or direct revenue sharing similar to Tesla’s virtual power plant program. Cybercabs will have significant idle time with chips plugged in and charging, creating a natural distributed compute network.
The Long-Term Compute Scarcity Thesis
The hosts argue we are very early in understanding how compute-constrained the world will become over the next 3-5 years. An Uber analogy is drawn: the winning platform will be the one that controls compute/energy supply (the “drivers”), not just customer demand. The fail-safe for xAI is that even if productization fails, it can sell compute capacity to companies like Anthropic that have figured out productization.
Summary
Stocks and investments discussed:
- Tesla (TSLA): Central to the thesis as both a chip manufacturer and distributed compute platform. Tesla’s AI4 chips in vehicles represent latent compute that could become valuable infrastructure.
- Nvidia (NVDA): Positioned as the dominant but expensive supplier whose 70%+ gross margins create an incentive for every AI company to find alternatives. Tesla’s custom chips are explicitly aimed at reducing dependence on Nvidia.
- ARK Invest funds: Implicit throughout as the hosts are ARK analysts (ARK is an SEC-registered investment advisor).
- SpaceX (private): Plans to launch orbital data centers, expanding compute capacity beyond terrestrial constraints.
- Anthropic (private): Cited as signing $9 billion in additional annualized revenue over roughly two months, demonstrating massive enterprise AI demand but also compute delivery constraints.
Actionable insights:
- The AI competition has shifted from model benchmarks to productization. Investors should evaluate AI companies not by model performance alone but by their ability to deliver packaged, real-world utility to enterprise customers. Anthropic and OpenAI are leading here; xAI and Meta are lagging despite having massive compute.
- Watch for compute scarcity as the dominant bottleneck over 3-5 years. Companies that control both compute supply and energy will have structural advantages. Tesla’s distributed fleet and SpaceX’s orbital ambitions represent a non-traditional but potentially massive compute supply angle.
- Tesla’s value proposition is expanding beyond vehicles. The AI4 chip fleet creates optionality — if Digital Optimus works, Tesla becomes both a vehicle and compute infrastructure company. The cybercab business model benefits doubly from this since idle vehicles generate compute revenue.
- Custom silicon is a strategic imperative. Avoiding Nvidia’s margins (roughly 75% of a $10B/year gigawatt compute cost goes to Nvidia) could cut AI compute costs dramatically. Tesla designing its own chips for edge workloads parallels Google’s TPU strategy.
- Anthropic’s demand signals validate the market. The $9B annualized revenue surge and users being rate-limited on compute suggest demand far exceeds supply, which supports the thesis that whoever solves compute access wins long-term.
- xAI’s fail-safe matters for risk assessment. Even if xAI cannot build competitive products, its compute infrastructure (terrestrial and orbital) retains value as sellable capacity, somewhat de-risking Elon’s AI bet for Tesla shareholders.
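The margin arithmetic behind the custom-silicon point above can be made concrete with a back-of-envelope calculation. The $10B/year and 75%-to-Nvidia figures come from the episode; the 70% gross margin is the publicly discussed ballpark, and the 20% overhead for producing custom chips is purely an illustrative assumption:

```python
# Back-of-envelope sketch of the Nvidia-margin argument.
# The 70% margin and 20% custom-chip overhead are assumptions
# for illustration, not reported figures.

annual_compute_cost = 10e9   # ~$10B/year to run a gigawatt of compute (per the episode)
nvidia_share = 0.75          # share of that spend going to Nvidia hardware
nvidia_gross_margin = 0.70   # assumed gross margin on that hardware

nvidia_spend = annual_compute_cost * nvidia_share            # dollars paid to Nvidia
silicon_cost = nvidia_spend * (1 - nvidia_gross_margin)      # underlying cost of goods

# If custom chips could be built near cost plus a modest overhead,
# the same silicon might be obtained for:
custom_overhead = 0.20
custom_spend = silicon_cost * (1 + custom_overhead)

savings = nvidia_spend - custom_spend
print(f"Nvidia spend:          ${nvidia_spend / 1e9:.1f}B/year")
print(f"Custom-silicon spend:  ${custom_spend / 1e9:.2f}B/year")
print(f"Potential savings:     ${savings / 1e9:.2f}B/year")
```

Under these assumptions, roughly $7.5B of Nvidia spend collapses to about $2.7B of custom-silicon spend, which is the kind of structural cost advantage the hosts argue Tesla's chip program (like Google's TPUs) is chasing.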