
AI for Atoms: How Periodic Labs is Revolutionizing Materials Engineering with Co-Founder Liam Fedus

No Priors: Artificial Intelligence | Technology | Startups · Sarah Guo, Elad Gil — Liam Fedus · April 3, 2026

Most important takeaway

The next major frontier for AI is not just language or code but connecting intelligent systems to the physical world through closed-loop experimentation. Periodic Labs is building an AI foundation model for materials science that orchestrates specialized neural networks, literature ingestion, and real experimental data to dramatically accelerate discovery in chemistry, semiconductors, aerospace, and energy — potentially driving a productivity revolution in materials analogous to the agricultural revolution.

Chapter Summaries

Liam’s Background: From Physics to AI

Liam Fedus studied physics and worked on dark matter research before gravitating toward machine learning in grad school. He joined Google Brain during a pivotal era (2016-2017), when distributed training, mixture-of-experts models, and the transformer were being developed. He later moved to OpenAI.

Building ChatGPT at OpenAI

Liam joined OpenAI to help productionize GPT-4. The team brainstormed various bot ideas, but John Schulman pushed for a general chatbot, which became ChatGPT — the starting gun for widespread AI awareness and adoption.

The Vision Behind Periodic Labs

Liam argues that real acceleration in science and technology requires connecting AI to the physical world through experiments. The AI technology of 2022 was too weak, but improvements in reasoning, test-time inference, and tool use made it feasible to build closed-loop systems for materials science.

Data and Training Strategy

Periodic Labs builds on open-source models pretrained on tens of trillions of tokens, then adds domain-specific experimental data. Literature-reported material properties can span orders of magnitude, so real experimental data is essential for grounding. The system operates as an active loop: run experiments, analyze results, find patterns, and drive the next round.
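The active loop described here can be sketched in a few lines. Everything below is a toy stand-in: `predict_property` and `run_experiment` are hypothetical placeholders for a trained surrogate model and a real lab measurement, not anything Periodic Labs has described.

```python
import random

random.seed(0)

# Hypothetical stand-ins: in a real system these would be a trained
# property-prediction model and a physical experiment in the lab.
def predict_property(candidate):
    """Surrogate model's (noisy) predicted property for a candidate."""
    return -(candidate - 0.7) ** 2 + random.gauss(0, 0.05)

def run_experiment(candidate):
    """Ground-truth measurement (less noisy) from a real experiment."""
    return -(candidate - 0.7) ** 2 + random.gauss(0, 0.01)

def closed_loop(n_rounds=5, batch_size=8):
    """Propose candidates, measure the model's favorite, keep the winner."""
    history = []  # (candidate, measured_value) pairs drive the next round
    for _ in range(n_rounds):
        candidates = [random.random() for _ in range(batch_size)]
        best = max(candidates, key=predict_property)  # model proposes
        measured = run_experiment(best)               # lab verifies
        history.append((best, measured))
    return max(history, key=lambda pair: pair[1])

best_candidate, best_value = closed_loop()
```

The point of the sketch is the shape of the loop, not the toy objective: predictions select experiments, and only measured results enter the record that drives the next round.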

Architecture: Orchestration Layer Plus Specialized Models

Language models serve as an orchestration and co-pilot layer, directing experiments and ingesting literature. Underneath, specialized neural networks with symmetry awareness handle atomic simulations at low latency. These specialized models also serve as tools and reward functions for the broader system.
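One minimal way to picture this layering: a planner (stand-in for the language-model layer) emits a sequence of tool calls, and fast specialized models execute them. The tool registry and the `simulate_energy` function below are invented for illustration and are not Periodic Labs' actual API.

```python
from typing import Callable, Dict

# Registry mapping tool names to fast specialized models.
TOOLS: Dict[str, Callable[[str], float]] = {}

def tool(name):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("simulate_energy")
def simulate_energy(structure: str) -> float:
    # Stand-in for a low-latency, symmetry-aware neural simulator.
    return -len(structure) * 0.1

def orchestrate(plan):
    """Execute a plan of (tool_name, argument) steps, as a language
    model orchestrator might emit them."""
    return [TOOLS[name](arg) for name, arg in plan]

results = orchestrate([("simulate_energy", "NaCl"),
                       ("simulate_energy", "Fe2O3")])
```

The same registered tools can double as reward functions: the orchestrator scores candidate actions by calling them, which is the dual role the summary describes.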

Commercialization and Business Model

Periodic Labs treats itself as customer zero, transforming how materials science is done internally. The company sees itself as an intelligence layer for enterprises bottlenecked by materials and process engineering. There is also potential for a discovery model similar to biotech, where breakthroughs carry high standalone value.

The Capital and Scale Question

Like frontier AI labs, Periodic Labs is compute-intensive. GPU costs often exceed physical infrastructure costs. The company aims to bring scaling-law thinking from AI research into physical sciences, running much larger sets of experiments enabled by intelligent automation.
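Scaling-law thinking is, mechanically, fitting a power law and extrapolating. A toy sketch, assuming loss falls as a power law in the number of experiments; the data points and the resulting exponent are invented for illustration.

```python
import math

# Synthetic "loss vs. number of experiments" data that follows a
# power law (loss halves for every 10x more experiments).
experiments = [10, 100, 1000, 10000]
loss = [1.0, 0.5, 0.25, 0.125]

# Fit loss = a * experiments**b by linear regression in log-log space.
xs = [math.log(x) for x in experiments]
ys = [math.log(y) for y in loss]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = math.exp(my - b * mx)

def predicted_loss(n_experiments):
    """Extrapolate the fitted power law to a larger experiment budget."""
    return a * n_experiments ** b
```

Establishing that such a fit holds in a new physical domain is what the summary means by "unlocking" large capital deployment: the curve makes marginal spend predictable.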

AGI, Self-Improvement, and Domain Specificity

Liam cautions against thinking of intelligence as a scalar. AI systems have “odd spikiness” — world-class in one domain but brittle with small perturbations. Software engineering self-improvement is happening now due to cheap, verifiable environments. AI research self-improvement has a slower loop. Physical-world self-improvement requires closed-loop experimentation, which is Periodic’s core bet.
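The "odd spikiness" point can be made concrete with an eval that scores a model on slightly perturbed inputs rather than only the clean benchmark. The `brittle_model` below is a deliberately contrived example, not a real system.

```python
import random

random.seed(1)

# A "spiky" model: perfect on canonical inputs, brittle under
# tiny perturbations (it only recognizes exact integers).
def brittle_model(x: float) -> int:
    return 1 if x == round(x) else 0

def robustness_eval(model, inputs, noise=1e-6, trials=20):
    """Fraction of randomly perturbed inputs the model still handles."""
    hits, total = 0, 0
    for x in inputs:
        for _ in range(trials):
            total += 1
            hits += model(x + random.uniform(-noise, noise))
    return hits / total

clean_acc = sum(brittle_model(x) for x in [1.0, 2.0, 3.0]) / 3
perturbed_acc = robustness_eval(brittle_model, [1.0, 2.0, 3.0])
```

A benchmark reporting only `clean_acc` would call this model world-class; the perturbed score exposes the brittleness, which is the failure mode high-stakes physical deployments must screen for.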

Robotics as an Accelerator

While Periodic Labs currently uses off-the-shelf robotics and human operators, general-purpose dexterous robots would massively accelerate lab throughput. The company does not depend on robotics breakthroughs but would benefit enormously from them.

Hiring and Team

Periodic is hiring across “bits” (mid-training, pre-training AI roles, infrastructure) and “atoms” (control engineering, systems engineering), plus product engineering roles that span both.

Summary

Actionable Insights:

  • Closed-loop experimentation is the key unlock. Simply having a good model is not enough for physical-world AI. The real power comes from tightly integrating AI with actual experiments — analyzing results, spotting aberrations, and iteratively designing the next round of tests. Anyone building AI for science or engineering should prioritize this feedback loop over model quality alone.

  • Leverage existing foundation models; specialize where they fall short. Periodic spends zero effort improving coding models (they use Codex, Claude Code, etc.) and instead focuses ML investment on domains where frontier models are insufficient. This is a strong pattern for any startup: do not rebuild what already exists; invest only where the gap is real.

  • Literature data is unreliable at face value. Reported material properties in academic papers can span orders of magnitude. Teams working with scientific literature should treat extracted values as noisy distributions, not ground truth, and invest in generating or acquiring clean experimental data.

  • Intelligence is not a scalar — plan for spikiness. AI systems can be world-class on a benchmark but brittle to small perturbations. When deploying AI in high-stakes physical domains, build evaluation suites that test robustness, not just peak performance.

  • Physical-world AI is capital-intensive, but the spend is mostly compute. GPU costs dominate over physical lab infrastructure. This is a useful mental model for founders and investors sizing the capital requirements of “atoms” companies.

  • Scaling-law thinking applies beyond language models. The mindset of predictable improvement through scaling data and compute is transferable to physical sciences. Establishing scaling properties in a new domain is what unlocks large capital deployment and predictable progress.

  • Career advice for scientists: the compensation gap is real. Academic scientists, especially postdocs, are dramatically undercompensated relative to their societal value. Companies like Periodic Labs offer a path to both higher impact and better compensation for physical scientists willing to work at the intersection of AI and their domain.

  • Robotics is not a blocker but is a major accelerator. Teams building AI for physical experimentation should design systems that work with current automation (simple, commoditized robotics plus human operators) while being positioned to benefit from future dexterous robotic systems.

  • The biggest tech pattern: AI moving from digital to physical. The progression is clear — language, then code, then AI research, then physical-world science and engineering. Each domain requires its own closed-loop system with domain-specific data and evaluation. The companies that build these loops first in their respective physical domains will have enormous advantages.
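The earlier caution about literature values spanning orders of magnitude suggests aggregating them in log space rather than averaging raw numbers. A sketch below, with invented values, computes a geometric mean as the central estimate and a spread measured in decades.

```python
import math

# Invented reported values for one material property, spanning
# roughly an order of magnitude across papers.
reported = [1.2e-3, 4.0e-3, 9.0e-4, 2.0e-2, 3.5e-3]

# Treat the reports as samples from a noisy log-normal distribution.
logs = [math.log10(v) for v in reported]
n = len(logs)
mean_log = sum(logs) / n
spread = math.sqrt(sum((L - mean_log) ** 2 for L in logs) / n)

geometric_mean = 10 ** mean_log   # central estimate
decades_of_spread = spread        # uncertainty in orders of magnitude
```

A naive arithmetic mean of `reported` would be pulled toward the single large outlier; the log-space summary keeps both a stable central estimate and an honest measure of how noisy the literature is.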