TL;DR
Anthropic CEO Dario Amodei says the most surprising thing about the last three years in AI isn't the pace of progress. It's that the public still doesn't grasp how close we are to AI systems that outperform any human at any cognitive task.
What Happened
Dario Amodei, co-founder and CEO of Anthropic, sat down for a roughly two-hour interview on the Dwarkesh Podcast, published February 13, 2026. The episode is titled "We are near the end of the exponential," but Amodei's "end" doesn't mean AI progress is slowing. It means the opposite: we are entering the endgame, the phase where AI systems saturate every benchmark pegged to human ability.
Amodei argued the most surprising development of the past three years is not how fast AI has advanced, which he said has tracked roughly to his expectations, but "the lack of public recognition of how close we are to the end of the exponential."
He traced the trajectory back to his 2017 paper, "The Big Blob of Compute Hypothesis," which laid out the case that more compute, higher data quality, and better distribution across knowledge would produce smarter and smarter models. All three scaling laws he identified (pre-training, post-training via reinforcement learning, and test-time compute) are holding.
As a result, he has a personal hunch that we'll have the equivalent of a "country of geniuses in a data center" within one to three years. But he was equally direct about the bottleneck. The real uncertainty, he said, is not AI capabilities. It's the diffusion of AI through the actual economy, including how fast organizations, industries, and governments adopt and integrate what the labs are building.
The Key Numbers
$0 to $10 billion — Anthropic's revenue growth in three years, a pace Amodei himself described as hard to believe
1–3 years — Amodei's personal timeline for AI that outperforms any human at any cognitive task
3 scaling laws — Pre-training, post-training (reinforcement learning), and test-time compute, all of which are still holding
Trillions — Revenue Amodei considers plausible for the AI industry before 2030
Why the Gap Between Insiders and Everyone Else Keeps Growing
The scaling laws aren't slowing down. Amodei walked through the full stack: pre-training continues on its log-linear trajectory, reinforcement learning is now showing the same gains, and test-time compute (the reasoning capabilities that power products like Claude Code) adds a third layer of improvement.
"Those together, sort of the three scaling laws — pre-training, post-training, and test time compute — are the current three that the labs are kind of leaning into," says SmarterX founder and CEO Paul Roetzer on Episode 198 of The Artificial Intelligence Show.
Continual learning is the next unlock. One of the more technically significant parts of the interview was Amodei's discussion of continual learning. Today's models stop learning at a training cutoff date and can only access new information through tool use; continual learning would let them keep updating their knowledge over time, the way a human does. Amodei said the current limitation is "largely solvable."
As Roetzer explains: "The idea of continual learning is that the models don't just stop at the cutoff date. They actually continue to learn like a human would. Like you're always learning."
The revenue is real, but the risk is enormous. Anthropic went from zero to $10 billion in revenue in three years. Claude Code started as an internal tool and became a category-defining product. But Amodei was unusually candid about the financial knife edge the labs are walking. He said that if a company buys a trillion dollars of compute and revenue projections are off by even a single year, bankruptcy is unavoidable. "Even though a part of my brain wonders if it keeps growing at 10x," Roetzer paraphrased, "he can't buy a trillion dollars of compute in 2027 if he's off by a year. If the growth rate is 5x instead of 10x, then you go bankrupt."
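The timing risk Amodei describes is, at its core, simple compounding arithmetic. A rough sketch of the argument, using the figures mentioned in the episode recap ($10 billion in revenue today, 10x versus 5x annual growth) and a purely illustrative $1 trillion compute commitment:

```python
# Illustrative only: toy numbers sketching Amodei's timing-risk argument.
# The $10B starting revenue and 10x-vs-5x growth rates come from the
# article; the $1T commitment and two-year horizon are assumptions.

def revenue_after(years, start_billions=10, growth=10):
    """Project annual revenue in $B after `years` of `growth`x-per-year compounding."""
    return start_billions * growth ** years

commitment = 1000  # hypothetical $1T compute buy, in $B

for growth in (10, 5):
    rev = revenue_after(2, growth=growth)  # two years out
    print(f"{growth}x growth -> ${rev:,.0f}B revenue vs ${commitment:,}B commitment")
# 10x growth -> $1,000B revenue vs $1,000B commitment
# 5x growth  -> $250B revenue vs $1,000B commitment
```

Under these toy assumptions, 10x growth just covers the commitment while 5x leaves a $750 billion shortfall, which is the sense in which "off by a year" (5x for two years roughly equals 10x for one and a half) can be the difference between solvency and bankruptcy.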
That was a thinly veiled shot at OpenAI's Stargate project, which, as Roetzer noted, has already shown significant cracks. "You're starting to see these kind of cracks in these massive future investments that OpenAI is making, and Dario's kind of comfortable to just take the middle ground on risk," he says.
"Any time you get a chance to listen to Dario or Demis in particular, there's just always going to be elements that are important when you're trying to connect the bigger dots of where this all goes," says Roetzer.
The 75% margin claim deserves scrutiny. Amodei projected that the industry will reach an equilibrium where compute spending is split roughly 50/50 between training and inference, with inference yielding approximately 75% gross margins. But Roetzer is skeptical that holds over time.
"The economic pressure would make it so that's not true at some point," he says. Older chips still running at full capacity, "good enough" models that don't require frontier-level compute, and aggressive price competition from players like Google all point toward margin compression.
"How in the hell would they have those margins on the average knowledge work task?" Roetzer asks. "For writing emails and landing pages, it's good enough. I don't need another generation of models."
SmarterX Take
Amodei is the CEO of one of the three companies closest to building artificial general intelligence. When he says we're one to three years away from a "country of geniuses in a data center," that's not marketing. It's the view from the lab. His track record on capability predictions has been strong. He called it in 2017 and the scaling laws have held.
But the diffusion problem he himself raised is the real story for most businesses. Anthropic went from zero to $10 billion in three years, and most enterprises still haven't figured out how to get even a handful of employees using Claude effectively. The gap between what the technology can do and what organizations are actually doing with it is the defining challenge of 2026. Capability is accelerating on a curve. Adoption is crawling in a straight line.
The financial dynamics Amodei described also deserve more attention than they're getting. If even the CEO of Anthropic says being off by a single year on revenue projections means bankruptcy, the entire AI infrastructure boom is built on confidence intervals, not certainties. The 75% inference margins he projects may be real today, but competition, commoditization, and "good enough" models will compress them. Anyone building a business case on today's AI pricing holding steady should plan for a world where it doesn't.
What to Watch
Continual learning is the capability to track most closely. If Anthropic or another lab solves it by developing models that don't stop learning at a cutoff date, the gap between AI and human knowledge workers narrows dramatically. Amodei said it's "largely solvable." If he's right, that changes the value proposition of every AI product on the market.
On the business side, the tension between Anthropic's measured approach and OpenAI's aggressive compute bets will play out over the next 12 to 18 months. Amodei essentially told the world that Stargate-scale bets are a path to bankruptcy if the timeline slips by a year. Watch whether the market starts pricing that risk in, and whether the "good enough" model dynamic Roetzer described starts showing up in enterprise purchasing decisions sooner than the labs expect.
Resources
Dwarkesh Podcast: Dario Amodei — "We are near the end of the exponential" → dwarkesh.com
Listen on Apple Podcasts → podcasts.apple.com
Bloomberg: Sam Altman and Dario Amodei Refused to Hold Hands at AI Summit in India → x.com
Heard on The Artificial Intelligence Show, Episode 198
Paul Roetzer and Mike Kaput break down Dario Amodei's Dwarkesh interview and what the "end of the exponential" actually means for business leaders. Listen Now
Mike Kaput
Mike Kaput is the Chief Content Officer at SmarterX and a leading voice on the application of AI in business. He is the co-author of Marketing Artificial Intelligence and co-host of The Artificial Intelligence Show podcast.

