The CEOs of Google DeepMind and Anthropic took the stage together at the World Economic Forum in Davos, Switzerland, last week, and what they said should have every business leader's attention.

Demis Hassabis and Dario Amodei, two of the most influential minds in artificial intelligence, appeared for a session titled "The Day After AGI." What followed was 31 minutes of remarkably candid discussion about how close we are to transformative AI, what it will do to the economy, and why neither leader is sure we're ready.
To understand just how significant this moment was and what it means for society, I talked with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 193 of The Artificial Intelligence Show.
First, some important context: When Hassabis and Amodei talk about AGI, they're not talking about exactly the same thing.
Amodei doesn't even like the term “AGI.” He prefers "powerful AI" and has laid out a specific vision for what that means: AI systems smarter than Nobel Prize winners across multiple fields, capable of working autonomously for hours or days, controlling digital tools, and essentially functioning as "a country of geniuses in a data center."
Hassabis takes a broader view. His definition of AGI includes the highest levels of human creativity. That means not just solving math problems, but coming up with breakthrough theories as Einstein did with general relativity. He also includes physical intelligence and robotics.
So when Amodei says AGI could arrive in one to two years and Hassabis says five to 10, the timeline disparity shouldn’t be the focus.
What is important for business leaders to pay attention to, Roetzer argues, is a simpler threshold that tells us when we've reached AGI: When will AI systems be capable of outperforming the average human at most cognitive tasks?
"I actually believe we're probably already there," he says.
Both leaders pointed to the same mechanism as the primary driver of AI acceleration: self-improvement.
Amodei described engineers at Anthropic who no longer write code. Instead, they let AI do it and simply edit the output. He predicted that within six to 12 months, AI could be doing most or all of what software engineers do now.
"The mechanism whereby I imagined it would happen is that we would make models that were good at coding and good at AI research, and we would use that to produce the next generation of model and speed it up to create a loop," Amodei explained during the session.
That loop, in which AI systems build better AI systems, is what both leaders are watching most closely. If that happens, the pace of progress could accelerate dramatically. If it doesn't, other research directions such as world models and continual learning will need to fill the gaps.
Roetzer highlighted four dimensions of progress that kept coming up across multiple Davos interviews: self-improvement, memory, continual learning, and world models.
"Those four seem to be right at the top of the list for all the labs right now," he says.
Perhaps the most striking part of the conversation was how both leaders addressed the impact on employment, and how unprepared everyone seems to be for what's coming.
Hassabis acknowledged that some jobs will be disrupted but predicted that new, more valuable roles would emerge. He urged young professionals to become "unbelievably proficient" with AI tools, suggesting that mastering them could be more valuable than a traditional internship.
Amodei was more direct about what he's already seeing.
"I even see it within Anthropic, where I can look forward to a time where on the more junior end and then on the more intermediate end, we actually need less and not more people," he said. "We're thinking about how to deal with that within Anthropic."
His worry? That the exponential nature of AI progress will eventually overwhelm society's ability to adapt.
Roetzer pointed out that this is the closest any major AI leader has come to acknowledging the reality of workforce displacement.
"I've definitely become convinced that the labs themselves are the wrong people to ask about what's going to happen to jobs," he says.
"They don't think about marketers and salespeople and customer success people and operations people and legal people. That's not their world."
Meanwhile, Hassabis raised an even deeper concern: What happens to human purpose and meaning when machines can do most of the cognitive work?
"The job displacement is one question," he said during the session. "But then there are even the things that keep me up right now, bigger questions that have to do with meaning and purpose. What happens to the human condition and humanity as a whole?"
Amodei made clear that the primary reason AI labs can't simply slow down is competition with China. He compared American companies selling advanced chips to such adversaries to "selling nuclear weapons to North Korea," a stark analogy that underscores how seriously he views the national security implications.
"If we can just not sell the chips, then this isn't a question of competition between the U.S. and China," he said. "This is a question of competition between me and Demis, which I'm very confident that we can work out."
Roetzer noted that the mutual respect between Hassabis and Amodei was evident throughout the session, and that behind-the-scenes collaboration between Google DeepMind and Anthropic seems increasingly likely.
"Keep in mind that Google owns an estimated 14% of Anthropic," he says. "I would say it's probably very, very safe to assume Anthropic and Google DeepMind are very closely aware of what's happening with each other."
The significance of this session extends far beyond the usual conference panel. Just a few years ago, discussing AGI in such a forum would have seemed absurd. Now it's headlining one of the world's most prominent gatherings of business and political leaders.
"I always wanted to talk about AGI going back to 2019, and I avoided it," says Roetzer. "I didn't talk about it on the podcast. I didn't put it on LinkedIn anywhere. I was like, ‘These people just aren't ready for that.’ It's amazing how fast things have changed."
AI systems are becoming capable enough to handle significant portions of knowledge work. The question isn't whether this will affect your organization, but whether you're preparing for it.
The self-improvement loop both leaders described could accelerate AI advancement dramatically. Or it could hit unexpected barriers. Either way, the world's most prominent AI researchers are telling us that transformative change is being measured in years, not decades.