In an extraordinary new essay, "The Adolescence of Technology," Anthropic CEO Dario Amodei warns of significant dangers of a powerful new AI that could arrive as early as 2027. He says humanity must be ready.
The 20,000-word essay reads like a battle plan for navigating five existential risks humanity must confront to survive. It’s intense and not always reassuring.
To understand what this means coming from one of the few people actually deciding the future of AI, I talked with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 195 of The Artificial Intelligence Show.
The essay identifies five primary dangers, and Amodei doesn't present them as remote possibilities. He frames them as challenges humanity must actively navigate starting now.
Amodei addresses his critics directly in the essay, and Roetzer emphasizes that dismissing these concerns would be a mistake.
"There are certainly people in the AI space who will make fun of Dario for this and laugh at him and call him a doomer, despite the fact that he's very clear he is not a doomer," he says.
The essay deliberately follows Amodei's earlier piece, "Machines of Loving Grace," which painted an optimistic vision of AI's potential. This new essay is its companion, examining what stands between us and that positive future.
"You don't have to have his conviction about the risks ahead," says Roetzer. "You cannot deny that risks exist, though, at some level. The things he highlights exist in some spectrum of complexity and probability."
What makes this essay extraordinary is its source. Amodei isn't a pundit or policy analyst. He's building the systems he's warning about.
"There are about five people who are basically deciding the future of humanity here with these AI labs," says Roetzer. "And one of them is allowing you into the inner workings of his mind, of how he thinks about this."
Amodei begins his essay by referencing a scene from the movie “Contact,” where the protagonist asks aliens how their civilization survived its own technological adolescence. "When I think about where humanity is now with AI," he writes, "my mind keeps going back to that scene because the question is so apt for our current situation."
Amodei advocates for surgical government intervention, strict chip export controls to slow authoritarian progress, and what he calls a "constitutional approach" to AI safety. He argues for transparency legislation before stricter regulations, giving the public visibility into what's being built before deciding how to control it.
On the economic front, he points to mechanisms such as universal basic income and aggressive retraining programs, but acknowledges these are open questions, not solved problems.
"I've never gotten the impression from following him for years that he hypes anything," says Roetzer. "I think his concerns are real."
The essay is long and demands time and attention. But it's worth the read if you want to understand how one of the most influential people in AI thinks about the technology he's building, and the risks it poses. There's no better source.
It’s important to understand how the people building these systems are thinking about these risks. The rest of us should be thinking about them and preparing for them, too.