Anthropic CEO Worries Humanity May Not be ‘Mature’ Enough for Advanced AI

Dario Amodei has some thoughts on artificial intelligence. About 38 pages' worth of thoughts, in fact. The co-founder and CEO of Anthropic, maker of Claude, published on Monday a sprawling essay titled “The Adolescence of Technology,” in which he discusses what he sees as the immense dangers that the development of a superintelligence would present for the world.

His company will continue to develop AI, by the way.

Amodei, who drops these essays from time to time, suggests that humanity is about to enter a new era. “I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species.” It could also be our last era if things go sideways. “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it,” Amodei wrote, later stating that “AI-enabled authoritarianism terrifies me.”

Side note: Anthropic offered Claude to the Trump administration’s federal government for $1 per year.

To his credit, Amodei has a vivid imagination that he flexes throughout the essay. He recounts the time the Aum Shinrikyo religious movement released sarin nerve gas in the Tokyo subway in 1995, resulting in 14 deaths and many injuries. He then suggests that putting a “genius in everyone’s pocket” would remove the barrier to carrying out such an attack, or even deadlier ones.

“The disturbed loner who wants to kill people but lacks the discipline or skill to do so will now be elevated to the capability level of the PhD virologist, who is unlikely to have this motivation,” he wrote. “I am worried there are potentially a large number of such people out there, and that if they have access to an easy way to kill millions of people, sooner or later one of them will do it.”

Apropos of nothing, did you know that one of the evaluations that Anthropic published in its “System card” report for Claude Opus 4.5 was a test where the model was tasked with helping virologists reconstruct a challenging virus?

Amodei is understandably impressed with the rate of improvement that AI has seen in recent years, but warned that if it keeps improving at the same rate, then we’re not far from developing a superintelligence—what people like Amodei used to call artificial general intelligence, a term they have since shifted away from. “If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything,” he wrote.

What would that mean, exactly? Amodei offered an analogy: “Suppose a literal ‘country of geniuses’ were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist,” he wrote. “Suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this ‘country’ is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.”

From that framework, Anthropic’s CEO said it’s worth considering what our biggest concerns should be. Amodei floated his own—including “autonomy risks,” “misuse for destruction,” and “misuse for seizing power”—and ultimately concluded that the national security advisor's report on that country would deem it “the single most serious national security threat we’ve faced in a century, possibly ever.”

A reminder that Anthropic is building that country in the analogy.

Anthropic has been, more than any other AI firm, proactive in identifying risks associated with the development of AI and advocating for additional regulatory scrutiny and consumer protections. (Whether that is legitimate or a form of regulatory capture is in the eye of the beholder, but the company at least talks a good game.) Yet it keeps building the very machine it warns could bring about impending doom. You don’t have to build the doom machine! And frankly, continuing to build it undermines how seriously anyone should take the warnings of existential threats.

If there is real concern that humanity may not be mature enough to handle AI, maybe don’t make it publicly available for people with minimal barriers to access, and then brag about your monthly active users.
