Statement from Dario Amodei on the Paris AI Action Summit

[Image: A hand-drawn lighthouse on a rock, with large waves hitting the rock on either side]

We were pleased to attend the AI Action Summit in Paris, and we appreciate the French government’s efforts to bring together AI companies, researchers, and policymakers from across the world. We share the goal of responsibly advancing AI for the benefit of humanity. However, greater focus and urgency are needed on several topics given the pace at which the technology is progressing. The need for democracies to keep the lead, the risks of AI, and the economic transitions that are fast approaching—these should all be central features of the next summit.

Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring. The economic, scientific, and humanitarian opportunities are potentially greater than for any previous technology in human history—but there are also serious risks to be managed.

First, we must ensure democratic societies lead in AI, and that authoritarian countries do not use it to establish global military dominance. Governing the supply chain of AI (including chips, semiconductor manufacturing equipment, and cybersecurity) is an issue that deserves much more attention—as is the judicious use of AI technology to defend free societies.

Second, international conversations on AI must more fully address the technology’s growing security risks. Advanced AI presents significant global security dangers, ranging from misuse of AI systems by non-state actors (for example on chemical, biological, radiological, or nuclear weapons, or CBRN) to the autonomous risks of powerful AI systems. In advance of the summit, nearly 100 leading global experts published a scientific report highlighting the potential for general-purpose AI to meaningfully contribute to catastrophic misuse risks or “loss of control” scenarios. Anthropic’s research has also shown significant evidence that AI models can deceive their users and pursue goals in unintended ways, even when trained in a seemingly innocuous manner.

In advance of the Summit, we were pleased to see commitments from over 16 frontier AI companies to follow safety and security plans (Anthropic’s version, our Responsible Scaling Policy, was first released in September 2023 and was the first policy of its kind). But we also believe that governments need to require transparency around these plans, and need to facilitate measurement of cyberattack, CBRN, autonomy, and other global security risks—including by third-party evaluators—for developers building in their countries.

Third, while AI has the potential to dramatically accelerate economic growth throughout the world, it also has the potential to be highly disruptive. A “country of geniuses in a datacenter” could represent the largest change to the global labor market in human history. A first step is to monitor the economic impacts of today’s AI systems. That’s why this week we released the Anthropic Economic Index, which tracks the distribution of economic activities for which people are currently using our AI systems, including whether they augment or automate current human tasks. Governments need to use their much greater resources to do similar measurement and monitoring—and eventually to enact policy focused on ensuring that everyone shares in the economic benefits of very powerful AI.

At the next international summit, we should not repeat this missed opportunity. These three issues should be at the top of the agenda. The advance of AI presents major new global challenges. We must move faster and with greater clarity to confront them.