Leaders at the Paris AI Summit Must Set Global Standards or Risk a Destructive Race


Today, world leaders from over 90 countries will gather in Paris to discuss artificial intelligence policy. We need them to seize this unique opportunity to set common AI risk management rules, or risk a dangerous race to the bottom.

Indeed, AI is advancing at a breathtaking pace. In 2019, OpenAI’s GPT-2 could not reliably count to ten. Fast forward to December 2024: its successor, OpenAI’s o3, can solve 45 problems from the FrontierMath benchmark. Even Fields Medal winner Timothy Gowers acknowledges that he would have “no idea how to solve” some of the benchmark’s most advanced questions.


This isn’t just about mathematics. Because AI is a general-purpose technology, its capabilities are advancing across many domains at this blistering pace. These systems can now write complex software code and engage in sophisticated scientific reasoning. The potential benefits are large—but so are the risks, which are directly linked to what AI can do today and what it will be capable of in the future, given this rate of improvement. While the jury is still out on whether AI will cause mass unemployment, some economic implications are already manifesting, notably market concentration in a handful of companies. There are also numerous reports of AI systems causing dramatic malfunctions, for instance when mental health chatbots increase the risk of harm to their users.

In addition, two categories of risk, both listed in the January 2025 International AI Safety Report, are becoming particularly prominent in this context of rapidly increasing AI capabilities. First is the threat of misuse. As a general-purpose technology, advanced AI systems can be weaponized for increasingly sophisticated attacks. We’re already seeing AI-generated deepfakes attempting to manipulate political opinions. AI systems are being used by state-affiliated threat actors to conduct malicious cyber activities. Recent models have demonstrated capabilities that could accelerate bioweapons development, according to developers themselves. To grasp the scale of the risk involved, consider how you would feel if powerful military tools capable of starting a pandemic were sold on the street, no license required.

Second, there is the risk of accidents. Today’s most powerful AI models use neural networks with hundreds of billions of parameters, with decision-making processes that remain largely opaque even to their creators. If deployed in critical applications—such as power grid management or financial trading systems—the potential for unexpected and uncontrollable outcomes would increase significantly. While a single autonomous trading algorithm malfunction in 2012 caused Knight Capital to lose $440 million in just 45 minutes, general-purpose AI systems are likely to be deployed far more widely across sectors, multiplying the potential for costly accidents. Experts also warn about another concerning scenario, usually referred to as “loss of control”. Because AI systems are essentially “black boxes” whose decision-making is not fully understood, we cannot always reliably predict how they will behave once deployed; they might, for instance, actively resist being shut down if continued operation better serves the objective they have been assigned by humans. A hypothetical AI tasked with fetching coffee would have an incentive to prevent itself from being turned off, since “you can’t fetch coffee if you’re dead.” This isn’t mere speculation but a mathematically grounded concern shared by leading experts who warn that we may be unable to keep powerful AI systems in check.

Whatever their source, the risks mentioned above are currently largely unmitigated. Companies are rushing to deploy increasingly powerful systems. At SaferAI, we have analyzed current risk management practices—the process of identifying, analyzing, and mitigating risks—at six of the leading AI labs, and we believe they fall far short of what is considered the bare minimum in other high-risk industries such as aviation or pharmaceuticals. Regulation is not keeping up with the pace of innovation. In the United States, there are currently more safety requirements for selling a sandwich than for developing AI systems, and the new administration has further rolled back oversight. While the European Union is making progress with the implementation of the AI Act, and in particular its companion Code of Practice, regulatory fragmentation looms.

Yet there are solutions. Risk management has proven highly effective in mitigating the dangers of innovative technologies across various sectors. The aviation industry provides a compelling example: through comprehensive risk management practices, it reduced fatal accidents from 6 per million flights in the 1970s to 0.5 per million flights today—a reduction of more than 90%. Similar success stories can be found in the pharmaceutical industry, where rigorous approval processes have enabled drug development while safeguarding public health. To be sure, the breadth of AI-related risks makes them especially challenging to address. However, the risk-agnostic nature of the risk management framework makes it well suited to evaluating and mitigating this full spectrum of threats.

While AI systems themselves may be complex and opaque, managing their associated risks need not be. We can draw upon proven processes from other industries to address the challenges posed by AI. Specific frameworks vary by industry, but they share fundamental elements such as risk identification, evaluation, mitigation, and governance. No need to reinvent the wheel: this is a solid foundation for AI-specific risk management approaches. For instance, financial risk management offers lessons in handling rapidly evolving risks; cybersecurity provides frameworks that are particularly useful for adversarial AI threats; environmental risk assessment offers insights into managing impacts extending beyond organizational boundaries to society at large. 

Crucially, we urge leaders to understand that sound risk management is not merely a safeguard against potential harms but a key enabler of sustainable technological progress. The success of other safety-critical industries demonstrates that effective risk management does not stifle innovation but rather promotes it. Risk management helps organizations objectively determine acceptable risk levels while maximizing potential benefits, encouraging developers to pursue projects with the highest likelihood of delivering value safely. Moreover, the public confidence fostered by robust risk management has been essential for the widespread adoption of transformative technologies. Effectively addressing AI risks through systematic management could help overcome adoption hesitancy among economic actors concerned about potential malfunctions.

The stakes are rising rapidly. OpenAI’s Stargate project is committing $500 billion in the United States alone, while the Bank of China has announced $137 billion for AI infrastructure. The recent rise to fame of Chinese startup DeepSeek, which claims to match or exceed U.S. AI capabilities at a fraction of the cost, further intensifies this global competition. This financial escalation risks triggering an AI race that could sideline critical safety considerations. Self-regulation is all the more unlikely to be up to the task in a context of fierce economic competition. To avoid a race to the bottom in safety standards that would endanger the general public globally, world leaders need to intervene now to level the regulatory playing field. Without common rules, the benefits of safe and sustainable AI innovation will remain out of reach.

The upcoming Paris AI Action Summit is a crucial opportunity—and perhaps one of the last—for leaders to agree on common risk management standards and prevent large-scale AI misuse or accidents. It will bring together key AI powers, notably the U.S. and China, at a critical juncture, offering a rare window of opportunity to establish pivotal measures. First, leaders could agree to harmonize emerging risk management approaches to reduce regulatory fragmentation between countries. As noted in a recent Oxford report, AI risk management is one of the areas with the greatest need for international standardization. As a first step, it would be helpful to agree on the nature and definition of the key components of sound AI risk management, from risk identification to risk assessment, risk mitigation, and risk governance. This could lay the groundwork for a third-party audit and certification ecosystem, which will be necessary to foster the safe development and adoption of AI. Second, given that effective risk management hinges on accurate measurement, establishing standardized evaluation protocols is paramount. Because the field is still nascent, our ability to properly estimate emerging risks through model evaluations would benefit notably from coordinated efforts. Finally, the creation of a shared incident reporting system could establish a crucial feedback loop, enabling the global AI community to rapidly learn from and adapt to emerging safety challenges.

Historically, key safety regulations have often emerged only after catastrophic events. With AI, we might not get a second chance; given the technology’s speed of advancement and potential scope of impact, reactive responses are a big gamble. Leaders now have a chance to do things differently: to establish global guardrails preemptively, before it is too late. We urge them to seize this chance and go down in history as those who averted a crash, rather than those who let it happen.


