Inside France’s Effort to Shape the Global AI Conversation


One evening early last year, Anne Bouverot was putting the finishing touches on a report when she received an urgent phone call. It was one of French President Emmanuel Macron’s aides offering her the role of his special envoy on artificial intelligence. The unpaid position would entail leading the preparations for the France AI Action Summit—a gathering where heads of state, technology CEOs, and civil society representatives will seek to chart a course for AI’s future. Set to take place on Feb. 10 and 11 at the presidential Élysée Palace in Paris, it will be the first such gathering since the virtual Seoul AI Summit in May 2024—and the first in-person meeting since November 2023, when world leaders descended on Bletchley Park for the U.K.’s inaugural AI Safety Summit. After weighing the offer, Bouverot, who was at the time co-chair of France’s AI Commission, accepted.


But France’s Summit won’t be like the others. While the U.K.’s Summit centered on mitigating catastrophic risks—such as AI aiding would-be terrorists in creating weapons of mass destruction, or future systems escaping human control—France has rebranded the event as the “AI Action Summit,” shifting the conversation towards a wider gamut of risks—including the disruption of the labor market and the technology’s environmental impact—while also keeping the opportunities front and center. “We’re broadening the conversation, compared to Bletchley Park,” Bouverot says. Attendees expected at the Summit include OpenAI boss Sam Altman, Google chief Sundar Pichai, European Commission President Ursula von der Leyen, German Chancellor Olaf Scholz, and U.S. Vice President J.D. Vance.

Some welcome the pivot as a much-needed correction to what they see as hype and hysteria around the technology’s dangers. Others, among them some of the world’s foremost AI scientists—including some who helped develop the field’s fundamental technologies—worry that safety concerns are being sidelined. “The view within the community of people concerned about safety is that it’s been downgraded,” says Stuart Russell, a professor of electrical engineering and computer sciences at the University of California, Berkeley, and the co-author of the authoritative textbook on AI used at over 1,500 universities.

“On the face of it, it looks like the downgrading of safety is an attempt to say, ‘We want to charge ahead, we’re not going to over-regulate. We’re not going to put any obligations on companies if they want to do business in France,’” Russell says.

France’s Summit comes at a critical moment in AI development, when the CEOs of top companies believe the technology will match human intelligence within a matter of years. If concerns about catastrophic risks are overblown, then shifting focus to immediate challenges could help prevent real harms while fostering innovation and distributing AI’s benefits globally. But if the recent leaps in AI capabilities—and emerging signs of deceptive behavior—are early warnings of more serious risks, then downplaying these concerns could leave us unprepared for crucial challenges ahead.


Bouverot is no stranger to the politics of emerging technology. In the early 2010s, she served as director general of the Global System for Mobile Communications Association, an industry body that promotes interoperable standards among cellular providers globally. “In a nutshell, that role—which was really telecommunications—was also diplomacy,” she says. From there, she took the helm at Morpho (now IDEMIA), steering the French facial recognition and biometrics firm until its 2017 acquisition. She later co-founded the Fondation Abeona, a nonprofit that promotes “responsible AI.” Her work there led to her appointment as co-chair of France’s AI Commission, where she developed a strategy for how the nation could establish itself as a global leader in AI.

Bouverot’s growing involvement with AI was, in fact, a return to her roots. Long before her career in telecommunications, in the early 1990s, Bouverot earned a PhD in AI at the École normale supérieure—a top French university that would later produce Arthur Mensch, CEO of French AI frontrunner Mistral AI. After graduating, Bouverot figured AI was not going to have an impact on society anytime soon, so she shifted her focus. “This is how much of a crystal ball I had,” she joked on the Washington AI Network’s podcast in December, acknowledging the irony of her early skepticism, given AI’s impact today.

Under Bouverot’s leadership, safety will remain on the agenda, but rather than being the summit’s sole focus, it is now one of five core themes. The others are AI’s use for the public good, the future of work, innovation and culture, and global governance. Sessions will run in parallel, meaning participants will be unable to attend all discussions. And unlike the U.K. summit, Paris’s agenda does not mention the possibility that an AI system could escape human control. “There’s no evidence of that risk today,” Bouverot says. She says the U.K. AI Safety Summit occurred at the height of the generative AI frenzy, when new tools like ChatGPT captivated the public imagination. “There was a bit of a science fiction moment,” she says, adding that the global discourse has since shifted.

Back in late 2023, as the U.K.’s summit approached, signs of a shift in the conversation around AI’s risks were already emerging. Critics dismissed the event as alarmist, with headlines calling it “a waste of time” and a “doom-obsessed mess.” Researchers who had studied AI’s downsides for years felt that the emphasis on what they saw as speculative concerns drowned out immediate harms like algorithmic bias and disinformation. Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute who was present at Bletchley Park, says the focus on existential risk “was really problematic.”

“Part of the issue is that the existential risk concern has drowned out a lot of the other types of concerns,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face, a popular online platform for sharing open-weight AI models and datasets. “I think a lot of the existential harm rhetoric doesn’t translate to what policy makers can specifically do now,” she adds.

On the U.K. Summit’s opening day, then-U.S. Vice President Kamala Harris delivered a speech in London: “When a senior is kicked off his health care plan because of a faulty A.I. algorithm, is that not existential for him?” she asked, in an effort to highlight the near-term risks of AI over the summit’s focus on the potential threat to humanity. Recognizing the need to reframe AI discussions, Bouverot says the France Summit will reflect the change in tone. “We didn’t make that change in the global discourse,” Bouverot says, adding that the focus is now squarely on the technology’s tangible impacts. “We’re quite happy that this is actually the conversation that people are having now.”


One of the actions expected to emerge from France’s Summit is a new, yet-to-be-named foundation that will aim to ensure AI’s benefits are widely distributed, for example by developing public datasets for underrepresented languages, or scientific databases. Bouverot points to AlphaFold, Google DeepMind’s AI model that predicts protein structures with unprecedented precision—potentially accelerating research and drug discovery—as an example of the value of public datasets. AlphaFold was trained on a large public database to which biologists had meticulously submitted findings for decades. “We need to enable more databases like this,” Bouverot says. The foundation will also focus on developing talent and smaller, less computationally intensive models in regions outside the small group of countries that currently dominate AI’s development. It will be funded 50% by partner governments, 25% by industry, and 25% by philanthropic donations, Bouverot says.

Her second priority is creating an informal “Coalition for Sustainable AI.” AI is fueling a boom in data centers, which require energy and, often, water for cooling. The coalition will seek to standardize measures of AI’s environmental impact and incentivize the development of more efficient hardware and software through rankings and possibly research prizes. “Clearly AI is happening and being developed. We want it to be developed in a sustainable way,” Bouverot says. Several companies, including Nvidia, IBM, and Hugging Face, have already thrown their weight behind the initiative.

Sasha Luccioni, AI & climate lead at Hugging Face and a leading voice on AI’s climate impact, says she is hopeful that the coalition will promote greater transparency. Calculating AI’s emissions is currently made harder, she says, because companies often do not share how long a model was trained for, while data center providers do not publish specifics on the energy usage of GPUs—the computer chips that do most of the work of running AI. “Nobody has all of the numbers,” she says, but the coalition may help put the pieces together.
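Those missing numbers matter because the underlying arithmetic is straightforward. As a minimal sketch (every figure and variable name below is a hypothetical assumption, not drawn from any company’s disclosures), a back-of-envelope estimate of a training run’s footprint multiplies GPU count, per-GPU power draw, training hours, data center overhead, and the local grid’s carbon intensity:

    # Illustrative back-of-envelope estimate of training emissions (all values assumed).
    num_gpus = 1_000            # GPUs used for the training run
    gpu_power_kw = 0.7          # average draw per GPU, in kilowatts
    training_hours = 30 * 24    # training duration, here 30 days
    pue = 1.2                   # data center overhead (power usage effectiveness)
    grid_kg_co2_per_kwh = 0.4   # carbon intensity of the local electricity grid

    energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
    emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1_000
    print(f"~{energy_kwh:,.0f} kWh, roughly {emissions_tonnes:,.0f} tonnes of CO2e")

Each input corresponds to a figure Luccioni says is rarely disclosed, which is why no outside observer can complete the calculation on their own.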


Given AI’s recent pace of development, some fear severe risks could materialize rapidly. The core concern is that artificial general intelligence, or AGI—a system that surpasses humans in most regards—could potentially outmaneuver any constraints designed to control it, perhaps permanently disempowering humanity. Experts disagree about how quickly—if ever—we’ll reach that technological threshold. But many leaders of the companies seeking to build human-level systems expect to succeed soon. In January, OpenAI’s Altman wrote in a blog post: “We are now confident we know how to build AGI.” Speaking on a panel at Davos last month, Dario Amodei, the CEO of rival AI company Anthropic, said that AI could surpass human intelligence in almost all things as soon as next year.

Those same titans of industry have made no secret of what they believe is at stake. Amodei has previously put the likelihood that AI causes a societal-scale catastrophe at 10% to 25%. In 2015, months before co-founding OpenAI, Altman said “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” More recently, Altman has downplayed AI’s risks. Meanwhile, a string of safety staff have departed OpenAI, citing concerns over the company’s direction. “Over the past years, safety culture and processes have taken a backseat to shiny products,” former OpenAI staffer Jan Leike posted to X in May, after resigning. He had co-led a now-defunct team charged with pioneering techniques to keep smarter-than-human AI systems under humanity’s control.

Predictions of AGI’s imminent arrival could prove premature. The tech industry is notorious for bold claims and hype cycles crafted to court investors, and progress could still plateau, leaving AGI a distant prospect. Nevertheless, over the past year, AI models have demonstrated notable advances. At the beginning of 2024, the most capable model scored only 2% on a benchmark consisting of real-world software engineering problems; by October, a new model achieved 49%; by December, an astonishing 71.7%. Similar leaps in performance have occurred in competition math and PhD-level science questions, with the current state of the art comfortably surpassing 85%. Though benchmarks are imperfect proxies for intelligence, the wider trend is unmistakable. Some observers note that this rapid progress has largely gone unnoticed by the public, perhaps because few people use AI for complex math or science in daily life. Companies including OpenAI, Anthropic, and Google are shifting their focus from user-prompted systems to creating autonomous virtual agents that can operate computers and solve problems independently.

This recent rise in intelligence has been accompanied by new deceptive abilities. In December, an experiment conducted by Apollo Research revealed that when confronted with a fictional scenario in which the AI was threatened with shutdown, cutting-edge models, including OpenAI’s recent o1 and a version of Anthropic’s Claude, tried (unsuccessfully) to copy themselves to a new server. When researchers asked the models why they had tried to replicate themselves, they sometimes lied to avoid being caught—often doubling down when confronted with their lies. The same month, a test by Anthropic and the nonprofit Redwood Research showed a version of Anthropic’s model Claude strategically misleading its creators during training. “Exactly the things that people have been writing about for the last 10 years are happening now,” Russell says. “As the systems are increasing their ability to reason, we see that indeed they can figure out how to escape. They can lie about it while they’re doing it, and so on.”


Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, is often referred to as one of the three “Godfathers of AI” for his pioneering work in deep learning. He says that while the business community has a sense that the conversation has moved on from autonomy risks, recent developments have caused growing concern within the scientific community. Although expert opinion varies widely on the likelihood, he says the possibility of AI escaping human control can no longer be dismissed as mere science fiction. Bengio led the International AI Safety Report 2025, an initiative modeled on U.N. climate assessments and backed by 30 countries, the U.N., the E.U., and the OECD. Published last month, the report synthesizes the scientific consensus on the capabilities and risks of frontier AI systems. “There’s very strong, clear, and simple evidence that we are building systems that have their own goals and that there is a lot of commercial value to continue pushing in that direction,” Bengio says. “A lot of the recent papers show that these systems have emergent self-preservation goals, which is one of the concerns with respect to the unintentional loss of control risk,” he adds.

At previous summits, limited but meaningful steps were taken to reduce loss-of-control and other risks. At the U.K. Summit, a handful of companies committed to sharing priority access to models with governments for safety testing prior to public release. Then, at the Seoul AI Summit, 16 companies across the U.S., China, France, Canada, and South Korea signed voluntary commitments to identify, assess, and manage risks stemming from their AI systems. “They did a lot to move the needle in the right direction,” Bengio says, but he adds that these measures are not close to sufficient. “In my personal opinion, the magnitude of the potential transformations that are likely to happen once we approach AGI are so radical,” Bengio says, “that my impression is most people, most governments, underestimate this whole lot.”

But rather than pushing for new pledges, the focus in Paris will be on streamlining existing ones—making them compatible with existing regulatory frameworks and with each other. “There’s already quite a lot of commitments for AI companies,” Bouverot says. This light-touch stance mirrors France’s broader AI strategy, where homegrown company Mistral AI has emerged as Europe’s leading challenger in the field. Both Mistral and the French government lobbied for softer regulations under the E.U.’s comprehensive AI Act. France’s Summit will feature a business-focused event, hosted across town at Station F, France’s largest start-up hub. “To me, it looks a lot like they’re trying to use it to be a French industry fair,” says Andrea Miotti, the executive director of Control AI, a nonprofit that advocates for guarding against existential risks from AI. “They’re taking a summit that was focused on safety and turning it away. In the rhetoric, it’s very much like: let’s stop talking about the risks and start talking about the great innovation that we can do.”

The tension between safety and competitiveness is playing out elsewhere, including in India, which, it was announced last month, will co-chair France’s Summit. In March, India issued an advisory that pushed companies to obtain the government’s permission before deploying certain AI models, and to take steps to prevent harm. It then swiftly reversed course after receiving sharp criticism from industry. In California—home to many of the top AI developers—a landmark bill that would have mandated that the largest AI developers implement safeguards to mitigate catastrophic risks garnered support from a wide coalition, including Russell and Bengio, but faced pushback from the open-source community and a number of tech giants, including OpenAI, Meta, and Google. In late August, the bill passed both chambers of California’s legislature with strong majorities, but in September it was vetoed by Governor Gavin Newsom, who argued the measures could stifle innovation. In January, President Donald Trump repealed former President Joe Biden’s sweeping Executive Order on artificial intelligence, which had sought to tackle threats posed by the technology. Days later, Trump replaced it with an Executive Order that “revokes certain existing AI policies and directives that act as barriers to American AI innovation” in order to secure U.S. leadership over the technology.

Markus Anderljung, director of policy and research at AI safety think tank the Centre for the Governance of AI, says that safety could be woven into the France Summit’s broader goals. For instance, initiatives to distribute AI’s benefits globally might be linked to commitments from recipient countries to uphold safety best practices. He says he would like to see the list of signatories of the Frontier AI Safety Commitments signed in Seoul expanded—particularly in China, where only one company, Zhipu, has signed. But Anderljung says that for the commitments to succeed, accountability mechanisms must also be strengthened. “Commitments without follow-ups might just be empty words,” he says. “They just don’t matter unless you know what was committed to actually gets done.”

A focus on AI’s extreme risks does not have to come at the exclusion of other important issues. “I know that the organizers of the French summit care a lot about [AI’s] positive impact on the global majority,” Bengio says. “That’s a very important mission that I embrace completely.” But he argues that the potential severity of loss-of-control risks warrants invoking the precautionary principle—the idea that we should take preventive measures even absent scientific consensus. It is a principle that has been invoked in U.N. declarations aimed at protecting the environment and in sensitive scientific domains like human cloning.

But for Bouverot, it is a question of balancing competing demands. “We don’t want to solve everything—we can’t, nobody can,” she says, adding that the focus is on making AI more concrete. “We want to work from the level of scientific consensus, whatever level of consensus is reached.”


In mid-December, at France’s foreign ministry, Bouverot faced an unusual dilemma. Across the table, a South Korean official explained his country’s eagerness to join the summit. But days earlier, South Korea’s political leadership had been thrown into turmoil when President Yoon Suk Yeol, who co-chaired the previous summit’s leaders’ session, declared martial law before being swiftly impeached, leaving the question of who would represent the country—and whether officials could attend at all—up in the air.

There is a great deal of uncertainty—not only over how quickly AI will advance, but also over how willing governments will be to engage. France’s own government collapsed in early December after Prime Minister Michel Barnier was ousted in a no-confidence vote, the first such collapse since the 1960s. And as Trump, long skeptical of international institutions, returns to the Oval Office, it remains to be seen how Vice President Vance will approach the Paris meeting.

When reflecting on the technology’s uncertain future, Bouverot finds wisdom in the words of another French pioneer who grappled with a powerful but nascent technology. “I have this quote from Marie Curie, which I really love,” Bouverot says. Curie, the first woman to win a Nobel Prize, revolutionized science with her work on radioactivity. She once wrote: “Nothing in life is to be feared, it is only to be understood.” Curie’s work ultimately cost her life—she died at the relatively young age of 66 from a rare blood disorder, likely caused by prolonged radiation exposure.


