Friday, 12 September, 2025
London, UK
10:08 AM
Condition: Scattered clouds, 17.3°C
Humidity: 62%
Wind Speed: 25.9 km/h

G7 skirts touchy AI safety discussion

KANANASKIS, Alberta — G7 leaders on Monday steered clear of strong language on keeping AI safe and managing the technology's risks at their summit in Canada.

Just as at the Paris AI Action Summit in February, they placed greater emphasis on the adoption of AI in a leaders’ statement titled “AI for prosperity.” 

“We must better drive innovation and adoption of secure, responsible and trustworthy AI that benefits people, mitigates negative externalities, and promotes our national security,” read a draft statement seen by POLITICO. 

It’s a break with the past. 

In May 2023, meeting in Hiroshima, Japan, G7 leaders launched the so-called Hiroshima AI process to promote safe and trustworthy AI and signed a statement that explicitly recognized “the need to manage risks” and pushed for “keeping humankind at the center.”   

It came with a voluntary code of conduct for companies developing the most advanced AI models. That code of conduct was namechecked only once in the current G7 draft. 

The safety focus back then followed the overnight popularity of generative AI thanks to OpenAI’s ChatGPT. 

In the meantime, however, attention has shifted from safety to winning the AI race. 

In January, U.S. President Donald Trump revoked a Biden-era executive order that had aimed to develop “safe, secure and trustworthy” AI.

At the February Paris summit, U.S. Vice President JD Vance warned EU countries against heavy-handed AI regulation, while the U.S. called for a pause on the EU’s AI Act, which aims to mitigate the risks stemming from the technology.

The U.K., meanwhile, in February changed the name of its AI Safety Institute to the AI Security Institute.

Pieter Haeck reported from Brussels, Mickey Djuric from Ottawa and Koen Verhelst from the G7 summit in Kananaskis.

LP Staff Writers

Writers at Lord’s Press come from a range of professional backgrounds, including history, diplomacy, heraldry, and public administration. Many publish anonymously or under initials—a practice that reflects the publication’s long-standing emphasis on discretion and editorial objectivity. While they bring expertise in European nobility, protocol, and archival research, their role is not to opine, but to document. Their focus remains on accuracy, historical integrity, and the preservation of events and individuals whose significance might otherwise go unrecorded.
