
The Paris AI Summit and the UK’s uncertain approach to AI governance

As artificial intelligence continues to evolve, discussions about its regulation have intensified. The AI Action Summit, held in Paris in mid-February, aimed to address the growing impact of AI and establish a global framework for its development. Attended by government officials, business leaders, NGOs, and the public, the summit saw 60 nations sign an agreement promoting an “open”, “inclusive”, and “ethical” approach to AI.

Signatories included major economies such as China, India, Canada, and France, with President Macron emphasising the need for global AI safety measures. However, two notable absences stood out: the United States and the United Kingdom. While the US made its opposition clear, citing concerns over free speech and collaboration with authoritarian regimes, the UK’s reasoning remains more ambiguous.

The UK's stance: A contradiction in policy?

Unlike the US, which opposed the agreement outright, the UK did sign parts of the deal related to AI sustainability but refused to endorse its core safety regulations. A UK government spokesperson cited concerns over national security and the lack of a clear global governance framework as key reasons for withholding full support.

This decision appears to contradict the UK’s existing AI policies. The UK’s AI Security Institute (recently renamed from the AI Safety Institute), which sits under the Department for Science, Innovation and Technology, was established to ensure AI safety amid rapid technological advancement. If the UK is already prioritising AI security, why reject an agreement that aligns with this goal? The move raises questions about whether the UK is shifting focus from broader AI safety to national security concerns, or whether deeper geopolitical factors are at play.

Was the UK aligning itself with the US?

Some speculate that the UK’s decision was influenced by its close ties with the US. Washington’s opposition to the pact, driven by concerns over censorship and China’s involvement, may have played a role in the UK’s hesitation. However, when questioned, UK officials denied any coordination with the US, insisting their decision was based solely on national interests.

Nonetheless, given the US administration’s firm stance against AI regulation, a UK endorsement of the agreement could have strained AI-related collaboration between the two allies. 

A shift in the UK’s AI strategy?

Beyond the summit, recent policy proposals suggest a shift away from stringent AI regulation. The Labour government’s proposed overhaul of copyright law would allow tech companies to use copyrighted material such as books, music, and art to train AI models without compensating creators. While content owners could theoretically opt out, similar opt-out schemes in other countries have proven difficult to enforce.

This move has sparked a backlash from the creative industries, with MPs from across parties warning that AI could “steal” the work of British artists, writers, and journalists. If the UK is prioritising AI-driven innovation over strict regulation, that would help explain its reluctance to sign an international agreement focused on AI safety and oversight.

What’s next for AI regulation in the UK?

The UK’s refusal to sign the agreement leaves its AI strategy uncertain. While European nations push for stronger safety measures, the UK appears to be charting its own course, one that may align more closely with the US than with its European counterparts. Yet with AI-generated deepfakes and misinformation on the rise, the need for robust AI safety policies remains urgent.

If the UK intends to take a national approach to AI regulation, it needs to clarify its strategy. Will it introduce alternative frameworks for AI safety? Or will it continue to prioritise innovation and economic growth over stringent oversight? Without a clear roadmap, the UK’s position on AI governance remains as ambiguous as the reasoning behind its refusal to sign the agreement.
