Europe’s political landscape has changed dramatically in a short space of time, with major shifts in power and priorities following elections in the European Parliament, France, and the UK. What do these changes mean for AI policy in the EU? EU policymakers seem poised to prioritise other issues, such as defence, security, immigration, and economic growth, but they should not overlook the role AI could play in supporting those objectives. In addition, with the EU beginning to implement the AI Act, EU policymakers should take steps to address fragmentation in digital policies between member states, sector-specific AI needs, and overlap between the AI Act and other EU laws.
Make Europe Great Again (MEGA)
With only six months on the rotating presidency of the European Council, Hungary has the mega task of implementing its MEGA campaign. Yet AI features little in its agenda, suggesting limited political will to foster AI adoption during the presidency. Instead, discussions about renewed European competitiveness focus on topics such as electric vehicles and the broader impact of the automotive industry on the Green Deal, as well as the need for a well-functioning single market.
Rather than consider these industries in isolation, the Council should consider the role AI can play in advancing them, and European competitiveness in general. For example, AI can help predict supply chain disruptions and optimise inventory, improving efficiency across the manufacturing process and ultimately reducing costs for consumers. Similarly, a proactive push for an AI-ready EU would drive ambitions for a digital single market that takes advantage of the EU’s workforce and economy. According to one study, China is world-leading not in AI model development but in AI diffusion—integrating AI-ready solutions into businesses and the wider public. As the world adapts to more prevalent AI tools, early adoption of the technology, and supporting citizens in growing comfortable with it, will pay dividends in long-term competitiveness and productivity. The EU should tap into this dynamic as a way to boost its own competitiveness and “Make Europe Great Again”.
Deepening Member State Fragmentation
AI Act implementation will now be the concern of every member state as each works to apply the complex piece of legislation within its implementation timelines. Given this task, a clear concern for EU-level policymakers should be the threat of deepening digital fragmentation, which would undermine an already lagging digital single market. It is highly likely that AI Act implementation will hit major roadblocks as resources and expertise vary between member states. As a centralised body, the AI Office should play a stronger role in supporting member states to implement the Act, in addition to supporting AI developers and deployers with the new requirements by rooting them in what is currently technically feasible for industry.
Moreover, individual member state agendas will continue to affect cohesive implementation EU-wide. The next AI summit, hosted by France in February 2025, is already showing signs of departure from both the UK safety agenda and the EU’s risk-based approach to AI governance. France has instead opted for an AI Action Summit that is likely to go beyond AI safety into areas such as attracting top talent and securing domestic compute and infrastructure, which could fulfil France’s ambition to become a global AI superpower. In any event, it is promising that France is taking more proactive steps to advance AI domestically. Should France implement the AI Act in this spirit, it could well serve as a litmus test for other member states that wish to promote innovation within these new digital constraints.
Heightened Sector-Specific AI Needs
Similarly, the EU, and more specifically the Commission and AI Office, should remain responsive to the emerging needs of sectors as they deal more frequently with sector-specific AI risks. The AI Act’s horizontal applicability means it could leave gaps in areas such as finance and healthcare—one of the pitfalls of horizontal legislation governing a technology with vertical, sector-specific impacts. Rather than introduce further regulation to tackle these new impacts, the Commission should use its authority to update the AI Act’s use-case list. This list outlines specific use cases and their corresponding risk categories, which in turn determine the compliance requirements for deployers and developers. Such powers are a critical tool the Commission should wield if it wishes to stay true to its promise that the AI Act is a future-proof regulation that can adapt to changing practices and AI applications.
Digital Regulatory Overlap
Finally, regulatory overlap is already emerging in European AI governance, with recent updates to the Product Liability Directive calling into question the need for an AI Liability Directive. This new overlap, in addition to the regulations that predate the AI Act, will no doubt raise issues as member states and industry begin to make sense of how the AI Act works in practice. The Commission should get ahead of this emerging problem, such as by introducing a centralised body similar to the UK’s Digital Regulation Cooperation Forum, which brings together key UK regulators to make sense of the obligations generated by digital regulation. A similar initiative implemented EU-wide would help member states coordinate digital regulation, provide conclusive decisions on which regulations take precedence, and establish minimal compliance standards that satisfy regulatory obligations.
New election appointments and shifting political priorities offer a fresh opportunity for the EU to start getting AI right. With EU competitiveness an ever-growing concern for policymakers, AI offers a powerful tool to advance the EU’s domestic and foreign priorities, leverage the power of a consolidated, AI-ready single market, and make sense of the EU’s digital regulations in support of innovators who want to bring AI innovation to Europe.
* Ayesha Bhatti is a policy analyst at the Center for Data Innovation. Prior to joining, she worked as a data scientist at a technology consulting firm in London. She has an LLB from the University of Nottingham, and an MSc in Computer Science from Birkbeck, University of London. She is also a licensed attorney in the state of New York.
Source: Center for Data Innovation