How Should the UK Regulate AI?

Next month the UK will host the long-awaited ‘AI Safety Summit’ at Bletchley Park. The venue is fitting: it was at Bletchley Park that a team of codebreakers, including the ‘father of computer science’ Alan Turing, decrypted many Nazi communications, shortening the Second World War by up to four years. The war is full of examples of ingenuity and new technology being used to inflict atrocities as well as to defeat evil. Today it is AI that dominates discussions about technology and safety, with many worried that AI poses risks that would make World War II look like a schoolyard skirmish.

The safety summit is welcome, but there is a danger its outcomes will hamper innovation and investment if attendees focus too much on AI risks that are more at home in a Hollywood screenplay than in reality. In my new paper for the Centre for Policy Studies, ‘Regulating Artificial Intelligence: The Risks and Opportunities’, I outline a series of policy proposals that would allow the government to address many of the concerns about AI safety without hamstringing the development of a technology that has the potential to revolutionise civilisation for the better.

Despite what recent commentary may lead many to believe, AI is not new. Headlines about ChatGPT may have renewed debates about AI safety, but AI has been with us for years. Social media feeds, navigation apps, smart speakers, weather forecasts, and many other features of everyday life are fuelled by AI. That the technology is already used across such a wide range of tasks and industries makes regulating AI safety a challenge: the risks posed by driverless cars are different to those posed by MRI scanning technology or facial recognition.

Fortunately, the government grasps that AI is ubiquitous and that centralised regulation would be inappropriate, as shown in the AI White Paper published by the Department for Science, Innovation and Technology in March this year, which embraced a decentralised approach to AI policy.

Although the AI White Paper included much to applaud, the government can go further. In my paper I argue that regulators should draft safety charters setting out for industry, researchers, and investors which significant harms, such as the facilitation of serious crimes or risks to life, limb, national security, and infrastructure, they aim to prevent. This approach would give businesses and researchers a clear understanding of their obligations and put them in the position of asking for forgiveness rather than permission.

AI safety proposals are often linked to regulation and legislation, but there are other ways for the government to signal that it is a serious and innovative home for AI safety debate and research. One way, which I outline in the paper, is for the government to establish prediction markets. As the economist Alex Tabarrok has noted, ‘a bet is a tax on bullshit’. Unfortunately, much of the commentary surrounding AI is dominated by hyperbolic rhetoric, with few participants willing to put their money where their mouth is.

For example, earlier this year thousands of AI researchers, industry leaders and computer scientists signed an open letter calling for a pause in AI research. Many others put their names to a ‘statement on AI risks’ claiming that the risk of AI-fuelled extinction should be treated with the same seriousness as the risks of nuclear war and pandemics. Such statements and letters are unhelpful: they do not tell readers how likely the purported AI risks are, and none of the signatories pays any price if their predictions prove wrong.

Prediction markets focussed on AI safety would allow researchers, observers, businesses, and others to bet on the likelihood of specific AI risks. For example, a user of such a market could establish a market on whether a particular driverless car would pass a safety test, or whether a new deepfake detection tool would achieve a particular detection rate. Another might bet on whether a new large language model would pass a Maths A-Level exam, or whether a facial recognition application would yield the same false positive and false negative rates across every racial group.
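To illustrate how such a market could turn bets into a public probability estimate, below is a minimal sketch of a logarithmic market scoring rule (LMSR), a mechanism commonly used to run prediction markets. It is illustrative only: the example question, the liquidity parameter, and the class and function names are my own assumptions rather than anything proposed in the paper.

```python
import math

class LMSRMarket:
    """A binary prediction market run by a logarithmic market scoring rule."""

    def __init__(self, liquidity: float = 100.0):
        self.b = liquidity          # higher b means prices move less per bet
        self.shares = [0.0, 0.0]    # outstanding shares for [YES, NO]

    def _cost(self, shares) -> float:
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome: int) -> float:
        """Current implied probability of the given outcome."""
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome: int, amount: float) -> float:
        """Buy `amount` shares of an outcome; returns what the bet costs."""
        before = self._cost(self.shares)
        self.shares[outcome] += amount
        return self._cost(self.shares) - before


# Hypothetical market: 'Will driverless car X pass safety test Y this year?'
market = LMSRMarket(liquidity=100.0)
print(f"Implied probability before any bets: {market.price(0):.2f}")   # 0.50
cost = market.buy(0, 30.0)   # a trader backs YES with 30 shares
print(f"That bet cost {cost:.2f}; implied probability is now {market.price(0):.2f}")
```

The useful property is that anyone who thinks the implied probability is wrong can profit by trading against it, so the published price aggregates what informed participants are actually willing to stake rather than what they are merely willing to say.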

If the government were to establish AI safety prediction markets and ensure that regulators explain to the public which risks they aim to prevent, it would send a strong signal to the world that the UK is taking AI safety seriously without resorting to heavy-handed regulation. At a time when AI safety debates around the world too often degenerate into unhelpful and unfounded rhetoric, such an approach would be welcomed by businesses, researchers, and governments alike.

*  Matthew Feeney is Head of Tech & Innovation at the Centre for Policy Studies. Before joining the CPS, Matthew was the director of the Cato Institute’s Project on Emerging Technologies. His writing has appeared in The New York Times, The Washington Post, City A.M., and others. He received both his BA and MA in philosophy from the University of Reading.

Source: CapX