AI needs superintelligent regulation

Powerful artificial intelligence systems can be of enormous benefit to society and help us tackle some of the world’s biggest problems. Machine learning models are already playing a significant role in diagnosing diseases, accelerating scientific research, boosting economic productivity and cutting energy usage by optimising electricity flows on power grids, for example.

It would be a tragedy if such gains were jeopardised by a backlash against the technology. But that danger is growing as abuses of AI multiply, in areas such as unfair discrimination, disinformation and fraud, as Geoffrey Hinton, one of the “godfathers of AI”, warned last month on resigning from Google. That makes it imperative that governments move fast to regulate the technology appropriately and proportionately.

How to do so will be one of the greatest governance challenges of our age. Machine learning systems, which can be deployed across millions of use cases, defy easy categorisation and can throw up numerous problems for regulators. This fast-evolving technology can also be used in diffuse, invisible and ubiquitous ways, at massive scale. But, encouragingly, regulators around the world are finally starting to tackle the issues.

Last week, the White House summoned the bosses of the biggest AI companies to explore the benefits and perils of the technology before outlining future guidelines. The EU and China are already well advanced in drawing up rules and regulations to govern AI. And the UK’s competition authority is to conduct a review of the AI market.

The first step is for the tech industry itself to agree and implement some common principles on transparency, accountability and fairness. Companies should never try to pass off chatbots as humans, for example. The second is for regulators in areas such as employment law, financial and consumer markets, competition policy, data protection, privacy and human rights to modify existing rules to take account of the specific risks raised by AI. The third is for government agencies and universities to deepen their own technological expertise, to reduce the risk of capture by industry.

Beyond that, two overarching regulatory regimes should be considered for AI, even if neither alone is adequate for the size of the challenge. One regime, based on the precautionary principle, would mean that algorithms used in a few critical, life-and-death areas, such as healthcare, the judicial system and the military, would need pre-approval before use. This could operate in much the same way as the US Food and Drug Administration, which screens drugs before release and has a broader remit to protect and promote public health.

The second, more flexible, model could be based on “governance by accident”, as in the airline industry. Alarming though this sounds, it has worked extremely effectively in raising air safety standards over the past few decades. International aviation authorities have the power to mandate changes for all aeroplane manufacturers and airlines once a fault is detected. Something similar could be used when harmful flaws are found in consumer-facing AI applications, such as self-driving cars.

Several leading industry researchers have called for a moratorium on developing leading-edge generative AI models. But pauses are pointless unless clearer governance regimes can be put in place. Even the tech industry accepts that it now needs clear rules of the road and must work constructively with governments and civil rights organisations to help write them. After all, cars can drive faster around corners when fitted with effective brakes.
