The writer is founder of Sifted, an FT-backed site about European start-ups
The British mathematician IJ Good was among the first to speculate about what would happen when computers outsmarted humans. One day, he wrote, we would build an ultra-intelligent machine that could design an even more intelligent machine by itself, triggering an “intelligence explosion”. “Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Good’s speculations seemed fantastical when they appeared in 1964. They do not seem so fantastical today. Recent advances in artificial intelligence, highlighted by powerful generative AI models such as OpenAI’s GPT-4 and Google’s Bard, have dazzled millions. We may still be a few conceptual breakthroughs away from the creation of Good’s ultra-intelligent machines, the founder of one leading AI company tells me. But it is no longer “absolutely crazy” to believe that we might achieve artificial general intelligence, as it is called, by 2030.
The companies that are developing AI technology rightly highlight its potential to raise economic productivity, enhance human creativity and open up exciting new avenues for scientific research. But they also accept that generative AI models have serious flaws. “The downside is, at some point, that humanity loses control of the technology it is developing,” Sundar Pichai, Google’s chief executive, bluntly told CBS News.
More than 27,000 people, including several leading AI researchers, have signed an open letter from the Future of Life Institute calling for a six-month moratorium on developing leading-edge models. Others have gone further, calling for all research into AGI to be shut down. Eliezer Yudkowsky, research lead at the Machine Intelligence Research Institute, has argued that, if nothing changes, the most likely result of building “superhumanly smart AI” is that “literally everyone on Earth will die”. We should strictly monitor the use of advanced computer chips for AI and even consider air strikes against rogue data centres that flout a ban, he wrote in Time.
Such hyperventilating talk incenses some other researchers, who argue that AGI may forever remain a fantasy and discussion of it only obscures the technology’s here-and-now harms. For years, researchers such as Timnit Gebru, Margaret Mitchell, Angelina McMillan-Major and Emily Bender have been warning that powerful machine-learning models risk further concentrating corporate power, exacerbating societal inequalities and polluting public information. It is dangerous to distract ourselves with “a fantasised AI-enabled utopia or apocalypse”, they claim. “Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them,” they wrote in response to the FLI’s letter.
One can only pity the policymakers trying to respond to these clashing concerns. How should they prioritise their regulatory efforts? The short answer is they need to take both sets of concerns seriously, differentiating between the immediate and longer-term risks.
It is certainly not helpful, as John Tasioulas, director of the Institute for Ethics in AI at Oxford university, observes, that the AI safety crowd and the AI ethics crowd, as he calls them, appear to be “engaged in internecine warfare”. But he suggests they are mostly arguing about different things. The safety crowd tends to see the source of the problem as the technology itself, demanding a technical solution. The ethics crowd argues that AI must be viewed in a far broader social and economic context.
On the immediate challenges, every regulator should consider how AI might affect their field, enforce existing human rights, privacy, data and competition rules, and weigh how those rules might need updating. On the longer-term challenges, we should be debating more radical approaches.
In a recent article, investor Ian Hogarth urged legislators to grill the leaders of the AI research labs under oath about safety risks. He also called for the creation of a collaborative international agency, modelled on the Cern particle physics laboratory, to research AGI. This would nullify the dangerous dynamics of a private-sector race to develop the technology.
It is a smart idea, even if it is hard to imagine how such an international agency could be rapidly created. It is madness to believe that profit-driven private companies alone will safeguard society’s interests when pursuing the possibility, however remote, of AGI. Keeping machines docile enough for humans to control, as Good hoped for, will be the governance challenge of our age.