The writer is international policy director at Stanford University’s Cyber Policy Center and serves as special adviser to Margrethe Vestager.
Tech companies recognise that the race for AI dominance is decided not only in the marketplace but also in Washington and Brussels. Rules governing the development and deployment of their AI products will have an existential impact on them, but for now remain up in the air. So executives are trying to get ahead and set the tone by arguing that they are best placed to regulate the very technologies they produce. AI might be novel, but the talking points are recycled: they are the same ones Mark Zuckerberg used about social media and Sam Bankman-Fried offered regarding crypto. Such statements should not distract democratic lawmakers again.
Imagine the chief executive of JPMorgan explaining to Congress that, because financial products are too complex for lawmakers to understand, banks should decide for themselves how to prevent money laundering, detect fraud and set liquidity-to-loan ratios. He would be laughed out of the room. Angry constituents would point out how well self-regulation panned out in the global financial crisis. From big tobacco to big oil, we have learnt the hard way that businesses cannot set disinterested regulations. They are neither independent nor capable of creating countervailing powers to their own.
Somehow that basic truth has been lost when it comes to AI. Lawmakers are eager to defer to companies and want their guidance on regulation; senators even asked OpenAI chief executive Sam Altman to name potential industry leaders to oversee a putative national AI regulator.
Within industry circles, calls for AI regulation have verged on the apocalyptic. Scientists warn that their creations are too powerful and could go rogue. A recent letter, signed by Altman and others, warned that AI posed a threat to humanity’s survival akin to nuclear war. You would think such fears would spur executives into action, yet despite signing, virtually none have modified their own behaviour. Perhaps shaping how we think about guardrails around AI is the actual goal. How we understand the technology itself heavily influences our ability to decide what kind of regulation is needed. The statements have focused attention on AI’s existential risk. But critics argue that prioritising the prevention of a far-off catastrophe overshadows the much-needed work against discrimination and bias that should be happening today.
Warnings about the catastrophic risks of AI, issued by the very people who could stop pushing their products into society, are disorienting. The open letters make their signatories seem powerless, reduced to desperate appeals. But those sounding the alarm already have the power to slow or pause the potentially dangerous progression of artificial intelligence.
Former Google chief executive Eric Schmidt maintains that companies are the only ones equipped to develop guardrails, while governments lack the expertise. But lawmakers and regulators are not experts in farming, fighting crime or prescribing medication either, yet they regulate all those activities. The complexity of AI should certainly not discourage them; if anything, it should push them to take responsibility. And Schmidt has unintentionally reminded us of the first challenge: breaking the monopolies on access to proprietary information. With independent research, realistic risk assessments and guidelines on the enforcement of existing regulations, a debate about the need for new measures would be grounded in facts.
Executive actions speak louder than words. Just days after Sam Altman welcomed AI regulation in his testimony before Congress, he threatened to pull OpenAI’s operations out of Europe over the EU’s planned rules. When he realised that EU regulators did not take kindly to threats, he switched back to a charm offensive, pledging to open an office in Europe.
Lawmakers must remember that businesspeople are principally concerned with profit rather than societal impacts. It is high time to move beyond pleasantries and to define specific goals and methods for AI regulation. Policymakers must not let tech CEOs shape and control the narrative, let alone the process.
A decade of technological disruption has highlighted the importance of independent oversight. That principle is even more important when the power over technologies like AI is concentrated in a handful of companies. We should listen to the powerful individuals running them but never take their words at face value. Their grand claims and ambitions should instead spur regulators and lawmakers into action, grounded in their own expertise: that of the democratic process.