Military is the missing word in AI safety discussions

The writer is international policy director at Stanford University’s Cyber Policy Center and special adviser to the European Commission

Western governments are racing each other to set up AI Safety Institutes. The UK, US, Japan and Canada have all announced such initiatives, while the US Department of Homeland Security added an AI Safety and Security Board to the mix only last week. Given this heavy emphasis on safety, it is remarkable that none of these bodies govern the military use of AI. Meanwhile, the modern-day battlefield is already demonstrating the potential for clear AI safety risks. 

According to a recent investigation by the Israeli magazine +972, the Israel Defense Forces have used an AI-enabled program called Lavender to flag targets for drone attacks. The system combines data and intelligence sources to identify suspected militants. The program allegedly identified tens of thousands of targets, and bombs dropped in Gaza resulted in excessive collateral deaths and damage. The IDF denies several aspects of the report.

Venture capitalists are boosting the “deftech” — or defence tech — market. Technology companies are keen to be part of this latest boom and all too quick to sell the benefits of AI on the battlefield. Microsoft is reported to have pitched DALL-E, a generative AI tool, to the US military, while the controversial facial recognition company Clearview AI prides itself on having helped Ukraine identify Russian soldiers with its technology. Anduril makes autonomous systems and Shield AI develops AI-powered drones; the two companies have brought in hundreds of millions of dollars in their respective investment rounds.

But though it is easy to point the finger at private companies that hype AI for warfare purposes, it is governments that have let the “deftech” sector escape their oversight. The landmark EU AI Act does not apply to AI systems used “exclusively for military, defence or national security purposes”. Meanwhile, the White House’s Executive Order on AI includes important carve-outs for military AI (though the defence department does have internal guidelines). For example, much of the order’s implementation “does not cover AI when it is being used as a component of a National Security System”. And Congress has taken no action to regulate military uses of the technology.

That leaves the two main democratic blocs of the world with no new binding rules on what types of AI systems their militaries and intelligence services can use. They therefore lack the moral authority to encourage other countries to put guardrails around military uses of AI of their own. A recent political declaration on “Responsible Military Use of Artificial Intelligence and Autonomy”, supported by a number of countries, is no more than that: a declaration.

We have to ask ourselves how meaningful political discussions of AI safety are if they do not cover military uses of the technology. Despite the lack of evidence that AI-enabled weapons can comply with international law on distinction and proportionality, they are being sold around the world. And since some of these technologies are dual use, the lines between civilian and military applications are blurring.

The decision not to regulate military AI has a human price. Even when they are systematically imprecise, these systems are often given undue trust in military contexts because they are wrongly seen as impartial. Yes, AI can help make faster military decisions, but it can also be more error-prone and may fundamentally fail to adhere to international humanitarian law. Human control over operations remains critical for holding actors legally to account.

The UN has tried to fill the void. Secretary-general António Guterres first called for a ban on autonomous weapons in 2018, describing them as “morally repugnant”. More than 100 countries have expressed interest in negotiating and adopting new international laws to prohibit and restrict autonomous weapons systems. But Russia, the US, the UK and Israel have opposed a binding proposal, causing the talks to break down.

If nations will not act to protect civilians from military uses of AI, the rules-based international system must step up. The UN secretary-general’s high-level advisory body on AI (on which I serve) would be one of several groups well placed to recommend the proscription of risky uses of military AI, but political leadership remains vital to ensure rules are upheld.

Making sure human rights standards and laws of armed conflict continue to protect civilians in a new age of warfare is critical. The unregulated use of AI on the battlefield cannot continue.
