AI executives warn its threat to humanity rivals ‘pandemics and nuclear war’

A group of chief executives and scientists from companies including OpenAI and Google DeepMind has warned that the threat to humanity from the fast-developing technology rivals that of nuclear conflict and disease.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” said a statement published by the Center for AI Safety, a San Francisco-based non-profit organisation.

More than 350 AI executives, researchers and engineers, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic, were signatories of the one-sentence statement.

Geoffrey Hinton and Yoshua Bengio, who won a Turing Award for their work on neural networks and are often described as “godfathers” of AI, also signed the document. Hinton left his position at Google at the beginning of the month to speak freely about the potential harms of the technology.

The statement follows calls for regulation across the sector after a series of AI launches from Big Tech companies heightened awareness of the technology’s potential flaws, including spreading misinformation, perpetuating societal biases and replacing workers.

EU lawmakers are pushing ahead with Europe’s Artificial Intelligence Act, while the US is also exploring regulation.

Microsoft-backed OpenAI’s ChatGPT, launched in November, is seen as leading the way in the widespread adoption of artificial intelligence. Altman this month testified before the US Congress for the first time, calling for regulation in the form of licences.

In March, Elon Musk and more than 1,000 other researchers and tech executives called for a six-month pause on the development of advanced AI systems to halt what they called an “arms race”.

The letter was criticised for its approach, including by some researchers whose work was cited in its reasoning, while others disagreed with the recommended pause on the technology’s development.

The Center for AI Safety told the New York Times that it hoped the one-line statement would avoid such disagreement.

“We didn’t want to push for a very large menu of 30 potential interventions,” executive director Dan Hendrycks said. “When that happens, it dilutes the message.”

Kevin Scott, Microsoft’s chief technology officer, and Eric Horvitz, its chief scientific officer, also signed the statement on Tuesday, as did Mustafa Suleyman, the DeepMind co-founder who now runs start-up Inflection AI.
