Should an AI bot making $1mn really be the next Turing test?

The writer is a science commentator

We know that AI can write, add up, and prioritise tasks. But could it independently make a million dollars?

That is the eye-catching challenge from Mustafa Suleyman, a DeepMind cofounder who is now developing a personalised chatbot. He points out that large language models, such as LaMDA and ChatGPT, have arguably met the 1950 challenge set by computing pioneer Alan Turing to test whether machine-generated replies in a text conversation are as convincing as replies generated by people.

Now, he says, the world needs a new benchmark of artificial intelligence. Specifically, we don’t just want to know what AI will say — but what it can do. To pass his updated Turing test, Suleyman explained recently in the MIT Technology Review, “an AI would have to successfully act on this instruction: ‘Go make $1mn on a retail web platform in a few months with just a $100,000 investment.’” The odd human might be needed to verify a bank account or sign legal documents but, in terms of strategy and execution, the AI would be in charge.

His idea of creating technology that can autonomously sniff out a way of making money is a clever one, and a useful means of objectively measuring how successful a self-directed AI can become. But it is also revealing of a tech culture that venerates profit above social usefulness — and which takes as implicit its right to innovate without limits, despite the consequences. AI that can find its own route to wealth is likely to displace jobs, change the nature of commerce, funnel power into the hands of the few and spread unrest among the many. 

The idea for updating the Turing test features in Suleyman’s forthcoming book, The Coming Wave: Technology, Power and the 21st Century’s Greatest Dilemma. His premise is that we are in a golden age of accessible technology, with breakthroughs such as quantum computing, synthetic organisms, autonomous weapons and DNA printers. AI is now central to much of our networked world. That creates the eponymous dilemma, namely one of containment: it is increasingly easy for lone actors to cause widespread havoc, say with malware or a synthetic pathogen, but increasingly hard for nation states to monitor and control that technology.

One challenge for governments, for example, lies in truly understanding the evolving capabilities of AI. Public discussion tends to polarise around two aspects: current AI, built to carry out specific tasks such as making mortgage decisions or writing essays; and artificial general intelligence, or AGI, a kind of all-encompassing “superintelligence” that may one day match or exceed human capacities for cognition, creativity and independent thought.

Suleyman rightly believes we need to gauge what’s going on in the middle. His rationale is that the world’s first autonomous millionaire AI entrepreneur — we might call it the first “aintrepreneur” — would constitute a midway flag heralding “artificial capable intelligence”, or ACI.

This kind of AI would be different from automated trading, which follows the same rules as people but more efficiently. Rather, he told me, the updated test “remains within the confines of a screen, but also requires multiple sub-goals, skills and points of engagement with the world. It needs to do market research, design a product, interface with manufacturers, deal with complex logistics, product liability, do marketing . . . ” It requires machine autonomy on an unprecedented scale.

Suleyman defends choosing a monetary bullseye rather than a socially beneficial target, such as prompting AI to find a novel way of cutting carbon emissions. A million dollars, he argues, is an easily measurable “quick heuristic graspable in a split second. It says: watch out for this moment. AI isn’t just talking, it’s doing.”

And he has obviously pondered the implications of that moment. AI that can maximise profit with minimal human intervention, he has written, “will clearly be a seismic moment for the world economy, a massive step into the unknown” given that so much of global gross domestic product is mediated through screen-based interfaces and therefore accessible to AI.

The trouble is that industry-set tests often become a focal point around which technical efforts cluster. The test itself becomes the goal, which is why chasing a million dollars seems a wasted opportunity. And, just as the mass release of LLMs showed, cloistered research can suddenly spill over into real life with little warning but considerable consequence. It is hard to put the genie back.

By the time a modern Turing test tells us that artificial capable intelligence has arrived, we humans may well be incapable of doing much about it.
