
Where is artificial general intelligence? My grandfather’s guess is as good as yours

The writer is a technology analyst

In 1946, my grandfather, writing as “Murray Leinster”, published a science fiction story called “A Logic Named Joe”. In it, everyone has a computer (a “logic”) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, Joe, starts giving helpful answers to any request anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues — “Check your censorship circuits!” — until they work out what to unplug. 

For as long as we’ve thought about computers, we’ve thought about making “artificial intelligence”, and wondered what that would mean. There’s an old joke that AI is whatever doesn’t work yet, because once it works it’s just software. Calculators do superhuman maths and databases have superhuman memory, but they can’t do anything else, and they don’t understand what they’re doing, any more than a dishwasher understands dishes. They’re superhuman, but they’re still just software. But people do have something different, and so, on some scale, do dogs, chimpanzees, octopuses and many other creatures. AI researchers call this “general intelligence”.

If we could make artificial general intelligence, or AGI, it should be obvious that this would be as important as computing, or electricity, or perhaps steam. Today we print microchips, but what if you could print digital brains at the level of a human, or beyond it, and do it by the billion? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more: steam engines did not have opinions about people.

Every few decades since 1946, there’s been a wave of excitement that this might be close (in 1970 the AI pioneer Marvin Minsky claimed that we would have human-level AGI in three to eight years). The large language models (LLMs) that took off 18 months ago have started another such wave. This week, OpenAI and Meta signalled they were near to releasing new models that might be capable of reasoning and planning. Serious AI scientists who previously thought AGI was decades away now suggest that it might be much closer. 

At the extreme, the so-called “doomers” argue there is a real risk of AGI emerging spontaneously from current research and that this could be a threat to humanity. They call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition (“This is very dangerous and we are building it as fast as possible, but don’t let anyone else do it”), but plenty of it is sincere.  

However, for every expert who thinks AGI might be close, there’s another who doesn’t. There are some who think LLMs might scale all the way to AGI, and others who think we still need an unknown number of unknown further breakthroughs. More importantly, they would all agree that we don’t actually know.

The problem is that we don’t have a coherent theoretical model of what general intelligence really is, nor of why people are better at it than dogs. Equally, we don’t know why LLMs seem to work so well, and we don’t know how much they can improve. We have many theories about parts of the problem, but we don’t understand the whole system. We can’t plot people and ChatGPT on a chart and say when one will reach the other.

Indeed, AGI itself is a thought experiment: what kind of AGI would we actually get? It might be 100 times more intelligent than a person, or it might be faster but no cleverer. We might only ever produce an AGI that’s no more intelligent than a dog. We don’t know.

This is why all conversations about AGI turn to analogies: if you can compare this to nuclear fission then you know what to do. But again, we had a theory of fission, and we have no such theory of AGI. Hence, my preferred analogy is the Apollo programme. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn’t explode, why they went up, and how far they needed to go. We have no equivalents here. We don’t know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe! 

What, then, is your preferred attitude to real but unknown risks? Do you worry, or shrug? Which thought experiments do you prefer? Presume, though, that you decide the doomers are right: what can you do? The technology is, in principle, public. Open-source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn of them in its last fiscal year and still can’t meet demand), but on a decade’s view the models will get more efficient and the chips will be everywhere. In the end, you can’t ban mathematics. It will happen anyway.

By default, though, this latest excitement will follow all the other waves of AI and become “just” more software and more automation. Automation has always produced frictional pain, going back to the Luddites. The UK’s Post Office scandal reminds us that you don’t need AGI for software to ruin people’s lives. LLMs will produce more pain and more scandals, but life will go on. At least, that’s the answer I prefer.
