
We need to think again about what the ‘A’ in AI signifies

This month, artificial intelligence bots have slid into Santa’s grotto. For one thing, AI-enabled gifts are proliferating, as I know myself, having just been given an impressive AI dictation device.

Meanwhile, retailers such as Walmart are offering AI tools to provide frazzled shoppers with holiday help. Consider these, if you like, as the digital equivalent of a personal elf, offering shopping and gift-giving shortcuts. And, judging by recent reviews, they seem to work quite well.

But here is the paradox: even as AI spreads into our lives — and Christmas stockings — hostility remains sky-high. Earlier this month, for instance, a British government survey found that four out of ten people expect AI to deliver benefits. However, three out of ten anticipate significant harm, due to “data security” breaches, “the spread of misinformation” and “job displacement”.

That is no surprise, perhaps. The risks are real and well advertised. However, as we move into 2025, it is worth pondering three oft-ignored points about the current anthropology of AI that might help to frame this paradox in a more constructive way.

First, we need to rethink which “A” we are using in “AI” today. Yes, machine learning systems are “artificial”. However, bots are not always, or even usually, replacing our human brains as an alternative to flesh-and-blood cognition. Instead, they typically enable us to work through tasks faster and more effectively. Shopping is just one case in point.

So perhaps we should reframe AI as “augmented” or “accelerated” intelligence — or else “agentic” intelligence, to use the buzzword for what a recent Nvidia blog calls the “next frontier” of AI. This refers to bots that can act as autonomous agents, performing tasks for humans at their command. It will be a key theme in 2025. Or as Google declared when it recently unveiled its latest Gemini AI model: “The agentic era of AI is here.”

Second, we need to think beyond Silicon Valley’s cultural frame. Until now, “anglophone actors” have “dominated the debate” around AI on the world stage, as the academics Stephen Cave and Kanta Dihal note in the introduction to their book Imagining AI. That reflects US tech dominance.

However, other cultures view AI slightly differently. Attitudes in developing countries, say, tend to be far more positive than in developed ones, as James Manyika, co-head of a UN advisory body on AI and a senior Google official, recently told Chatham House.

Countries such as Japan are different too. Most notably, the Japanese public has long displayed far more positive sentiment towards robots than its anglophone counterparts, and this is now reflected in attitudes towards AI systems too.

Why is this? One factor is Japan’s labour shortage (and the fact that many Japanese are wary of having immigrants plug this gap, thus finding it easier to accept robots). Another is popular culture. In the second half of the 20th century, when Hollywood films such as The Terminator and 2001: A Space Odyssey were spreading fear of intelligent machines among anglophone audiences, the Japanese public was mesmerised by the Astro Boy saga, which depicted robots in a benign light.

Its creator, Osamu Tezuka, attributed this to the influence of the Shinto religion, which, unlike Judaeo-Christian traditions, does not draw strict boundaries between animate and inanimate objects. “The Japanese don’t make a distinction between man, the superior creature, and the world about him,” he once observed. “We accept robots easily along with the wide world about us, the insects, the rocks — it’s all one.”

And that is reflected in how companies such as Sony and SoftBank design AI products today, as one of the essays in Imagining AI notes: they try to create “robots with heart” in a manner that American consumers might find creepy.

Third, this cultural variation shows that our reactions to AI are not set in stone, but can evolve as technological change and cross-cultural influences emerge. Consider facial recognition technologies. Back in 2017, Ken Anderson, an anthropologist working at Intel, and his colleagues studied Chinese and American consumer attitudes to facial recognition tools. They found that while the former accepted this tech for everyday tasks such as banking, the latter did not.

That distinction seemed to reflect American concerns about privacy. But the same year the study was published, Apple introduced facial recognition tools on the iPhone, and US consumers quickly accepted them. Attitudes changed. The key point, then, is that “cultures” are not like Tupperware boxes, sealed and static. They are more like slow-moving rivers with muddy banks, into which new streams flow.

So whatever else 2025 brings, the one thing that can be predicted is that our attitudes towards AI will keep subtly shifting as the technology becomes increasingly normalised. That may alarm some, but it may also help us to reframe the tech debate more constructively, and to focus on ensuring that humans control their digital “agents” — not the other way round. Investors today might be dashing into AI, but they need to ask what “A” they want in that AI tag.

gillian.tett@ft.com
