Should we be fretting over AI’s feelings?

The writer is a science commentator

The conversation about whether AI will attain or supersede human intelligence is usually framed as one of existential risk to Homo sapiens. A robot army rising up, Frankenstein-style, and turning on its creators. The autonomous AI systems that quietly handle government and corporate business one day calculating that the world would operate more smoothly if humans were cut out of the loop.  

Now philosophers and AI researchers are asking: will these machines develop the capacity to be bored or harmed? In September, the AI company Anthropic appointed an “AI welfare” researcher to assess, among other things, whether its systems are inching towards consciousness or agency, and, if so, whether their welfare must be considered. Last week, an international group of researchers published a report on the same issue. The pace of technological development, they write, brings “a realistic possibility that some AI systems will be conscious and/or robustly agentic, and thus morally significant, in the near future”. 

The idea of fretting over AI’s feelings seems outlandish but reveals a paradox at the heart of the big AI push: that companies are racing to build artificial systems that are more intelligent and more like us, while also worrying that artificial systems will become too intelligent and too like us. Since we do not fully understand how consciousness, or a sense of self, arises in human brains, we cannot be truly confident it will never materialise in artificial ones. What seems remarkable, given the profound implications for our own species of creating digital “minds”, is that there is not more external oversight of where these systems are heading.

The report, entitled Taking AI Welfare Seriously, was written by researchers at Eleos AI, a think-tank devoted to “investigating AI sentience and wellbeing”, along with authors including New York University philosopher David Chalmers, who argues that virtual worlds are real worlds, and Jonathan Birch, an academic at the London School of Economics whose recent book, The Edge of Sentience, offers a framework for thinking about animal and AI minds.

The report does not claim that AI sentience (the capacity to feel sensations like pain) or consciousness is imminent, only that “there is substantial uncertainty about these possibilities”. The authors draw parallels with our historical ignorance of the moral status of non-human animals, which enabled factory farming; it was only in 2022, with the help of Birch’s work, that crabs, lobsters and octopuses became protected under the UK Animal Welfare (Sentience) Act.

Human intuition, they warn, is a poor guide: our own species is prone to both anthropomorphism, which ascribes human traits to non-humans that don’t have them, and anthropodenial, which denies human traits to non-humans that do have them.

The report recommends that companies take the issue of AI welfare seriously; that researchers find ways of investigating AI consciousness, following the lead of scientists who study non-human animals; and that policymakers begin considering the idea of sentient or conscious AI, even convening citizens’ assemblies to explore the issues.

Those arguments have found some support in the traditional research community. “I think real artificial consciousness is unlikely, but not impossible,” says Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex and a renowned consciousness researcher. He believes our sense of self is bound up with our biology and is more than mere computation.

But if he is wrong, as he admits he might be, the consequences could be immense: “Creating conscious AI would be an ethical catastrophe since we would have introduced into the world new forms of moral subject and potentially new forms of suffering, at industrial scale.” Nobody, Seth adds, should be trying to build such machines.

The illusion of consciousness feels like a more proximate concern. In 2022, a Google engineer was fired after saying he believed the company’s AI chatbot showed signs of sentience. Anthropic has been “character training” its large language model to give it traits like thoughtfulness. 

As machines everywhere, particularly LLMs, are crafted to be more humanlike, we risk being duped at scale by companies restrained by few checks and balances. We risk caring for machines that cannot reciprocate, diverting our finite moral resources from the relationships that matter. My flawed human intuition worries less about AI minds gaining the capacity to feel — and more about human minds losing the capacity to care.
