Should we worry about artificial intelligence’s feelings?

The debate over whether artificial intelligence will catch up with or surpass human intelligence is usually framed as an existential threat to Homo sapiens: an army of robots rebelling, Frankenstein-style, against their creators, or autonomous artificial intelligence (AI) systems that quietly run government and corporate affairs until, one day, they calculate that the world would work better without humans in the loop.

Today, philosophers and AI researchers are asking a different question: could these machines develop the capacity to feel pain or sadness? In September, the AI company Anthropic appointed an "AI welfare" researcher to assess, among other things, whether its systems are edging towards consciousness or agency and, if so, whether their welfare should be taken into account. Last week, an international group of researchers published a report on the issue. According to them, rapid technological development creates a "realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future," and that such systems would therefore become morally significant.

The idea of worrying about AI "feelings" may seem strange, but it reveals a paradox at the heart of the race for AI: companies are racing to create artificial systems that are more intelligent and more like us, while at the same time worrying that these systems will become too intelligent and too much like us. Since we don't fully understand how consciousness or a sense of self arises in the human brain, we can't be completely sure that it will never emerge in artificial systems. What seems remarkable, given the profound implications that creating digital "minds" could have for our species, is how little external oversight there is of the direction these systems are taking.

The report, titled Taking AI Welfare Seriously, was written by researchers at Eleos AI, an institute that investigates "AI sentience and wellbeing," along with several prominent co-authors, including the philosopher David Chalmers of New York University, who argues that virtual worlds are real worlds, and Jonathan Birch of the London School of Economics (LSE), whose recent book, The Edge of Sentience, offers a framework for thinking about animal and AI minds.

The report does not claim that AI sentience (the capacity to experience feelings such as pain) or consciousness is likely or imminent, only that "there is substantial uncertainty about these possibilities." The authors draw a parallel with our historical disregard for the moral status of nonhuman animals, which made factory farming possible; it was only in 2022, thanks in part to Birch's work, that crabs, lobsters, and octopuses were protected under the United Kingdom's Animal Welfare (Sentience) Act.

They warn that human intuition is a poor guide here: our species is prone both to anthropomorphism, which attributes human qualities to nonhumans that lack them, and to anthropodenial, which denies human qualities to nonhumans that have them.

The report recommends that companies take the issue of AI welfare seriously; that researchers find ways to investigate AI consciousness, following the example of scientists who study nonhuman animals; and that policymakers begin to consider the possibility of sentient or conscious AI, even convening citizens' assemblies to explore these questions.

These arguments have found some support in the mainstream research community. "I think it's unlikely that there's true artificial consciousness, but it's not impossible," says Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex and a leading researcher on consciousness. He believes that our sense of self is bound up with our biology and amounts to more than mere computation.

But if he is wrong, as he concedes he may be, the consequences could be enormous: "Creating a conscious AI would be an ethical disaster, as it would introduce new forms of moral agency and, potentially on an industrial scale, new forms of suffering." No one, Seth adds, should attempt to build such machines.

The illusion of consciousness seems a more immediate concern. In 2022, a Google engineer was fired after saying he believed the company's AI chatbot showed signs of sentience. Anthropic has been "character training" its large language model to give it more nuanced personality traits.

As machines, and especially large language models (LLMs), are increasingly designed to seem human, we risk being deceived on a grand scale by companies facing few constraints and little oversight. We risk caring about machines that cannot reciprocate, diverting our limited moral resources from the relationships that matter. My imperfect human intuition worries less about AI minds gaining the ability to feel, and more about human minds losing the ability to care.

Morgan White
