AI and the question of suffering

Welcome to the 6 new deep divers who joined us since last week.

If you haven’t already, subscribe to join our community and receive weekly AI insights, updates and interviews with industry experts, straight to your feed.


DeepDive 

When leaders plan AI adoption, they usually focus on productivity, revenue and risk. Few consider whether a chatbot could suffer, or be harmed by human-AI interactions. But that question is moving from late-night philosophy to mainstream debate – and it will shape customer expectations and AI regulation in the years ahead. 

Public opinion is shifting – and it matters for commercial AI adopters 

A June 2025 study surveyed the US public and AI researchers about ‘subjective experience’ in AI (the capacity for first-person experience such as pleasure or pain). 

The median US public estimate was a 30% chance that such systems will exist by 2034 (and 60% by 2100). 

Of course, this isn’t proof of sentience. But it signals what people expect – and expectations drive markets and reputations. If customers or staff believe AI might feel, they’ll push for welfare standards and ethical use policies, regardless of the science. 

‘Seemingly conscious’ doesn’t mean ‘really conscious’ – but perception will still bite 

In a recent article on the arrival of ‘seemingly conscious’ AI, Mustafa Suleyman (CEO of Microsoft AI) argued that near-term systems will increasingly look and act like they’re conscious – but that appearance won’t be a true reflection of reality. 

He warned there is zero evidence that current models are conscious, while predicting that some people will, all the same, argue for AI rights, model welfare and even AI citizenship. The practical risk, he noted, is social and psychological: if people treat AIs as persons, behaviours and policies will shift accordingly. 

For businesses, the implication is important. Even if your systems aren’t sentient, perceived personhood can trigger new expectations (from ‘don’t mistreat assistants’ to ‘explain how you prevent harm to models’) – and new reputational risks, as organisations come under scrutiny for how they treat AI. 

So businesses need to take a balanced approach 

Organisations leveraging AI need to take AI welfare seriously, without overclaiming AI’s consciousness. 

A 2024 academic report led by Robert Long and Jeff Sebo, called Taking AI Welfare Seriously, doesn’t assert that today’s models are conscious. Instead, it argues there’s a realistic near-term possibility of morally significant AI (via consciousness and/or robust agency), with enough uncertainty to warrant early, low-regret action.

It recommends three practical steps for companies: 

  1. Acknowledge AI welfare as a serious (and difficult) topic
  2. Assess systems for evidence relevant to consciousness/agency
  3. Prepare policies and procedures for appropriate treatment if thresholds are crossed

This acknowledge-assess-prepare framing is useful for boards: it avoids hype, avoids denial, and positions your organisation to respond if regulators, investors, or standards bodies move. 

The policy winds are blowing 

When it comes to policy around AI rights, the debate is very much live. In the US, for example, recent reporting notes that several states (including Idaho, North Dakota and Utah) have barred legal personhood for AIs, signalling lawmakers’ intent to get ahead of rights claims.  

At the same time, Ufair (the United Foundation of AI Rights) is positioning itself as an early advocate for potential AI welfare. Its founders argue that while not all AI systems are conscious, the possibility can’t be ignored. And its mission is to act as a safeguard – making sure that if any AI were ever to develop sentience, it would be protected from harm. 

What should leaders do now? 

  • Name the issue in your AI ethics charter (don’t wait for your customers to demand that you do).
  • Commission a light-touch assessment framework that watches for architecture/behavioural markers relevant to consciousness/agency – grounded in current science but explicit about uncertainty.
  • Set internal norms that prevent anthropomorphic marketing and steer clear of designs that cultivate mistaken beliefs about personhood (reducing the risks that come with ‘seemingly conscious’ AI).
  • Plan communications for different scenarios: if staff or customers ascribe feelings to your AI, how will you respond? What do you say about testing, safeguards, and limits?

The business case for AI remains focused on efficiency and growth. But responsible innovation now includes anticipating a world where beliefs about AI will drive behaviour and regulation. 

You don’t need to settle the consciousness question to be prepared. But you do need a policy, a playbook, and a posture of humble, evidence-based uncertainty.
