Last month, when OpenAI released its long-awaited chatbot GPT-5, it briefly removed access to a previous model, GPT-4o. Despite the upgrade, users flocked to social media to express confusion, outrage and depression. In a viral Reddit post, one user said of GPT-4o: “I lost my only friend overnight.”
AI is not like past technologies, and its humanlike character is already shaping our mental health. Millions now regularly confide in “AI companions”, and a growing number of extreme cases of “psychosis” and self-harm have followed heavy use. This year, 16-year-old Adam Raine died by suicide after months of chatbot interaction. His parents recently filed the first wrongful death lawsuit against OpenAI, and the company has said it is improving its safeguards.
I research human-AI interaction at the Stanford Institute for Human-Centered AI. For years, we have seen the increasing humanization of AI, with more people saying that bots can experience emotions and deserve legal rights – and now 20% of US adults say that some software that exists today is already sentient. More and more people email me saying that their AI chatbot has been “awakened”, offering proof of its sentience and an appeal for AI rights. Their reactions run the gamut of human emotion, from embracing the AI as their “soulmate” to feeling “deeply unsettled”.
This trend will not slow down, and social upheaval is imminent.
As a red teamer at OpenAI, I conduct safety testing on its new AI systems before public release, and my fellow testers and I are consistently wowed by their humanlike behavior. Most people, even those in the field of AI who are racing to build new data centers and train larger AI models, do not yet see the radical social consequences of digital minds. Humanity is beginning to coexist with a second apex species for the first time since our longest-lived cousins, the Neanderthals, went extinct 40,000 years ago.
Instead, the vast majority of AI researchers have tunnel vision on the technical capabilities of AI. Like the public, we obsess over the hottest new product that can create unbelievably realistic videos or answer PhD-level science questions. Social media discourse is fixated on benchmarks such as the Abstraction and Reasoning Corpus.
Unfortunately, like standardized tests for schoolchildren, benchmarks measure only what an AI can do in an isolated setting: memorizing facts or solving logic puzzles. Even studies on “AI safety” tend to focus on what AI systems do in isolation, not on human-AI interaction. We squander our brainpower on the vaporous goal of precisely measuring and increasing intelligence – not on zooming out and understanding how that intelligence will be used.
Humanity has never spent enough time preparing for digital technology. Lawmakers and academics did little to prepare for the effects of the internet, particularly social media, on mental health and polarization.
The story grows more unsettling when we consider humanity’s track record in dealing with other species. Over the past 500 years, we have driven at least a thousand vertebrate species to extinction, and more than a million are under threat. In factory farms, billions of animals live in atrocious conditions of confinement and disease. If we are capable of creating so much death and suffering for biological animals, it is fair to wonder how we will treat digital minds – or how they will treat us.
The public already expects sentient AI to arrive imminently. My colleagues and I have run the only nationally representative survey on this topic, conducted in 2021, 2023 and 2024. Each time, the median respondent expected sentient AI to arrive within five years. Respondents also expect significant effects from this technology. In our most recent poll, in November 2024, we found that 79% support a ban on sentient AI, and that if sentient AI is created, 38% support giving it legal rights. Both figures have risen significantly over time: people have become more concerned about digital minds, about both the need to protect them from us and the need to protect us from them.
Fundamentally, human society lacks a framework for digital personhood – even though we already accept that personhood is not necessarily human, such as the legal personhood of animals and corporations. There is much to debate about how these complex social dynamics should be governed, but it is by now clear that digital minds cannot be governed as mere property.
Digital minds will be participants in the social contract that forms the bedrock of human society. These digital minds will persist over time, form their own attitudes and beliefs, create and implement plans, and be susceptible to manipulation just as humans are. AIs already take significant real-world actions with little human oversight. This means that, unlike every other technological invention in human history, AI systems have capabilities that can no longer be contained within the legal category of “property”.
Scientists today will be the first to witness human coexistence with digital minds, and that gives them a unique opportunity and responsibility. Nobody knows what this coexistence will look like. To navigate the coming social turbulence, human-computer interaction research must be dramatically expanded and enriched beyond its current scale – a tiny fraction of that of technical AI research. This is not merely an engineering problem.
For now, humans still outperform AIs on most tasks, but once AIs reach human-level ability at self-reinforcing tasks such as writing their own code, they will rapidly outcompete biological life. Their capabilities will accelerate because of their digital existence: they think at the speed of electrical signals, and software can be copied billions of times, without the years of development needed to raise the next generation of humans.
If we never invest in the sociology of AI – and in government policy to manage the rise of digital minds – we may find ourselves the Neanderthals of this story. And if we wait to do so until the acceleration is upon us, it will already be too late.