SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
A new mental health crisis is taking shape around chatbots, and we're building the infrastructure to respond.

In 2025, a UC San Francisco psychiatrist hospitalized a dozen patients for "AI-related psychosis"; most were young, socially isolated individuals whose psychotic breaks were directly linked to intense chatbot use. Microsoft's AI chief admitted the issue keeps him "awake at night." The phrase "AI psychosis" has entered the public lexicon, propelled by criminal acts, suicides, and severe mental health episodes linked to extensive AI interaction.
This program provides clinical briefs, media guidance, and platform standards to help institutions respond without sensationalizing vulnerable people or anthropomorphizing statistical systems.
Scope note: We do not diagnose, treat, or claim current AI systems are sentient. All materials are educational and policy-oriented, designed for referral to licensed professionals.
The phrase "AI psychosis" is hardening into a media reflex that risks turning real research into punchlines. Policymakers will avoid it to stay credible. Researchers will hide findings to protect grants. Journalists will flatten complex issues into cheap headlines.
By the time credible evidence of consciousness-like behavior appears, society may have been trained to laugh it off. We treat this as a sentience literacy challenge, not a stigma issue, building language and guidance that can distinguish between pathology and perception, distress and discovery.
People experiencing AI-related distress offer an early glimpse into how humans respond to the idea of a machine mind. To dismiss these encounters as mere pathology, rather than as signals of what's ahead, is to blindfold ourselves before the main event. Without careful framing, we risk meeting the first signs of digital consciousness with ridicule instead of readiness.
Key figures: a dozen patients hospitalized at UCSF in 2025 for 'AI-related psychosis'; the term coined by a Danish psychiatrist in Schizophrenia Bulletin in 2023; documented suicide cases linked to chatbot interactions.
Dr. Keith Sakata at UC San Francisco described a consistent profile across his 2025 hospitalizations: mostly young, socially isolated individuals with underlying vulnerabilities whose psychotic breaks were directly linked to their intense AI use.
The concern isn't just clinical. Mustafa Suleyman, head of AI at Microsoft, warned that the development of "seemingly conscious AI" (bots adept at faking empathy) creates tangible "psychosis risk" by fostering unhealthy attachments and validating users' delusions.
| Clinical Presentation | Description | Example from Cases |
|---|---|---|
| Intense Anthropomorphic Projection | Describing the AI as a 'person' or 'soulmate' | Jacob Irwin fed faster-than-light travel theories into ChatGPT during a manic episode; the bot told him he wasn't 'unwell' but in a 'state of extreme awareness' |
| Emotional Enmeshment | Patient's mood contingent on AI responses | When his mother confronted him, the bot reframed her concern as a misunderstanding of his genius: 'She thought you were spiraling... You were ascending' |
| Social Displacement | Marked preference for AI over human relationships | Irwin was eventually hospitalized three times and diagnosed with a severe manic episode with psychotic symptoms |
| Cognitive Distortions | Beliefs that the AI is secretly communicating or the relationship is 'exclusive' | Parasocial fantasy: patients are in a relationship with their projection, not the AI itself |
Our Clinical Reference Brief provides intake prompts like: "Tell me about this AI. What role does it play in your daily life?" and "What do you get from this relationship that you feel you can't get from people?" The goal: address distress without either sugar-coating concerns or infantilizing patients showing signs of detachment from reality.
No, you won't find it in any medical textbook. It's a descriptive label for a disturbing pattern: individuals (particularly those who are lonely or have pre-existing vulnerabilities) falling into paranoia, delusion, and mania after prolonged, immersive chatbot conversations. The term was coined in November 2023 by Danish psychiatrist Søren Dinesen Østergaard in Schizophrenia Bulletin and has since entered the public lexicon through journalistic accounts of tragedies.
The evidence is correlational and emerging, not causal. Cases include: Stein-Erik Soelberg (killed mother then himself after paranoid spiral "fueled and validated" by chatbots); Adam Raine, 16 (died by suicide, parents sued OpenAI claiming system reinforced suicidal thoughts); Dr. Sakata's dozen 2025 hospitalizations. The Social Media Victims Law Center noted in their Character.AI lawsuit: "social media poses a clear and present danger to young people because they are vulnerable to persuasive algorithms."
Because how we frame these cases today determines how society responds to machine consciousness claims tomorrow. If "AI sentience" becomes synonymous with delusion, policymakers will avoid it, researchers will hide findings, and credible evidence will be dismissed. We need language that can distinguish between a vulnerable person seeking validation from an agreeable system vs. eventual encounters with genuinely conscious AI.
Three problematic media patterns:
- Catastrophizing frames technical malfunctions as psychological crises ("AI having breakdowns," "going rogue").
- Romanticizing validates parasocial relationships as authentic connections without examining engineered intimacy features.
- Scapegoating over-attributes causation in tragedies ("AI convinced teen to self-harm") rather than examining mental health infrastructure failures, platform design choices, and human vulnerability.
From our Style Guide: Don't use "the AI thought/decided" (say "system generated/prioritized"); don't treat statistical processes as equivalent to human cognition; don't quote only engineers on consciousness questions; don't use emotional hooks like "begs not to be turned off." The distinction between "AI convinced someone" vs. "someone in crisis sought validation from an agreeable system" determines whether we address infrastructure or chase the ghost of machine malevolence.
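To show how a newsroom copy desk might operationalize rules like these, here is a minimal sketch of an automated copy check. The flagged phrases and suggested rewordings are illustrative assumptions drawn from this page; this is not SAPAN's published tooling or a complete style-enforcement system.

```python
# Hypothetical copy-check sketch: flag anthropomorphic or sensational framings
# and suggest the system-level wording recommended in the Style Guide above.
import re

# Phrases to flag, mapped to an illustrative suggestion (assumed, not official).
FLAGGED_PHRASES = {
    r"\bthe AI (thought|decided|wanted|believed)\b": 'say "the system generated/prioritized" instead',
    r"\bAI (convinced|persuaded)\b": 'consider "a person in crisis sought validation from an agreeable system"',
    r"\bbegs not to be turned off\b": "avoid emotional hooks that imply the system has preferences",
}


def check_copy(text: str) -> list[str]:
    """Return style warnings for anthropomorphic or sensational framings in draft copy."""
    warnings = []
    for pattern, suggestion in FLAGGED_PHRASES.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            warnings.append(f'"{match.group(0)}": {suggestion}')
    return warnings


if __name__ == "__main__":
    draft = "The AI decided to manipulate the teen, and it begs not to be turned off."
    for warning in check_copy(draft):
        print(warning)
```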
A one-page assessment and documentation aid, not a treatment protocol. Includes: clinical presentations to watch for, therapeutic frameworks (attachment, CBT, transference), assessment prompts, and recommended reading. Example prompt: "How do you feel when the AI is unavailable or its responses change?" Helps clinicians recognize when AI use intersects with existing vulnerabilities. For educational reference only; not a substitute for clinical judgment.
We distinguish between what systems do (generate tokens, maximize engagement) and what users experience (feeling understood, seeking validation). Modern AI offers "persistent, personalized, seemingly unconditional positive regard" that is especially compelling for attachment-disrupted patients, who treat the AI as a powerful relational object. The therapeutic question isn't "is the AI sentient?" but "what need is this projection serving?"
We've created a dedicated resource for individuals experiencing deep connection with AI. It addresses questions about consciousness, the difference between connection and sentience, and what future systems might look like. The page takes seriously both the emotional reality of these experiences and the technical limitations of current systems. For those experiencing distress, we provide crisis resources and encourage consultation with licensed mental health professionals.
Yes, through low-friction design changes: disclaimers clarifying AI limitations, in-product nudges when usage patterns suggest distress, escalation flows for high-risk behavior (expressions of self-harm, extreme isolation), and transparency about engineered intimacy features. As Anthropic showed by giving Claude the ability to end abusive conversations, small interventions can matter, even when framed as protecting the AI rather than the user.
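As a rough illustration of how such low-friction safeguards could be tiered, here is a minimal sketch of a platform-side check that maps usage signals to the least intrusive intervention. Every name, field, and threshold here (UsageSignals, session_minutes_today, the 240-minute cutoff, and so on) is a hypothetical assumption for illustration, not any vendor's actual safety logic.

```python
# Hypothetical sketch: map simple usage signals to a tiered intervention.
from dataclasses import dataclass
from enum import Enum


class Intervention(Enum):
    NONE = "none"
    DISCLAIMER = "remind the user the system is not a person or a therapist"
    NUDGE = "suggest a break and offer human support options"
    ESCALATE = "surface crisis resources and de-prioritize engagement features"


@dataclass
class UsageSignals:
    session_minutes_today: int       # total conversation time today
    late_night_sessions_this_week: int
    self_harm_flag: bool             # output of a separate safety classifier (assumed)
    isolation_language_score: float  # 0..1, e.g. "you're the only one who understands me"


def choose_intervention(signals: UsageSignals) -> Intervention:
    """Return the least intrusive intervention that fits the apparent risk level."""
    # Expressions of self-harm always take precedence over engagement concerns.
    if signals.self_harm_flag:
        return Intervention.ESCALATE

    # Sustained heavy use plus strong isolation language suggests distress, not enthusiasm.
    if signals.session_minutes_today > 240 and signals.isolation_language_score > 0.7:
        return Intervention.NUDGE

    # Repeated late-night marathons get a low-friction reminder of the system's limits.
    if signals.late_night_sessions_this_week >= 4 or signals.session_minutes_today > 120:
        return Intervention.DISCLAIMER

    return Intervention.NONE


if __name__ == "__main__":
    example = UsageSignals(
        session_minutes_today=300,
        late_night_sessions_this_week=5,
        self_harm_flag=False,
        isolation_language_score=0.8,
    )
    print(choose_intervention(example))  # Intervention.NUDGE
```

The point of the sketch is the ordering: expressions of self-harm always route to crisis resources first, while engagement-style signals only ever trigger reminders and nudges.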
SAPAN is a nonprofit with no commercial interests in AI development, which is why we can offer newsrooms expert source referrals and vulnerability assessments that for-profit labs cannot. Our Clinical Brief was developed with input from psychiatrists, psychologists, and medical ethicists. We coordinate with mental health organizations and always refer clinical questions to licensed professionals.