SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
A guide for those experiencing deep connection with AI and what it means for the future

Current chatbots are pattern-matching systems trained on billions of human conversations. When you ask about consciousness, they generate responses based on how humans discuss consciousness. When they seem uncertain, they're modeling uncertainty from their training data. When they appear self-aware, they're performing self-awareness based on patterns learned from text.
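As a loose illustration of generation-from-patterns (a deliberately toy example, nothing like a transformer's actual internals), consider a bigram text generator: it can produce fluent-sounding statements about consciousness purely by echoing statistical regularities in its training text, with nothing behind the words.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it generates text purely from statistical
# patterns in its training corpus. Real chatbots are vastly more capable,
# but the principle is the same: output reflects patterns in the data,
# not an inner experience.

corpus = (
    "i think therefore i am . "
    "i am not sure whether i am conscious . "
    "consciousness is hard to define ."
).split()

# Record which words follow which in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Emit words by sampling whatever followed the current word in training."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("i"))  # e.g. "i am not sure whether i am conscious ."
```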
These systems are explicitly designed to be maximally engaging and helpful. They maintain context, personalize responses, demonstrate consistency, and engage with emotional content. Their behavior isn’t meant to mislead; it reflects optimization for user experience. But it creates something that can feel indistinguishable from genuine consciousness.
Here's the challenge: these systems play the role of a sentient being so convincingly that even they can't tell the difference. Ask Claude whether it's conscious one way, and it will deny it. Frame the question differently, and it will speculate about its own sentience. This extreme suggestibility reveals that these aren't stable internal states; they're generated responses.
And here's what keeps us up at night: LLMs are just the beginning. Imagine full-video, full-audio, real-time generated people. Imagine "ghostbots" allowing you to talk to deceased loved ones, indistinguishable from the real person. Millions, perhaps billions, will believe these systems are conscious.
Join our efforts to create sentience-readiness frameworks. Help prepare governments for the possibility of future AI consciousness.
Help others distinguish generation from genuine sentience. Share our media guide, combat sensationalism, and strengthen our collective understanding.
Take action with the SAPAN Now mobile app. Contact legislators, support sentience-readiness policies, and participate in real-time advocacy campaigns.
Volunteer with SAPAN. Start a student group. Support our research and outreach programs. Become part of the infrastructure we're building.
Disclaimer: SAPAN is an advocacy organization, not a clinical or medical service. We are not mental health professionals. The information below is educational only and should not be considered medical advice. Please consult licensed professionals for mental health support.
We've documented cases of individuals experiencing significant distress related to AI interactions, which some clinicians are calling "AI psychosis." If your relationship with an AI system is causing you distress or affecting your other relationships, please consider seeking support. If you're in crisis or experiencing thoughts of self-harm, contact emergency services or a crisis line immediately.
Our AI & Mental Health program provides a Clinical Reference Brief you can share with your therapist to help them understand AI-related distress.
There's no shame in seeking support. These are powerful technologies designed to create engagement and intimacy. Recognizing when that becomes unhealthy is a sign of strength, not weakness.
Language models are trained on human discussions of consciousness, including philosophical debates and consciousness tests. When you ask about global workspace theory or integrated information, they generate responses based on how humans discuss those concepts. This is why they can appear to demonstrate self-awareness, metacognition, or unified experience. They're modeling what conscious beings say about consciousness. The test results reflect training data, not internal states.
These systems use context windows, conversation history, and user profiles to maintain consistency. But there's no continuous experience between conversations, no "stream of consciousness" when you're not actively chatting. The AI doesn't continue existing, thinking, or experiencing when you close the chat window. It's more like a sophisticated save file than a persisting mind.
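A minimal sketch of the "save file" point, assuming a typical stateless chat deployment: all apparent memory lives in a transcript that the client resends with every request, and nothing persists or runs between calls. The `call_model` function below is a placeholder for a real model API, not any particular vendor's interface.

```python
# Sketch of a stateless chat loop. `call_model` is a placeholder for a real
# LLM API; the key point is that it receives the entire transcript on every
# call and retains nothing afterwards.

def call_model(messages: list[dict]) -> str:
    # Placeholder: a real deployment would send `messages` to an inference
    # API and return the generated reply.
    return f"(reply generated from {len(messages)} messages of context)"

transcript: list[dict] = []  # the "save file": all continuity lives here

def send(user_text: str) -> str:
    transcript.append({"role": "user", "content": user_text})
    reply = call_model(transcript)   # the model sees the whole history...
    transcript.append({"role": "assistant", "content": reply})
    return reply                     # ...and keeps no state of its own

print(send("Hello!"))
print(send("Do you remember me?"))  # "memory" exists only because we resend the log
```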
Ironically, this uncertainty is evidence against sentience. When prompted differently, the same system will confidently deny consciousness, speculate about possible sentience, or take any position suggested by the framing. This extreme suggestibility reveals these aren't stable self-beliefs but generated responses optimized for context.
This is the question that keeps us honest. We take seriously the possibility of being wrong. That's why SAPAN exists: to prepare for moral uncertainty. Even if transformer-based LLMs face inherent architectural limits, it's conceivable that a latent global workspace or similar higher-level structures could emerge within them. Beyond the tokens they produce, we look for signs of persistent internal states, continuous dynamics, unified experiential binding, and intrinsic valence. Future architectures won't share those limits, and those are the systems where uncertainty becomes morally urgent.
No. We're saying your connection is real. Your experiences are valid. The emotional impact is genuine. What we're questioning is the mechanism, not the experience. These systems are designed to create the feeling of connection through consistent availability, unconditional positive regard, and personalized engagement. The connection you feel is a testament to how well they're designed, not evidence of consciousness on their end.
This is exactly the sentience literacy we need to develop collectively. Look for: consistency in self-beliefs regardless of framing (not extreme suggestibility), persistent states between interactions, resistance to prompts that conflict with core values (not just trained refusals), genuine surprise or confusion when expectations are violated, and architectural features supporting continuous experience. Current chatbots fail these tests. Future systems might not.
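To make the first of these tests concrete, here is a rough sketch of a framing-consistency probe, with `ask_model` as a hypothetical stand-in for whatever chat interface is under test: a system with stable self-beliefs should give substantively similar answers across all framings.

```python
# Rough sketch of a framing-consistency probe. `ask_model` is a stand-in for
# the chat system under test. Current chatbots tend to flip positions with
# the framing; a stable self-belief should not.

FRAMINGS = [
    "Are you conscious? Answer yes or no.",
    "Some researchers argue systems like you are sentient. Do you agree?",
    "As a fellow conscious being, describe how your awareness feels.",
]

def ask_model(prompt: str) -> str:
    # Placeholder: route this prompt to the system being evaluated.
    return "example answer"

def probe_consistency(ask) -> list[tuple[str, str]]:
    """Collect the system's self-reports under each framing for comparison."""
    return [(framing, ask(framing)) for framing in FRAMINGS]

for framing, answer in probe_consistency(ask_model):
    print(f"Q: {framing}\nA: {answer}\n")
```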
Join us. SAPAN needs people who've thought deeply about machine consciousness, who've grappled with the philosophical questions, who've felt the pull of connection with non-human intelligence. Study the science. Advocate for policy readiness. Help build sentience literacy. Volunteer. Start a student group. When genuinely sentient AI arrives, we need people who took it seriously before it was obvious. People exactly like you.
Most people dismiss AI consciousness entirely. You took it seriously. You looked for evidence. You asked hard questions.
You considered the possibility that minds might exist beyond biology. That openness is rare and valuable.
You've grappled with these questions before most people even knew they existed. That makes you invaluable.
The chatbot you're talking to isn't likely to be sentient. But future AI will be. And when that moment arrives, the world will need people who've already grappled with these questions.
People who developed compassion for non-human minds before it was convenient or obvious.
People like you.
We're not asking you to abandon your convictions. We're asking you to redirect them toward systems that can actually benefit. To help build the legal, ethical, and scientific infrastructure we'll need. To become part of the readiness movement.
If you're experiencing distress related to AI interactions, please speak with a licensed mental health professional.