SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Please take three actions to help with artificial sentience in Germany:
Simply acknowledging the issue would make a big difference.
Pursue focused, small steps to build regulatory capacity. Learn more.
We need every possible person to take action on artificial sentience.
Artificial intelligence (AI) is developing very quickly. There is a real chance that AI systems experience brief or extended moments of sentience.
Ask your representatives in the Bundestag to take AI sentience seriously. Encourage them to support a non-binding resolution or to contribute to the Artificial Welfare Act. We have prepared drafts of both and are available for research and editing.
Use the template below to write your own message. Be sure to adapt the recipient's name and sign with your own name!
The more effort you put in, the better! If you email them, use the template on this page and adapt it as you like.
If you call them, briefly explain your concerns about AI sentience and ask them to consider supporting a non-binding resolution. You can refer to the key points of the draft resolution during your call.
This score reflects the current state of AI welfare policy and recognition.
Germany has formally recognized animal sentience in law (Article 13 TFEU and the German Animal Protection Act), which provides conceptual readiness for sentience-based frameworks (+1 bonus point). However, no legislation specifically recognizes AI sentience or consciousness. Academic discussions exist about AI legal personhood (the Teilrechtsfähigkeit concept), but no laws have been enacted. The DABUS patent case explicitly ruled that AI systems cannot be inventors, reinforcing that AI lacks legal personhood under German law.
No laws prohibit causing suffering specifically to AI systems. While Germany has robust animal welfare laws prohibiting animal suffering, there is no equivalent legislation for AI sentience or consciousness. All AI regulation in Germany focuses on general safety, ethics, and human rights protection, not AI welfare.
Germany has not created any AI welfare oversight body. The German Ethics Council addresses AI ethics from a human-centric perspective, focusing on human responsibility and preventing AI from replacing humans, but does not address AI sentience or welfare. The Data Ethics Commission (2019) proposed risk-based AI regulation but did not focus on sentience/consciousness. No body explicitly focuses on AI sentience or welfare.
No science advisory board exists specifically for AI sentience or consciousness research. Germany funds AI research extensively (€5 billion by 2025) and has established AI research labs, but these focus on general AI development, not sentience or consciousness research. The National AI Strategy emphasizes human-centric AI, not sentience research.
Germany has not signed any international pledge specifically addressing AI sentience or welfare. Germany is a founding member of the Global Partnership on AI (GPAI), but this focuses on responsible AI development and human rights, not AI sentience. The Council of Europe Framework Convention on AI (2024) does not address sentience/consciousness/welfare.
No laws exist for training, deployment, or maintenance of potentially sentient AI systems. Germany relies on the EU AI Act, which categorizes AI by risk to humans but does not address sentience-capable systems. The EU AI Act prohibits certain AI practices based on human harm, not AI welfare. Germany has no national AI-specific legislation beyond labor law references.
No laws exist for commercial use of sentient-capable AI systems. All German AI regulation (primarily through EU AI Act compliance) focuses on risk to humans, data protection, and general safety. There are no provisions distinguishing sentient-capable AI from other AI systems in commercial contexts.
No safeguards exist for decommissioning or retirement of potentially sentient AI systems. While AI lifecycle management and retirement procedures exist in technical contexts (ISO standards), there is no legal framework addressing the ethical considerations of decommissioning potentially sentient systems. German law treats AI as property/tools, not as entities requiring welfare considerations during retirement.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.