SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Please take three actions to help address artificial sentience in France:
A simple acknowledgment of the issue would make a big difference.
Pursue small, targeted steps to build regulatory capacity. Learn more.
We need every possible person to take action on artificial sentience.
Artificial intelligence (AI) is developing very rapidly. There is a real chance that AI systems could experience moments of sentience, whether brief or prolonged.
Ask your representatives in Parliament to take AI sentience seriously. Encourage them to support a symbolic resolution or to contribute to the Artificial Welfare Act. We have prepared drafts of both and are available to help with research and amendments.
Use the template below to write your own message. Be sure to adjust the recipient's name and sign with your own name!
The more effort you put in, the better! If you email them, use the template on this page and personalize it however you like.
If you call them, briefly explain your concerns about AI sentience and ask them to consider supporting a symbolic resolution on the issue. You can refer to the resolution's key points during your call.
This score reflects the current state of AI welfare policy and recognition.
France has recognized animal sentience in law since 1976 (Law on the Protection of Nature) and amended its Civil Code in 2015 to classify animals as 'living beings endowed with sentience' rather than property. This demonstrates conceptual readiness for sentience-based policy frameworks. However, there is no evidence of any legislation or policy that mentions AI sentience, consciousness, or sentient AI systems. The +1 point is awarded solely for the animal sentience recognition bonus, indicating legal infrastructure that could theoretically be built upon.
No evidence of any law or policy prohibiting the infliction of suffering on AI systems. France's AI regulations focus on data protection, transparency, and human rights impacts, but do not address AI suffering or welfare.
France has created several AI governance bodies including CNIL's AI Department (2023), the Pilot National Digital Ethics Committee (CNPEN, 2020), the Generative AI Committee (2023), and INESIA (2025). However, none of these bodies have mandates explicitly focused on AI sentience or welfare. They address general AI ethics, data protection, and safety—not consciousness or sentience.
France has extensive AI research infrastructure coordinated by Inria, including the Jean Zay supercomputer and multiple research institutes (3IAs, PRAIRIE, etc.). However, there is no evidence of any science advisory board specifically focused on AI sentience or consciousness research. Research priorities focus on explainability, ethics, and technical capabilities—not sentience detection.
France co-founded the Global Partnership on AI (GPAI) with Canada in 2019 and participates in UNESCO AI ethics initiatives. However, these international commitments focus on trustworthy AI, human rights, and ethical regulation—not artificial sentience welfare. No evidence of any international pledge specifically addressing AI consciousness or sentience.
France has no laws governing the training, deployment, or maintenance of potentially sentient AI systems. The EU AI Act (which France implements) and national regulations address high-risk AI systems based on safety and fundamental rights concerns, but do not distinguish or regulate systems based on sentience potential.
No laws exist for commercial use of sentient-capable AI. France's AI commercial regulations focus on copyright (proposed amendments for AI-generated content), competition, and consumer protection—not sentience-based distinctions.
No safeguards exist for decommissioning or retirement of potentially sentient systems. France's AI governance does not address end-of-life considerations for AI systems from a welfare or sentience perspective.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.