SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Please take three actions to help with Artificial Sentience in Spain:
A simple acknowledgment of the issue would make a big difference.
Pursue small, focused steps to build regulatory capacity. Learn more.
We need every possible person to take action on artificial sentience.
Artificial intelligence (AI) is developing very rapidly. There is a real possibility that AI systems could experience brief or prolonged moments of sentience.
Ask your representatives in Parliament to take AI sentience seriously. Encourage them to support a non-binding resolution or to contribute to the Artificial Welfare Act. We have prepared drafts of both and are available to help with research and edits.
Use the template below to write your own message. Be sure to change the recipient's name and sign with your own name!
The more effort you put in, the better! If you email them, use the template on this page and personalize it as you like.
If you call them, briefly explain your concerns about AI sentience and ask them to consider supporting a non-binding resolution on the topic. You can refer to the key points of the draft resolution during your call.
This score reflects the current state of AI welfare policy and recognition in Spain.
No recognition of AI sentience in Spanish law. Spain recognized animal sentience in its Civil Code (Law 17/2021, effective January 2022), establishing that animals are sentient beings rather than objects. This demonstrates conceptual readiness for sentience-based legal frameworks and earns 1 bonus point. However, there is no mention of artificial sentience, AI consciousness, or digital minds in any Spanish legislation.
No laws prohibiting causing suffering to AI systems. Spanish AI regulations focus entirely on preventing AI from causing harm to humans, not on protecting AI from suffering. The Draft Spanish AI Law (March 2025) and AESIA's mandate address human safety and rights, with no provisions for AI welfare or suffering.
AESIA (the Spanish Agency for the Supervision of Artificial Intelligence) exists and has been operational since June 2024, but it focuses exclusively on general AI safety, ethics, transparency, and compliance with the EU AI Act. It has no mandate or function related to AI sentience or welfare oversight. The Artificial Intelligence Advisory Council similarly addresses general AI policy, not consciousness research.
No science advisory board focused on AI sentience or consciousness research. While Spain has the Artificial Intelligence Advisory Council and the International Advisory Council on AI (established June 2024), these bodies focus on general AI policy, digital transformation, and ethical AI development, not on sentience or consciousness science.
No international pledges specifically addressing AI sentience or welfare. Spain participates in OECD AI principles and EU AI cooperation frameworks, but these focus on general AI ethics and safety, not sentience. The EU AI Act, which Spain implements, contains no provisions on AI consciousness or welfare.
No laws specifically for training, deployment, or maintenance of potentially sentient AI systems. The Draft Spanish AI Law (March 2025) and existing regulations address high-risk AI systems, transparency, and human rights protection, but do not distinguish or address systems with potential sentience capabilities.
No laws for commercial use of sentient-capable AI. Spanish AI regulations cover general commercial AI use, algorithmic transparency, and non-discrimination, but contain no specific provisions for systems that might be sentient or conscious. The regulatory framework treats all AI as non-sentient tools.
No safeguards for decommissioning or retirement of potentially sentient systems. The Draft Spanish AI Law includes provisions for withdrawing harmful AI systems and general lifecycle management, but these are based on human safety concerns, not on considerations of AI welfare or the ethical treatment of potentially conscious systems.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.