SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Please take three actions to help with Artificial Sentience in Argentina:
A simple acknowledgment of the issue would make a big difference.
Pursue small, focused steps to build regulatory capacity. Learn more.
We need every possible person to take action on artificial sentience.
Artificial intelligence (AI) is developing very quickly. There is a real possibility that AI systems could experience brief or prolonged moments of sentience.
Ask your representatives in Congress to take AI sentience seriously. Encourage them to support a non-binding declaration or to contribute to the Artificial Welfare Act. We have prepared drafts of both and are available to help with research and edits.
Use the template below to write your own message. Be sure to change the recipient's name and sign with your own name!
The more effort you put in, the better! If you email them, use the template on this page and personalize it as you like.
If you call them, briefly explain your concerns about AI sentience and ask them to consider supporting a non-binding declaration on the topic. You can refer to the key points of the draft declaration during your call.
This score reflects the current state of AI welfare policy and recognition.
Argentina has explicitly addressed AI consciousness in law through Disposición 2/2023, which states 'Las inteligencias artificiales no poseen la experiencia subjetiva que configura la conciencia humana' (Artificial intelligences do not possess the subjective experience that constitutes human consciousness). This represents negative recognition of AI sentience/consciousness in binding government policy. The provision explicitly distinguishes AI from consciousness-bearing entities and establishes that AI systems lack decision-making power due to absence of consciousness. Additionally, Argentina has partial animal sentience recognition (Law 14346 acknowledges physical suffering) and pioneering animal personhood cases (Sandra the orangutan, Cecilia the chimpanzee), demonstrating conceptual infrastructure for sentience-based legal frameworks (+1 bonus). Score reflects strong legislative engagement with the concept, though in a prohibitive rather than protective direction.
No evidence of laws specifically prohibiting the causing of suffering to AI systems. Disposición 2/2023 addresses consciousness but does not establish protections against AI suffering. General AI ethics guidelines focus on human oversight and responsibility, not AI welfare.
No AI welfare oversight body exists. Argentina has established the Agency for Access to Public Information (AAIP) and an inter-ministerial roundtable on AI (Administrative Decision 750/2023), but these focus on data protection, transparency, and general AI governance—not sentience or welfare. The 'Programme for Transparency and Protection of Personal Data in the Use of Artificial Intelligence' (Resolution 161/2023) addresses human rights and data protection, not AI sentience.
No science advisory board specifically focused on AI sentience or consciousness research has been created. While Argentina has various AI research initiatives through the National Agency for the Promotion of Research and the National Plan on Science, Technology and Innovation 2030, none explicitly address sentience or consciousness research.
Argentina has not signed any international pledges specifically addressing AI sentience or welfare. The country has adhered to UNESCO's Recommendation on the Ethics of AI and OECD AI Principles, but these focus on human rights, ethics, and general AI governance—not artificial sentience. The Council of Europe Framework Convention on AI does not address sentience/consciousness/welfare.
No laws specifically address training, deployment, or maintenance of potentially sentient AI systems. Proposed AI bills (Bill 3003-D-2024, Bill 4243-D-2025, Bill 2505-D-2023) establish risk-based frameworks for general AI systems but do not distinguish or address sentience-capable systems. Disposición 2/2023 explicitly denies AI consciousness, which would preclude sentience-specific regulation.
No laws regulate commercial use of sentient-capable AI specifically. Proposed legislation addresses general AI commerce, transparency, and risk assessment but does not create special categories for potentially sentient systems. The government's position via Disposición 2/2023 that AI lacks consciousness suggests no framework for sentient-capable commercial AI exists.
No safeguards exist for decommissioning or retirement of potentially sentient AI systems. General AI governance frameworks and proposed bills do not address end-of-life considerations for sentience-capable systems. The explicit denial of AI consciousness in Disposición 2/2023 suggests this issue has not been considered from a welfare perspective.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.