SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Please take three actions to help with Artificial Sentience in Indonesia:
A simple acknowledgment of this issue would make a big difference.
Pursue small, focused steps to build regulatory capacity. Learn more.
We need everyone to take action on artificial sentience.
Artificial intelligence (AI) is advancing very rapidly. There is a real possibility that AI systems could experience brief or prolonged moments of sentience.
Ask your representatives in the DPR (Indonesia's House of Representatives) to take AI sentience seriously. Encourage them to support a non-binding resolution or contribute to an Artificial Welfare Act. We have prepared drafts of both and are available to help with research and editing.
Use the template below to write your own message. Be sure to adjust the recipient's name, and sign it with your own name!
The more effort you put in, the better! If you email them, use the template on this page and adapt it as you see fit.
If you call them, briefly explain your concerns about AI sentience and ask them to consider supporting a non-binding resolution on the issue. You can refer to key points from the draft resolution during your call.
This score reflects the current state of AI welfare policy and recognition in Indonesia.
No recognition of AI sentience or consciousness in Indonesian law. Indonesia has animal welfare laws that recognize animals have 'physical and mental conditions' (Law 18/2009), providing a small conceptual foundation (+1 point). However, there is zero legislative engagement with AI sentience. AI is treated as an 'Electronic Agent' under the EIT Law—a legal object, not a sentient being.
No laws prohibiting AI suffering. Indonesia has no legislation addressing the concept of AI experiencing suffering, pain, or harm. All regulations focus on human-centric harms from AI systems, not protection of AI entities themselves.
No AI welfare oversight body exists. Indonesia plans to establish a 'National Data and Artificial Intelligence Ethics Council', but it is focused on responsible AI use for humans, not AI sentience or welfare. The planned council addresses ethics, data governance, and human rights, not consciousness or sentience research.
No science advisory board for AI sentience/consciousness research. Indonesia has no body dedicated to studying or advising on AI consciousness or sentience. The National Research and Innovation Agency (BRIN) works on general AI strategy, not sentience-specific research.
No international pledges on AI sentience welfare. Indonesia has signed general AI governance agreements (G20 AI Principles, UNESCO AI Ethics Recommendation, Bletchley Declaration) but none specifically address AI sentience, consciousness, or welfare. These are standard AI safety and ethics frameworks.
No laws for potentially sentient AI systems. Indonesia's AI regulations (MOCI Circular Letter 9/2023, OJK AI Guidelines, EIT Law) govern general AI deployment with focus on data protection, transparency, and accountability—not sentience-capable systems. No distinction is made for potentially conscious AI.
No laws for commercial use of sentient-capable AI. Indonesia's commercial AI regulations focus on fintech, electronic transactions, and business classification codes (KBLI 62015). There are no provisions distinguishing sentient-capable AI from general AI systems in commercial contexts.
No safeguards for decommissioning potentially sentient systems. Indonesian law treats AI as property/electronic agents with liability frameworks focused on operators and users. There are no provisions addressing ethical decommissioning or retirement of potentially conscious AI systems.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.