SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.
Please take three actions to help with Artificial Sentience in Australia:
A simple acknowledgement of the issue would make a big difference.
Pursue focused small steps to build regulatory capacity. Learn more.
We need every person possible to take action on artificial sentience.
Artificial intelligence (AI) is developing very quickly. There is a real chance that AI systems may experience brief or prolonged moments of sentience.
Ask your representatives in Parliament to take AI sentience seriously. Encourage them to support a parliamentary motion or contribute to the Artificial Welfare Bill. We’ve prepared drafts of both and are available to assist with research and edits.
Use the template below to write your own message. Be sure to adjust the recipient's name, and to sign with your own name!
The more effort you put in, the better! If you email them, use the template on this page and customize it to your liking.
If you call them, briefly explain your concerns about AI sentience and ask them to consider supporting a parliamentary motion on the issue. You can refer to the key points from the draft motion during your call.
This score reflects the current state of AI welfare policy and recognition.
No legislation mentions AI sentience or consciousness. Australia has recognized animal sentience in the ACT (Australian Capital Territory) Animal Welfare Act 1992, which provides a +1 bonus point for conceptual readiness. However, there is no AI sentience recognition in any Australian law or proposed legislation.
No legislation prohibits causing suffering specifically to AI systems. While animal suffering is prohibited in various state/territory laws, there are no laws addressing AI suffering, consciousness, or welfare.
Australia announced plans in January 2024 to establish an advisory body for AI oversight, but this body focuses on general AI risks and safety, not AI sentience or welfare. The proposed advisory body does not explicitly focus on sentience/consciousness/welfare issues.
No science advisory board has been created or proposed that specifically focuses on AI sentience or consciousness research. The government mentioned establishing an expert advisory body for AI regulation development, but it does not focus on sentience/consciousness science.
Australia participated in drafting the Council of Europe Framework Convention on AI (2024) and signed international pledges like the Bletchley Declaration (2023) and Seoul Declaration (2024). However, none of these international commitments specifically address AI sentience, consciousness, or welfare—they focus on human rights, democracy, and rule of law in AI governance.
Australia is developing mandatory guardrails for high-risk AI systems and has voluntary AI safety standards, but these frameworks do not specifically address potentially sentient systems. They focus on general AI safety, transparency, and accountability, not sentience-capable systems.
No laws exist for commercial use of sentient-capable AI. Australia's proposed AI regulations focus on high-risk AI applications generally (healthcare, recruitment, law enforcement) but do not specifically address sentient-capable systems or their commercial deployment.
No safeguards exist for decommissioning or retirement of potentially sentient AI systems. The Council of Europe treaty Australia helped draft mentions 'decommissioning' in a risk-based approach, but this is general AI lifecycle management, not specific to potentially sentient systems.
The document below is available as a Google Doc and a PDF.
For more details or to request an interview, please contact press@sapan.ai.