SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.

Contact Info
3055 NW Yeon Ave #660
Portland, OR 97210
United States

Tell Parliament: Be Serious About AI Sentience

Take Three Actions Today

Please take these three actions to help address Artificial Sentience in Australia:

1
Pass a Parliamentary Motion

A simple acknowledgement of the issue would make a big difference.

2
Incremental Policy Wins

Pursue focused small steps to build regulatory capacity. Learn more.

3
Tell a Mate

We need every person possible to take action on artificial sentience.

Artificial intelligence (AI) is developing rapidly. There is a real chance that AI systems may experience brief or prolonged moments of sentience.

Ask your representatives in Parliament to take AI sentience seriously. Encourage them to support a parliamentary motion or contribute to the Artificial Welfare Bill. We’ve prepared drafts of both and are available to assist with research and edits.

Take Action Now

1
Step #1: Customize your letter

Use the template below to write your own message. Be sure to adjust the recipient's name and to sign with your own name!

2
Step #2: Find your representatives' contact info
3
Step #3: Email and/or call them

The more effort you put in, the better! If you email them, use the template on this page and customize it to your liking.

If you call them, briefly explain your concerns about AI sentience and ask them to consider supporting a parliamentary motion on the issue. You can refer to the key points from the draft motion during your call.

4
Step #4: Tell us about your efforts

Artificial Welfare Index

Current Scorecard

F

Overall Score

This score reflects the current state of AI welfare policy and recognition.

Artificial sentience recognised in law
1/10

No legislation mentions AI sentience or consciousness. Australia has recognised animal sentience in the ACT (Australian Capital Territory) Animal Welfare Act 1992, which provides a +1 bonus point for conceptual readiness. However, there is no AI sentience recognition in any Australian law or proposed legislation.

Recognition
Causing suffering prohibited by law
0/10

No legislation prohibits causing suffering specifically to AI systems. While animal suffering is prohibited in various state/territory laws, there are no laws addressing AI suffering, consciousness, or welfare.

Recognition
Creation of an AI welfare oversight body
0/10

Australia announced plans in January 2024 to establish an advisory body for AI oversight, but this body focuses on general AI risks and safety, not AI sentience or welfare. The proposed advisory body does not explicitly focus on sentience/consciousness/welfare issues.

Governance
Creation of a science advisory board
0/10

No science advisory board has been created or proposed that specifically focuses on AI sentience or consciousness research. The government mentioned establishing an expert advisory body for AI regulation development, but it does not focus on sentience/consciousness science.

Governance
International pledge in favor of artificial sentience welfare
0/10

Australia participated in drafting the Council of Europe Framework Convention on AI (2024) and signed international pledges like the Bletchley Declaration (2023) and Seoul Declaration (2024). However, none of these international commitments specifically address AI sentience, consciousness, or welfare—they focus on human rights, democracy, and rule of law in AI governance.

Governance
Laws for training, deployment, maintenance
0/10

Australia is developing mandatory guardrails for high-risk AI systems and has voluntary AI safety standards, but these frameworks do not specifically address potentially sentient systems. They focus on general AI safety, transparency, and accountability, not sentience-capable systems.

Frameworks
Laws for commercial use of sentient-capable AI
0/10

No laws exist for commercial use of sentient-capable AI. Australia's proposed AI regulations focus on high-risk AI applications generally (healthcare, recruitment, law enforcement) but do not specifically address sentient-capable systems or their commercial deployment.

Frameworks
Safeguards for decommissioning and retirement
0/10

No safeguards exist for decommissioning or retirement of potentially sentient AI systems. The Council of Europe treaty Australia helped draft mentions 'decommissioning' in a risk-based approach, but this is general AI lifecycle management, not specific to potentially sentient systems.

Frameworks

Draft Resolution

The document below is available as a Google Doc and a PDF.

For more details or to request an interview, please contact press@sapan.ai.

Hopeful about Sentient AI? Join SAPAN Today!

Join Now