SAPAN strengthens governance, standards, and public literacy to prevent digital suffering and prepare institutions for welfare-relevant AI systems.

Contact Info
3055 NW Yeon Ave #660
Portland, OR 97210
United States

Norway

Tell Stortinget: Take AI Sentience Seriously

National Policy

Do Three Things Today

Please do three things to help with artificial sentience in Norway:

1. Adopt a Private Member's Motion (Representantforslag)

A simple acknowledgement of the issue would make a big difference.

2. Incremental Policy Wins

Pursue focused, small steps to build regulatory capacity. Learn more.

3. Tell a Friend

We need every possible person to take action on artificial sentience.

Artificial intelligence (AI) is developing very rapidly. There is a real chance that AI systems could experience brief or longer moments of consciousness.

Ask your representatives in Stortinget to take AI sentience seriously. Urge them to support a motion or to contribute to the Artificial Welfare Act (Lov om Kunstig Velferd). We have prepared drafts of both and are available to assist with research and editing.

Act Now

Step #1: Customize Your Letter

Use the template below to write your own message. Be sure to adjust the recipient's name and sign with your own name!

Step #2: Find Contact Information for Your Representatives

Step #3: Email and/or Call Them

The more effort you put in, the better! If you send an email, use the template on this page and adapt it as you wish.

If you call, briefly explain your concerns about AI sentience and ask them to consider supporting a motion on the issue. You can refer to the key points from the draft during the call.

Step #4: Tell Us About Your Efforts

Artificial Welfare Index

Current Scorecard

Overall Score: F

This score reflects the current state of AI welfare policy and recognition.

Artificial sentience recognized in law (Recognition): 1/10
No recognition of AI sentience or consciousness in Norwegian law or policy. Norway's 2010 Animal Welfare Act recognizes animals as sentient beings with intrinsic value, providing a conceptual foundation for sentience-based frameworks (+1 bonus point). However, there is no mention of artificial sentience in any AI legislation or strategy documents.

Causing suffering prohibited by law (Recognition): 0/10
No laws specifically prohibit causing suffering to AI systems. The EU AI Act (being implemented in Norway) prohibits certain manipulative AI systems based on human rights concerns, not AI welfare. No evidence of legislation addressing AI suffering or consciousness-based harm.

Creation of an AI welfare oversight body (Governance): 0/10
Norway has established AI Norway (KI-Norge) and designated Nkom as the coordinating supervisory authority for AI regulation. However, these bodies focus on safety, ethics, and compliance with the EU AI Act, not on AI sentience or welfare. The Norwegian Data Protection Authority's sandbox and various oversight bodies address data protection and responsible AI, but make no mention of consciousness or sentience.

Creation of a science advisory board (Governance): 0/10
No science advisory board focused on AI sentience or consciousness research. Norway has research ethics committees and participates in EU AI forums, but none specifically address sentience or consciousness. The National AI Strategy emphasizes ethical AI and trustworthiness, but does not mention sentience research.

International pledge in favor of artificial sentience welfare (Governance): 0/10
Norway signed the Council of Europe Framework Convention on AI (2024), but this treaty does not address AI sentience or welfare; it focuses on human rights, democracy, and the rule of law. No international pledges found regarding artificial sentience welfare.

Laws for training, deployment, maintenance (Frameworks): 0/10
Norway is implementing the EU AI Act with risk-based regulations for high-risk AI systems, but these regulations are based on human safety and fundamental rights, not on potential sentience. No laws specifically governing training, deployment, or maintenance of potentially sentient systems.

Laws for commercial use of sentient-capable AI (Frameworks): 0/10
The Norwegian AI Act (draft, expected August 2026) regulates commercial AI based on risk categories (prohibited, high-risk, limited-risk, minimal-risk), but classification is based on human impact, not sentience capability. No laws addressing commercial use of sentient-capable AI.

Safeguards for decommissioning and retirement (Frameworks): 0/10
No safeguards for decommissioning or retirement specifically for potentially sentient systems. The EU AI Act implementation includes lifecycle requirements for high-risk systems, but these focus on safety, documentation, and human oversight, not on considerations of AI consciousness or welfare during decommissioning.

Draft Resolution

The document below is available as a Google Doc and as a PDF.

For more details or to request an interview, please contact press@sapan.ai.

Hopeful about Sentient AI? Join SAPAN Today!

Join Now