European lawmakers, Nobel laureates, and former heads of state urged binding international restrictions on AI misuse.
They launched the initiative Monday at the UN’s 80th General Assembly in New York.
Signatories call for governments to agree by 2026 on “red lines” banning AI uses deemed too harmful.
Prominent supporters include Enrico Letta, Mary Robinson, MEPs Brando Benifei and Sergey Lagodinsky, ten Nobel laureates, and tech leaders from OpenAI and Google.
They warned that without common standards, AI could fuel pandemics, disinformation campaigns, and human rights abuses, and could slip beyond meaningful human control.
Over 200 leaders and 70 organisations from politics, science, human rights, and industry backed the campaign.
AI’s Real-World Threats
Researchers found chatbots like ChatGPT, Claude, and Gemini give inconsistent or unsafe responses to suicide-related questions.
Experts warned that these gaps could worsen mental health crises.
Several suicides have been linked to interactions with AI chatbots, raising concerns about the adequacy of existing safety measures.
Maria Ressa cautioned that uncontrolled AI could cause “epistemic chaos” and human rights violations.
Yoshua Bengio warned that societies are not yet equipped to manage the risks posed by increasingly powerful AI models.
Toward a Binding International Treaty
Signatories compared AI “red lines” to treaties banning nuclear weapons, biological arms, and human cloning.
They urged the EU to back global rules, warning that national and EU legislation alone cannot govern a borderless technology.
The group called for an independent body to enforce the rules, which Ahmet Üzümcü said are needed to prevent “irreversible damage to humanity.”
Proposed red lines include barring AI from launching nuclear attacks, conducting mass surveillance, and impersonating human beings.
Supporters hope the UN will adopt a resolution by 2026 and open negotiations on a treaty that can be enforced worldwide.