The United Nations has established a 40-member scientific panel to assess the risks and impacts of artificial intelligence (AI), despite strong opposition from the United States. The initiative comes as former employees from major AI companies have raised alarms about the technology’s rapid development and potential dangers.
UN Secretary-General António Guterres described the panel as a “foundational step toward global scientific understanding of AI,” emphasizing that it will provide independent, scientific guidance to all member states, regardless of their technological capabilities.
How the Panel Will Work
The Independent International Scientific Panel on Artificial Intelligence will produce an annual report analyzing AI risks, opportunities, and societal impacts. The UN has called it the “first global scientific body of its kind.”
Members were chosen from over 2,600 applicants following a review by several UN bodies and the International Telecommunication Union. The 40 experts will serve three-year terms, with Europe holding 12 seats, including representatives from France, Germany, Italy, Spain, Poland, Belgium, Finland, Austria, Latvia, Turkey, and Russia.
Industry Warnings and Concerns
The creation of the panel comes as AI professionals voice serious concerns. Mrinank Sharma, former safety researcher at Anthropic, warned in an open letter that “the world is in peril” due to AI and other global crises. Former OpenAI researcher Zoe Hitzig expressed “deep reservations” about her previous employer’s strategy.
Leading AI figures, including Dario Amodei, Sam Altman, and Steve Wozniak, have also spoken publicly about the potential dangers of AI development.
U.S. Opposition and Debate Over Mandate
The United States has criticized the panel, with its representative, Lauren Lovelace, calling it “a significant overreach of the UN’s mandate and competence.” She argued that AI governance should not be dictated by the UN.
Despite these objections, UN officials maintain that the panel’s purpose is to provide scientific insight rather than enforce rules, giving countries a shared basis for understanding and managing AI risks responsibly.