On Monday, dozens of top researchers and executives in the artificial intelligence (AI) sector issued a joint appeal urging the creation of an international regulatory framework to limit the use of AI, aiming to prevent potential negative consequences for humanity.
The signatories emphasized that while AI holds immense potential to enhance human welfare, its current trajectory poses unprecedented risks.
They called for international collaboration to establish clear agreements on AI “red lines” among the major players developing this technology.
Minimum Safeguards for AI Development
The experts stressed the importance of establishing minimum safeguards that would form a common global standard governments can agree on to mitigate the most urgent threats posed by AI.
This initiative is backed by several leading organizations, including:
- The French Center for AI Safety
- The Future Society
- The Center for Human-Compatible AI at UC Berkeley
- Over 20 partner organizations worldwide
Risks That Require an International Regulatory Framework for AI
The joint statement warned that AI could soon surpass human capabilities, escalating risks such as:
- Global pandemics
- Spread of disinformation
- National security challenges
- Mass unemployment
- Human rights violations
These concerns reflect the growing unease in the AI community over the rapid development of advanced AI systems without sufficient oversight.
Prominent Signatories
Among the signatories are some of the most influential figures in AI, including:
- Jason Clinton, Chief Information Security Officer at Anthropic
- Experts from Google DeepMind and from OpenAI, the creator of ChatGPT
- Geoffrey Hinton, Nobel Prize winner in Physics (2024) and AI pioneer
- Yoshua Bengio, Professor at the University of Montreal and one of the leading voices in modern AI
Focus on Advanced and Superintelligent AI
Many of the signatories are involved in developing Artificial General Intelligence (AGI), systems intended to match human cognitive abilities, as well as superintelligent AI, which would surpass human intelligence.
The appeal underscores the urgent need for a coordinated global approach to ensure that AI development remains safe, ethical, and aligned with human interests.