
Frontier Model Forum

Governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others.

To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.

The Forum will focus on three key areas over the coming year to support the safe and responsible development of frontier AI models:

  • Identifying best practices: Promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and safety practices to mitigate a wide range of potential risks.
  • Advancing AI safety research: Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to advance these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection. There will be a strong initial focus on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
  • Facilitating information sharing among companies and governments: Establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks. The Forum will follow best practices in responsible disclosure from areas such as cybersecurity.

Kent Walker, President, Global Affairs, Google & Alphabet said: “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to ensure AI benefits everyone.”

Brad Smith, Vice Chair & President, Microsoft said: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Anna Makanju, Vice President of Global Affairs, OpenAI said: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies–especially those working on the most powerful models–align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”

Dario Amodei, CEO, Anthropic said: “Anthropic believes that AI has the potential to fundamentally change how the world works. We are excited to collaborate with industry, civil society, government, and academia to promote the safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”
