The Frontier Model Forum is an industry body focused on the safe and responsible development of frontier AI models, founded by Anthropic, Google, Microsoft, and OpenAI. The four companies announced the Forum jointly on July 26, 2023. Drawing on the expertise of its member companies, the Frontier Model Forum aims to benefit the AI ecosystem by advancing technical evaluations and benchmarks and by developing a public library of solutions to support industry best practices.
The core objectives of the forum were stated in the joint announcement:
- Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety
- Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology
- Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks
- Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats
In the months following its founding, the Forum plans to establish an advisory board, representing a diversity of backgrounds and perspectives, to guide its strategy and priorities, and to welcome participation from other organizations developing frontier AI models.
The four founding companies plan to establish key institutional arrangements such as a charter, governance, and funding, with a working group and an executive board to lead the Forum's efforts. They plan to consult with civil society and governments on the design of the Forum and on meaningful ways to collaborate. The Forum aims to support and feed into existing initiatives such as the G7 Hiroshima process, the OECD's work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council. It also aims to build on and collaborate with existing industry, civil society, and research efforts, such as the Partnership on AI and MLCommons.
Further work on safety standards and evaluations is needed to build on existing efforts and ensure frontier AI models are developed and deployed responsibly. The Forum plans to help support the safe and responsible development of models through three main areas:
- Identifying best practices—Promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and safety practices to mitigate a wide range of potential risks.
- Advancing AI safety research—Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
- Facilitating information sharing among companies and governments—Establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks. The Forum will follow best practices in responsible disclosure from areas such as cybersecurity.
The Frontier Model Forum is open to organizations that meet the following criteria:
- Develop or deploy frontier models (the Forum defines frontier models as large-scale machine learning models that can perform a variety of tasks and exceed the capabilities of the most advanced existing models)
- Demonstrate a commitment to frontier model safety, including both technical and institutional approaches
- Desire to contribute to the Forum's efforts, including by participating in joint initiatives and supporting its development and functioning