The Frontier Model Forum is an industry body focused on the safe and responsible development of frontier AI models, launched by Anthropic, Google, Microsoft, and OpenAI.
The body was launched on July 26, 2023, in a joint announcement by the four companies. Drawing on the expertise of its member companies, the Frontier Model Forum aims to benefit the AI ecosystem by advancing technical evaluations and benchmarks and by developing a public library of solutions to support best practices.
The core objectives of the Forum were stated in the joint announcement:
1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
3. Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.
4. Supporting efforts to develop applications that can help meet society's greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
In the months after its founding, the Forum plans to establish an advisory board to guide its strategy and priorities and welcome participation from other organizations developing frontier AI models. The board will represent a diversity of backgrounds and perspectives.
The four founding companies plan to establish key institutional arrangements such as a charter, governance, and funding, as well as a working group and an executive board to lead the Forum's efforts. They plan to consult with civil society and governments when designing the Forum and developing meaningful ways to collaborate. The Forum aims to help support and feed into existing initiatives such as the G7 Hiroshima Process, the OECD's work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council. It also aims to build on and collaborate with existing industry, civil society, and research efforts, such as the Partnership on AI and MLCommons.
Further work on safety standards and evaluations is needed to build on existing efforts and to ensure that frontier AI models are developed and deployed responsibly. The Forum plans to help support the safe and responsible development of these models across three main areas of focus.
Membership in the Frontier Model Forum is open to other organizations that develop frontier AI models.