A new voluntary organization aims to establish its own guidelines and principles ahead of formal regulations for the development of frontier AI. OpenAI, Google, Microsoft, and Anthropic have collaborated to form this body with the shared goal of ensuring safe and responsible advancements in AI technology.
New frontier
The newly formed forum's central mission revolves around what OpenAI refers to as "frontier AI" — advanced AI and machine learning models deemed potentially hazardous, posing "severe risks to public safety." The coalition contends that these models present a distinct regulatory challenge: because their "dangerous capabilities" are unpredictable, it is difficult to preempt their misuse.
The stated objectives of the forum include:
i) Advancing AI safety research to foster responsible development of frontier models, mitigate risks, and facilitate independent, standardized assessments of capabilities and safety.
ii) Identifying best practices for the responsible development and deployment of frontier models, enhancing public understanding of the technology's nature, capabilities, limitations, and impact.
iii) Engaging with policymakers, academics, civil society, and companies to exchange knowledge about trust and safety risks.
iv) Supporting initiatives that aim to create applications addressing society's most pressing challenges, including climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
Despite having only four members currently, the Frontier Model Forum is open to welcoming new participants. Eligible organizations should be involved in the development and deployment of frontier AI models and must display a robust commitment to frontier model safety.
In the initial phase, the founding members plan to establish an advisory board to guide the forum's strategy, in addition to creating a charter, governance framework, and funding structure. These steps will set the foundation for the forum's operations and future initiatives.
"We intend to engage with civil society and governmental bodies in the upcoming weeks to seek input on the Forum's structure and explore meaningful avenues for collaboration," stated the companies in a joint statement released today.
Regulation
The creation of the Frontier Model Forum not only showcases the AI industry's commitment to addressing safety concerns, but also sheds light on Big Tech's efforts to preempt forthcoming regulations through voluntary endeavors, potentially allowing them to influence rule-making to some extent.
Indeed, today's announcement coincides with Europe's progress on the first comprehensive AI rulebook, aimed at embedding safety, privacy, transparency, and anti-discrimination principles into companies' AI development practices.
Last week, President Biden held a meeting with seven AI companies at the White House, including the four founding members of the Frontier Model Forum, to establish voluntary safeguards in response to the burgeoning AI revolution. However, critics argue that the commitments made were somewhat vague.
Nevertheless, Biden indicated that regulatory oversight would be considered in the future. He stated, "Realizing the potential of AI while managing its risks will require new laws, regulations, and oversight. In the coming weeks, I will continue to take executive action to ensure responsible innovation in America. We will collaborate with both parties to develop appropriate legislation and regulation."