Since artificial intelligence entered mainstream use across industries, developers have raced to innovate. A recently published policy document, however, indicates that Meta CEO Mark Zuckerberg may halt or slow the development of artificial general intelligence (AGI) systems the company judges to be at “high risk” or “critical risk.”
AGI refers to AI systems capable of performing any task a human can. While Zuckerberg has said he intends to make such systems publicly available one day, the “Frontier AI Framework” makes clear that certain advanced AI systems will be withheld because of the dangers they could pose.
The framework prioritizes the most severe threat categories: cybersecurity attacks and risks tied to chemical and biological weapons.
“By prioritizing these areas, we can work to protect national security while fostering innovation. Our framework outlines various processes we employ to anticipate and mitigate risks when developing frontier AI systems,” the document states.
The framework also aims to identify “potential catastrophic outcomes related to cyber, chemical, and biological risks that we strive to prevent.” Meta says it runs “threat modeling exercises” to anticipate how different actors might misuse frontier AI, and maintains “processes to ensure risks remain within acceptable levels.”
If the company judges that a system’s risks are too high, it will restrict access and keep the system internal rather than release it.
“While the emphasis of this Framework is on our efforts to anticipate and mitigate catastrophic risks, it is crucial to highlight that the impetus behind developing advanced AI systems lies in the immense societal benefits they can offer,” the document emphasizes.
For now, at least, Zuckerberg appears to be tapping the brakes on the rapid advance of AGI.