As Meta continues to develop ever more advanced AI models and marches toward artificial general intelligence, it's also keen to establish best-practice guardrails and safety standards so that AI doesn't, well, enslave the human race.
Among other worries.
That's why today Meta is joining the Frontier Model Forum (FMF), a nonprofit AI safety collective dedicated to establishing industry standards and regulations around AI development.
" FMF :
The nonsolid approach of FMF as a nonprofit organization and the industry-supported body dedicated to advancing frontier AI model safety lets it make real progress on shared challenges and actionable solutions. Members want to get it right on safety because it is the right thing to do and the safer frontier AI is, the more useful and beneficial it will be to society.
Meta, along with Amazon, joins founding members Anthropic, Google, Microsoft, and OpenAI in collaborating on the FMF mission, which could eventually lead to the creation of some of the world's best AI safety regulations. Ones that might spare us the agony of waiting for John Connor to take the reins of the human resistance.
Meta President of Global Affairs Nick Clegg said that Meta has long been committed to the ongoing growth and development of a safer, more open AI ecosystem, one that prioritizes transparency and accountability. With the Frontier Model Forum, he said, Meta can continue this work alongside industry partners, identifying and sharing best practices to keep its products and models safe.
For now, the FMF operates as a working group focused on establishing an advisory board and other institutional arrangements, including a charter, governance, and funding, to organize its efforts.
And while robot domination may be the stuff of science fiction, there are plenty of more immediate concerns the FMF will address, including, but certainly not limited to, the generation of illegal content, misuse of AI (and how to avoid it), copyright, and much more (note that Meta recently joined the "Safety by Design" initiative to prevent the misuse of generative AI tools for child exploitation).
After all, for Meta, the dangers of AGI are indeed foreboding.
Meta's Fundamental AI Research (FAIR) team is working toward establishing human-level intelligence by simulating the brain's neurons in a sort of digital environment, the equivalent of "thinking" in a simulated space.
To be clear, we're nowhere near that at present. As impressive as the latest AI tools are at what they can produce, they remain highly complex mathematical systems that match queries with responses based on the data they can access. They're not "thinking"; they're estimating what logically comes next, based on the parameters of a given question.
AGI, by contrast, would be able to do all of this on its own, actually formulating ideas without human prompting.
Which is a little scary, and could, of course, lead to more problems.
Hence the need for groups like the FMF to oversee AI development, and to ensure that those in charge of such experiments don't accidentally steer us toward the end times.