In a bid to stay relevant in the surging AI space, Meta announced that it's forming a new entity, the Open Innovation AI Research Community, to accelerate what the company calls "transparency, innovation and collaboration" among AI researchers.
The group's initial focus areas will be the privacy, safety and security of large language models, such as the one powering OpenAI's ChatGPT; providing input into the refinement of AI models; and setting the agenda for future research. Meta says it expects its own researchers to participate, but the Open Innovation AI Research Community will be "member-led," with Meta's AI R&D division, Meta AI, serving only as a "facilitator."
The group will be a community of practice championing large open-source foundation models, one in which partners collaborate, share learnings and raise questions about how to build responsible and safe foundation models, Meta writes in a blog post. "They'll also accelerate training of the next generation of researchers."
Meta intends to sponsor a series of workshops on "key open questions" and "best practices for the responsible development and sharing of open source models." But the specifics end there. Meta says the Open Innovation AI Research Community may someday have a website and social media channels to facilitate collaboration, and might submit papers to academic conferences, but none of that is guaranteed.
Members of the Open Innovation AI Research Community will likely have to fund their own work; Meta gave no indication that it will contribute capital or compute to the group's efforts, perhaps wisely, to avoid the appearance of undue influence. Still, that's a hard sell out of the gate, given the high costs of AI research.
"Meta has a long history of investing in academia, educating the next generation of researchers and engineers, and strengthening collaboration across disciplines in AI that in the past could be relatively siloed," said Joelle Pineau, VP of AI research at Meta, in an email reply to TechCrunch. "The next step in that process will be our Open Innovation AI Research Community, continuing the efforts we've undertaken to deepen our understanding of responsible development and sharing of large language models alongside academic researchers. More on this soon."
Frankly, the Open Innovation AI Research Community sounds like a performative gesture from a company that's flirted with AI-related controversy time and again.
Late last year Meta was forced to pull an AI demo after it wrote racist and inaccurate scientific literature. Reports have characterized Meta's AI ethics team as largely toothless and the anti-AI-bias tools it's released as "completely insufficient." Meanwhile, academics have accused Meta of exacerbating socioeconomic inequalities in its ad-serving algorithms and of showing a bias against Black users in its automated moderation systems.
Will the Open Innovation AI Research Community change all this? It doesn't seem likely. Meta is encouraging "professors at accredited universities" with "relevant experience with AI" to participate, but this writer wonders why they would, given the wellspring of open machine learning research communities unaffiliated with any Big Tech company.
Maybe I'll be proven wrong. Maybe Meta's Open Innovation AI Research Community will indeed deliver on its promise, creating "a set of positive dynamics to foster more robust and representative models," as Meta writes. But I question Meta's sincerity and level of commitment here, particularly given the paltry resources pledged to the effort from the get-go.
Meta says it's accepting applications for the Open Innovation AI Research Community through September 10, adding that the program welcomes applicants from "diverse research disciplines" and noting that "more than one participant from the same university may apply."