Meta, on an open-source tear, looks to expand its reach in the fight for AI mindshare.
The social network announced that it's partnering with IBM, a company with a decidedly more corporate and enterprise audience, to launch the AI Alliance, an industry group meant to support open innovation and open science in AI.
But what will the AI Alliance actually do, and how will its work differ from that of the remarkably similar (at least in its general mission, membership and principles) Partnership on AI? Years ago, the Partnership on AI promised to publish research under open-source licenses, along with minutes from its meetings, in order to, much as the AI Alliance purportedly wants to do, inform the public about the leading AI issues of the day.
Well, confusingly, the Partnership on AI is itself a member of the AI Alliance. The Alliance says it will "use existing collaborations," presumably including its ties to the Partnership on AI, to explore ways of developing open AI resources that balance the needs of business and society while addressing responsible AI challenges, according to a press release circulated to TechCrunch last week.
Members of the AI Alliance will form working groups, a governing board and a technical oversight committee charged with efforts in areas such as AI "trust and validation" metrics, hardware and infrastructure to support AI training, and open source AI models and frameworks. They will also create project standards and guidelines, and work with "important existing initiatives" (initiatives conspicuously not named in the press release) from government, nonprofit and civil society organizations "who are doing valuable and aligned work in the AI space."
If that all sounds a lot like what the inaugural members of the Alliance were already doing independently, you're not wrong. But in its release, the AI Alliance stressed that whatever form its work takes, it's meant to be complementary and additive rather than needlessly duplicative.
Such a community will be able to innovate faster and more inclusively, and identify and mitigate specific risks before products are put out into the world, says the release, in contrast to a vision that would relegate AI innovation and value creation to a small number of companies with a closed, proprietary view of the AI industry.
That barb at the end says a lot about Meta's ulterior motives here.
Google, OpenAI and Microsoft (OpenAI's close partner and investor) have been among the chief critics of Meta's open source approach to AI, calling it potentially dangerous and conducive to disinformation. (Not surprisingly, none are members of the AI Alliance, despite being long-time members of the Partnership on AI.) Now, those companies have a clear horse in the race and perhaps regulatory capture on the mind… but they're not entirely wrong, either. Meta continues to take calculated open source risks (within regulators' tolerances), releasing text-generating models like Llama that bad actors have turned around and abused, but on which plenty of developers have also built useful apps.
"The platform that will win will be the open one," Meta's chief AI scientist, Yann LeCun, was quoted as saying in an interview with The New York Times — and who's among the more than 70 influential signers of a letter calling for more openness in AI development. LeCun has a point; one estimate is that Stability AI's open source AI-powered image generator, Stable Diffusion--released last August--now accounts for 80 percent of all AI-generated imagery.
But wait, you might ask: what does IBM get out of the AI Alliance? It is, after all, one of the co-founders. I'd hazard a guess: more publicity for its nascent generative AI platform. IBM's last earnings report got a boost from enterprises interested in generative AI, but the company faces stiff competition from Microsoft and OpenAI, which are jointly developing enterprise-focused AI services that compete directly with IBM's.
I asked IBM's PR team, which first contacted me to let me know of the AI Alliance's founding, about omissions from the group's early membership, including such curious absences as Stanford (which counts one of the world's most prominent AI research labs, Stanford HAI, among its academic units), MIT (a leader in robotics research) and high-profile AI startups like Anthropic, Cohere and Adept. A press rep hadn't replied as of publication time. But the same philosophical differences that kept Google and Microsoft at bay were likely at play; I'd wager it's no accident that Anthropic, Cohere and Adept have relatively few open source AI projects to their names.
I'll note that Nvidia isn't a member of the AI Alliance either, a conspicuous absence given that the company is by far the dominant provider of AI chips and maintains many open source models in its own right. Perhaps the chipmaker perceived a conflict of interest in collaborating with rivals Intel and AMD. Or maybe it just decided it wanted to gamble its future alongside Microsoft, Google and the rest of the tech giants opting out of the Alliance for strategic reasons. Who knows?
Via email, Sriram Raghavan, the VP of IBM Research's AI division, said that the Alliance is, for now, focused on "members that are strongly committed to open innovation and open source AI," implying that those who aren't participating aren't as strongly committed. I'm not sure they'd agree.
"This of course is just the starting point," he added. "We welcome and expect more organizations to join in the future.".
A diverse constituency
Launching with some 45 member organizations, ranging from AMD and Intel to the research laboratory CERN, Yale and Imperial College London, and AI startups Stability AI and Hugging Face, the AI Alliance will focus on forming an "open" community and "enabling developers and researchers to accelerate responsible innovation in AI" while ensuring "scientific rigor, trust, safety, security, diversity and economic competitiveness," said the release.
"By bringing together leading developers, scientists, academic institutions, companies and other innovators, we will pool resources and knowledge to address safety concerns while providing a platform for sharing and developing solutions that fit the needs of researchers, developers and adopters around the world," the release states.
The AI Alliance's first class is strikingly diverse — occupying the overlap of not only AI and enterprise but healthcare, silicon and software-as-a-service as well. In addition to academic partners such as the University of Tokyo, UC Berkeley, the University of Illinois, Cornell and the aforementioned Imperial College London and Yale, Sony, ServiceNow, the National Science Foundation, NASA, Oracle, the Cleveland Clinic and Dell have all committed in some capacity.
MLCommons, which developed the benchmarking suite that large semiconductor makers use to evaluate the AI performance of their hardware, is an original member of the AI Alliance, too. So are LangChain and LlamaIndex, the creators of two of the more widely used tools and frameworks for building apps powered by text-generating AI models.
But without so many of the AI industry's majors on board, and without deadlines or even a clear set of objectives, can the AI Alliance prevail? What does success even look like?
Beats me.
A multitude of competing interests, from healthcare networks (Cleveland Clinic) to insurance providers (Roadzen) and dozens of other groups, won't make it easy for the Alliance's members to coalesce around a single, unified voice. And for all their talk of openness, IBM and Meta aren't exactly poster children for the future the Alliance's release depicts, which casts the effort's sincerity into question.
Perhaps I'm wrong, and the AI Alliance will become an enormous success. Or maybe it will simply collapse under mistrust and red tape. Time will tell.