To counter the perception that its "open" AI is helping foreign adversaries, Meta said today that it's making its Llama series of AI models available to U.S. government agencies and contractors working on national security.
Meta said in a blog post, "We are pleased to announce that we are making Llama available to U.S. government agencies, including those working on defense and national security applications, and private sector partners supporting their work." It is teaming up with Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to those agencies.
For example, Oracle is using Llama to process aircraft maintenance documents, Meta says. Scale AI is fine-tuning Llama to support specific national security team missions. And Lockheed Martin is offering Llama to its defense customers for use cases like generating computer code.
Meta's policy normally bars developers from using Llama for projects related to military, warfare, or espionage. But the company told Bloomberg it made an exception in this case, as it has for similar government agencies and contractors in the U.K., Canada, Australia, and New Zealand.
Last week, Chinese researchers, including two affiliated with an R&D group of the People's Liberation Army (PLA), the ruling party's military wing, reportedly used an older Llama model, Llama 2, to develop a chatbot for military use, designed to gather and process intelligence and to provide information for operational and tactical decision-making.
Meta said in a statement to Reuters that the use of the "single, and outdated" Llama model was "unauthorized" and violated its acceptable use policy. But the report added fuel to the ongoing debate over the merits and risks of open AI.
The use of AI, open or "closed," for defense is controversial.
According to a recent report from the nonprofit AI Now Institute, the sort of AI currently used for military intelligence, surveillance, and reconnaissance poses dangers because it relies on personal data that can be exfiltrated and weaponized by adversaries. It also carries liabilities, including biases and a predisposition to "hallucinate," "which, at present are intractable." This would argue, the authors note, for developing military AI separate from "commercial" models.
Employees at Big Tech companies, including Google and Microsoft, have protested the contracts their employers have signed to build AI tools and infrastructure for the U.S. military.
Meta says open AI can accelerate defense research while advancing America's "economic and security interests." Yet the U.S. military has been slow to adopt the technology, and even slower to trust its ROI. So far, the U.S. Army is the only branch of the armed forces with a generative AI deployment.