It arrived as a brief note in Meta's Q3 earnings report last week, and it raised more than a few eyebrows.
On Meta's Q3 earnings call, Zuckerberg noted:
"This quarter, we launched Llama 3.2, including leading small models that run on-device and open-source multi-modal models. We are already working with enterprises to make adoption easier, and now with the public sector for widespread adoption of Llama across the US government."
Working with Meta on the adoption of AI in government applications? What does that mean?
Well, today Meta added some context: its president of global affairs, Nick Clegg, says the company is indeed working with U.S. government agencies on potential applications of its Llama AI models, and is also partnering with some big-name corporations on the same.
According to Clegg:
"Meta's open source Llama models are increasingly being used by a broad community of researchers, entrepreneurs, developers and government bodies.". We're pleased to say we're also making Llama available to U.S. government agencies, including those working on defense and national security applications and private sector partners supporting this important work. We're partnering with companies such as Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama to government agencies.
So, these generative AI systems that regularly misinterpret facts from the web and spin them into convoluted, unreal narratives are now going to be powering elements of the Defense Department's work. Got it, sounds good.
Nah, I'm pretty sure it should be fine. I mean, it's not like they're going to use it for mission planning or anything major like that.
"Scale AI is fine-tuning Llama to support specific national security team missions, such as planning operations and identifying vulnerabilities in adversaries."
Oh.
I mean, that sounds sort of bad.
But generally speaking, Clegg claims that Meta's AI models are being trialed in applications where they might ease data analysis and interpretation.
"Large language models" can help to automate complex logistics and planning, monitor terrorist financing or enhance our cyber defenses. For decades, open source systems have been critical to helping the United States build the most technologically advanced military in the world and, in partnership with its allies, develop global standards for new technology.
The argument, then, is that adopting the latest AI models keeps the U.S. government a step ahead of potential adversaries, using the best tools available to improve its operations.
So it makes sense. Nothing to be concerned about.
Meaning, they're not going to build robots, hand control of them over to AI systems, and then have those systems decide that the real enemy is the human race and eliminate us all.
Although, in July, a former chairman of the Joint Chiefs of Staff did predict that within the next 10-15 years, robots and other intelligent machinery could make up as much as one-third of the U.S. military.
…
Okay then.
Well, we're all stressed enough this week, so maybe don't think too hard about Meta handing its AI models over to the U.S. Defense Department, which is also developing autonomous robots equipped for the battlefield.