Silicon Valley is currently debating whether AI-powered weapons should be given the authority to decide to kill.

In late September, Shield AI co-founder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous, meaning an AI algorithm would make the final decision to kill someone. "Congress doesn't want that," the defense tech founder told TechCrunch. "No one wants that."

But Tseng spoke too soon. Five days later, Anduril co-founder Palmer Luckey expressed an openness to autonomous weapons, or at least a heavy skepticism of arguments against them. The U.S.'s adversaries "use phrases that sound really good in a sound bite: Well, can't you agree that a robot should never be able to decide who lives and dies?" Luckey said during a talk earlier this month at Pepperdine University.

"And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?

"

When asked for further comment, Anduril spokesperson Shannon Prior said that Luckey never meant to suggest that machines should be programmed to kill people autonomously, but rather that he worried about bad people using bad AI.

In the past, Silicon Valley has erred on the side of caution. Take what Luckey's co-founder, Trae Stephens, told Kara Swisher last year: "I think the technologies that we're building are making it possible for humans to make the right decisions about these things," he said. "So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously."

The Anduril spokesperson dismissed any dissonance between Luckey's and Stephens' positions, saying that Stephens didn't mean a human should always make the call, only that someone should be accountable. To be fair, the position of the U.S. government itself is similarly ambiguous. The U.S. military does not currently purchase fully autonomous weapons. Though some argue that weapons such as mines and missiles can operate autonomously, this is a qualitatively different kind of autonomy than, say, a turret that identifies, acquires, and fires on targets without a human making the decision. The U.S. does not prohibit companies from making fully autonomous lethal weapons, nor does it explicitly ban selling such weapons to foreign countries.

Last year, the U.S. issued new guidelines on the responsible military use of AI that were welcomed by many U.S. allies and that require senior military officials to approve any new autonomous weapon; but the guidelines are voluntary (Anduril says it is committed to following them), and U.S. officials have repeatedly said it is "not the right time" to consider any binding ban on autonomous weapons.

Last month, even Palantir co-founder and Anduril investor Joe Lonsdale showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale bemoaned that the question is being framed as an all-or-nothing proposition. Instead, he proposed a thought experiment in which China has embraced AI weapons while the United States has to "press the button every time it fires," and he urged policymakers to adopt a more flexible approach to how much AI goes into weapons.

"You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I'm a staffer who's never played this game before," he said. "I could destroy us in the battle."

When asked by TechCrunch for further comment, Lonsdale said that defense tech companies shouldn't be the ones setting the agenda on lethal AI. "The key context to what I was saying is that our companies don't make the policy and don't want to make the policy; it's the job of elected officials to make the policy," he said. "But they do need to educate themselves on the nuance to do a good job."

He also reiterated a willingness to consider more autonomy in weapons. "It's not a binary as you suggest—'fully autonomous or not' isn't the correct policy question. There's a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do," he said. Before policymakers put rules in place and decide where the dials should be set in which circumstances, he argued, they need to learn the game, learn what the bad guys might be doing, and learn what it takes to win with American lives on the line.

Long-sought international bans on autonomous lethal weapons have eluded activists and human rights groups for years, and the war in Ukraine may have tipped the balance against them: the conflict provides both a trove of combat data and a battleground for defense tech founders to test their systems. Companies are currently producing weapons systems that integrate AI, but a human still makes the final decision to kill. Meanwhile, Ukrainian officials have pushed for greater automation in weapons, hoping it will give them an edge over Russia. "We need maximum automation," said Mykhailo Fedorov, Ukraine's minister of digital transformation, in an interview with The New York Times. "These technologies are the basis of our victory."

For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S. to scramble to catch up. At a UN debate on AI weapons last year, a Russian diplomat was characteristically opaque. "We understand that for many delegations the priority is human control," he said. "For the Russian Federation, the priorities are somewhat different."

At the Hudson Institute event, Lonsdale said the tech sector needs to take it upon itself to "teach the Navy, teach the DoD, teach Congress" about the potential of AI to "hopefully get us ahead of China."

Lonsdale's and Luckey's companies are working to get Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million on lobbying this year, according to OpenSecrets.

 
