Many feared that the 2024 election might be touched, and perhaps even decided, by AI-generated disinformation. There turned out to be some, but it was a far cry from what was expected. Just don't be lulled by that: the disinfo threat is real; it's just not aimed at you.
Or so claims Oren Etzioni, a longtime AI researcher whose nonprofit, TrueMedia, has its finger on the pulse of generated disinformation.
"There is, for lack of a better word, a diversity of deepfakes," he said in a recent interview with TechCrunch. "Each one serves its own purpose, and some we are more aware of than others. Let me put it this way: For everything you actually hear about, there are a hundred that aren't targeted at you. Maybe a thousand." It's really only the very tip of the iceberg that makes it into the mainstream press.
The truth is, most people, and Americans more than most, tend to believe that what they see is what they get. That certainly isn't so, for a variety of reasons. But in the case of disinformation campaigns, America is actually a hard target, given a relatively well-informed populace, readily available factual information, and a press that is trusted at least most of the time (despite all the noise to the contrary).
We tend to think of deepfakes as something like a video of Taylor Swift doing or saying something she wouldn't. But the really dangerous deepfakes are not those of celebrities or politicians, but depictions of situations and people that cannot be so easily identified and counteracted.
"The biggest thing people don't get is the variety," he noted. "One of Iranian planes today flying over Israel." Something that did not occur, but cannot easily be disproven by someone who does not stand on the ground there. "You don't see it because you're not on the Telegram channel, or in certain WhatsApp groups—but millions are."
TrueMedia offers a free service, via web and API, for identifying images, video, audio, and other material as fake or real. It's not a simple task and can't be fully automated, but they are slowly building a foundation of ground-truth material that feeds back into the process.
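To make the shape of such a service concrete, here is a minimal sketch of what submitting an item for analysis might look like. The endpoint URL, request fields, and response format are hypothetical stand-ins for illustration only, not TrueMedia's documented API.

```python
# Hypothetical sketch of querying a media-verification service.
# The endpoint, request fields, and response schema below are
# illustrative assumptions, NOT TrueMedia's documented API.
import requests

def check_media(media_url: str, api_key: str) -> dict:
    """Submit a piece of media by URL and return the service's verdict."""
    resp = requests.post(
        "https://api.example-verifier.com/v1/analyze",  # placeholder endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"url": media_url},
        timeout=30,
    )
    resp.raise_for_status()
    # A plausible response shape: {"verdict": "likely_fake", "confidence": 0.92}
    return resp.json()

if __name__ == "__main__":
    print(check_media("https://example.com/suspect_video.mp4", "MY_KEY"))
```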
"Our main mission is detection. The academic benchmarks [for evaluating fake media] have long since been plowed over," Etzioni said. "We train on things uploaded by people all over the world; we see what the different vendors say about it, what our models say about it, and we generate a conclusion. As a follow-up, we have a forensic team doing a deeper investigation that's more extensive and slower, not on all the items but a significant fraction, so we have a ground truth.". We don't attach a price to truth unless we're pretty sure; we could be wrong, but we're much better than any other lone solution."
The core work involves helping quantify the issue, in three fundamental ways Etzioni identified:
How much is there? "We don't know, there's no Google for this. You see all these signs that it's rampant, but it's desperately hard, or maybe even impossible, to measure reliably."
How many people view it? "This is easier because when Elon Musk posts something, you see, '10 million people have viewed it.' So the number of eyeballs is easily in the hundreds of millions. I see things every week that have been viewed millions of times."
How much of a difference did it make? "This is perhaps the most significant one. How many voters did not vote because of the fake Biden calls? We're just not set up to measure that. The Slovakian one [a disinfo campaign targeting a presidential candidate there in February] was last minute, and then he lost. That may well have tipped that election."
All of this, he said, is a work in progress, and some of it is just beginning. But you have to start somewhere.
"Let me make a bold prediction: Over the next 4 years, we're going to become much more adept at measuring this," he said. "Because we have to. Right now we're just trying to cope."
As for industry and technology efforts to make generated media more visibly identifiable, such as watermarking images and text, he said they're harmless and maybe even beneficial, but they don't even begin to solve the problem.
"The way I'd put it is, don't bring a watermark to a gunfight." These best practices are useful in integrated ecosystems where everyone has an incentive to deploy them, but they don't really prevent bad actors who don't want to be caught.
It all sounds rather dire, and it is, but the most consequential election in recent history just took place without much in the way of AI shenanigans. That is not because generative disinfo isn't commonplace, but because its purveyors didn't feel it necessary to take part. Whether that scares you more or less than the alternative is quite up to you.