Conflicting strategies emerge to tackle AI’s "perpetual misinformation machine."

The AI stage at TechCrunch Disrupt 2024 opened with a fiery but constructive panel on fighting disinformation. But in a spirited exchange of views tempered by expressions of respect and agreement, all three panelists had harsh words for social media and generative AI.
None was harsher, though, than Imran Ahmed, CEO of the Center for Countering Digital Hate.

"We have always had BS in politics, and a lot of politicians use lying as an art, a tool of doing politics. What we have now is is quantitatively different, and to such a scale that it's like comparing the conventional arms race of BS in politics to the nuclear race," he said.

"It's the economics that have changed so radically: The marginal cost of the production of a piece of disinformation has been reduced to zero by generative AI, and the marginal costs of the distribution of disinformation [is also zero], he continued. "So what you have, theoretically, is a perfect loop system in which generative AI is producing, it's distributing, and then it's assessing the performance — A/B testing and improving. You've got a perpetual bulls–t machine. That's quite worrying!

"

Brandie Nonnecke, director of UC Berkeley's CITRIS Policy Lab, said self-regulation in the form of voluntary limits and transparency reports is totally insufficient. "I don't think that these transparency reports really do anything, in part because in these transparency reports, they'll say, look at what a great job we're doing: We removed tens of thousands of pieces of harmful content. Well, what didn't you remove? What's still floating around that you didn't catch? It gives a false sense that they're actually doing due diligence, when I think underneath that all is a big mess of them trying to figure out how to deal with all of this content," she said.

Pamela San Martín, a co-chair of Facebook's Oversight Board, agreed in principle but cautioned against throwing the baby out with the bathwater. "I think it would be entirely wrong to say that any social media platform is doing all it needs to; least of all would I say so of Meta, not one bit," she said.

"I agree with what you said, but we thought that this particular year, with its 80 elections, would be the year of AI and elections, that elections throughout the world would be so flooded with AI deepfakes that deepfakes would control the narrative," she continued. "We do see a rise in that, but we haven't seen elections completely filled with AI-generated content. I say this not because I disagree, it's quite disturbing, but because we also have to remember that if, here and now, we start taking measures out of fear, we would lose the good part of AI."
