X Continues to Show Ads Next to Harmful Content

Hyundai is the latest brand to suspend its advertising on X.

Despite repeated assurances from X that its ad placement tools ensure maximum brand safety, and that paid promotions will never appear alongside harmful or objectionable content in the app, more advertisers continue to report concerns under X's revised "freedom of speech, not reach" approach.

Hyundai said on Tuesday that it is suspending its ad spend on X after discovering that some of its ads were appearing next to pro-Nazi content.

This comes just days after NBC reported that at least 150 blue checkmark profiles in the app, along with thousands of unpaid accounts, had posted and/or amplified pro-Nazi content on X over the past few months.

X had previously rejected the NBC report, which was published earlier this week, labeling it a "gotcha" article that "lacks comprehensive research, investigation, and transparency." Now, however, one of X's biggest ad partners has been caught up in exactly the scenario the report described. X has since acknowledged the issue, suspended the profile in question, and has been working with Hyundai to address its concerns.

But it keeps happening, which suggests that X's new approach to free speech is unsustainable, at least from an advertiser's perspective.

Under X's "freedom of speech, not reach" approach, more posts that violate X's policies are now left live in the app rather than being removed by X's moderators, though their reach may be restricted to limit their spread. X also says that ads cannot be displayed alongside posts that have been hit with these reach penalties. Yet various independent reviews have shown brand ads appearing next to exactly this kind of content, which means that either the offending posts aren't being detected as violations by X's systems, or X's ad placement controls aren't working as claimed.

The real issue for X is that, having cut its overall staff by 80%, including many of its moderation and safety personnel, the platform is simply not equipped to handle the scale of detection and enforcement that its rules require. That means many rule-breaking posts are missed outright, and X is left relying on AI, along with its crowd-sourced Community Notes, to do much of the work.

Which experts argue won't work.
Every platform relies on AI for some portion of its content moderation, but they are near unanimous that AI alone is not good enough, and that human moderators remain a necessary expense.
 
We can see from the disclosures the E.U. requires of other platforms that those platforms maintain a better ratio of moderators to users than X does.

Based on the most recent E.U. moderation reports, TikTok has one human moderator for every 22,000 users in the app, while Meta is slightly worse, at one for every 38,000.
X has one moderator for every 55,000 E.U. users.
So while X may claim that its staff cuts have left it well placed to meet its moderation requirements, it is clearly relying more heavily on its other, non-staffed systems and processes.

Safety analysts have also argued that X's Community Notes are simply not effective in this respect, given the parameters around when notes are displayed and how long they take to appear, which leave significant gaps in overall enforcement.
And as Elon Musk himself has repeatedly made clear, he would seemingly prefer to have no moderation in place at all.

Musk's view is that every opinion should be able to be voiced in the app, with users then debating each on its merits and deciding for themselves what is and isn't true. In theory, that should lead to greater awareness, but in reality it means that opportunistic misinformation peddlers and misguided internet sleuths can gain traction with half-baked theories that are wrong, hurtful, and often dangerous, for both individuals and the broader public.

For example, last week, after a man attacked several people at a shopping center in Australia, verified X accounts misidentified the attacker, broadcasting the wrong person's name and details to millions of users across the platform.

Blue checkmark accounts were once ones you could depend on for accurate information, which was often the whole point of getting verified in the first place. But this incident highlighted the erosion of trust that X's changes have wrought, as conspiracy theorists can now rapidly boost unfounded claims in the app simply by paying a few dollars a month.

What's worse, Musk himself often engages with conspiracy-adjacent content, and has admitted that he doesn't check any of it for accuracy before posting. And because he holds the most-followed profile on the app, he arguably poses the greatest risk of causing exactly this type of harm, yet, paradoxically, he is also the one who sets the app's policies.

Which looks like a deadly cocktail.

It's also one that, unsurprisingly, keeps leading to ads being displayed alongside such content in the app. And yet, just this week, ad measurement platform DoubleVerify issued an apology for misreporting X's brand safety measurement data, while reiterating that X's actual brand safety rate is "99.99%", which would mean that this type of brand exposure is limited to just 0.01% of all ads displayed in the app.

Is that minuscule margin of error the source of these repeated reports, or is X's brand safety worse than it portrays?

On balance, it seems that X still has issues to clean up, especially since the Hyundai placement problem was only addressed after Hyundai brought it to X's attention; it wasn't picked up by X's own systems.

And with X's ad revenue still reportedly down by 50%, a cash squeeze is also coming for the app, which is likely to make hiring more staff in this area a tough solution either way.
