So when a report happens to come out at the end of the Christmas and New Year period, when message traction is especially low, the media is going to receive it with a measure of skepticism.
It's that kind of week for X's latest performance update. As concerns continue over the platform's revised content moderation approach, which has allowed more objectionable and dangerous posts to remain on the app and prompted more ad partners to halt their X campaigns, the company is now looking to explain its progress in one major area of effort, one which Elon Musk himself has made a priority.
X's latest update covers its efforts to eradicate child sexual abuse material (CSAM), a problem it says it has done much to reduce through process improvements over the last 18 months. Third-party reports contradict this, but on the raw numbers, X does appear to be doing a lot more in terms of detection and action on CSAM.
The details here are relevant.
X says it is suspending many more accounts than ever before for violating its rules on CSAM.
As per X:
"This marks a first six-month period since Twitter has released data on CSE that we suspended over 11 million accounts permanently between January and November of 2023. For comparison, in the entire year of 2022, the service suspended 2.3 million accounts."
So X is taking more enforcement action, though that total would likely also include wrongful suspensions and responses. Doing more is still preferable to doing less, but this figure, in itself, may not be a particularly strong indicator of improvement on this front.
X also reports significantly more CSAM cases:
"In the first half of 2023, X sent a total of 430,000 reports to the NCMEC CyberTipline. In all of 2022, Twitter sent over 98,000 reports."
Which also looks impressive, but then again, X is now employing "fully automated" NCMEC reporting, which means that detected posts are no longer subject to manual review before being flagged. As a result, a lot more content is being reported.
Again, you would assume that translates to a better result, as more reports should equal less risk. But this figure, too, isn't entirely indicative of effectiveness without data from NCMEC confirming the validity of those reports. So X's reporting numbers are rising, but there's not a heap of insight into the broader effectiveness of its approaches.
For example, X, at one point, even claimed that it had all but wiped CSAM from the platform overnight by blocking identified hashtags from circulation.
Which is probably what X is talking about here:
"Not only are we detecting more bad actors more quickly, but we are also creating new defenses to proactively reduce the discoverability of posts containing this kind of content. One example of such a measure recently implemented has reduced successful searches for known CSAM patterns by over 99% since December 2022."
That may well be true for the identified tags, but experts claim that as soon as X blacklisted certain tags, CSAM peddlers simply switched to others. So while activity on some searches may have reduced, it's hard to say how effective this has actually been overall.
But the numbers look good, right? Well, yes, it does seem that more is being done, and that CSAM is being restricted in the app. But without definitive, broader research, we can't say for sure.
And as mentioned above, third-party insights suggest that CSAM has become more broadly accessible in the app under X's new rules and processes. In February, The New York Times conducted a study to determine just how easy it was to access CSAM in the app. It found that such content was easy to find, that X was slower to action reports of it than Twitter had been in the past (leaving it active in the app for longer), and that X was also failing to adequately report CSAM instance data to the relevant agencies (one of the agencies in question has since noted that X has improved, largely due to automated reports).

Another report from NBC found much the same: despite Musk proclaiming that CSAM detection was a key priority, much of X's action had been little more than surface-level, with little real effect. Musk had also cut most of the team responsible for this element, which has likely worsened the problem rather than improving it.
Making matters worse, X then reinstated the account of a prominent right-wing influencer who had previously been banned for sharing CSAM.
Yet, at the same time, Elon and Co. are trumpeting their CSAM enforcement efforts in response to brands pulling their X ad spend, arguing that its numbers, in its view at least, show that such concerns are unfounded, because it's actually doing more to address this element. But most of those concerns relate more specifically to Musk's own posts and comments, not to CSAM.
As such, it's an odd report, shared at an odd time, which seems intended to support X's broader counter-campaign but doesn't seriously address the actual concerns at hand.
And when you also factor in that X Corp is actively working to block a new law in California which would require social media companies to publicly disclose how they carry out content moderation on their platforms, the full slate of information doesn't quite add up.
Essentially, X is arguing that it's doing more, and that its numbers reflect as much. But that doesn't definitively prove that X is doing a better job of curbing the spread of CSAM.
Theoretically, though, it should be limiting the flow of CSAM in the app by taking more action, automated or not, on more posts.
The data does, indeed, suggest that X is pushing harder in this area, though the efficacy is still to be proven.