As the Israel-Hamas war escalates, digital platforms have increasingly been used to share information that's critical for impacted regions and audiences worldwide. At the same time, militant groups are seeking to exploit social platforms to influence that messaging, sowing dissent and confusion, which each platform now has to mitigate as best it can.
Big platforms are already coming under scrutiny under the EU's new misinformation regulations, with EU officials formally notifying Meta, X, and TikTok of their new, more stringent obligations.
EU officials have already announced an investigation into X, while today, Meta has provided a comprehensive update on its response to the new EU requirements.
In addition to complying with the EU's request for more information about its crisis response process, Meta says that it has also:
Set up a special operations center, manned by experts who are fluent in both Hebrew and Arabic, to monitor and respond to the evolving situation in real time.
Limited the recommendation of potentially violative content
Expanded its "Violence and Incitement" policy to remove content identifying hostages "even if it's being done to denounce or raise awareness over their situation"
Restricted hashtags that have been linked to violations of its Community Guidelines
Limited the use of Live for users who have previously violated certain policies, while also focusing on moderating live-streams from the affected area, given Hamas' threats to broadcast footage of hostages
Added warning labels on content that's been rated "false" by third-party fact-checkers, and also applied labels to state-controlled media publishers.
These expanded measures will give EU officials, in particular, much greater insight into what Meta is doing to combat false and misleading reports in its apps, which they'll then be able to measure against the new Digital Services Act criteria in order to monitor Meta's progress.
The EU DSA, which applies to online platforms with more than 45 million European users, includes specific provisions for crisis situations, and outlines the obligations of "very large online platforms" with regard to protecting their users from mis- and disinformation within their apps.
The DSA documentation says:
Where a serious threat occurs, the Commission may adopt a decision imposing on one or more providers of very large online platforms or of very large online search engines an obligation to monitor whether the functioning and the use of their service[s] significantly contribute to the serious threat, and to identify and apply specific, effective and proportionate remedial measures capable of preventing, eliminating or limiting any such contribution to the serious threat.
In other words, any social platform with more than 45 million EU users is required to implement proportionate measures to counter the spread of misinformation in such an event, as assessed by EU officials.
The regulations also require large platforms to provide regular reports to the Commission, outlining the steps that they've taken to deal with the situation.
The EU has now submitted this request to Meta, X, and TikTok, and it appears that X's response hasn't passed muster, given that it's now facing its own formal inquiry.
Failure to meet these requirements can result in fines of up to 6% of a company's global annual revenue, not just the revenue that it generates within the EU.
Meta is probably much less exposed in this area, since its mitigation programs are well established and have been evolving for a considerable amount of time.
In this regard, Meta states that it has "the largest third-party fact checking network of any platform", powering efforts to actively limit the spread of potentially harmful content.
X, which just eliminated 80% of its international workforce, may be more vulnerable than ever. Indeed, its new approach, which relies much more heavily on crowd-sourced fact-checking via Community Notes, seems to be missing some of the misinformation being spread about the attacks. Third-party analysis shows misinformation and false reports spreading from X posts, and with limited resources, the platform may not be able to catch it all.
In other words, X has responded to the EU's request for more information on what it's doing in practice to address such concerns, but it'll now be up to EU officials to decide whether it's doing enough to meet its obligations under the DSA.
Which, of course, is the same situation that Meta is in, though again, Meta's systems are well-established, and are more likely to meet the new requirements.
It'll be interesting to see how EU analysts assess each platform's response, and what that then means for each of them moving forward.
Can X actually meet these obligations? Will TikTok be able to adhere to tougher enforcement requirements, given its reliance on algorithmic amplification?
It's a crucial test, as we enter the next phase, in which EU officials will, to a large extent, dictate broader social platform policy.