Meta has publicly detailed recent progress in its efforts to confront platform manipulation and hate speech, efforts that have resulted in two of the most significant network take-downs in recent history.
Meta's latest "Adversarial Threat Report" explains how the company was able to shut down not one, but two of the largest-known covert influence operations in the world, through a cooperative approach that could help shape a new path forward for enforcement.
The two operations originated in China and Russia, and targeted users across more than 50 social media apps and platforms, including Meta's.
The Chinese operation, known in the cybersecurity community as "Spamouflage", was a programmatic effort to seed positive commentary about China and the CCP within Western news media, while also attacking Western policies and specific journalists and researchers critical of the Chinese Government. The initiative spanned thousands of accounts and pages.
The Russian operation, meanwhile, ran thousands of malicious website domains that mimicked mainstream news outlets and government entities, publishing fake articles aimed largely at weakening support for Ukraine. The program targeted France, Germany, Ukraine, the U.S. and Israel.
According to Meta, these massive operations, which spanned numerous social platforms and websites, had been active for some time, so this latest takedown, which could also result in criminal prosecution in the two countries where the operations were based, could have a real impact on the influence operations space.
This is a significant step, and Meta has been talking up the overall collaborative approach that has contributed to this breakthrough, which it hopes will also serve as a disincentive to other bad actors going forward.
In addition to this, Meta has published a new study into the impact of six network disruptions of proscribed hate-based groups on Facebook.
The study found that de-platforming such entities through network disruptions can make the ecosystem less hospitable to designated dangerous organisations. While individuals closest to the core audience of these hate groups show signs of backlash in the short term, the evidence shows that they decrease their engagement with the network and with hateful content over time, suggesting that these strategies can impede the ability of hate organisations to operate online.
This is also an important step, because it points to more effective ways of countering the spread of hate speech online.
Network effects of social media help users connect with like-minded folk no matter where they may be, and that can obviously be great, but it also means that hate groups can amplify their message and recruit more members through the same process.
Research like this is a step toward better means of mitigating that risk, and could also provide new guidance for enforcement efforts more broadly.
Meta also outlined how it's approaching influence operations on Threads, building those protections into the foundations of the new app, while sharing new insight into how it's looking to tackle misuse of its generative AI tools by working with researchers to seek out vulnerabilities.
Meta is still developing new methods to deal with these major issues, including running real-time "stress tests" of its systems, but its expanded partnerships are already producing better results.