Ten people were killed this past weekend in what is being described as a racist attack on a supermarket in Buffalo, New York. The eighteen-year-old white supremacist shooter livestreamed his attack on Twitch, the video game streaming platform owned by Amazon. Though Twitch removed the video two minutes after it began, that was too late: gruesome footage of the terrorist attack is now circulating openly on Facebook and Twitter, even after both companies committed to removing it.
On Facebook, some users who reported the video were informed that it didn't break the platform's rules. A company spokesperson told TechCrunch that was an error, and said Facebook has teams working around the clock to remove videos of the shooting, as well as links to the video hosted on other sites. Facebook said it is also taking down copies of the shooter's racist screed and content praising him.
But when we searched a term as innocuous as "footage of buffalo shooting" on Facebook, one of the first results was a 54-second screen recording of the terrorist's footage. TechCrunch encountered the video an hour after it was posted and reported it immediately. It wasn't taken down until three hours later, by which point it had been viewed more than a thousand times.
In principle, this should not be possible. A Facebook spokesperson told TechCrunch that the company had added multiple versions of the video, along with the shooter's racist writings, to a database of violating content, which helps Facebook identify, remove, and block such material. We asked the company about this specific incident but did not receive additional comment.
"We're going to continue learning, honing our processes so that we can detect and take down violating content more quickly in the future," said Facebook integrity VP Guy Rosen, in a tacked-on response to a question about why the company had trouble removing copies of the video, asked during an unrelated call on Tuesday.
Reposts of the shooter's stream were also easy to find on Twitter. When we typed "buffalo video" into the search bar, Twitter suggested searches like "buffalo video full video graphic," "buffalo video leaked" and "buffalo video graphic."
We found multiple videos of the attack that had been circulating on Twitter for more than two days. One had over 261,000 views when we checked it on Tuesday afternoon.
In April, Twitter introduced a policy prohibiting individual perpetrators of violent attacks from the platform. Beyond banning those users, the policy also permits the company to remove multimedia associated with an attack, as well as language drawn from terrorist "manifestos."
"We are removing videos and media relating to the event. We may also remove Tweets which circulate a manifesto or other material emanating from perpetrators," a Twitter spokesperson told TechCrunch. The company deemed this "hateful and discriminatory" content "harmful for society."
Twitter also suspects that some users are trying to circumvent takedowns by posting manipulated or doctored versions of content related to the attack.
By contrast, video of the weekend's tragedy was somewhat difficult to find on YouTube. A simple search for the Buffalo shooting video mostly surfaced coverage from mainstream news sources. Using the same terms we had searched on Twitter and Facebook, we located a few YouTube videos with thumbnails of the shooting that turned out to be unrelated content once clicked through. On TikTok, some posts included links to external sites where users could view the video, but no version of it could be found on the app during our searches.
Twitch, Twitter and Facebook say they are working with the Global Internet Forum to Counter Terrorism to stem the spread of the video. Twitch and Discord also say they are cooperating with the government authorities investigating the attack. The shooter laid out his plans for the attack in minute detail on a private Discord server just before carrying it out.
The Buffalo shooter chose to broadcast his attack on Twitch, according to documents reviewed by TechCrunch, because footage of a 2019 anti-Semitic shooting at the Halle synagogue had remained up for nearly 30 minutes before Twitch took it down. The shooter considered streaming to Facebook but decided against it because he believed viewers had to be logged in to watch live streams.
Facebook has also been the unwitting venue for mass shootings that weren't caught by algorithms in real time. The same year as the Halle synagogue attack, an Islamophobic terrorist killed 50 people at two mosques in Christchurch, New Zealand, livestreaming the massacre for 17 minutes. To date, at least three perpetrators of mass shootings, including the Buffalo shooter, are reported to have drawn inspiration from the livestream of the Christchurch attack.
The day after the Christchurch shootings, Facebook reported it had taken down 1.5 million videos of the attack, 1.2 million of which it blocked at upload. Of course, that raises the question of why Facebook couldn't automatically detect the other 300,000, a 20% failure rate.
Given how easy we found it to locate videos of the Buffalo shooting on Facebook, this problem will clearly take some time to sort out.