Meta joins a new initiative to enhance the detection of child abuse content and enforcement against it.

Project Lantern will involve tech platforms collaborating to strengthen their collective efforts in addressing harmful content.

In a bid to keep young people safe on its apps, Meta today announced that it will be a founding member of a new initiative, "Project Lantern," through which it and other online platforms will collaborate to detect and address incidents of child abuse.

Spearheaded by the Tech Coalition, Project Lantern will enable cross-platform data sharing, so that predators who are detected on one app can't simply halt their activity there and begin again elsewhere.
Meta says:

"Predators don't limit their attempts to harm children to individual platforms. They use multiple apps and websites, and adapt their tactics across them all in an effort to avoid detection. As soon as a predator is discovered and removed from a site for breaking its rules, they may head to one of the many other apps or websites they use to target children."

Project Lantern, which counts Discord, Google, Mega, Quora, Roblox, Snap, and Twitch among its participating partners, will serve as a centralized platform for reporting and sharing the information used to stamp out this activity.

Through the Lantern program, tech platforms will be able to share a wide variety of signals about accounts and behaviors that violate their child safety policies. Lantern participants can then use that information to conduct investigations on their own platforms, take action, and feed their own findings back into the Lantern database.
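Lantern's actual data formats and APIs haven't been made public, but as a rough illustration of the kind of signal exchange described above, a shared record might look something like the following minimal Python sketch. All names and fields here are hypothetical, not Lantern's real schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch only: Lantern's real data model is not public.
# This illustrates the kind of cross-platform "signal" the program
# describes: a policy-violation indicator one platform shares so that
# others can investigate on their own services.

@dataclass
class LanternSignal:
    source_platform: str   # platform that detected the violation
    signal_type: str       # e.g. hashed URL, keyword, account indicator
    value_hash: str        # hashed value, so raw user data isn't exchanged
    policy_violated: str   # the child safety policy the behavior breached
    shared_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def matches_local_data(signal: LanternSignal, local_hashes: set[str]) -> bool:
    # A participating platform would match incoming signals against its
    # own data, investigate any hits, and feed confirmed findings back
    # into the shared database.
    return signal.value_hash in local_hashes
```

The key point implied by the program's description is that platforms would exchange indicators like hashes rather than raw user data, with each platform still running its own investigation before taking action.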

It's another important initiative, potentially huge in impact, and an extension of Meta's broader efforts to collaborate on better detection and removal of harmful online content, including coordinated misinformation.

At the same time, however, Meta's internal processes for protecting teen users have once again been called into question.

This week, former Meta engineer Arturo Béjar testified before a Senate judiciary subcommittee to share his concerns about the dangers young users are exposed to on Facebook and Instagram.

"The amount of harmful experiences that 13- to 15-year olds have on social media is really significant.". For example, if you knew, to the degree that these are what Meta saw in bullying and harassment or unwanted sexual advances, I don't think you would send your kids to school. Speaking from a position of direct experience, Béjar worked on cyberbullying countermeasures for Meta between 2009 and 2015, during which his own teenage daughter had unwelcome sexual advances and harassment on IG.

It's time, he says, that the public and parents understand the true extent of the damage these 'products' have been causing, and that young users have the tools to report and suppress online abuse.

Béjar argued for tighter regulation of social platforms in the interest of teen safety, claiming that Meta's executives are "fully aware" of these concerns, but have avoided taking action because of the potential impact on user growth, among other effects.

Meta may have to act soon, though, with the U.S. Congress looking to pass new legislation that would force social media platforms to give parents more tools to keep their kids safe online.

Meta already offers a wide array of tools along these lines, but Béjar says the company could do better on app design, and on making those tools more accessible in-stream.

That's another element Meta will have to address, and it could, in some ways, tie into the new Lantern project, which should provide more insight into how such incidents occur across platforms, and which approaches work best to curb them.

But the bottom line is that this remains a significant issue for all social apps, and as such, any progress toward better detection and enforcement is a worthwhile investment.
