Facebook owner Meta said Monday that it is expanding tests of facial recognition as an anti-scam measure, to fight celebrity scam ads and more broadly.
In a blog post, Meta's VP of content policy, Monika Bickert, said some of the tests are designed to make its existing anti-scam measures more effective, such as the automated scans (using machine learning classifiers) run as part of its ad review system. The aim is to make it harder for fraudsters to fly under Meta's radar and dupe Facebook and Instagram users into clicking on bogus ads.
Thieves often attempt to use images of public figures, including content creators or celebrities, to bait people into engaging with ads that lead to scam websites where they are asked to share personal information or send money, Bickert explained. This scheme, which has become known as "celeb-bait," breaks Meta's policies and is bad for the people who use its products.
"Of course, celebrities are featured in many legitimate ads. But because celeb-bait ads are often designed to look real, they're not always easy to detect."
The tests appear to rely on facial recognition as a backstop for checking ads flagged as suspicious by Meta's existing systems when they include the image of a public figure at risk of so-called celeb-bait.
"We will try to use facial recognition technology to compare faces in the ad against the public figure's Facebook and Instagram profile pictures," Bickert wrote. "If we confirm a match and that the ad is a scam, we'll block it."
Meta said the feature is used purely for fighting scam ads and nothing else. "We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don't use it for any other purpose," she said.
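For a sense of the mechanics being described, here is a minimal sketch of what such a one-time comparison could look like in code. Everything here is an assumption for illustration: the helper names (embed_face, scam_classifier), the cosine-similarity comparison, and the threshold value are hypothetical stand-ins, not Meta's actual pipeline.

```python
# Minimal sketch (all names and thresholds are hypothetical, not Meta's
# actual system): compare faces in a flagged ad against a public
# figure's profile pictures, then discard the derived facial data.
from dataclasses import dataclass

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff for declaring a face match


@dataclass
class AdReviewResult:
    is_match: bool   # did a face in the ad match the public figure?
    is_scam: bool    # did a separate classifier judge the ad a scam?

    @property
    def should_block(self) -> bool:
        # Per the described policy, block only when both are true.
        return self.is_match and self.is_scam


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


def review_flagged_ad(ad_image, profile_pictures, embed_face, scam_classifier):
    """embed_face and scam_classifier are stand-ins for real models."""
    ad_embedding = embed_face(ad_image)  # facial data derived from the ad
    reference = [embed_face(p) for p in profile_pictures]
    try:
        is_match = any(
            cosine_similarity(ad_embedding, ref) >= SIMILARITY_THRESHOLD
            for ref in reference
        )
        return AdReviewResult(is_match=is_match, is_scam=scam_classifier(ad_image))
    finally:
        # Mirror the stated policy: delete facial data generated for this
        # one-time comparison, whether or not a match was found.
        del ad_embedding, reference
```

The notable design constraint in Meta's description is that deletion step: the facial data exists only for the duration of a single comparison, match or not.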
The company said early tests of the approach, using "a small group of celebrities and public figures" (it did not say who), have yielded "promising" results in speeding up and improving the efficiency of detecting and taking action against this type of scam.
Meta said it also believes facial recognition would work well to catch deepfake scam ads, where generative AI has been used to create imagery of famous people.
The social network has faced long-standing complaints that it doesn't do enough to stop scammers hijacking famous people's faces to use its ad platform to push scams, such as dubious crypto investments, at unsuspecting users. So it's interesting timing for Meta to be touting facial recognition-based anti-fraud measures for this problem now, when the company is simultaneously trying to scoop up as much user data as it can to train its commercial AI models (as part of the industry-wide scramble to build out generative AI tools).
Meta said that in the next few weeks it will start displaying in-app notifications to a larger group of public figures who have been hit by celeb-bait, informing them they're being enrolled in the system.
"Public figures enrolled in this protection can opt-out in their Accounts Center anytime," added Bickert.
Meta is also testing the use of facial recognition to identify celebrity imposter accounts, i.e., where fraudsters try to impersonate public figures on its services in an attempt to expand their opportunities for fraud. Here, AI is used to match the profile picture of a suspicious account against the public figure's Facebook and Instagram profile pictures.
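If it works anything like the ad check, the imposter-account test could reuse the same kind of comparison, just pointed at a suspicious account's profile picture. The sketch below is again purely illustrative, reusing the hypothetical cosine_similarity helper and threshold from the earlier example.

```python
# Hypothetical variant of the same matching idea, pointed at a
# suspicious account's profile picture instead of an ad image.
# Reuses cosine_similarity and SIMILARITY_THRESHOLD from the sketch above.
def looks_like_imposter(suspect_profile_pic, public_figure_pics, embed_face) -> bool:
    suspect = embed_face(suspect_profile_pic)
    return any(
        cosine_similarity(suspect, embed_face(ref)) >= SIMILARITY_THRESHOLD
        for ref in public_figure_pics
    )
```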
"We're looking forward to testing this, as well as other new approaches, soon," Bickert said.
Video selfies plus AI for account unlocking
Meanwhile, Meta also said it's testing the use of facial recognition applied to video selfies so people can quickly regain access to Facebook or Instagram accounts they've been locked out of as a result of account takeover scams, i.e., if someone has been conned into giving away their password.
This is likely to appeal to users who are convinced of the utility of facial recognition for identity verification. Meta is pitching it as a quicker and easier route to regain account access than uploading an image of a government-issued ID, which is the usual route for unlocking access now.
"Video selfie verification doubles the choices available to individuals seeking to regain access to their accounts, requires just a minute to conduct and represents the least burdensome approach through which people can prove their identity," Bickert noted. "As we are also aware hackers will continue trying to exploit account recovery mechanisms, this new form of verification will, in fact, be harder to exploit than the classic, old-fashioned paper-based form of identification verification."
The method Meta is piloting requires the user to upload a video selfie, which is then processed with facial recognition technology and compared against the profile pictures on the account in question.
According to Meta, the method is similar to the identity verification commonly used to unlock a phone or access other apps, such as Apple's Face ID on the iPhone. Bickert said that as soon as someone uploads a video selfie, it will be encrypted and stored securely; it will never be visible on their profile, to friends, or to other people on Facebook or Instagram; and any facial data generated by the comparison will be deleted immediately, regardless of whether there's a match.
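Pulling those described steps together, a rough sketch of how such a selfie-based recovery check could be structured follows. The helpers (extract_frames, encrypt_blob, embed_face) are hypothetical, and the flow simply mirrors the blog post's claims: encrypt on upload, compare against profile pictures, delete the facial data either way. It again reuses the cosine_similarity helper from the first sketch.

```python
# Rough sketch of the video-selfie recovery flow as described: encrypt
# on upload, compare frames against profile pictures, delete facial
# data either way. extract_frames, encrypt_blob, and embed_face are
# hypothetical helpers; cosine_similarity and SIMILARITY_THRESHOLD
# come from the first sketch.
def verify_video_selfie(selfie_video, profile_pictures,
                        extract_frames, encrypt_blob, embed_face) -> bool:
    encrypted = encrypt_blob(selfie_video)  # stored securely (storage layer not shown)

    frames = extract_frames(selfie_video, max_frames=5)
    selfie_embeddings = [embed_face(f) for f in frames]
    reference_embeddings = [embed_face(p) for p in profile_pictures]
    try:
        # Unlock only if any selfie frame matches any profile picture.
        return any(
            cosine_similarity(s, r) >= SIMILARITY_THRESHOLD
            for s in selfie_embeddings
            for r in reference_embeddings
        )
    finally:
        # Per the stated policy, facial data from the comparison is
        # deleted immediately, match or no match.
        del selfie_embeddings, reference_embeddings
```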
Perhaps one way for Meta to expand its offerings in the digital-identity space is by conditioning users to upload and store a video selfie for ID verification — if, that is, enough users opt in to uploading their biometrics.
No tests are being conducted in the U.K. or in the EU — at least, not yet.
Meta said it is running these facial recognition tests globally. It noted, though, that it is not currently testing in the U.K. or the European Union, where comprehensive data protection regimes apply; in the EU's case, the bloc's data protection framework generally demands explicit consent from the individuals concerned for this kind of use of biometrics for identity verification.
Given that, Meta's tests look like part of a broader PR strategy it has been running in Europe in recent months to press lawmakers there to dilute citizens' privacy protections. This time, the justification it's wielding to push for unfettered data processing for AI isn't a (self-serving) notion of data diversity or lost economic growth, but the far more straightforward aim of blocking scammers.
"We are engaging with the UK regulator, policymakers and other experts while testing continues," Meta spokesman Andrew Devoy told TechCrunch. "We will continue to seek feedback from experts and evolve based on the changes in features.".
On the one hand, using facial recognition for a narrow security purpose might be acceptable to some, and indeed might be possible for Meta to undertake under existing data protection rules. But using people's data to train commercial AI models is an entirely different kettle of fish.