Meta will use AI to spot users who lie about their age on its apps, another move by the company to curb the potential harms that young people face on Facebook and Instagram.
Meta is developing an AI-based system that can detect young users who have lied about their age, according to a new report from Bloomberg.
According to Bloomberg:
Using a proprietary software tool it calls an "adult classifier," Meta will sort users into two broad ranges, estimating whether a person is over or under 18, based on information available from their account. The software can scan a user's profile, review their followers list, and even look at "happy birthday" posts from friends to predict the user's age.
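To make the idea concrete, here's a minimal, purely hypothetical sketch of how account signals like a stated age and friends' birthday posts could feed a simple over/under-18 decision. Bloomberg describes a machine-learning classifier; the rule-based heuristic, field names, and thresholds below are assumptions for illustration only, not Meta's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Optional
import re

# Hypothetical account signals an age classifier might look at.
@dataclass
class AccountSignals:
    stated_age: Optional[int]                                  # age entered at sign-up
    follower_count: int                                        # size of the followers list
    friend_comments: list[str] = field(default_factory=list)   # e.g. birthday wishes

# Matches phrases like "happy 14th birthday" in friends' posts.
BIRTHDAY_AGE_PATTERN = re.compile(r"happy\s+(\d{1,2})(st|nd|rd|th)\s+birthday", re.IGNORECASE)

def infer_age_from_comments(comments: list[str]) -> Optional[int]:
    """Pull an age out of 'happy Nth birthday' style posts, if any."""
    for comment in comments:
        match = BIRTHDAY_AGE_PATTERN.search(comment)
        if match:
            return int(match.group(1))
    return None

def classify_adult(signals: AccountSignals) -> str:
    """Return 'over_18' or 'under_18' from the strongest available signal."""
    inferred = infer_age_from_comments(signals.friend_comments)
    age = inferred if inferred is not None else signals.stated_age
    if age is None:
        # No usable signal: a real system would fall back on many more features.
        return "under_18"
    return "over_18" if age >= 18 else "under_18"

# Example: a friend's birthday post contradicts the age the user claimed.
signals = AccountSignals(stated_age=21, follower_count=150,
                         friend_comments=["Happy 14th birthday!!"])
print(classify_adult(signals))  # -> "under_18"
```

The point of the sketch is simply that signals a user doesn't control, like friends' posts, can override a self-reported age; the real system would weigh far more features and do so probabilistically.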
Meta flagged this new technology in September, noting that it will be rolled out for live testing on Instagram early in the new year.
That will give Meta yet another tool in its efforts to protect young users, amid rising scrutiny of its apps and their harmful impacts on teens.
Indeed, many regions are now debating age regulations for social media access as a way to better protect young children from such risks. Australia, Denmark, the U.S., and the U.K., among others, are weighing age limits that would also apply to users of Meta's apps.
Accordingly, Meta is looking to do more on this front, and it has also argued that app stores should be responsible for enforcing age restrictions, making it harder for young people to open an account in the first place.
That proposal appears to have little hope of gaining traction, at least for now, which shifts the pressure back onto the platforms themselves. New systems like this one, then, offer some reassurance through improved detection.
Instagram already has other controls to stop children from signing up as adults, including age verification, which asks certain users to provide a government ID or have friends or parents vouch for their age. It also has advanced protections for teen accounts to better shield young users from harmful exposure in the app.
So, Meta is taking action, and this detection AI would be another step forward on that front.
Of course, kids will find ways around these measures as well, which is partly why there's some sense to Meta's case that app stores are best placed to implement restrictions of this kind.
But until that happens, we'll have to make do with whatever steps the platforms themselves can take.