Hey, UK! Here’s How to ‘Opt Out’ of Meta Using Your Facebook and Instagram Data for AI Training

The social networking giant has taken the next step and begun to notify local users that it will soon start helping itself to their information again, after Meta recently reignited contentious plans to exploit the public posts of U.K. Facebook and Instagram users as AI training fodder.

The bad news is that the opt-out process Meta has come up with for this data-for-AI grab is almost as onerous as it was the first time around.

Read on for a breakdown of the latest changes and details on how to object…

'New AI features for you…'
The company last week began informing users of its plan to make another data scrape but, as before, the notice about Meta's intentions for their data can easily get lost in the tide of friend requests, group updates and other messages. You know how, when Facebook asks you to vote in an election, that message goes right at the top of the feed? This notice is not given the same prominence.

The wording of the notice also implies users have no choice: Meta simply touts "new AI features for you" and says users can find out "how we use your information," rather than clearly letting people know they have the option to object to the processing.

Moreover, even if the user finds the notice, the objection process is not easily accessible: they have to click through several times and scroll just to lodge an objection. Meta also asserts that whether to honor an objection is at its discretion, which may make users less inclined to take the trouble of filing one.

'Legitimate interest'
Meta has been appropriating user-generated content in many markets to train its AIs for a while now. But Europe's robust data protection framework, namely the GDPR, has created problems for the social networking giant, and other tech giants, seeking to do the same around the region.

Meta's argument is that it needs local user-generated content, including public social media posts, comments, interactions, photos and more, to help improve its large language models, and it claims that such access will help its AI better reflect the diversity of the European population. However, the GDPR requires that it have a valid legal basis for processing people's information in order to train AIs.

The European Union and UK regulators said last month that the people whose data would be used have to actively agree rather than be asked to opt out of this new use. Meta was forced in June to halt its plans to train its AIs on Europeans' data.

Meta says it is relying on a GDPR provision called "legitimate interests" (LI), which the company claims excuses it from having to secure people's consent first. However, the same legal basis it used to process personal data for its micro-targeted advertising business was struck down by the Court of Justice of the European Union in a July 2023 ruling. Privacy experts say LI is an equally inappropriate basis for Meta to rely on when scooping up people's data to train AIs.

Because the U.K. now sits outside the EU's jurisdiction, Meta has, notwithstanding all this, pressed ahead with training its AIs on U.K. users' data, making only small changes to the opt-out process provided to local users. It has done so even though the U.K.'s domestic data protection law remains based on the EU's GDPR. Meta is not, for now, processing EU users' data for AI training purposes.

Objection, your honour
The single biggest point of contention for U.K. users is that Meta has not made it easy to object to having their posts turned into AI training fodder.

To be fair, Meta's new opt-out mechanism requires a few fewer clicks than the one that originally provoked the U.K.'s ICO to protest, and there is less corporate jargon for consumers to wade through than before. But opting out remains a laborious process, far more so than it needs to be.

The broader problem remains the same: Meta is offering users an opt-out, not a genuinely free choice over whether their data is used for AI training. If it were a free choice, users would have to affirmatively "opt in" before Meta could use their information, and that's still not the case here. Unless the user objects, Meta will use their information to train its AI (and that's assuming it does honor objections).

So how do you object? Once the user clicks on Meta's notification (assuming they see it), they're taken to a page that informs them of Meta's plans and tells them they have the "right to object" to this use of their information.

"If your objection is honored, from that point forward, we won't use your public information from Facebook and Instagram to develop and improve generative AI models for our AI at Meta features and experiences," the notice reads.

Should the user want to object, clicking the hyperlinked word "object" takes them to a form to complete.

The form is pre-filled with the email address tied to the user's account. One notable difference from Meta's last attempt at an opt-out is that the box asking the user to tell Meta how its data processing affects them is now described as "optional," whereas when Meta tried to push this through months ago, the user was forced to write something.

Despite a few tweaks, the redesigned process Meta has come up with still falls short of a strict opt-out: although Meta publicly claims it will honor every objection, the wording throughout the process states that doing so is at Meta's discretion.

Asked about this, Meta policy communications manager Matt Pollard said via email that the "if your objection is honored" language stems from the form requiring an account holder to provide an email address connected to their account.

However, the user must be logged in to their Facebook account for the form to be submitted, and the email address field is already pre-filled with the user's linked address, so it is unclear how an invalid email address could be submitted unless the user manually edits the one that's already there.

"There is no confusion here in any way, it's very clear: we will respect all objection forms received," Pollard added.

However, based on our testing, a valid email address isn't even necessary to opt out successfully: any random string of letters can go in the email address field, and Meta will presumably still honor the request. The company clarified that the email address is really only there in case the user would like "a receipt" for their objection, though the field is still marked as required.

So sue me.
'Unlawful processing'?
In the wake of Meta's revised notice procedure, some legal commentators argued that it may not be compatible with parts of the GDPR. Dr. Jennifer Cobbe, assistant professor in law and technology at Queens' College, Cambridge, went so far as to call it "unlawful processing."

Though, she added, "the threat of a complaint to a regulator as toothless as the ICO is unlikely to make any company change their mind."

One legal issue she highlights is that under the U.K.'s GDPR, so-called "special category data" requires extra protection because of its sensitivity. This is significant, as sensitive characteristics — such as a person's racial or ethnic origin, political opinions, beliefs, health information, sexual orientation, and more — could easily be conveyed publicly to friends on Facebook. And Article 9 of GDPR explicitly states that the data subject (i.e., a Facebook user) must give explicit consent for special category data to be processed—which means it should be opt-in.

So, while Meta presses ahead with its plans to train on people's data in the U.K., this time under the banner of having a "legitimate interest" in capturing that data, it may find things more difficult if users decide to lodge formal complaints with the regulator.

Asked whether Meta's new approach to processing people's data for AI passes muster, the ICO referred TechCrunch to its earlier statement, issued three weeks ago. In it, Stephen Almond, its executive director for regulatory risk, said it would "monitor the situation as Meta moves to inform UK users and commence processing in the coming weeks." So if enough users raise a stink, the ICO could be forced to act.

Worth noting: Almond also said the ICO had not approved Meta's approach, and that "it is for Meta to ensure and demonstrate ongoing compliance."
