In short, LinkedIn may be training AI models on user data without updating its terms of service.
An opt-out toggle in LinkedIn's settings discloses that the professional network scrapes personal data to train "content creation AI models." The toggle isn't new, but, as first reported by 404 Media, the data use was not immediately reflected in LinkedIn's refreshed privacy policy.
Under normal circumstances, a terms-of-service update happens well before a significant change of the kind at issue in this case, where user data is used for a new purpose. The advance notice is supposed to give users the chance to adjust their accounts, or to leave the service if they don't like the changes. Not so here, it appears.
So what models is LinkedIn training? Its own, the company says in a Q&A, including models for writing suggestions and post recommendations. But LinkedIn also says that generative AI models on its platform may be trained by "another provider," like its corporate parent Microsoft.
"As with most features on LinkedIn, when you use our service we collect and process data about your use of the service, which may include personal data," the Q&A reads. "This may include your use of generative AI (AI models used to create content) or other AI features, your posts and articles, how often you use LinkedIn, your preference for which language you read, and any feedback that you may have submitted to our teams. We use this data, in compliance with our privacy policy, to improve or develop the LinkedIn services."
LinkedIn told TechCrunch earlier that it used "privacy enhancing techniques, including redacting and removing information, to limit the amount of personal information contained in datasets used for training generative AI."
To opt out of LinkedIn's data scraping, navigate to the "Data Privacy" section of the LinkedIn settings menu on desktop, scroll down to "Data for Generative AI improvement," and toggle off the "Use my data for training content creation AI models" option. You can also attempt a broader opt-out with this form, though LinkedIn notes that opting out will not affect training that has already taken place.
The nonprofit Open Rights Group (ORG) called Thursday for the Information Commissioner's Office (ICO), the U.K.'s independent regulator for data protection rights, to investigate LinkedIn and other social networks that automatically train on user data. Earlier this week, Meta announced it was resuming plans to scrape user data for AI training after working with the ICO to simplify the opt-out process.
"LinkedIn is the latest social media company discovered to be processing our data without asking for consent," according to the statement from ORG's legal and policy officer, Mariano delli Santi. "The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn't only legally mandated but a common-sense requirement."
Ireland's Data Protection Commission (DPC), the supervisory authority tasked with monitoring compliance with the GDPR, the EU's overarching privacy framework, told TechCrunch that LinkedIn had informed it last week that clarifications to its global privacy policy would be released today.
"LinkedIn said it's made clear to us that the policy would include an option for its members to opt out from the use of their data to train content-generating AI models," said a representative for the DPC. The opt-out is not available to EU/EEA members because LinkedIn is not currently training nor fine-tuning these models to use any EU/EEA member data.
We have reached out to LinkedIn for comment and will update this story if the company responds.
Amid the demand for more data to train generative AI models, an ever-growing number of platforms have begun to repurpose, or otherwise reuse, the vast reservoirs of user-generated content at their disposal. Some are monetizing this material: Tumblr owner Automattic, Photobucket, Reddit, and Stack Overflow are among the networks licensing data to AI model developers.
Not all of them have made it easy to opt out. When Stack Overflow announced that it would begin licensing content, several users deleted their posts in protest—only to see those posts restored and their accounts suspended.