As LinkedIn continues to integrate generative AI features, from post creation to job applications, it has updated both its User Agreement and Privacy Policy to clarify how your data is used to train its AI models.
Spoiler alert: LinkedIn is using all publicly shared content on the platform to improve its AI tools.
According to LinkedIn:
“In our Privacy Policy, we’ve added wording to specify how we use the information you provide to develop LinkedIn and its affiliates' products and services, including training AI models for content generation (referred to as 'generative AI') and for security and safety measures.”
The relevant section of the policy now states:
“We may use your personal data to enhance, develop, and deliver products and services, train artificial intelligence (AI) models, personalize our offerings, and derive insights with the aid of AI, automated systems, and inferences, so our services can be more relevant and useful to you and others.”
In its User Agreement, LinkedIn notes that by using the app, you agree to the terms of its Privacy Policy, which includes these data-use clauses.
Notably, LinkedIn does not explicitly exempt direct messages (DMs) from this agreement, so it's conceivable that LinkedIn could use information shared in messages for AI training and ad targeting, which may concern some users. Meta, by contrast, has consistently stated that it does not use private messages for AI training, nor data from the accounts of users under 18.
LinkedIn offers no similar assurances in its legal documents, a notable omission.
On the plus side, LinkedIn has introduced an option to opt out of AI training, so users can disable this data use if they prefer not to have their information used this way.
However, as with most privacy settings, the majority of users will never change the default, which means LinkedIn is effectively opting most of its users into this new agreement, except in regions where AI training permissions are still being negotiated.
This includes the EU, where data from European LinkedIn users is currently excluded from AI training altogether, as well as Switzerland, which is reviewing the terms of such agreements.
Meta, for its part, is still working through regional requirements on AI training permissions: it recently received approval to use data from UK users for this purpose, while X has introduced an AI training opt-out to comply with regional regulations.
Essentially, if you haven't specifically told a social platform that you don't want your personal information used for AI training, it's likely being used for exactly that, and your posts and updates could be feeding a large language model somewhere.
Is this a significant issue?
Probably not, since the information is aggregated and heavily filtered, making it nearly unrecognizable in any output. Still, feeding personal details into large language models (LLMs) could lead to problematic outputs, depending on what you're sharing online.
Regardless, users should have the right to choose, which LinkedIn has now implemented and other platforms are adopting, even if these options are arriving retroactively, after many platforms have already used your historical information without explicit consent.
This raises a larger concern: even if you opt out now, many of us have been active on social media for over a decade, and much of that information has likely already been incorporated into various AI models.
So, does opting out now truly make a difference?
That ultimately depends on how you feel about the practice and what you share online. Either way, more apps are starting to offer ways to disable this kind of data sharing, which is a positive development, even if it arrives later than it should have.