How to Opt Out of LinkedIn Using Your Data for Its Generative AI Models
LinkedIn will use your data to help train its generative AI models unless you opt out.

To the surprise of many, LinkedIn updated its Terms of Service to allow the company to use posts and content from accounts within the United States to train its proprietary generative AI models. By default, U.S. accounts are opted in to LinkedIn using your data to train its AI models, forcing users to opt out manually. Worse, according to an investigative report by 404 Media, LinkedIn may have scraped massive amounts of user data before updating its policies – using your data before you even had a choice.

This change and automatic opt-in appear to be limited to United States accounts, likely because of the privacy and regulatory laws that the European Union and other jurisdictions enforce.

How to Opt Out of LinkedIn Using Your Personal Data

Not surprisingly, the news has led to enormous public backlash, driving many to opt out of the policy as fast as possible. But LinkedIn hasn’t made it easy, burying the opt-out within its user settings.

LinkedIn lists the toggle within its “Data privacy” section under account settings, titled “Data for Generative AI Improvement.”

Every U.S. user who wants to disable this setting must do so manually. Follow these steps:

  1. Log in to LinkedIn on a desktop or mobile device
  2. Desktop: Click “Me > Settings and Privacy” in the top navigation bar to access Account Settings.

    Mobile: Tap your profile image in the top left corner, then tap “Settings” in the bottom left.
  3. Desktop: Click “Data privacy” in the left-side menu, then under “How LinkedIn uses your data,” click “Data for Generative AI Improvement.” Toggle the setting to No to opt out.

    Mobile: Under Settings, tap “Data privacy,” then under “How LinkedIn uses your data,” tap “Data for Generative AI Improvement.” Toggle the setting to No to opt out.
Cybersecurity researcher and ethical hacker Rachel Tobac demonstrates how to opt out of the LinkedIn “Data for AI Improvement” setting, which allows the company to use your data to train its AI and generative AI models. (source: X)

What data LinkedIn is using to train its AI models

According to LinkedIn’s Data Rights Q&A on its website, the company is “using its own data” to train its AI models. The data helps suggest new content for its users to write, share, or recommend.

“As with most features on LinkedIn, when you engage with our platform we collect and use (or process) data about your use of the platform, including personal data,” the Q&A reads.

LinkedIn’s policy cites the following data points as used to train its models:

  • How frequently you access LinkedIn
  • What languages you use
  • What content you share
  • What content you write
  • Usage of LinkedIn’s generative AI models or other AI capabilities
  • Any feedback you may have provided to our teams

It also states that generative AI models on its platform “may be trained by another provider,” which likely refers to its corporate parent, Microsoft, and Microsoft’s partnership with OpenAI.

A LinkedIn spokesperson told TechCrunch that the company uses “privacy enhancing techniques, including redacting and removing information, to limit the personal information contained in datasets used for generative AI training.”

However, a company that defaults to harvesting and using your data for training a generative AI model doesn’t sound privacy-conscious.

Why you should care and disable LinkedIn’s use of your data

While it’s expected that tech companies will use data generated by their users to improve their products and services, the extent to which LinkedIn leverages this data – and defaults users to opting in – is egregious.

As the adage goes, if the product is free, you are the product. Your data is worth more than the company could ever charge for access; by using the platform, you’re effectively donating your data to the company. Still, most users expect the company to adhere to reasonable privacy and data security practices.

LinkedIn may be worth avoiding if you’re a creator or seeking to protect your creative work – even after opting out of its generative AI setting. Just ask Rachel Tobac, a cybersecurity researcher and ethical hacker, who found that LinkedIn’s generative AI tool had scraped her own content about the privacy implications of LinkedIn.

Rachel Tobac demonstrates how LinkedIn’s generative AI tool scraped her content from an earlier post, advising users on how and why they should opt out of sharing data in LinkedIn’s Data for Generative AI setting. (source: X)

Why LinkedIn wants to use your data

Generative AI, and its need for quality training data, is putting new pressure on companies to bend their policies in search of growth. At least within the United States, this is a nebulous legal area: generative AI is nascent, and data scraping is rampant.

Quality data at scale is the holy grail of training and developing any AI/ML capability, and companies increasingly find creative ways to tap new data sources to build the next killer AI feature. As massive generative AI large language models continue to vacuum up as much publicly available data as possible – within legal boundaries or not – users should remain cognizant of this trend and ensure they can opt out.

It remains to be seen what the long-term fallout will be for LinkedIn and this policy. It may lead to class action lawsuits, or it may simply pass in time. Regardless, LinkedIn has a long history of being weaponized and exploited for identity theft and fraud. This may be a tipping point for the masses.


Discover more from Cybersecurity Careers Blog
