LinkedIn joins Meta and WeTransfer in leveraging European user data for AI.
From November 3, 2025, the platform will, by default, use your employees’ profiles, posts and activity to train its generative AI models. A problematic “default consent” that requires immediate action on your part.
A decision in line with a worrying trend
LinkedIn is following in the footsteps of Meta (April 2025) and WeTransfer in exploiting European data for AI. After limiting this practice to the US, the platform is now extending it to Europe, Switzerland and the UK with a particularly problematic “default consent”.
Data covered by this collection:
- Full profiles: your employees’ information, skills, experience and connections
- Public content: posts, articles and comments from your team and company
- Professional activity: job searches, interactions and browsing history
The dubious legal justification: LinkedIn invokes the GDPR, claiming that this processing “does not create an imbalance to the detriment of users’ rights”. Legal ground that the CNIL itself describes as “subject to interpretation”, resting on a fragile balance between corporate profit and respect for privacy.
Risks for your company
1. Intellectual property and trade secrets
Your teams regularly share strategic insights, industry analyses or innovations on LinkedIn. This content can now feed the platform’s AI models, potentially benefiting your competitors.
2. Regulatory compliance and irreversibility
Critical point: LinkedIn explicitly states that opting out is not retroactive. Data already incorporated into its AI models will remain in use, even after you deactivate the setting.
In Switzerland, since the revised Data Protection Act (DPA) came into force in September 2023, you must be able to justify and control the use of your employees’ personal data. This irreversibility creates a major compliance problem.
3. Security and confidentiality
Limited guarantee: LinkedIn states that private messages are not affected, but this promise remains subject to future changes to the terms of use.
The AI models may be trained by LinkedIn or by “another vendor”, potentially including Microsoft’s Azure OpenAI service. Your corporate data could therefore pass through third parties outside your control.
Immediate actions (before November 3)
For your organization
- Emergency audit of LinkedIn profiles: inventory all your employees’ accounts and identify sensitive information already shared.
- Internal communication: inform your teams immediately of this development and the deactivation procedure.
- Review your social media policy: integrate these new risks into your external communication guidelines.
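The audit step above can be partially automated. The sketch below is purely illustrative: it assumes a hypothetical CSV inventory of employee LinkedIn accounts (`name`, `profile_url`, `recent_topics` columns) that your HR or IT team would compile, and a keyword list you define yourself; neither exists in LinkedIn’s tooling.

```python
import csv
import io

# Hypothetical inventory: in practice, export your own HR/IT list of
# employee LinkedIn accounts with the topics of their recent posts.
SAMPLE_CSV = """name,profile_url,recent_topics
Alice Doe,https://linkedin.com/in/alicedoe,product roadmap;hiring
Bob Roe,https://linkedin.com/in/bobroe,conference recap
"""

# Keywords your organization considers sensitive (adjust to your context).
SENSITIVE_KEYWORDS = {"roadmap", "pricing", "merger", "prototype"}

def flag_sensitive_accounts(csv_text, keywords):
    """Return (name, matched keywords) for accounts whose recent
    post topics mention at least one sensitive keyword."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        topics = row["recent_topics"].lower()
        hits = sorted(k for k in keywords if k in topics)
        if hits:
            flagged.append((row["name"], hits))
    return flagged

if __name__ == "__main__":
    for name, hits in flag_sensitive_accounts(SAMPLE_CSV, SENSITIVE_KEYWORDS):
        print(f"{name}: review posts mentioning {', '.join(hits)}")
```

Such a script only triages which profiles need a human review; it does not replace the manual audit of what each flagged account has actually published.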
Deactivation procedure (to be completed by November 3)
Precise steps according to LinkedIn documentation:
- Access the settings: click your profile photo > “Settings & Privacy”
- Navigate: select the “Data privacy” tab
- Open: click “Data for Generative AI Improvement”
- Deactivate: switch off the “Use my data for training content creation AI models” toggle, which is on by default
Important: this deactivation will not affect your ability to use future LinkedIn AI tools (writing assistance, content suggestions).
Complementary measure
For even greater protection, you can also submit LinkedIn’s data processing objection form, accessible via the platform’s customer support.
Strategic perspective: beyond LinkedIn
This LinkedIn decision is part of a broader wave of European data exploitation by American tech giants. After Meta in April 2025 and the WeTransfer scandal, we are witnessing a normalization of these practices.
The issue of “default consent” is fundamentally transforming the user–platform relationship. Instead of asking permission, these companies impose their terms and leave it to users to object, often without understanding the implications.
Strategic questions for your management
- How can you maintain your LinkedIn presence while protecting your information assets?
- How do you govern the external platforms used by your teams?
- How can you anticipate the next moves of these technology giants?
Long-term recommendations
- Data governance: develop a structured approach to data management on external platforms.
- Regulatory watch: keep abreast of changes in the practices of major technology platforms.
- Alternative solutions: evaluate professional platforms that respect confidentiality.
- Ongoing training: integrate these issues into your IT security awareness programs.
The race towards generative AI is fundamentally transforming the digital landscape. As an executive, anticipating these developments and protecting your organization’s information assets is becoming a decisive competitive advantage.
This article is based on official LinkedIn announcements and the analysis of data protection experts. For a personalized assessment of your situation, consult your legal and IT teams.

