LinkedIn Faces User Concerns Over Default Data Usage for AI Training Without Explicit Consent

LinkedIn users in the U.S., India, and other regions have raised concerns about the platform’s use of their personal data to train generative AI models without prior consent. The issue came to light when users discovered a setting labeled ‘Data for Generative AI Improvement’ within their account settings. The setting, which grants LinkedIn and its affiliates permission to use users’ personal data and content for AI model training, was enabled by default.

LinkedIn explained that the AI models would be used to enhance content creation on the platform. Users can, however, manually disable the setting to opt out of having their data used for AI training.

Many users voiced their discomfort on social media, criticizing the platform for activating the feature without their consent and raising concerns about potential plagiarism risks arising from how their data might be used.

It remains uncertain whether EU privacy regulations will block LinkedIn from using customer data within the region for AI training.

LinkedIn, acquired by Microsoft in 2016 for $26.2 billion, is also facing legal challenges regarding the alleged use of individuals’ data for training AI systems.
