LinkedIn suspends training of AI models using UK user data

The UK’s data protection regulator said it had raised ‘concerns’ with the platform over the issue.

The professional networking platform said it had ‘suspended’ AI model training based on data from UK users (Dominic Lipinski/PA)

LinkedIn has paused training generative AI models using data from UK users after engaging with the UK’s data protection regulator.

The Information Commissioner’s Office (ICO) said on Friday that it had received confirmation from the professional networking platform that it had “suspended” AI model training based on data from UK users.

On Thursday, it was reported that a policy update by the online giant noted that personal user data, including posts and articles on the site, could be used to train AI models and other tools as well as to better personalise services on LinkedIn.

But in a statement, Stephen Almond, executive director for regulatory risk at the ICO, said the regulator had raised concerns with LinkedIn about this, and that the company had now altered its approach.

“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users,” he said.

“We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.

“In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset.

“We will continue to monitor major developers of generative AI, including Microsoft and LinkedIn, to review the safeguards they have put in place and ensure the information rights of UK users are protected.”

In a statement, a LinkedIn spokesperson said that as well as not enabling AI training in the European Economic Area, Switzerland and the UK, it was making an “opt out” setting available for users elsewhere who did not want their data used in the scheme.

“We believe that our members should have the ability to exercise control over their data, which is why we are making available an opt out setting for training AI models used for content generation in the countries where we do this,” the spokesperson said.

“We’ve always used some form of automation in LinkedIn products, and we’ve always been clear that users have the choice about how their data is used.

“The reality of where we’re at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity.

“At the end of the day, people want that edge in their careers and what our gen-AI services do is help give them that assist.

“At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland and the United Kingdom.

“We welcome the opportunity to continue our constructive engagement with the ICO.”

Last week, social media giant Meta confirmed it would begin the training of its AI models using UK user data following a consultation with the ICO on the matter.

That announcement followed a similar stand-off with the ICO on the issue, but Meta said it had “engaged positively” with the regulator and had now been given the green light for the scheme.

As part of its programme, the Facebook and Instagram parent firm said it would not use people’s private messages with friends and family to train AI, and that it would not use information from the accounts of UK users under the age of 18.