Meta’s Data Privacy Tactics Challenged In EU Court Ruling

Meta has faced a privacy challenge in the European Union’s top court, which has ruled against its data retention policies. The court held that social media platforms such as Facebook may not continue exploiting people’s personal information for targeted advertising without limit.

The decision could significantly affect how Meta and other ad-supported social networks operating in the region do business. Under the data minimisation principle of the EU’s General Data Protection Regulation (GDPR), there must be limits on how long personal data can be retained.

If Meta breaches these rules, it could face fines amounting to billions of dollars, up to 4% of the company’s annual global revenue. The GDPR’s Article 5(1)(c) mandates that data processing be restricted to what is absolutely required.

This implies that personal data collected on the platform or from third-party sources cannot be aggregated, analysed, or used for targeted advertising unless there are clear, time-bound restrictions.
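As an illustration only, a “clear, time-bound restriction” of the kind the court describes can be thought of as a retention filter that excludes records older than a configured window before they reach an ad-targeting pipeline. The sketch below is hypothetical and does not reflect Meta’s actual systems; the 90-day window and all names are assumptions for the example:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; a real limit would be set by policy and legal review.
RETENTION_WINDOW = timedelta(days=90)

def filter_for_ad_targeting(records, now=None):
    """Return only records recent enough to be used for ad targeting.

    Each record is a dict with a 'collected_at' datetime. Anything older
    than RETENTION_WINDOW is excluded (in a real system it would be
    scheduled for deletion, not merely filtered out).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION_WINDOW
    return [r for r in records if r["collected_at"] >= cutoff]

now = datetime(2024, 10, 1, tzinfo=timezone.utc)
records = [
    {"user": "a", "collected_at": datetime(2024, 9, 15, tzinfo=timezone.utc)},
    {"user": "b", "collected_at": datetime(2023, 1, 1, tzinfo=timezone.utc)},
]
print([r["user"] for r in filter_for_ad_targeting(records, now=now)])
```

The point of the sketch is the cutoff itself: without a window like this, every record ever collected remains eligible for targeting, which is exactly what the ruling prohibits.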

The Court of Justice of the European Union (CJEU) issued a statement:

An online social network such as Facebook cannot use all of the personal data obtained for the purposes of targeted advertising, without restriction as to time and without distinction as to type of data.

The CJEU decision is consistent with an earlier ruling from April that upheld limits on the retention of personal data for ad targeting. According to Matt Pollard, a spokesperson for Meta, the company is awaiting the full decision.

Meta’s business model is based on tracking and profiling people across its platforms and the wider internet, using cookies, pixels, and plug-ins to serve micro-targeted advertisements. Any limits on profiling in such a strategically important market could therefore affect revenue.

A Significant Setback for Meta’s Data Privacy Practices

Max Schrems, a privacy advocate, alleged that Facebook misused his personal information to target advertisements based on his sexual orientation. The CJEU has now ruled in Schrems’ favor.

Schrems reported that Facebook continued to show him advertisements targeted at LGBTQ+ individuals, even though he had never disclosed his sexual orientation on the platform. The case was initially filed with Austrian courts in 2020.

Under EU legislation, information about an individual’s sexual orientation, ethnic background, or medical history is considered sensitive and is subject to strict rules on its use. Although the outcome was expected, Katharina Raabe-Stuppnig, Schrems’ attorney, expressed satisfaction with the decision.

In 2022, Vice’s Motherboard obtained a leaked internal memo in which Meta developers compared the company’s data harvesting practices to pouring bottles of ink into a large lake.

The document pointed out that Meta’s collection of personal data lacked sufficient controls, making it difficult to separate different categories of data or enforce data retention limits.

Although Meta responded that the document “does not reflect our extensive processes and controls to comply with privacy regulations,” it is unclear how Meta would modify its data retention policies in the wake of the CJEU decision. The legislation does, however, make it clear that boundaries must be upheld.

Although the EU court’s ruling is not binding on UK courts, Will Richmond-Coggan, a partner at law firm Freeths, said it will nonetheless have a significant impact.

Richmond-Coggan continued:

Meta has suffered a serious challenge to its preferred business model of collecting, aggregating and leveraging substantial data troves in respect of as many individuals as possible, in order to produce rich insights and deep targeting of personalised advertising.

The implications of this aspect of the CJEU decision may extend beyond social media. Tech giants such as Meta have recently been seeking new uses for user data, including building AI models.

Some AI engineers also use web scraping to gather huge amounts of data for training large language models (LLMs). In both cases, using people’s data for a new purpose, such as AI training, may violate the GDPR’s purpose limitation principle.
