HmbBfDI vs. WhatsApp: an Update

In an order with immediate enforcement, Johannes Caspar, the Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI), has prohibited Facebook Ireland Ltd. from further processing WhatsApp user data in Germany insofar as this is done for Facebook's own purposes. As part of the emergency procedure under Art. 66 GDPR already discussed on this blog, the measure will remain valid for three months in the respective territory. In light of this short time frame, the Commissioner aims to refer the issue to the European Data Protection Board (EDPB) in order to find a solution at the European level.

In recent months, WhatsApp had asked its users to agree to its new terms of use and privacy terms by May 15, 2021. Under the new privacy terms, WhatsApp would receive wide-ranging data processing powers. These cover, among other things, the evaluation of location information, the transfer of personal data to third-party companies including Facebook, and cross-company verification of the account. Furthermore, the companies' legitimate interest in data processing and transfer is invoked in a blanket manner – even with regard to underage users.

After hearing Facebook Ireland Ltd. – and notwithstanding any consent to the terms of use – the HmbBfDI holds that there is no sufficient legal basis justifying this interference with users' rights and freedoms. This is especially true considering that the data transfer provisions are unclear, misleading and contradictory. Nor is the users' consent transparent or voluntary, since users must agree to the new terms in order to continue using WhatsApp.

While this close connection between the two companies was to be expected, many stakeholders find it surprising that WhatsApp and Facebook actually want to expand their data sharing. At the same time, Johannes Caspar is confident that on the basis of the GDPR procedure, he will be able to “safeguard the rights and freedoms of the many millions of users who give their consent to the terms of use throughout Germany. The aim is to prevent disadvantages and damage associated with such a black-box procedure.”

In view of the upcoming elections in Germany, it is to be hoped that – in dialog with the companies – data protection-compliant solutions will be found quickly.

Dario Henri Haux

Anordnung des HmbBfDI: Verbot der Weiterverarbeitung von WhatsApp-Nutzerdaten durch Facebook (11.05.2021): https://datenschutz-hamburg.de/pressemitteilungen/2021/05/2021-05-11-facebook-anordnung

The Italian Competition Authority fines Facebook for misleading practices regarding data, chapter two

Today, the Italian Competition Authority (ICA) imposed a 7 million euro fine on Facebook Ireland Ltd and Facebook Inc for failing to comply with its order to end their unfair practice regarding the use of users' personal data and to publish an appropriate rectification (full text in Italian here).

Indeed, in November 2018 the ICA found the information given by Facebook to users at the moment of account creation misleading, due to the lack of adequate disclosure about the economic value of their personal data, which is monetized through the supply of targeted advertising (case PS1112). Instead, the ICA found, Facebook emphasized the free nature of the service, inducing users into a transaction they would not otherwise have entered. The misleading nature of this practice was also confirmed by the Italian administrative judge (T.A.R. Lazio, decisions nn. 260/2020 and 261/2020).

The ICA further found that Facebook, despite having removed the claim about the free-of-charge nature of the service, still fails to provide users with clear information on the role of data as a means of payment in the exchange.

Alessandro Cavalieri

Italian Competition Authority, decision of 17 February 2021, ICA v. Facebook Ireland Ltd and Facebook Inc

The Court of Rome orders a mother to stop publishing content relating to her underage son on social networks

With the countdown to the GDPR almost at its end, this interim order from the Court of Rome (full Italian text here) has been widely debated in Italy in recent days.

These are the facts, in sum:

  • During divorce proceedings, a mother had been publishing on social networks many photos, videos and posts relating to the lawsuit, including information about her son – a 16-year-old boy;
  • The son was deeply distressed by this situation. Details about him were constantly disseminated on social networks by his mother, and his story – in full detail – became known to all his schoolmates. He started suffering serious psychological effects; in particular, he was afraid of being discriminated against and considered “different” by his classmates because of his private “issue”.
  • For this reason, the son's guardian (previously appointed in other proceedings) asked the Court to confirm his right to attend a US college and start a new life far away from this distressing situation.

In the interim injunction, the Court ordered:

(i) the mother to stop publishing on social networks and any other media images, information and any other data relating to her son, and (ii) to remove all such content published so far on social networks; additionally, the Court (iii) set a monetary sanction for any violation of these orders.

(iv) the son's guardian to ask search engines to de-list, and social networks to remove, all images, information and any data relating to the boy.

The interim decision is of course reasonable. The Court did not rely on a specific legal qualification of the matter, but apparently proceeded on general civil law principles, considering the psychological harm suffered by the boy. The decision thus seems to refer to a general right to privacy (even before reaching the question of fair processing of personal data).

The case confirms how risky social networks can be for our privacy due to their massive media effect. Apart from this specific case, we should consider the possible negative consequences – ones we might not foresee – of posting certain content about third parties. This is particularly true for underage people, whose privacy deserves particular attention, also in the long term (for a view on the possible negative consequences of parental oversharing, see for example here: ‘it’s difficult for an individual to control that information once it’s out there. When it comes to our children, we’re making the decision to put things out on their behalf, and what seems appropriate now may not be appropriate in ten years’ time’).

Even before the GDPR, EU data protection legislation required consent for publishing content about third parties on social media, with some exceptions such as news reporting (which remains mostly a matter of national law) – but not, of course, where a parent publishes data about his or her underage child. The GDPR now pays new attention to:

(i) the processing of data of underage people (Recitals 38, 58, 65, 71, 75) – the GDPR requires parental consent for the use of information society services (Art. 8). Although limited to this type of service, it sets the legal age for data protection choices (i.e., a “digital consent”) at 16 years (with some flexibility for national legislation, which however cannot set the age below 13 years; a toy encoding of this rule follows this list). Whether this also applies to the exercise of privacy rights is difficult to say (Recital 65 seems to support this reading), and national legislation will probably need to be considered as well. Recital 38 states that consent by a parent or guardian is not required in the context of preventive or counselling services offered directly to a child. For example, the provision of child protection services offered online to a child by means of an online chat service does not require prior parental authorization, as the WP29 Guidelines on Consent under the GDPR clarify (here).

(ii) the risk of discrimination – a risk we should often consider in certain personal data processing operations (we have discussed this aspect in relation to profiling activities here).
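
The age rule in Art. 8 mentioned in point (i) is mechanical enough to be written down as a one-line check. The sketch below is purely illustrative (the function name and the member_state_age parameter are our own, not from the GDPR):

```python
def needs_parental_consent(user_age: int, member_state_age: int = 16) -> bool:
    """Art. 8 GDPR: the digital-consent age defaults to 16; Member States
    may lower it, but never below 13."""
    if not 13 <= member_state_age <= 16:
        raise ValueError("national digital-consent age must be between 13 and 16")
    return user_age < member_state_age

# Example: a 15-year-old needs parental consent under the default age...
assert needs_parental_consent(15)
# ...but not in a Member State that lowered the age to 14.
assert not needs_parental_consent(15, member_state_age=14)
```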

Francesco Banterle

Court of Rome – Judge Monica Velletti – order 23 December 2017


Early thoughts on behavioral advertising and the GDPR: a matter of discrimination?

How is behavioral advertising affected by the new EU General Data Protection Regulation (GDPR)? This is probably one of the trickiest parts of the new piece of legislation.

In its 2010 opinion (here), the group of EU data protection authorities (WP29) defined (online) behavioral advertising as “the tracking of users when they surf the Internet and the building of profiles over time, which are later used to provide them with advertising matching their interests”. This advertising matching is the result of automated processing of personal data and is traditionally included in the concept of “profiling”.

When does personalized advertising entail profiling?

The threshold of profiling in personalized advertising is not always straightforward. The WP29 opinion distinguished among the following (see the sketch after this list):

  1. Contextual advertising: advertising content selected based on the content currently being viewed by a user – e.g., for a search engine, content derived from (i) the search keywords or (ii) the user's IP address, if connected to a geographical location. It does not entail profiling.
  2. Segmented advertising: advertising content selected based on known characteristics of the data subject (age, sex, location, etc.), which the data subject has provided at the sign-up or registration stage. It does not entail profiling.
  3. Behavioral advertising: advertising content selected based on the user's interests, derived or inferred from his or her behavior. It entails profiling. In a recent decision (here), the Italian Garante stated that behavioral advertising entails profiling when the segmentation of the public is based both on generic data (sex, age, location) and on purchase history, since displaying different advertisements based on these data causes a diversification in the treatment of customers. This can therefore be taken as an example of the minimum level of profiling in personalized advertising.
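
To make the taxonomy concrete, here is a minimal, hypothetical sketch of how an ad-selection pipeline maps onto the three categories (all names and data fields are our own illustration, not taken from the WP29 opinion); only the third path relies on a profile inferred from behavior over time:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    # Declared at sign-up (segmented advertising): no profiling involved.
    age: int
    location: str
    # Inferred over time from tracked behavior: building this list is the
    # profiling step that characterizes behavioral advertising.
    inferred_interests: list[str] = field(default_factory=list)

def contextual_ad(page_keywords: list[str]) -> str:
    # Selected only from the content currently being viewed; no user data stored.
    return f"ad-for:{page_keywords[0]}" if page_keywords else "generic-ad"

def segmented_ad(user: User) -> str:
    # Selected from declared characteristics; nothing is inferred from behavior.
    return "senior-offer" if user.age > 60 else f"local-offer:{user.location}"

def behavioral_ad(user: User) -> str:
    # Selected from interests inferred from past behavior: profiling, and
    # (per the Garante decision above) a diversified treatment of customers.
    if user.inferred_interests:
        return f"targeted:{user.inferred_interests[0]}"
    return segmented_ad(user)  # falls back when no profile has been built

u = User(age=35, location="Milan", inferred_interests=["vintage car parts"])
print(contextual_ad(["gdpr"]), segmented_ad(u), behavioral_ad(u))
```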

Behavioral advertising and profiling under the Data Protection Directive

The Data Protection Directive (95/46/EC) did not specifically regulate the concept of profiling, though it already paid attention to automated data processing (which could include profiling). Article 15 stressed that each individual should have the right not to be subject to decisions based solely on automated processing of data if they might (i) produce legal effects or (ii) significantly affect him or her. It left room for Member States to allow exceptions only where such automated processing is (i) necessary for the performance of an agreement or (ii) authorized under national law. Behavioral advertising, however, has generally not been included within the scope of this provision.

…and the GDPR

In the GDPR, profiling takes center stage as one of the main types of automated processing. The GDPR introduces a legal definition of profiling (Art. 4.4), based on the automated evaluation of personal aspects in order to analyze or predict an individual's situation, preferences, interests, reliability, behavior, location or movements. It also adds a right to explanation for individuals (Art. 13.2.f) and a right to require human intervention (Art. 22.3).

Art. 22 GDPR also updates Art. 15 of the Data Protection Directive by adding “profiling” among the processing operations entailing automated decisions one has the right not to be subject to, i.e. where they might (i) produce legal effects or (ii) similarly significantly affect individuals. It also introduces consent as a new legal ground for this automated processing. However, given the high risk entailed, it requires “explicit” consent; other legal grounds, e.g. legitimate interest, cannot be invoked. This profiling/automated processing is thus treated comparably to the special (risky) categories of personal data regulated by Art. 9 GDPR, for which “explicit” consent must likewise be sought.

What type of profiling entails a significant effect? Is this applicable to behavioral advertising?

Some guidance can be drawn from the regime of the Data Protection Impact Assessment (“DPIA”) required for processing activities resulting in high risk. Indeed, Art. 35 GDPR states that a DPIA is in particular required in case of “a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person”. This provision essentially echoes Art. 22 GDPR.


The WP29 guidelines on DPIAs (here) mention, as an example of profiling that does not significantly affect individuals (meaning, not resulting in high risk): “An e-commerce website displaying adverts for vintage car parts involving limited profiling based on past purchases behaviour on certain parts of its website”. Probably not the most illuminating example, but in sum: profiling that is neither systematic nor extensive. The guidelines then list the elements entailing a high risk:

  1. the use of a new technology;
  2. the time factor, i.e., the duration of the processing. For instance, in line with this, the Italian DPA (“Garante”) –in a landmark 2005 decision about loyalty programs– identified maximum retention periods for customer profiling data and for their use for marketing of 1 and 2 years respectively (here). Retaining profiling data for longer periods entails a high risk and required the Garante's prior check –once the GDPR applies, a DPIA will be required instead– (a toy retention check is sketched after this list);
  3. the large scale of data processed;
  4. matching or combining datasets.
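
Purely by way of illustration, the retention limits mentioned in point 2 can be encoded as a simple compliance check. The one- and two-year figures come from the cited Garante decision; the function itself is our hypothetical sketch, not an official tool:

```python
from datetime import date, timedelta

# Maximum retention periods from the Garante's 2005 loyalty-programs
# decision: 1 year for profiling data, 2 years for marketing use.
RETENTION_LIMITS = {
    "profiling": timedelta(days=365),
    "marketing": timedelta(days=730),
}

def must_be_deleted(collected_on: date, purpose: str, today: date) -> bool:
    """Return True if customer data held for `purpose` has outlived its limit."""
    return today - collected_on > RETENTION_LIMITS[purpose]

# Example: profiling data collected 18 months ago has to go,
# while its use for marketing would still be within the limit.
assert must_be_deleted(date(2016, 1, 1), "profiling", today=date(2017, 7, 1))
assert not must_be_deleted(date(2016, 1, 1), "marketing", today=date(2017, 7, 1))
```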

An example of processing raising a high risk is: “The gathering of public social media profiles data to be used by private companies generating profiles for contact directories”. The use of social media capturing tools to profile and match customer data on a large scale can thus be relevant.

“Strong” v “soft” profiling and the risk of discrimination in behavioral advertising

On this basis, one may apparently distinguish between a sort of “strong” and “soft” profiling, subject to different regimes. But the guidelines go on to state that a risk also arises when processing may lead to discrimination against individuals.

This point is quite obscure, particularly if applied to behavioral advertising. Many say that behavioral advertising cannot have significant effects on individuals (see here and here). However, it is worth noting that the UK DPA (“ICO”), in its public consultation about profiling (here – still provisional), warned about the potential risks to individual fundamental rights connected even to behavioral advertising:

“Profiling technologies are regularly used in marketing. Many organisations believe that advertising does not generally have a significant adverse effect on people. This might not be the case if, for example, the use of profiling in connection with marketing activities leads to unfair discrimination. One study conducted by the Ohio State University revealed that behaviourally targeted adverts can have psychological consequences and affect individuals’ self-perception. This can make these adverts more effective than ones relying on traditional demographic or psychographic targeting. For example, if individuals believe that they receive advertising as a result of their online behaviour, an advert for diet products and gym membership might spur them on to join an exercise class and improve their fitness levels. Conversely it may make them feel that they are unhealthy or need to lose weight. This could potentially lead to feelings of low self-esteem”.

One may argue that any kind of customer segmentation and profiling –resulting in different content being shown to individuals– entails discrimination. That is probably too extreme. Profiling entails a diversification in the treatment of individuals; it is certainly an act of processing (as it has an effect on individuals), but it does not necessarily discriminate. Still, this risk should not be underestimated: based on the opinions above, attention should be given to the possible effects (price discrimination, psychological effects, etc.) that an envisaged targeted marketing campaign can cause, considering the type of products advertised, the way they are advertised, the type of segmentation and matching of interests, the scale, and so on. The intrusive effects of profiling must thus be considered. Not an easy exercise, but as profiling techniques dramatically increase their capacity to analyze individuals' intimate aspects (e.g., AI advances are used to spot signs of sexuality, see here), authorities are calling for broader protection of individual fundamental rights.

What are the practical consequences?

“Risky” (or “strong”) profiling is subject to stricter formalities: it requires a DPIA and explicit consent. This applies to behavioral advertising as well.

There is no definition of “explicit” in the GDPR. The WP29 (here) defined it as: “all situations where individuals are presented with a proposal to agree or disagree to a particular use … and they respond actively … orally or in writing […] by engaging in an affirmative action to express their desire to accept a form of data processing. In the on-line environment explicit consent may be given […] through clickable buttons depending on the context, sending confirmatory emails, clicking on icons, etc.” An active choice is thus required.

What about behavioral advertising based on tracking cookies?

An explicit consent would require an opt-in for the use of tracking cookies. In the past, EU DPAs have exempted tracking cookies from explicit consent (see for instance the Italian Garante and ICO decisions, here and here). Similarly, Recital 32 GDPR lists browser settings as a form of “express” (though not “explicit”) consent, which evidently addresses the use of cookies. Further guidance on consent through browser settings will surely come with the e-Privacy Regulation. The current draft (here), at Art. 4a (former Art. 9) and Recitals 20-22, provides further details on how browser settings will presumably be confirmed as a form of consent (the draft moves away from “cookie banner/cookie wall” solutions, endorsing privacy-by-design advanced browser settings instead). But, based on the above, this kind of consent would hardly allow “strong” profiling (though the e-Privacy Regulation is lex specialis, and guidelines will help).

Behavioral advertising and legitimate interest

On the other hand, “soft” profiling can also be based on “normal” consent, or on legitimate interest. This is clearly confirmed by Article 21 GDPR, which however stresses that where profiling is based on legitimate interest, controllers must grant data subjects “objection rights”. The WP29 (see the guidance on legitimate interests, here) has already stated that “controllers may have a legitimate interest in getting to know their customers’ preferences so as to enable them to better personalise their offers and ultimately, offer products and services that better meet the needs and desires of the customers.” It then excluded, however, that legitimate interest can justify certain “excessive” behavioral advertising practices (see our past analysis here).

Companies certainly have a legitimate interest in matching promotional communications to individual preferences and interests. This is the future of advertising as well as one of the main business models for free services. Without personalisation many services would lose appeal, and users nowadays expect a certain level of personalization based on their interests. Conditioning behavioral advertising on consent collection can be burdensome, and consent does not always represent a real safeguard. For this reason, some are calling for a more flexible interpretation of the above rules, to avoid the risk of leaving Europe's advertising market (as well as its data intelligence industry) limited and less competitive than those of other countries (which do not subject profiling to consent).

This must, however, be balanced against the right to privacy and to have personal data processed fairly, which is a fundamental right under the EU Charter (Art. 8).

The final answer is left to the balance between the conflicting interests of companies and individuals. A balancing test measuring the legitimate interest and excluding significant effects and discrimination will be of crucial importance. This fits the spirit of the new accountability principle, which leaves to data controllers the ultimate decision on how to treat the risks of the concrete processing. Data controllers must put privacy safeguards in place. As balancing tools, transparency and granular control for individuals over how personal data are processed will play a key role, together with a real analysis of potential risks. These efforts should concretely mitigate privacy risks in behavioral advertising.

Let us see how the EU authorities' guidelines on profiling (expected before December) will clarify these aspects and, ultimately, the discrimination risks.

Francesco Banterle

IP addresses as personal data under the CJEU, French Supreme Court, and the GDPR approach: towards an expanding protection of data

Are IP addresses personal data? The answer has been debated in recent times.

An IP (Internet Protocol) address is a series of digits assigned to a networked device to facilitate its communication over the internet. IP addresses do not directly reveal the identity of users: additional information is necessary to identify them. However, they can reveal patterns of user behaviour. Static IP addresses are invariable and allow continuous identification; dynamic IP addresses are provisional and change each time a new connection is made.
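
As a small illustration (a sketch using Python's standard ipaddress module; the address and the ISP log below are made up), an IP address by itself is just a structured number on the network; identification requires joining it with a second dataset, such as an ISP's record of which subscriber held a dynamic address at a given moment:

```python
import ipaddress

# An IP address by itself is just a structured number on the network.
addr = ipaddress.ip_address("203.0.113.42")   # sample address from the documentation range
print(addr.version)                            # -> 4

# Identification requires a second dataset, e.g. a (hypothetical) ISP log
# recording which subscriber held a dynamic address at a given moment.
isp_log = {("203.0.113.42", "2016-10-19T10:00"): "subscriber-1234"}
print(isp_log[("203.0.113.42", "2016-10-19T10:00")])   # -> subscriber-1234
```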

A study commissioned by the EU Commission (available here) revealed how EU Member States' traditions regarding the “personal” nature of IP addresses have substantially diverged.

Recently, in the Breyer case (full text here), the CJEU held that under Directive 95/46/EC (the “Privacy Directive”) IP addresses can be personal data. The action was brought by Mr Patrick Breyer, a member of the German Pirate Party, who objected to websites of German federal institutions storing visitors' IP addresses with the aim of preventing cyber-attacks and enabling criminal proceedings. Eventually, the Bundesgerichtshof asked the CJEU whether, in that context (where only internet service providers – ISPs – hold the data needed to identify users), “dynamic” IP addresses constitute personal data.

Personal data is any information relating to an identified or identifiable individual, including by reference to an identification number. Recital 26 of the Privacy Directive says that, to determine whether a person is identifiable, account should be taken of all the means “likely reasonably” to be used either by the controller (or by any other person) to identify him or her. It is not required that all the information necessary to identify the data subject be in the hands of one person; the possibility of reaching the data with reasonable efforts is enough. Thus, the CJEU held that, since the website owner is able to contact the competent authority, so that the latter can order the ISP to disclose additional data on the individual, the website owner has means which may likely reasonably be used to identify the data subject on the basis of the IP address. In light of this, a dynamic IP address can constitute personal data.

The Court thus embraced an extensive interpretation, previously suggested by the Article 29 Working Party (see opinion 4/2007 on the concept of personal data, p. 16).

The personal nature of IP addresses has recently been confirmed by the French Cour de cassation (French text here), which held – although briefly – that “les adresses IP, qui permettent d’identifier indirectement une personne physique, sont des données à caractère personnel” (“IP addresses, which make it possible to indirectly identify a natural person, are personal data”).

In this context, the General Data Protection Regulation (“GDPR”) has strengthened this approach. It specifically recognizes that online identifiers, including IP addresses, may potentially identify users and create profiles, especially when combined with unique identifiers (e.g., usernames; see Recital 30). The GDPR therefore now explicitly includes online identifiers in the definition of personal data (Article 4). The rule set by the GDPR thus apparently runs as follows: IP addresses (like other online identifiers) are presumed to be personal data unless, under the circumstances, a data controller can demonstrate that it has no means “likely reasonably” to be used to identify individuals. Based on Breyer, however, such proof will not easily be made.

Finally, confirming the tendency to expand the definition of data deserving protection, the e-Privacy Regulation proposal (available here; it should repeal Directive 2002/58/EC) seems in line with the GDPR's approach. The draft brings metadata (e.g. the time and location of a call, the numbers called, the websites visited, etc.), which are highly intrusive into the privacy sphere, within the scope of the protected data. Thus, except where used for billing purposes, metadata will require users' consent.

In sum, the concept of protected data is being updated and expanded, probably with the IoT and big data in mind, to embrace all aspects of virtual identities. It will also soon need to be re-defined vis-à-vis new tracking techniques.

Francesco Banterle

CJEU, decision of 19 October 2016, C-582/14, Patrick Breyer v. Bundesrepublik Deutschland

French Supreme Court, decision of 3 November 2016

Personal data processing for marketing purposes under the new GDPR: consent v legitimate interest and Recital 47 – first thoughts

Under EU privacy law, we are used to thinking of “opt-in consent” as the ground normally used to legitimise the processing of personal data for marketing purposes (i.e. you can market to individuals only with their explicit consent). “Opt-out mechanisms” (i.e. you can market to individuals if you have previously given them the option not to receive communications) are instead an exception, allowed only (i) for using email addresses already obtained by the data controller in the context of the sale of a product/service and (ii) for direct marketing of the controller's own similar products or services, i.e. excluding direct marketing of third parties' products (see Article 13 of Directive 2002/58/EC – the “E-Privacy Directive”); this rule is sketched in code below.
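
The Article 13 logic just described is mechanical enough to summarise in a short decision function. This is a hypothetical sketch of the rule as stated above (the field and function names are our own), not a compliance tool:

```python
from dataclasses import dataclass

@dataclass
class Recipient:
    opted_in: bool                 # explicit opt-in consent to marketing
    email_obtained_in_sale: bool   # address collected when selling a product/service
    was_offered_opt_out: bool      # opt-out offered at collection and in each message
    has_opted_out: bool            # recipient has exercised the opt-out

def may_send_marketing(recipient: Recipient, own_similar_products: bool) -> bool:
    """Sketch of the opt-in rule plus the Art. 13 E-Privacy opt-out exception."""
    if recipient.opted_in:
        return True                # general rule: prior consent
    return (                       # narrow exception, all conditions cumulative
        recipient.email_obtained_in_sale
        and own_similar_products   # never third parties' products
        and recipient.was_offered_opt_out
        and not recipient.has_opted_out
    )

# Example: an existing customer who never opted in may still be emailed
# about similar products, as long as the opt-out was offered and unused.
customer = Recipient(False, True, True, False)
assert may_send_marketing(customer, own_similar_products=True)
assert not may_send_marketing(customer, own_similar_products=False)
```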

The General Data Protection Regulation (“GDPR”) has apparently strengthened this approach (although it does not formally repeal the E-Privacy Directive, the latter will soon be amended to conform with it – otherwise a dual regime would make little sense). Personal data shall be processed on the basis of the consent of the data subject or some other legitimate basis, including “legitimate interest” (Recital 40).

Consent (as the rule)

The GDPR requires consent to be a clear affirmative act, freely given, specific, informed and unambiguous, whereas “silence, pre-ticked boxes or inactivity” cannot constitute consent (Recital 32). Moreover, Article 7 states that data subjects have the right to withdraw consent at any time, and that withdrawing consent shall be as easy as giving it. Thus, at first reading, it seems that the GDPR leaves no room for opt-out mechanisms.

Some new, although limited, exceptions are set out. Recital 32 states that affirmative acts include (i) choosing technical settings for information society services (cookie settings in a browser?); or (ii) another conduct which clearly indicates in this context the acceptance of the processing (a form of implied consent?). Additionally, Article 6 states that consent is not necessary for subsequent “compatible” processing operations. Recital 50 says that compatibility should be assessed in light of the link between the processing purposes, the reasonable expectations of the data subjects, the nature and consequences of further processing, and the existence of appropriate safeguards for the data. Further examples of lawful compatible operations are processing for archiving purposes, scientific or historical research purposes, or statistical purposes (we assume mainly scientific statistics without commercial nature, i.e. no big data analysis in most cases).

Legitimate Interest

On the other hand, among the other legal bases legitimising processing, Article 6 includes “legitimate interest”. Interest is the stake or benefit that the controller (or a third party) has in the processing of data. Similarly to Directive 95/46/EC (see Article 7, letter f), the GDPR excludes the legitimacy of the controller's interest when it is overridden by the interests or fundamental rights of the data subject. In sum, legitimate interest requires a true balancing test between the interest of the controller and the data subject's rights. This test shall also take into account the reasonable expectations of data subjects and their particular relationship with the data controller.

What is new in the GDPR is Recital 47, stating that “the processing of personal data for direct marketing purposes may be regarded as carried out for a legitimate interest”. Does this mean that the data subject's consent is no longer required for processing personal data for marketing purposes?

Some helpful indications are provided by the Working Party's opinion on the notion of legitimate interest (WP Opinion 6/2014; the Working Party – “WP” – is the EU privacy advisory body). The WP had already acknowledged that direct marketing and marketing research can constitute a valid legitimate interest under Directive 95/46/EC, and provided suggestions on how to conduct the balancing test. In marketing activities, the balancing test weighs companies' interest in knowing their customers and promoting their products against individuals' interest in not being unduly monitored and spammed.

In general, the outcome of the test depends mainly on:

  • the intrusion/impact the processing entails (e.g. in the case of profiling operations and combined analysis of vast amounts of data the intrusion is significant, and the outcome of the test therefore probably negative); and
  • the safeguards put in place by the data controller, particularly the mechanisms to object to the processing and opt-out solutions.

The WP considered the outcome of the test in the following examples:

✓ General marketing by post to users of a food-delivery mobile app, when: (i) data are gathered when the user uses the app to place a food order; (ii) the app includes an easy-to-use tool to opt out from marketing; (iii) limited information is collected and used for marketing, i.e. contact details only (name, postal address); (iv) the marketing is operated by post and concerns products similar to those purchased (thus meeting users' expectations) – the data and the context are of a relatively innocuous nature, and the test is passed.

✗ Targeted marketing (both online and offline) to users of a food-delivery mobile app combined with other data: same situation as above, but: (i) the app uses the user's recent order history (a 3-year period), additionally combined with location data and browsing history from the mobile phone, plus data from a supermarket operated by the same company running the app; (ii) marketing is targeted based on the order history and operated both online and offline; (iii) the app lacks user-friendly information and an easy-to-use opt-out tool – the data and the context are intrusive, there is a strong impact on users, and the test fails.

In conclusion

The WP's opinion provides further examples, which however confirm the above reasoning. In sum, it appears that “soft” marketing can in fact rely on the legitimate interest rule (substantially aligned with the opt-out exception of Article 13 of the E-Privacy Directive), whereas advanced marketing (targeted emails, location-based advertising, automated calling systems, etc.) always requires consent.

The line between the two categories is not always clear. In borderline cases, relying on legitimate interest to justify marketing requires demonstrating that the outcome of the test is positive (see Recital 69), owing to the low intrusiveness of that particular marketing and/or the safeguards in place (e.g. mechanisms to access or modify personal data, or, in the case of free services which are in fact “paid for” by allowing the use of personal data, alternative basic versions which do not require processing of data for marketing). In addition, Article 13 of the GDPR requires a clear mention of the legitimate interest pursued by the controller within the privacy notice.

Francesco Banterle

Right to be forgotten: the first Italian decision after Google Spain

By its judgment of 3 December 2015 (full text here), the Court of Rome issued the first decision of an Italian court dealing with the so-called “right to be forgotten” after the ECJ leading case of 13 May 2014, C-131/12, Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González.

The applicant, a lawyer, sued Google, seeking the de-listing of 14 links displayed in the list of results of a search made on the basis of his name, asserting the existence of a right to be forgotten. He argued that the links referred to a court case dating back to the years 2012/2013 and dealing with an alleged fraud in which he was involved (but never convicted) together with some representatives of the clergy and other subjects linked to the criminal organization known as the “Banda della Magliana”. The lawyer consequently also sought monetary compensation for the unlawful processing of his personal data.

The Court of Rome dismissed the plaintiff’s request on the assumption that the disclosed personal data were both recent and of public interest.

The Court based its decision on the principles recently recognized by the Court of Justice in Google Spain (and already accepted by Italian previous case law, cfr. Cass. Civ. Sec. III, 05-04-2012, n. 5525).

In this case the ECJ ruled that the data subject may, in the light of his fundamental rights under Articles 7 and 8 of the EU Charter of Fundamental Rights (and in application of Article 12(b) and subparagraph (a) of the first paragraph of Article 14 of Directive 95/46/EC), request that the personal data in question no longer be made available to the general public by its inclusion in such a list of results. However, inasmuch as the removal of links from the list of results could, depending on the information at issue, have effects upon the legitimate interest of internet users potentially interested in having access to that information, “a fair balance should be sought in particular between that interest and the data subject’s fundamental rights under Articles 7 and 8 of the Charter” (par. 81).

Whilst “it should be held that those rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in finding that information upon a search relating to the data subject’s name”, the Court also recognised the existence of an exception to this general rule when “for particular reasons, such as the role played by the data subject in public life […], the interference with [the] fundamental rights [of the data subject] is justified by the preponderant interest of the general public in having, on account of [the] inclusion [of the information] in the list of results, access to the information in question” (par. 97).

The Article 29 Data Protection Working Party (hereinafter the “WP”), in its Guidelines on the implementation of the ECJ judgment in Google Spain, adopted on 26 November 2014 with the purpose of establishing a list of common criteria to be used by European data protection authorities in evaluating whether data protection law has been complied with, stated that “no single criterion is, in itself, determinative”.

These criteria include, however, both whether the data are temporally relevant and not excessive (i.e. closely related to the data's age) and whether the data subject plays a role in public life (the so-called public-figure criterion).

With reference to the second criterion, even if it is not possible to establish with certainty the type of role in public life an individual must have to justify public access to information about them via a search result, the WP pointed out that “by way of illustration, politicians, senior public officials, business-people and members of the (regulated) professions can usually be considered to fulfil a role in public life”.

Under this test, the Court of Rome rejected the plaintiff's request, finding that the personal data at issue were both recent and of public interest, and denied that the data subject had a right that the information relating to him should, at this point in time, no longer be linked to his name.

The decision can be welcomed to the extent that it shows the benefits of the process of EU harmonization achieved by means of the interpretative rulings of the ECJ and of the WP on the right to prevent the indexing of personal data published on third parties' web pages.

The judgment, in any case, works in the direction of limiting the scope of application of the right to consign personal data to oblivion, since it affirms that the “public figure” role can be recognized not only for politicians and public officials but also for the large class of “business-people” belonging to regulated professional orders.

Jacopo Ciani

Court of Rome, 3 December 2015, No. 23771, Dott.ssa Damiana Colla