Early thoughts on behavioral advertising and the GDPR: a matter of discrimination?

How is behavioral advertising affected by the new EU General Data Protection Regulation (GDPR)? This is probably one of the trickiest parts of the new piece of legislation.

In its 2010 opinion (here), the group of EU data protection authorities (WP29) defined online behavioral advertising as “the tracking of users when they surf the Internet and the building of profiles over time, which are later used to provide them with advertising matching their interests”. The advertising matching is the result of an automated processing of personal data and is traditionally included in the concept of “profiling”.

When does personalized advertising entail profiling?

The threshold of profiling in personalized advertising is not always straightforward. The WP29 opinion distinguished among:

  1. Contextual advertising: advertising content selected based on the content currently being viewed by a user. E.g., for a search engine, the content derived from (i) the search keywords or (ii) the user’s IP address if connected to a geographical location. It does not entail profiling.
  2. Segmented advertising: advertising content selected based on known characteristics of the data subject (age, sex, location, etc.), which the data subject has provided at the sign up or registration stage. It does not entail profiling.
  3. Behavioral advertising: advertising content selected based on the user’s interests, derived or inferred from his behavior. It entails profiling. In a recent decision (here), the Italian Garante stated that behavioral advertising entails profiling when the segmentation of the public is based on both generic data (sex, age, location) and purchase history, as displaying different advertisements based on this data causes a diversification in the treatment of customers. This can therefore serve as an example of the minimum level of profiling in personalized advertising.
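As a purely illustrative sketch (the function and category names are my own labels, not GDPR or WP29 terminology), the three-way classification above can be expressed as a simple decision rule: only advertising based on interests inferred from observed behavior crosses the profiling threshold.

```python
from enum import Enum

class AdType(Enum):
    CONTEXTUAL = "contextual"  # selected from the content currently viewed
    SEGMENTED = "segmented"    # selected from characteristics the user declared
    BEHAVIORAL = "behavioral"  # selected from interests inferred from behavior

def entails_profiling(ad_type: AdType) -> bool:
    # Per the WP29 2010 opinion, only behavioral advertising
    # (interests derived or inferred over time) entails profiling.
    return ad_type is AdType.BEHAVIORAL
```

Under the Garante’s reasoning cited above, combining declared segmentation data with purchase history would likewise fall on the behavioral side of this rule.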

Behavioral advertising and profiling under the Data Protection Directive

The Data Protection Directive (95/46/EC) did not specifically regulate the concept of profiling, though it already paid attention to automated data processing (which could include profiling). Article 15 stressed that each individual should have the right not to be subject to decisions based solely on automated processing of data if they might (i) produce legal effects or (ii) significantly affect him. It left room for Member States to allow exceptions only where such automated processing was (i) necessary for the performance of an agreement or (ii) authorized under national law. But behavioral advertising has generally not been included within the scope of this provision.

…and the GDPR

In the GDPR, profiling takes center stage as one of the main types of automated processing. The GDPR introduces a legal definition of profiling (Art. 4.4), based on the automated evaluation of individuals’ personal aspects to analyze or predict their situation, preferences, interests, reliability, behavior, location or movements. It also adds a right to explanation for individuals (Art. 13.2.f) and a right to require human intervention (Art. 22.3).

Art. 22 GDPR also updates Art. 15 of the Data Protection Directive, by adding “profiling” among the processing operations entailing automated decisions one has the right not to be subject to, i.e. when they might (i) produce legal effects or (ii) similarly significantly affect individuals. It also introduces consent as a new legal ground for this automated processing. However, given the high risk entailed, it requires an “explicit” consent. Other legal grounds, e.g., legitimate interest, cannot be invoked. This profiling/automated processing is thus comparable to the special (risky) categories of personal data regulated by Art. 9 GDPR, for which “explicit” consent shall be sought.

What type of profiling entails a significant effect? Is this applicable to behavioral advertising?

Some guidance can be taken from the regime of the Data Protection Impact Assessment (“DPIA”) required for processing activities resulting in a high risk. Indeed, Art. 35 GDPR states that a DPIA is in particular required in case of “systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person”. This provision basically recalls Art. 22 GDPR.


The WP29 guidelines on DPIA (here) mention as an example of profiling that does not significantly affect individuals (meaning, not resulting in a high risk): “An e-commerce website displaying adverts for vintage car parts involving limited profiling based on past purchases behaviour on certain parts of its website”. Probably not the most illuminating example, but in sum: profiling that is neither systematic nor extensive. The guidelines then list the elements entailing a high risk:

  1. the use of a new technology;
  2. the time factor, i.e., the duration of the processing. For instance, in line with this, the Italian DPA (“Garante”) –in a landmark 2005 decision about loyalty programs– identified maximum retention periods of 1 and 2 years for customer profiling data and for their use for marketing, respectively (here). Retaining profiling data for longer periods entails a high risk and requires the Garante’s prior check –once the GDPR applies, a DPIA would be required instead–;
  3. the large scale of data processed;
  4. matching or combining datasets.
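To make the checklist concrete, here is a minimal sketch combining the four risk factors with the Garante’s 2005 retention ceilings cited above. All names and the simple list-based scoring are illustrative assumptions of mine, not taken from the WP29 guidelines.

```python
from dataclasses import dataclass

# Retention ceilings from the Garante's 2005 loyalty-program decision,
# as cited above: 1 year for profiling data, 2 years for marketing use.
MAX_PROFILING_RETENTION_YEARS = 1
MAX_MARKETING_RETENTION_YEARS = 2

@dataclass
class ProcessingActivity:
    uses_new_technology: bool
    profiling_retention_years: float
    marketing_retention_years: float
    large_scale: bool
    matches_datasets: bool

def high_risk_factors(p: ProcessingActivity) -> list[str]:
    """Return which WP29 high-risk elements apply (illustrative helper)."""
    factors = []
    if p.uses_new_technology:
        factors.append("new technology")
    if (p.profiling_retention_years > MAX_PROFILING_RETENTION_YEARS
            or p.marketing_retention_years > MAX_MARKETING_RETENTION_YEARS):
        factors.append("excessive retention (time factor)")
    if p.large_scale:
        factors.append("large scale")
    if p.matches_datasets:
        factors.append("matching/combining datasets")
    return factors
```

In practice a controller would treat a non-empty result as a signal that a DPIA is likely needed; the legal assessment itself, of course, cannot be reduced to a checklist.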

An example of processing raising a high risk is: “The gathering of public social media profiles data to be used by private companies generating profiles for contact directories”. The use of social capturing tools to profile and match customer data on a large scale can thus be relevant.

“Strong” v “soft” profiling and the risk of discrimination in behavioral advertising

Based on this, one may apparently distinguish between a sort of “strong” and “soft” profiling, subject to different regimes. But the guidelines go on to state that a risk also arises when processing may lead to discrimination against individuals.

This point is quite obscure, particularly if applied to behavioral advertising. Many say that behavioral advertising cannot have significant effects on individuals (see here and here). However, it is worth noting that the UK DPA (“ICO”), in its public consultation about profiling (here – still provisional), warned about potential risks to individuals’ fundamental rights connected even to behavioral advertising:

“Profiling technologies are regularly used in marketing. Many organisations believe that advertising does not generally have a significant adverse effect on people. This might not be the case if, for example, the use of profiling in connection with marketing activities leads to unfair discrimination. One study conducted by the Ohio State University revealed that behaviourally targeted adverts can have psychological consequences and affect individuals’ self-perception. This can make these adverts more effective than ones relying on traditional demographic or psychographic targeting. For example, if individuals believe that they receive advertising as a result of their online behaviour, an advert for diet products and gym membership might spur them on to join an exercise class and improve their fitness levels. Conversely it may make them feel that they are unhealthy or need to lose weight. This could potentially lead to feelings of low self-esteem”.

One may argue that any kind of customer segmentation and profiling –resulting in different content shown to individuals– entails discrimination. This is probably too extreme. Profiling entails diversification in the treatment of individuals. It is for sure an act of processing (as it has an effect on individuals), though it does not necessarily discriminate. But this risk should not be underestimated: based on the opinions above, it appears that attention should be given to the possible effects (price discrimination, psychological effects, etc.) that the envisaged targeted marketing campaigns can cause, considering the type of products advertised, the way they are advertised, the type of segmentation and matching of interests, the scale, etc. The intrusive effects of profiling should thus be considered. Not an easy exercise, but as profiling techniques are dramatically increasing their capacity to analyze intimate aspects of individuals (e.g., AI advances are used to spot signs of sexuality, see here), authorities are calling for broader protection of individual fundamental rights.

What are the practical consequences?

“Risky” (or “strong”) profiling is subject to stricter formalities: it requires a DPIA and explicit consent. This applies to behavioral advertising as well.

There is no definition of “explicit” in the GDPR. The WP29 (here) defined it as: “all situations where individuals are presented with a proposal to agree or disagree to a particular use … and they respond actively … orally or in writing […] by engaging in an affirmative action to express their desire to accept a form of data processing. In the on-line environment explicit consent may be given […] through clickable buttons depending on the context, sending confirmatory emails, clicking on icons, etc.” An active choice is thus required.

What about behavioral advertising based on tracking cookies?

An explicit consent would require an opt-in for the use of tracking cookies. In the past, EU DPAs have exempted tracking cookies from explicit consent (see for instance the Italian Garante and the ICO decisions, here and here). Similarly, Recital 32 GDPR lists browser settings as a form of “express” consent (though not “explicit”). This evidently seems to address the use of cookies. Further guidance on consent through browser settings will surely be given under the e-Privacy Regulation. The current draft (here), at Art. 4a (former Art. 9) and Recitals 20-22, provides further details on how browser settings will presumably be confirmed as a form of consent (though the draft abandons the “cookie banner/cookie wall” solution, endorsing advanced “privacy by design” browser settings instead). But, based on the above, this kind of consent would hardly allow “strong” profiling (though the e-Privacy Regulation is lex specialis and some guidelines will help).
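The opt-in logic described above can be sketched as follows. This is a minimal illustration of the consent-gating idea, assuming a hypothetical `ConsentGate` component of my own invention, not a reference implementation of any DPA requirement.

```python
from datetime import datetime, timezone

class ConsentGate:
    """Illustrative opt-in gate for tracking cookies.

    Under an "explicit consent" reading, a tracking cookie may be
    set only after an affirmative user action (e.g. clicking an
    "I agree" button) -- never by default, and never inferred from
    mere continued browsing.
    """

    def __init__(self) -> None:
        self._optins: dict[str, datetime] = {}  # user_id -> time of consent

    def record_opt_in(self, user_id: str) -> None:
        # Called only in response to an active, affirmative choice.
        self._optins[user_id] = datetime.now(timezone.utc)

    def may_set_tracking_cookie(self, user_id: str) -> bool:
        # Absence of a recorded choice means no tracking cookie.
        return user_id in self._optins
```

The design point is that the default state is “no tracking”: the gate stores only positive, timestamped opt-ins, which also helps a controller demonstrate consent under the accountability principle.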

Behavioral advertising and legitimate interest

On the other hand, “soft” profiling can also be based on “normal” consent, or on legitimate interest. This is clearly confirmed by Article 21 GDPR, which, however, stresses that if profiling is based on legitimate interest, controllers must grant data subjects “objection rights”. The WP29 (see the guidance on legitimate interests, here) has already stated that: “controllers may have a legitimate interest in getting to know their customers’ preferences so as to enable them to better personalise their offers and ultimately, offer products and services that better meet the needs and desires of the customers.” Although it then excluded that legitimate interest can justify certain “excessive” behavioral advertising practices (see our past analysis here).

There is for sure a legitimate interest of companies in matching promotional communications to individual preferences and interests. This is the future of advertising, as well as one of the main business models for free services. Without personalization, many services would lose appeal, and users nowadays expect a certain level of personalization based on their interests. Conditioning behavioral advertising on consent collection can be burdensome. And consent does not always represent a real safeguard. For this reason, some are calling for a more flexible interpretation of the above rules, to avoid the risk of Europe’s advertising market (as well as its data intelligence industry) becoming limited and less competitive compared to other countries (which do not subject profiling to consent).

This shall, however, be balanced with the right to privacy and to have personal data processed fairly, which is a fundamental right under the EU Charter (Art. 8).

The final answer is left to the balance between the conflicting interests of companies and individuals. A balancing test measuring the legitimate interest and excluding significant effects and discrimination will be of crucial importance. This fits in with the spirit of the new accountability principle, which leaves to data controllers the ultimate decision on how to treat the risks of the concrete processing. Data controllers shall put in place privacy safeguards. As balancing tools, transparency and granular control for individuals over how personal data are processed will play a key role, together with a real analysis of potential risks. These efforts should concretely mitigate privacy risks in behavioral advertising.

Let’s see how the EU authorities’ guidelines on profiling (expected before December) will clarify these aspects and ultimately the discrimination risks.

Francesco Banterle
