The Court of Milan on the impact of Cofemel on the copyright protection of industrial designs in Italy. A new CJEU referral on the horizon?

While Cofemel slowly marches toward its second birthday, its actual impact on Italian copyright law is still a mystery.

A preliminary disclaimer: in my view, the Cofemel decision is far from straightforward. The fact is, however, that the CJEU has been pretty clear-cut in stating that under EU law the existence of a “work” – as defined in the Court’s settled case law (see for reference Cofemel, at 29-34) – is the only requirement for copyright protection, works of industrial design included. And this principle, implicitly repeated in Brompton, implies that Member States would not be allowed to make the protection of such works conditional upon fulfilment of further requirements, in spite of Article 17 Design Directive and Article 96 Design Regulation.

Italy is (was?) one of the countries where these further requirements must (had to?) be met. Under Article 2.10 of Italian Copyright Law, in order to be protected, works of industrial design must have inherent “creative character and artistic value”. It is no surprise, then, that Italian scholars have been particularly prolific in speculating on the possible assassination of “artistic value” by the CJEU.

As influential as the scholars’ words may be, however, absent legislative intervention (which does not seem to be under discussion), the words that mainly count are those of the (Italian/EU) Courts. And these words are yet to be uttered.

In the Kiko decision (full text here), relating to the layout of the Kiko concept store, the Italian Court of Cassation explicitly quoted Cofemel. However, the Kiko case concerned a work qualified as architectural, protected under Article 2.5 of Italian Copyright Law regardless of its “artistic value”. Thus, the Court of Cassation did not really have to deal with the impact of Cofemel on “artistic value” under Article 2.10 of Italian Copyright Law. It is worth noting, however, that the only indication in the decision explicitly relating to “artistic value” is the incidental statement whereby, in principle, the individual elements of the Kiko concept store can be protected as works of industrial design, “provided that they have an actual ‘artistic value’”. As if Cofemel did not exist.

A similar (non-) stance has been taken by two recent decisions issued by the Court of Milan with specific regard to works of industrial design.

In the Tecnica v. Chiara Ferragni decision (21 January 2021: here), concerning the famous Moon Boots, the Court of Milan granted copyright protection on the grounds that the Moon Boots had “artistic value”, taking it for granted that “artistic value” was still a requirement for the protection of works of industrial design (for our comments on another decision of the Court of Milan on the copyright protection of the Moon Boots, see here).

The same approach has recently been adopted by the Court of Milan with regard to a lamp designed by the Castiglioni brothers, Achille and Piergiacomo. In its 15 February 2021 decision (full text here), the Court acknowledged the “artistic value” of the lamp and granted the requested protection – again without questioning the possible impact of Cofemel on the case.

However, it was just a matter of time before the Cofemel decision and the “artistic value” requirement under Article 2.10 of Italian Copyright Law found themselves in the same courtroom.

In the still ongoing Buccellati case, the Court of Milan has been dealing with the alleged infringement of Buccellati’s copyright on several pieces of jewelry. The case began before Cofemel and, therefore, in a legal framework in which there was no doubt as to the relevance of “artistic value”. But then Cofemel arrived and a discussion arose between the parties on its impact.

Interestingly, in an interlocutory decision of 19 April 2021 (full text here), the Court of Milan (Judge Rapporteur: Alima Zana) – after quoting Cofemel at §35, whereby any subject-matter constituting a “work” must, as such, qualify for copyright protection – explicitly states that, in light of Cofemel, Member States might no longer be allowed to “filter” access to copyright protection by adding stricter requirements, such as “artistic value” under Article 2.10 of Italian Copyright Law.

Also taking into account the “concerns expressed by the Advocate General”, and “the significant repercussions … that the disapplication of the requirement of the ‘artistic value’” would entail, the Court stated that the issue should be referred “to the Court of Justice in order to allow express examination of the compatibility of the national provision with the EU system”.

However, due to “reasons of procedural economy”, the Court has decided to submit a CJEU preliminary question only if, at the end of a Technical Expertise conducted on the products at issue, it is found that there may be infringement.

What the wording of the possible referral might be is an intriguing question. Indeed, the CJEU has already stated that – in light of the indications already contained in Cofemel – there is no need to answer the question as to whether the interpretation by the CJEU “of Article 2(a) of Directive 2001/29 precludes national legislation … which confers copyright protection on works of applied art, industrial designs and works of design if, in the light of a particularly rigorous assessment of their artistic character, and taking account [of] the dominant views in cultural and institutional circles, they qualify as an ‘artistic creation’ or ‘work of art’”.

Riccardo Perotti

Court of Milan, 19 April 2021 (Buccellati)

Court of Milan, 15 February 2021 (Castiglioni Brothers)

Court of Milan, 21 January 2021 (Moon Boot II)

Court of Cassation, 30 April 2020 (Kiko)

HmbBfDI vs. WhatsApp: an Update

In an order with immediate enforcement, Johannes Caspar, the Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI), has prohibited Facebook Ireland Ltd. from further processing WhatsApp user data in Germany, insofar as this is done for its own purposes. As part of the emergency procedure under Art. 66 GDPR already discussed on this blog, the measure will remain valid for three months in the respective territory. In light of this short time frame, the Commissioner aims to refer the issue to the European Data Protection Board (EDPB) in order to find a solution at the European level.

In recent months, WhatsApp had asked its users to agree to its new terms of use and privacy policy by May 15, 2021. Under the new terms, WhatsApp would receive wide-ranging data processing powers. These concern, among other things, the evaluation of location information, the transfer of personal data to third-party companies including Facebook, and cross-company verification of the account. Furthermore, the companies’ legitimate interest in data processing and transfer is invoked in a blanket manner – even with regard to underage users.

After hearing Facebook Ireland Ltd. – and notwithstanding users’ consent to the terms of use – the HmbBfDI believes that there is no sufficient legal basis to justify this interference with the users’ rights and freedoms. This is especially true considering that the data transfer provisions are unclear, misleading and contradictory. The users’ consent is neither transparent nor voluntary, since users would have to agree to the new terms in order to continue using WhatsApp.

While this close connection between the two companies was to be expected, many stakeholders find it surprising that WhatsApp and Facebook actually want to expand their data sharing. At the same time, Johannes Caspar is confident that on the basis of the GDPR procedure, he will be able to “safeguard the rights and freedoms of the many millions of users who give their consent to the terms of use throughout Germany. The aim is to prevent disadvantages and damage associated with such a black-box procedure.”

In view of the upcoming elections in Germany, it is to be hoped that – in dialog with the companies – data protection-compliant solutions will be found quickly.

Dario Henri Haux

Anordnung des HmbBfDI: Verbot der Weiterverarbeitung von WhatsApp-Nutzerdaten durch Facebook (11.05.2021): https://datenschutz-hamburg.de/pressemitteilungen/2021/05/2021-05-11-facebook-anordnung

A PROPOSAL FOR (AI) CHANGE? A succinct overview of the Proposal for Regulation laying down harmonised rules on Artificial Intelligence.

INTRODUCTION

The benefits of implementing Artificial Intelligence (“AI“) systems for citizens are clear. However, the European Commission (“EC“) seems conscious of the risks arising from their use in terms of both safety and human rights. In this context, as part of an ambitious European Strategy for AI, the EC has published the proposal for a Regulation on a European approach to AI (the “Proposal“), in which AI is conceived not as an end in itself, but as a tool serving people with the ultimate aim of increasing human well-being.

The Proposal does not focus on the technology itself, but on the potential use that different stakeholders could make of AI systems and, as a result, the potential damage arising from such use. To address this potential damage while capturing the full potential of AI-related technologies, the Proposal, following a horizontal approach, is based on four building blocks: (i) measures establishing a defined risk-based approach; (ii) measures in support of innovation; (iii) measures facilitating the setting up of voluntary codes of conduct; and (iv) a governance framework supporting the implementation of the Proposal at EU and national level and its adaptation as appropriate.

One may wonder why the Proposal is needed. The functioning of AI systems may be challenging due to their complexity, autonomy, unpredictability, opacity and the role of data within this equation. These characteristics were not selected arbitrarily but purposely identified by the regulator as areas of concern in terms of: (i) safety; (ii) fundamental rights; (iii) enforcement of rights; (iv) legal uncertainty; (v) mistrust in technology; and (vi) fragmentation within the EU.

The Proposal comes as no surprise to many legal experts in the field and, as expected, builds on, inter alia, the work carried out by the High-Level Expert Group on Artificial Intelligence (the Ethics Guidelines for Trustworthy AI and the Policy and Investment Recommendations for Trustworthy AI), the Communication from the EC on Building Trust in Human-Centric Artificial Intelligence, the White Paper on Artificial Intelligence, as well as the Data Governance Act, the Open Data Directive and other legislative initiatives covered under the European Data Strategy.

The Proposal introduces many aspects that may deserve further clarification as the legislative process goes on. Among them are: (i) the scope of application of the Proposal; (ii) the definition of AI systems; (iii) the risk-based approach; (iv) the role of standards (e.g. conformity assessment and the CE marking and process); (v) the role of data, the obligations on data quality and the interaction of the Proposal with the legislation governing both personal and non-personal data; and (vi) the governance structure and the role of the European Commission, the (new) Artificial Intelligence Board, the AI Expert Group and, at the national level, national competent authorities. The following paragraphs explore some of these aspects.

1.- AI Definition

Finding an AI definition seems to have been a challenge for the EC and, even so, the current definition is not exempt from controversy due to its broadness. AI is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Art. 3 (1) of the Proposal), where Annex I lays down a list of AI techniques and approaches, currently comprising machine learning, logic- and knowledge-based, and statistical approaches.

Although the EC aimed to provide a neutral definition covering both current and future AI techniques, many stakeholders have already voiced concerns about the comprehensiveness of this definition. While it is convenient to promote flexible legislation, such a broad definition, referring to an Annex I potentially subject to periodic amendment, may raise legal uncertainty concerns in the industry.

2.- An (overreaching) scope?

To ensure the horizontal application of key requirements developed by the High-Level Expert Group on Artificial Intelligence, the Proposal aims at harmonising certain rules concerning the placing on the market, putting into service and use of AI systems that create a high risk to the health and safety or fundamental rights of natural persons (“high-risk AI systems“) in the EU.

Based on the intended purpose of the AI system and following a risk based approach, the Proposal: (i) prohibits certain AI practices; (ii) establishes requirements and obligations for high-risk AI systems – both ex-ante and ex-post; and (iii) sets forth limited transparency obligations for certain AI systems.

Despite the intention to establish a common normative standard for all high-risk AI systems, the application of the Proposal is limited when it comes to: (i) AI systems intended to be used as safety components of products or systems, or which are themselves products or systems, covered by certain legislation applying to the aviation, railways, motor vehicles and marine equipment sectors (Art. 2.2 of the Proposal); and (ii) AI systems used for military purposes.

One may also ask which stakeholders are affected by the Proposal, considering, in particular, the complexity and comprehensiveness of the AI ecosystem. Here, the European Union legislator seems to have been inspired by Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (“GDPR“), establishing that the Proposal shall apply to:

  • Providers of AI systems irrespective of whether they are established within the EU or in a third country outside the EU;
  • Users of AI systems established within the EU;
  • Providers and users of AI systems that are established in a third country outside the EU, to the extent the AI systems affect persons located in the EU;
  • EU institutions, Offices and Bodies.

On the one hand, Art. 3 (2) of the Proposal defines “provider” as the one who “develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge“. On the other hand, according to Art. 3 (4), “user” means any “natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity“.

Hence, apart from suggesting a broad territorial scope of application, affecting providers and users located outside the EU, the Proposal seems to bring different obligations down the supply chain, placing on the ultimate provider and professional user of the AI system much of the legal burden coming from the Proposal.

Considering the multiplicity of stakeholders intervening in the AI system lifecycle (e.g. data providers, third-party assessment entities, integrators, software developers, hardware developers, telecom operators, over-the-top service providers, etc.) and the recommendations of the High-Level Expert Group on Artificial Intelligence Guidelines on inclusive and multidisciplinary teams for the development of AI systems, the Proposal, in general, fails to provide guidance on how the interaction between AI supply chain stakeholders should work.

Legal issues may therefore arise amongst stakeholders, such as potential contractual derogations, the attribution of contractual and non-contractual liability, and their validity (and compatibility) under the principle of accountability that inspires the whole Proposal. In addition, it is worth mentioning that consistency with other legal frameworks, such as defective product legislation, is fundamental to ensuring cohesion and legal certainty – see, to this end, the EC Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.

3.- And the Commission said “risk-based approach”

The Proposal provides four non-mutually exclusive categories of risk: (i) unacceptable risk; (ii) high-risk; (iii) other risk – AI with specific transparency obligations; and (iv) low or no risk. Depending on the category of risk, the obligations of providers, users and other stakeholders will vary, from complete prohibition to permission with no restrictions.

  • Prohibition

In this context, for the category of unacceptable risk, the EC proposes to prohibit, mainly, the following AI practices:

  1. AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  2. AI systems that exploit people’s vulnerabilities due to their age, physical or mental disability, in order to distort the behaviour of a person in a manner that causes or is likely to cause harm to that person;
  3. AI systems, used by public authorities or on their behalf, for the evaluation or classification of the trustworthiness of natural persons when the social scoring may lead to detrimental or unfavourable treatment: (a) in social contexts which are unrelated to the contexts in which the data was originally generated or collected; or (b) that is unjustified or disproportionate to their social behaviour or its gravity; and
  4. the use of ‘real time’ remote biometric identification systems in publicly available spaces for law enforcement, unless and in as far as such use is strictly necessary for certain objectives (e.g. targeted search for specific potential victims of crime; prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack, etc.).

While the Proposal prohibits some AI practices that were already under the spotlight in different Member States, such as facial recognition systems (see for instance the decision from the Italian Data Protection Authority – Garante per la Protezione dei Dati Personali – with regard to the Sari Real Time system), other cases leave room for interpretation, such as “AI systems that exploit people’s vulnerabilities” or “AI systems that deploy subliminal techniques”. The scope of the prohibition of certain AI practices under the Proposal would therefore be broader than the specific use cases banned so far within the European Union.

  • High-risk

The proposed regulation establishes quite a broad list of sectors and uses potentially falling within the high-risk category, which could be amended from time to time by the EC. In particular, according to Art. 6 of the Proposal, an AI system shall be classified as high-risk in the following scenarios:

  1. In cases where the following two conditions are met:
    1. the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II of the Proposal; and
    2. the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment in order to be placed on the market or put into service pursuant to the legislation contained in Annex II.
  2. For the AI systems listed in Annex III.

Therefore, the current list of high-risk AI systems is contained in Annexes II and III of the Proposal. While Annex II covers a wide range of products, or safety components of products, governed by sectorial European Union law, such as machinery, transport, medical devices or radio equipment, Annex III lists high-risk applications such as biometric identification and categorisation of natural persons, management and operation of critical infrastructures, AI for recruitment purposes, law enforcement, or education and vocational training.

High-risk AI systems are at the centre of the Proposal, which establishes a set of obligations covering the entire AI lifecycle – from design to implementation. Such requirements include: (i) carrying out a conformity assessment and subsequent CE marking; (ii) transparency and information obligations; (iii) registration in the EU database for high-risk AI practices; (iv) logging of activities; (v) human oversight; (vi) record-keeping and documentation obligations; (vii) establishment of risk and quality management systems; (viii) robustness, accuracy and cybersecurity obligations; and (ix) use of high-quality datasets for training, validation and testing. The most relevant obligations, for both AI providers and users, are the following:

Provider obligations:

  • Establish and implement a quality management system.
  • Elaborate and keep up to date the technical documentation.
  • Logging obligations, to enable users to monitor the operation of the high-risk AI system.
  • Conduct conformity assessments, and potentially re-assess the system in case of significant modifications.
  • Conduct post-market monitoring.
  • Collaborate with market surveillance authorities.

User obligations:

  • Operate AI systems in accordance with the instructions of use.
  • Ensure human oversight when using the AI system.
  • Monitor operation for possible risks.
  • Inform the provider or distributor about any serious incident or any malfunctioning.
  • Comply with existing legal obligations (e.g. GDPR).

Source: L. SIOLI, CEPS webinar -European approach to the regulation of artificial intelligence (April 2021).

As suggested, high-risk AI systems concentrate the bulk of the requirements established in the Proposal, which lacks guidance in some respects and introduces some caveats that, hopefully, will be clarified during the legislative process.

  • Other risk

Operators placing on the market, putting into service or using AI systems posing a lesser risk than high-risk AI systems shall still have to comply with transparency obligations vis-à-vis users and implementers, such as: (i) notification to humans that they are interacting with an AI system, unless such interaction is evident; (ii) notification to humans that they are being subject to emotion recognition or biometric categorisation systems; and (iii) application of labels to ‘deep fakes’, unless the use of ‘deep fakes’ is necessary for public interest reasons (e.g. criminal offences) or for the exercise of fundamental rights (Art. 52 of the Proposal).

This category of risk also comes with different caveats that prevent the application of the various notification obligations. As anticipated, some clarification may also be needed here. When does the interaction with an AI system become evident? How should the ‘notifications’ be carried out?

  • Limited or no risk

Although AI systems under this category do not entail mandatory obligations for their providers and users, the Proposal requires the EC and the European AI Board to encourage the development of codes of conduct to enhance transparency and information about such “low or no risk” AI systems (Art. 69 of the Proposal).

In light of the above, although the EC understands that, in general, most AI systems will not entail a high risk, one may observe that a set of rules applies widely across all categories of risk in order to enhance, inter alia, transparency, safety and accountability within the AI ecosystem.

4.- Measures in support of innovation

A very welcome mechanism is the AI regulatory sandbox. Regulatory sandboxes are a regulatory concept based on “experimental legislation”, whereby technology companies can test and develop their innovations while benefiting, for instance, from exemption from certain specific rules or legal regimes within a controlled environment. In particular, AI regulatory sandboxes provide a controlled environment in which the development, testing and validation of innovative AI systems are facilitated for a period of time before they come onto the market, under the supervision of Member State authorities or the European Data Protection Supervisor.

Although the modalities, conditions and other criteria shall be governed by the corresponding implementing acts, the Proposal seems to introduce some flexibilities when it comes to further data processing in this context (Arts. 53 and 54 of the Proposal).

The EC has proved particularly sensitive to small-scale providers and start-ups, granting some advantages throughout the Proposal in order to enable greater access to available resources and establish a level playing field regardless of the size and scope of the company. Remarkable, for instance, is the differentiation made from the very beginning between “providers” and “small-scale providers” (Art. 3 of the Proposal), in an attempt to foster a level playing field also adapted to micro and small enterprises.

For instance, the Proposal foresees priority access to AI sandboxes for small-scale providers and start-ups (Art. 55 of the Proposal); the organisation of awareness-raising activities about the Proposal (Art. 55 of the Proposal); and the consideration – by the EC and the European AI Board – of the specific interests and needs of small-scale providers and start-ups when encouraging and drawing up codes of conduct (Art. 69.4 of the Proposal).

5.-Governance structure and enforcement

The governance structure and enforcement seem to bear some similarities with the GDPR. The Proposal creates the European Artificial Intelligence Board, which shall coordinate its activities with the corresponding national competent authorities. This is not the only novelty: still at the European level, the EC is expected to act as secretariat, and a supporting AI Expert Group (potentially equivalent to the High-Level Expert Group on Artificial Intelligence) is to be created in the future.

In addition, the administrative sanctions mirror those established in the GDPR, broken down into different scales depending on the severity of the infringement and amounting to up to EUR 30 million or 6% of the total worldwide annual turnover of the preceding financial year for the most severe infringements (Art. 71 of the Proposal). The EC itself does not seem entitled to impose sanctions, since this task has been attributed to Member States’ national authorities.

Some questions are still left open, such as the coordination mechanisms between authorities, in particular with regard to cross-border infringements of the Proposal. In addition, there is still some lack of clarity as to which specific authorities are expected to have competence at the national level. The current local Data Protection Authorities? Brand-new national AI authorities?

Finally, the current administrative procedures, mimicking to a great extent the current EU competition law system and the GDPR, also risk leading to fragmentation and heterogeneity between Member States. In particular, the lack of a clear decision review mechanism at the European level (i.e. administrative sanctions are reviewed only at the national level) and the question of which authority is entitled to decide on an infringement having an impact in more than one Member State remain unclear.

6.- “What’s in” for Intellectual Property?

As one may understand, the main focus of the Proposal is not Intellectual Property (“IP“) but a horizontal approach to AI. Still, even though IP is referred to only twice throughout the Proposal, some questions remain unanswered – in particular, how to ensure compliance with the obligations set forth in the Proposal while protecting IP rights and trade secrets. Here, the dichotomy between access and protection, to ensure easy implementation, safety and interoperability of AI systems, may need to be revisited.

Regarding transparency and information to users when using high-risk AI systems, several points can be made. What does it mean that the “operation is sufficiently transparent to enable users to interpret the system’s output” foreseen in Art. 13 of the Proposal? Does IP act as a facilitator or as a barrier to this transparency requirement? Does the current IP legal system foresee appropriate mechanisms in order to access necessary information?

Apart from the specific transparency obligations, some other legal requirements set forth in the Proposal may entail the communication of business and technology-related information to other operators. For instance, how can appropriate risk and quality assessment systems be ensured when information is protected from the stakeholders having to implement such systems? Do different stakeholders along the AI supply chain need access to IP- and trade secret-protected information or data? What are the IP barriers to ensuring data interoperability?

One may argue, for instance, from a copyright perspective, that the entitlement to “observe, study or test the functioning of the program in order to determine the ideas and principles which underlie any element of the program” (Art. 5.3 of the Software Directive) or the possibility of decompiling the software (Art. 6 of the Software Directive) may not be of great use where the software is periodically modified, considering also that observing, studying, testing or decompiling such software most of the time becomes a costly process.

With regard to patent law, can the current “sufficient disclosure” standard (see, for instance, Art. 83 of the European Patent Convention) be enough to ensure a “sufficiently transparent” operation? Could Art. 27 (k) of the Agreement on a Unified Patent Court (providing that acts covered under Art. 6 of the Software Directive do not constitute an infringement, in particular with regard to de-compilation and interoperability) inspire reverse engineering exceptions for the purpose of obtaining the information allowed under the above-mentioned Art. 5 of the Software Directive?

The Trade Secrets Directive regime is even more restrictive than the above “de-compilation” exception, and only allows reverse engineering or access to information where the acquirer of the trade secret is free from any legally valid duty to limit its acquisition (Art. 3.1 (b) of the Trade Secrets Directive). Moreover, the novelty of the Trade Secrets Directive and the lack of case law cause some legal uncertainty.

Moreover, the Proposal establishes the obligation to disclose the AI source code to enforcement authorities. Although this practice could be covered under specific or general exemptions provisions currently in force based on the principle of public interest (e.g. Art. 1.2 (b) and Recital 11 of the Trade Secrets Directive), how is access to the source code useful for enforcement authorities? What is the scope of source code to be disclosed? That of the trained AI model? The validated AI model? The code to build the AI model? In practice, the above could lead to divergent practices between national authorities, which could be requesting slightly different types of information, in particular when it comes to AI systems using machine learning approaches.

Also in this context, Art. 70 of the Proposal, with regard to disclosure obligations, specifically protects the confidentiality of information and data communicated to the national competent authorities and notified bodies involved in the application of the Proposal. Along these lines, authorities shall carry out “their tasks and activities in such a manner as to protect, in particular: (i) intellectual property rights, and confidential business information or trade secrets of a natural or legal person, including source code, except the cases referred to in Article 5 [of the Trade Secrets Directive]”, where the latter provision foresees four exceptions to trade secret rights. Thus, although the protection of IP rights, confidential information and trade secrets was addressed by the legislator when drafting the Proposal, it appears to leave room for authorities to decide what the concrete protective measures shall be, without providing further guidance (i.e. “carrying out activities in such a manner as to protect”).

In addition, the Proposal provides that “the increased transparency obligations will also not disproportionately affect the right to protection of intellectual property (Article 17(2) [EU Charter of Fundamental Rights]), since they will be limited only to the minimum necessary information for individuals to exercise their right to an effective remedy and to the necessary transparency towards supervision and enforcement authorities” and that “when public authorities and notified bodies need to be given access to confidential information or source code to examine compliance with substantial obligations, they are placed under binding confidentiality obligations”. The aim of striking a proper balance between IP and trade secret rights, on the one hand, and access to information and data, on the other, therefore seems clear. However, guidance on the concrete measures national authorities and notified bodies must take to protect IP, confidential information and trade secrets would be desirable.

An additional effort must be made in connection with the above-mentioned aspects so as to build – consistently with the sectorial regulation affecting access to (and protection of) information covered by IP – a coherent system that ensures the appropriate trade-off between access and protection, not only from a theoretical perspective but following a pragmatic approach.

CONCLUSION

The Proposal is very much needed in order to ensure the “human-centred approach” to AI underlined on numerous occasions by different EU institutions, and comes after a long process that, as can be appreciated, has led to what could be considered a well-structured and timely text. Although it is an unprecedented piece of legislation, the European Union institutions must ensure that the final outcome does not result in a burdensome regulation that, in combination with, inter alia, the legislation governing data protection, digital services and sectorial matters, becomes a complex regulatory maze for companies to navigate – producing a chilling effect on innovation and, as a result, problems for the development of the hoped-for strong digital and AI ecosystem.

Notwithstanding the above, clarification of certain provisions, consistency with the current IP and data protection legal frameworks, and the application of lessons learnt from the GDPR could make the Proposal “future-proof”, also considering the current business, geopolitical and societal context. In addition, an appropriate vacatio legis period between the adoption of the final text and its entry into force, allowing companies to adapt, together with further guidance on the governance and enforcement structures, will be fundamental.

The text now enters “inter-institutional” negotiations, going to the European Parliament and the Council for further debate, where different public and private stakeholders will have the opportunity to get involved.

Rubén Cano

Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)

Another Italian take on the Fulvestrant Saga. The Court of Milan on technical prejudice, plausibility and off-label use.

AstraZeneca has litigated its fulvestrant patent portfolio all across Europe, including Italy, for quite some time (see for instance here, here, here and here). Fulvestrant is an oncological product used for the treatment of breast cancer and is marketed by AstraZeneca as Faslodex.

The Court of Milan recently chipped in again with its decision of 3 December 2020 (No. 7930/2020, Judge Rapporteur Ms Alima Zana, AstraZeneca v. Teva, here), delivering a number of interesting points on validity and infringement of second medical use patents.

In particular, the Court held that the Italian arm of AstraZeneca’s European patent No. EP 1 272 195 (“EP195”) lacked inventive step since, among other things, it did not overcome a technical prejudice; that the claimed therapeutic indication was not plausible; and that – in any case – AstraZeneca did not offer sufficient evidence of Teva’s off-label infringement.


Procedural background

In 2018, AstraZeneca filed an action against Teva before the Court of Milan for the alleged infringement of the EP195 patent, titled “use of fulvestrant in the treatment of resistant breast cancer”. Teva counter-claimed arguing that EP195 was invalid.

In May 2020, as Teva was about to launch Fulvestrant Teva into the Italian market, AstraZeneca filed a petition for preliminary injunction based on EP195. The Court refused the PI in August (here) as it deemed that AstraZeneca’s action did not meet the “likelihood of success on the merits” requirement (so-called fumus boni iuris). The Court thus went on to decide the case on the merits.

The patent and the allegedly infringing product

Claim 1 of EP195 reads: “Use of fulvestrant in the preparation of a medicament for the treatment of a patient with breast cancer who previously has been treated with an aromatase inhibitor and tamoxifen and has failed with such previous treatment” – a typical “Swiss-type” claim (for additional information on “Swiss-type” claims, see EPO’s Guidelines, here).

We gather that the patented use of fulvestrant (in “the treatment of a patient with breast cancer who previously has been treated with an aromatase inhibitor and tamoxifen and has failed with such previous treatment”) was not indicated in the SmPC or package leaflet of Fulvestrant Teva. The product’s marketing authorization (“MA”) would confirm this (see the 2016 version of the MA, here, and further updates here, here, here and here).

Expert opinions and foreign decisions

The Court of Milan deemed that EP195 was invalid, going against the opinion of its own Court-appointed expert. This is rather uncommon in Italian patent proceedings, since judges have no technical background and necessarily rely on their experts’ reports. Here, however, the Court was convinced by the opinion issued by a different expert in the parallel proceedings brought by AstraZeneca against another generic company on EP195 (Docket No. 37181/2018 – this case is referred to in the judgement, but we are not aware of a published decision yet).

Also, the Court relied on the foreign decisions issued on EP195 by the German Bundespatentgericht (here), the Swiss Bundespatentgericht and the Court of Barcelona (order of 18 July 2018, confirmed in 2019, here).

In doing so, the Court stressed that judges can base their decisions also on evidence that is acquired in different proceedings between the same parties, or even different parties. The general rule of Article 116 of the Civil Procedure Code (CPC) – according to which “the judge must assess the evidence according to his or her prudent judgment, unless the law provides otherwise” – fully applies to the Expert Report issued in the parallel proceedings brought by AstraZeneca on EP195. Besides, the “parallel” Expert Report was also filed by the parties to these proceedings and heavily discussed in their briefs, so that there was no surprise argument.

Lack of inventive step

The Court-appointed Expert deemed that the closest prior art to EP195 was the “Gale” article. The Court picked the “Vogel” article instead as the closest prior art and determined the objective technical problem accordingly. Applying the EPO’s “could-would approach”, the Court of Milan concluded that the solution disclosed in EP195 (i.e., to use fulvestrant as a line of treatment for breast cancer after aromatase inhibitors and tamoxifen) lacked inventive step.

According to the Court, the person skilled in the art would have found in “Vogel” an incentive to use fulvestrant as a third-line treatment, after tamoxifen and aromatase inhibitors, since (i) “Vogel” mentioned pure anti-estrogens as emerging candidates to be used in the second-to-fourth therapeutic line; (ii) fulvestrant was identified as a pure anti-estrogen in a publication (“DeFriend”) referenced in “Vogel”; and (iii) “Vogel” specifically taught that pure anti-estrogens inhibit the growth of a tumor already treated with tamoxifen.

The Court also denied that the use of fulvestrant after tamoxifen overcame a technical prejudice. AstraZeneca argued that, because tamoxifen and fulvestrant have a similar mechanism of action, the person skilled in the art would have feared that the breast cancer might have developed resistance and mutations to fulvestrant after the administration of tamoxifen. The Court, however, found that tamoxifen and fulvestrant have substantial differences, which were known in the art, and therefore no technical prejudice could arise. On this point, the Court directly referenced the German Bundespatentgericht decision (here), which found that there was no technical prejudice against the choice of fulvestrant as a pure anti-estrogen since:

“the person skilled in the art associated the choice of fulvestrant with a reasonable expectation of success because Vogel considers it to be an advantageous example of a pure anti-estrogen, as it does not show any relevant side effects despite its high efficacy and shows fewer side effects than other drugs known to be effective for the endocrine treatment of breast cancer”.

Plausibility (?)

As an additional ground of invalidity, the Court of Milan stressed that the patent would be valid only in relation to a purely palliative treatment (and not as an adjuvant therapy) in a patient affected by breast cancer that was previously treated by an aromatase inhibitor and tamoxifen, where said treatments were unsuccessful. This is because the experimental data provided in the patent and in the “Perey” document – published in 2007, after the filing – allegedly “refer only to a palliative use of fulvestrant”, showing that a “plausible solution to the technical problem was found only in relation to the palliative use” of the active principle. The Court thus held that, in any case, AstraZeneca would have had to submit a limited wording of claim 1 on the palliative use alone.

The Court, however, did not precisely identify which patentability requirement would have been affected by this “plausibility” issue, although novelty, inventive step and sufficiency of disclosure can all be theoretically impacted by a (lack of) plausibility argument (on the “nebulous” plausibility requirement, see a recent article by Rt. Hon. Professor Sir Robin Jacob, here). The Court was arguably hinting at an insufficiency of disclosure in relation to the therapeutic indication. It should be stressed, however, that the EPO does not expect applicants to submit clinical trials for each therapeutic indication. In particular, the EPO Guidelines (here) suggest that:

“Either the application must provide suitable evidence for the claimed therapeutic effect or it must be derivable from the prior art or common general knowledge. The disclosure of experimental results in the application is not always required to establish sufficiency, in particular if the application discloses a plausible technical concept and there are no substantiated doubts that the claimed concept can be put into practice (T 950/13 citing T 578/06). Any kind of experimental data have been accepted by the boards. It has also been repeatedly emphasised that ‘it is not always necessary that results of applying the claimed composition in clinical trials, or at least to animals are reported’ (T 1273/09 citing T 609/02)”.

Besides, in an earlier case on fulvestrant (decision of 24 July 2019, Actavis v. AstraZeneca, concerning the Italian portion of EP 2 266 573: here), the Court of Milan had rejected an insufficiency/plausibility argument by stating that: “[t]he inclusion of examples in the patent specification is not mandatory nor is it a requirement for the sufficiency of the disclosure the completion of a clinical trial at the date of the filing, as it is possible to rely on subsequent trials (decision T 433/05), provided that the effect was plausible at the date of filing” (for a comment on this case see here). As the argument was not fully developed in the decision by reference to the “Perey” document, it could seem that the Court of Milan is adopting diverging approaches to plausibility, and further clarification would be welcome.

Lack of infringement

To seal the deal, the Court of Milan then went on to say that, in any case, Teva’s product would not have infringed EP195. The Court’s argument goes as follows:

I. The asserted patent is a patent for a second medical use, the patent being directed at the protection of a known substance, limited to the use claimed;

II. In such a case the infringement cannot be found if the generic medicine in its marketing authorisation contains a clear limitation to uses that do not violate patent rights […]

III. In order for there to be infringement […] it is necessary that the [competitor’s] product is not only abstractly suitable for the claimed use, but that the package leaflet indicates the use of the product for the intended purpose.

This assessment is supported by the numerous proceedings in other jurisdictions that have ruled in favour of non-infringement [e.g. the decision of the Oberlandesgericht Düsseldorf, 9 January 2019, mentioned in the footnote].

The defendant’s argument that Teva’s drug is not ‘suitable’ for the claimed use is therefore correct, as this is not the case from a regulatory (since the product is not authorised for the claimed use) nor from a practical point of view (since the claimed use is not foreseen or included in the oncological practice), nor has the [plaintiff] offered any evidence to the contrary.

Due to the finding of invalidity of EP195, the Court’s non-infringement argument is made ad abundantiam and is consequently rather short. However, some passages are worth discussing.

First, as to point (II), it is not entirely clear what the Court is referring to when it says that the MA of Fulvestrant Teva “contains a clear limitation to uses that do not violate patent rights”. Assuming that the Fulvestrant Teva’s MA does not explicitly include the therapeutic indication covered by EP195, the Court could be referring to the following passage in the authorization (see here and here):

“The holder of the marketing authorization for the generic drug is solely responsible for full compliance with the industrial property rights of the reference medicinal product and of the patent legislation in force.

The holder of the MA of the equivalent medicine is also responsible for ensuring full compliance with the provisions of Article 14(2) of Legislative Decree 219/2006, which requires that those parts of the summary of product characteristics of the reference medicine which refer to indications or dosages still covered by a patent at the time the medicine is placed on the market are not included in the leaflets”.

If this is the case, the Court’s argument on the MA is not particularly convincing. Indeed, this wording – which is boilerplate and often included in MAs – can hardly be read as a “clear limitation to uses that do not violate patent rights”. Infringing uses are technically neither excluded nor forbidden by it, at least under patent law. The Italian Pharmaceutical Authority simply stresses that it is the generics’ responsibility to steer clear of patent rights.

Second, as to point (III), the Court would seem to suggest that infringement of a second medical use patent can only occur if the package leaflet “indicates the [claimed] use”. However, the fact that a specific therapeutic indication is missing from the MA, SmPC or package leaflet does not exclude per se that the drug may be used and prescribed for the patented use.


Besides, the Court’s argument is not fully supported by the case law of the Oberlandesgericht Düsseldorf referred to in the decision (see here and here). Although in the parallel AstraZeneca v. Teva proceedings the Düsseldorf Court found that EP195 was not infringed on the merits of the case, German case law has established that patent infringement can occur also in “off-label” cases, where the pharmaceutical product is (i) suitable to be used according to the patented indication and (ii) the manufacturer or distributor takes advantage of circumstances that ensure that the product is sold and/or is used for the intended purpose. This, in turn, requires a sufficient extent of use in accordance with the patent as well as knowledge (or at least a willful blindness) of this use on the part of the manufacturer or distributor (see Oberlandesgericht Düsseldorf, 5 May 2017, I-2 W 6/17, Estrogen Blocker, § 85, another case on EP195, here).

In any event, the Court of Milan somewhat softens its argument by adding, in the last paragraph, that AstraZeneca did not provide any evidence of infringement by Teva, or that the use of fulvestrant according to EP195 was current oncological practice in Italy. Conversely – one may wonder – if AstraZeneca had provided sufficient evidence of the off-label use, a finding of infringement could have been possible.

For further answers we now look forward to the decision of the Court of Milan in the parallel case (Docket No. 37181/2018) and will report on any interesting development.

Giovanni Trabucco

Court of Milan, 3 December 2020, Decision No. 7930/2020, AstraZeneca v. Teva

Facebook’s (failed) vaccine against the Infodemic

Alongside the Covid-19 pandemic, an equally dangerous emergency is underway. It relates to the circulation of false information on the internet and particularly on social media platforms. This is a pervasive and worldwide phenomenon which calls into question the role that digital platforms should play in tackling disinformation and misinformation. The question is in fact: should digital platforms be in charge of addressing the problem of online disinformation?

Last year Avaaz conducted a study aimed at monitoring the spread on Facebook of misleading content related to the pandemic, and at analysing and evaluating the effectiveness of the big tech’s policies to combat this “Infodemic”. The results showed serious shortcomings and a decisive delay in the implementation of the relevant policies. A year later, the organization published a second study that returns to this issue in order to compare those data with the current situation and to verify whether any improvement has occurred.

In practical terms, intervening to stop allegedly false content from circulating on social media is left to the self-regulation of the platform. According to this internal practice, the information must be submitted to so-called fact-checkers, who verify and certify that it is actually false; in other words, the fake news must be “debunked”. This work is carried out by specialized companies that can be partners of the platform or independent from it. Once the fact-check has been notified (before receiving confirmation, in fact, the social media platform does not intervene on its own initiative), Facebook can take measures consisting of either labelling or removal.

The Avaaz study examined a sample of 135 pieces of Facebook content in five different languages (English, French, Italian, Spanish and Portuguese) identified as fake by independent fact-checkers. A number of key findings emerged from this sample. The first is that the most widespread narrative concerns vaccine side effects, including death. Secondly, it appears that Facebook is more reluctant to intervene against fake content in languages other than English. This results either in late intervention (30 days for non-English content compared to 24 days for English-language false content) or even in no intervention at all, with the effect that European citizens would seem to be more exposed to the risk of misinformation than Americans (or, more precisely, than citizens of English-speaking countries).

Ultimately, the study highlights how Facebook’s policies relating to Covid-19 disinformation and misinformation in Europe should be reviewed and strengthened, especially in times of global crisis. In this regard, it should be noted that the EU Commission, even before the pandemic, was committed to fighting the spread of the phenomenon at stake. In particular, in 2018 a Code of Practice on Disinformation (“the Code”) was adopted and signed by online platforms, the leading social networks and advertisers. The Code aims to implement the 2018 Communication from the Commission and also identifies some best practices. However, Avaaz suggests that this document should be amended to provide, for example, the following measures: i) retroactive notification for users who interacted with fake content; ii) a reduction of the algorithmic acceleration of harmful content; and iii) the establishment of an independent monitoring regulator.

It should also be noted that the newly proposed Digital Services Act (“DSA”) aims specifically to tackle the spread of fake news online and to increase transparency by providing a series of obligations for online platforms, including social media. It will therefore be interesting to see how the legislative process evolves. In the meantime, one thing is certain: institutions have not found a vaccine against the Infodemic… so far.

Valeria Caforio

What’s Up, WhatsApp?!

In a GDPR urgency proceeding, the Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI), Johannes Caspar, has taken action against Facebook. The aim of this proceeding – whose decision is expected before May 15, 2021 – is to comprehensively protect WhatsApp users in Germany, who are confronted with the company’s new terms of use. Against this backdrop, May 15 stands out as an important deadline, since by that date users must consent to data processing by the parent company Facebook. The fear is that the data will be used in particular for marketing purposes, which goes beyond the scope of analysis and security.

After Facebook announced the new terms and conditions at the beginning of this year, a discussion arose. As a result, the company decided to postpone their introduction to May. With many millions of WhatsApp users in Germany alone, Johannes Caspar has now stressed the importance of having functioning institutions in place in order to prevent the misuse of data power. Caspar could not exclude that the data-sharing provisions between WhatsApp and Facebook would be enforced illegally, given the lack of voluntary and informed consent. In order to prevent a potentially unlawful exchange of data and to put an end to any impermissible pressure on users to give their consent, the formal administrative procedure was initiated.

Based on Art. 66 GDPR (“exceptional circumstances”), the emergency procedure is aimed at the European headquarters in Ireland. The American company has been given the opportunity to state its position, and it can be expected that Facebook will consider its adjustments to be sufficient. Notably, the Hamburg data protection authorities had already issued an injunction against such data matching in 2016. Although Facebook took legal action, the company did not prevail in court (OVG Hamburg, February 26, 2018 – 5 Bs 93/17 – K&R 2018, 282).

The outcome of the proceedings in Hamburg is eagerly awaited since it may have an impact on the entire European market, given the direct applicability of Art. 66 GDPR in the different Member States. Although the decision from 2018 could delineate a trend, the outcome is open.

Dario Henri Haux

See the media statement at: https://datenschutz-hamburg.de/pressemitteilungen/2021/04/2021-04-13-facebook

Trumping the First Amendment: Updates on the Twitter Saga

The decisions concerning the blocking of Twitter and Facebook accounts belonging to Donald Trump are still pending. 

By contrast, the judicial proceedings regarding the attempts by then-President Trump himself to limit other Twitter users’ reactions to his own posts were decided by the United States Court of Appeals for the Second Circuit, which addressed the question of whether blocking another social media user could amount to a violation of the First Amendment.

President Trump, acting in his official capacity as President of the United States, petitioned for a writ of certiorari in August 2020. Because, following the elections, President Biden would have become the petitioner to this action, the Justice Department asked the Supreme Court to declare the case moot (see here). On 5 April 2021, the US Supreme Court, following its established practice, granted the writ of certiorari, vacated the lower judgement, and remanded the case to the Court of Appeals for the Second Circuit with instructions to dismiss the case as moot.

What makes this decision particularly interesting is Justice Thomas’ concurring opinion.

Indeed, the concurring opinion of Justice Thomas addresses the ongoing discussions on the nature of “digital spaces”, in between private and public spheres. Many see an inherent vice in a system based on a private enforcement of fundamental rights to be performed by platforms, such as Twitter. 

In the case at stake, the petition revolved around the possibility to qualify a Twitter thread as a public forum, protected by the First Amendment. At the same time, however, the oddity of such a qualification becomes clear, as Twitter – a private company – has unrestricted authority to moderate the threads, according to its own terms of service.

Against this backdrop, Justice Thomas states that: “We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms”.

From what has been said here, it should be clear that a turning point has been marked in an ongoing discussion that engages scholars, legislators and all players on a global scale. Indeed, Justice Thomas analyses the legal qualification of the “public forum” doctrine and its applicability to Twitter threads, since the main controversial element of the case concerns the existence of governmental control over the digital space (even if limited to a single Twitter account).

Given that Mr. Trump often used his personal account to speak in his official capacity, it is questionable whether this element might be fulfilled in the case at stake. At the same time, the private nature of providers with control over online content, combined with the concentration of platforms limiting the number of services available to the public, may offer new ways of legally addressing these challenges. For example, Justice Thomas proposes to consider the doctrines pertaining to limitations to the right of a private company to exclude others, such as “common carriers” or “public accommodation”. 

In this regard, Justice Thomas found that: “there is a fair argument that some digital platforms are sufficiently akin to common carriers or places of accommodation to be regulated in this manner”, especially in cases where digital platforms have dominant market share deriving from their network size.

Interestingly, Justice Thomas did not miss the chance to depict the digital environment as such, when stating that: “The Internet, of course, is a network. But these digital platforms are networks within that network”.

What is more, the dominant position of the main platforms in the digital market is taken for granted without further analysis. Namely, it is stressed that the existing concentration gives a few private players “enormous control over speech.” This is particularly true considering that viable alternatives to the services offered by GAFAM barely exist, also owing to strategic acquisitions of promising start-ups and competitors.

It becomes evident that public control over the platforms’ right to exclude should – at least – be considered. In that case, the platforms’ unilateral control would be reduced to the benefit of increasing public oversight, which implies that “a government official’s account begins to better resemble a ‘government-controlled space’.”

Justice Thomas highlights that this precise reasoning gives strong arguments to support a regulation of digital platforms that addresses public concerns.

In conclusion, according to Justice Thomas, the tension between ownership and the right to exclude, on the one hand, and the right to free speech, on the other, must be resolved expeditiously, giving consideration both to the risks associated with a public authority (such as then-President Trump) cutting off citizens’ free speech using Twitter features, and to the smothering power of dominant digital platforms.

Andrea Giulia Monteleone

The full text of the Supreme Court decision: 20-197 Biden v. Knight First Amendment Institute at Columbia Univ. (04/05/2021) (supremecourt.gov)

A Puzzling Decision: Freedom on Toy Blocks’ Production Market vs. LEGO’s Protection

We have already referred to a case concerning Lego and, in particular, the possibility of registering Lego mini-figures as shape marks (see here).

Now, after filing an application with the European Union Intellectual Property Office (EUIPO) for registration of a Community Design in Class 21.01 of the Locarno Agreement of 8 October 1968, with the description “Building blocks from a toy building set”, Lego A/S (“Lego”) got a big surprise from Delta Sport Handelskontor GmbH (“Delta”), which decided to “raise its concerns”.

Specifically, on December 8, 2016, Delta challenged the validity of Lego’s design on the basis that all the features of appearance of the product concerned by the contested design were solely dictated by the technical function of the product and, for that reason, were excluded from protection; the claims were mostly grounded on Article 25(1)(b) of Regulation No 6/2002, read together with Articles 4 to 9 of the same Regulation.

At first instance, on October 30, 2017, EUIPO’s Cancellation Division rejected the invalidity claim; on appeal, the ruling was overturned and the design was invalidated (by decision of the Board of Appeal of April 10, 2019) (the texts of both decisions are available here under the section “Decisions”).

At this point, Lego decided to bring the case before the EU General Court, claiming (i) the annulment of the contested decision and (ii) the upholding of the decision of the Cancellation Division.

On March 24, 2021, the EU General Court annulled the appeal decision, specifying, inter alia, the following (full text available here):

  • The Board of Appeal failed to assess whether the design met the requirements of the exception provided for in Article 8(3) of Regulation No 6/2002 for which “the mechanical fittings of modular products may nevertheless constitute an important element of the innovative characteristics of modular products and present a major marketing asset, and therefore should be eligible for protection”. Since it failed to do so, it erred in law.
  • In order for a design to be invalidated, all of its features and/or elements must be dictated by technical function; if at least one of the features is not imposed exclusively by technical function, the design cannot be declared invalid. In that regard, the General Court notes that the smooth surface of the upper face of the product is not among the characteristics identified by the Board of Appeal, even though it is a feature of the appearance of the LEGO brick itself; and
  • with reference to the above, the burden of proof lies with the applicant for a declaration of invalidity, whose claims must then be assessed by EUIPO.

Maria Di Gravio and Francesca Di Lazzaro

General Court (Second Chamber), 24 March 2021, LEGO A/S vs EUIPO, Case T‑515/19

HAPPY EASTER EVERYONE FROM IPlens

The Italian Competition Authority clarifies how to improve the internal market competitiveness. Focus on digital platforms.

On 23 March, the Italian Competition Authority (ICA) published a 105-page report providing the Italian Government with a number of pro-competitive reform proposals in view of the forthcoming annual Competition Act (full report in Italian here). The report can be read as the outcome of a constructive dialogue between the ICA and the Government, whereby the former clarifies how to unlock the competitiveness of the Italian market and the latter may turn these suggestions into binding law. It falls outside the scope of this summary to examine whether, since the entry into force of the Competition Act in 2009, successive Governments have followed the ICA’s advice. In any event, Prime Minister Draghi’s recent call for the ICA’s opinion during his opening speech before the Italian Parliament bodes well for an effective 2021 Competition Act.

Covering a wide range of topics, the report shows that the ICA has embraced the EU-wide zeitgeist of the coming years, namely sustainability and digitalization.

The reform proposals focusing on the connection between competition law and sustainable development echo the innovative studies released by the Dutch and Greek Competition Authorities (and the recent report of the CMA), tailoring them to the peculiarities of the Italian markets. As for the digital sector, two proposed legislative amendments – concerning, first, the law on economic dependence and, second, an ex-ante regulatory framework for digital platforms – stand out for the radical impact they could have on the ICA’s enforcement regime.

With regard to the first point, Italian Law 192/1998 (“Law on Subcontracting in Production Activities”) requires companies with strong bargaining power over customers and/or other companies to refrain from abusing that position and the resulting economic dependence of the latter. Economic dependence is determined according to the following criteria:

  1. the possibility for the company to impose an “excessive imbalance of rights and obligations in the companies’ commercial relationships”, and
  2. the “effective possibility” for the allegedly abused company of “finding satisfactory alternatives on the market.”

The remedies available against this abuse consist in the declaration of nullity of the agreement concerned, injunctions and compensation for damages. Remarkably, this assessment leaves room neither for the (costly and time-consuming) antitrust analysis of a firm’s dominance in a given market nor for an assessment of the anticompetitive effects that occurred. Conversely, it shares with the abuse of dominance regime the fines of up to 10% of the turnover of the company concerned that the ICA may impose, as well as the penalty payments in case of non-compliance with the ICA’s decision.

Against this background, an emerging trend – pioneered by new legislation adopted in Germany and Belgium and by more aggressive enforcement by the French antitrust watchdog – is revitalizing the legal instrument at hand, sharpening its teeth.

The ICA is following this trend, setting its sights on digital platforms. The core of the proposal rests upon the introduction of a rebuttable presumption of economic dependence “in case a company makes use of the intermediary services provided by a digital platform holding (…) a crucial role in reaching end users and suppliers” (also due to the network effects and/or data availability it can leverage). A non-exhaustive list of forbidden conducts supplements this proposal, fleshing out this newly shaped notion of abuse of economic dependence and enhancing legal certainty for digital platforms. These are:

  1. directly or indirectly imposing unfair purchase or selling prices, other unfair conditions or retroactive non-contractual obligations;
  2. applying dissimilar conditions to equivalent transactions;
  3. refusing products or services’ interoperability or data portability, thereby harming competition;
  4. subordinating the conclusion and execution of contracts and/or the continuity and regularity of the resulting commercial relations to the acceptance by other parties of supplementary obligations which, by their very nature and according to commercial usage, have no connection with the subject of such contracts;
  5. providing insufficient information or data on the scope or quality of the service provided;
  6. unduly benefitting from unilateral obligations that are not justified by the nature or the scope of the activity provided, in particular by making the quality of the service provided conditional upon data transfer in an unnecessary or disproportionate amount.

In sum, by shifting the burden of proof of economic dependence onto the platforms’ shoulders, this ambitious reform proposal, if approved, will push the enforcement toolkit under examination towards applications never tested before, opening a new chapter in the intriguing debate on the entangled balance between “purely antitrust-based” enforcement instruments and those grounded in contract law.

The second proposal sees the ICA willing to be armed with an ex-ante regulatory tool for digital platforms modelled on the recently implemented German one. Clarifying why an antitrust authority asks to be equipped with such powers would require an ad hoc article (and probably a more competent author). Yet, for those brave enough to want to learn more about the challenges competition law is facing in dealing with digital platforms, these reports are a good first step:

Here it is merely underlined that it is widely accepted among antitrust scholars and practitioners that traditional ex-post antitrust enforcement rules are struggling to keep pace with the challenges posed by global digital players. The abovementioned Commission report notably stressed that the digitalization of the economy requires “a vigorous competition policy regime” that, nevertheless, “will require a rethinking of the tool of analysis and enforcement”. Within the EU, this rethinking is currently leading to the development of a synergy between antitrust enforcement and regulatory instruments (this should come as no surprise, as Europe has a long tradition of combining competition law and regulation to achieve similar policy objectives – e.g. the regulation of international roaming charges, or the Interchange Fee Regulation for the financial industry). The EU Commission’s draft Digital Markets Act, the latest and most debated example of this policy choice, aims at monitoring certain anticompetitive behaviors carried out by the so-called Gatekeepers, those market actors that control the entry of new players into the digital market.

National Competition Authorities are pursuing the same path. In January 2021, the German Parliament adopted an amendment to the German Antitrust Act, introducing, among other things, a new weapon for the Bundeskartellamt to act against platforms and assess network effects. The ICA’s reform proposal mirrors this amendment.

The proposal introduces the legal notion of undertakings having “primary importance for competition in multiple markets” (i.e. the Gatekeepers). The ICA shall declare this peculiar legal status on the basis of the following non-exhaustive list of criteria:

  1. the dominant position in one or more markets;
  2. the vertical integration and/or presence on otherwise related markets;
  3. the access to data relevant to competition;
  4. the importance of the activities for third parties’ access to supply and sales markets; and
  5. the related influence on third parties’ business activities.

Once the status of Gatekeeper has been declared, in addition to the general rules governing abuse of a dominant position contained in Article 3 of Law No 287/90, the undertaking concerned is subject – for five years from the ICA’s declaration – to seven black-listed conducts:

  1. giving preferential treatment, in the supply and sales markets, to its own goods or services over those of other companies, in particular by giving preference to its own offers in their presentation, by pre-installing its own offers on devices or by integrating them in any other way into the companies’ offers;
  2. taking measures that hinder other companies in their business activities on procurement or sales markets, if the undertaking’s activities are important for access to these markets;
  3. hindering other companies on a market on which the undertaking can rapidly expand its position, even without being dominant, in particular through tying or bundling strategies;
  4. processing competitively sensitive data collected by the undertaking, to create or appreciably raise barriers to market entry;
  5. impeding the interoperability of products or services or the portability of data;
  6. providing other companies with insufficient information about the services provided or otherwise making it difficult for them to assess the value of those services;
  7. demanding advantages for the treatment of another company’s offers which are disproportionate to the reason for the demand, in particular by demanding for their presentation the transfer of data or rights which are not absolutely necessary for this purpose.

The aforementioned prohibitions do not apply to the extent that the Gatekeeper demonstrates that its conduct is objectively justified. It is not yet clear whether this burden on companies is to be understood in the same way as under the traditional discipline on abuse of dominance, or whether, in addition to efficiency reasons, public interest reasons (e.g. safety) could also be included. In case of non-compliance, the ICA could fine the undertakings concerned and/or impose behavioral or structural remedies to bring the infringement and its effects to an end, or to prevent a repetition of the conduct.

In conclusion, while the reports discussed above made the last few years a period of reflection for competition law, antitrust watchdogs are now making their moves to tackle a number of shortcomings the rise of platforms has led to. Despite sharing a basic premise – the perceived inherent limits of ex-post antitrust enforcement – the regulatory models under discussion, as far as can be assessed at this preliminary stage, give rise to several questions.

It is still unclear, indeed, how the concurrent application of antitrust and regulatory powers (both at EU and national level) will work when a conduct simultaneously violates competition law and harms market contestability or P2B fairness. In the same vein, how can the synergies between antitrust and regulatory powers be maximized without harming legal certainty and predictability (without forgetting procedural safeguards, above all the ne bis in idem principle)?

One could further pinpoint an apparent lack of awareness of the intersection between competition and other regulatory protections – such as, for instance, privacy – in the digital sector, and thus of the need for forms of coordination and shared responsibility among all the authorities with functions in that sector. Indeed, some of the above-mentioned obligations on Gatekeepers, such as those relating to interoperability and data portability and to the prohibition on introducing or reinforcing entry barriers based on data exploitation, could give rise to tensions with the legislation on – and the Authority empowered to supervise – the processing of personal data. In this sense, the absence of institutional cooperation lets one foresee a bumpy road ahead for the governance of the digital economy. Yet, by providing, for example, for rules on information exchanges between authorities, or even for co-responsibility between authorities in some areas, the Italian legislator could still anticipate these overlaps (not least giving the impression that taking the best from other countries should not set critical thinking aside).

Andrea Aguggia

“To Sample, Or Not To Sample, That Is The Question”

On 16 September 2020 the United States District Court for the Central District of California had to decide whether the use by the artist known as Nicki Minaj of a recording of the lyrics and melodies of the musical work “Baby Can I Hold You” by Tracy Chapman (hereinafter the “Work”), for artistic experimentation and for the purpose of securing a license from the copyright owner, constituted fair use (full decision here). Minaj was aware that she needed to obtain a license to publish her remake of the Work, titled “Sorry”, as it incorporated many lyrics and vocal melodies from the Work. Minaj made several requests to Chapman for a license, but Chapman denied each request. Minaj did not include “Sorry” in her album. She contacted DJ Aston George Taylor, professionally known as DJ Funk Master Flex, and asked if he would preview a record that was not on her album.

The Court recognized fair use based on the following assessments:

·       the purpose of Minaj’s new work was experimentation. Since Minaj “never intended to exploit the Work without a license” and excluded the new work from her album, Minaj’s use was not purely commercial. In addition, the Court noted that “artists usually experiment with works before seeking licenses, and rights holders usually ask to see a proposed work before approving a license”. The Court expressed concern that “the eradication … [these] common practices would limit creativity and stifle innovation in the music industry”;

·       the nature of the copyrighted work did not favor fair use, because the composition is a musical work, which is “the type of work that is at the core of copyright’s protective purpose”;

·       the amount of the portion used in relation to the work as a whole favored fair use. Although Minaj’s new work incorporated many of the composition’s lyrics and vocal melodies, the material used by Minaj “was no more than necessary to show Chapman how [Minaj] intended to use the composition in the new work”;

·       the effect of the use on the potential market for, or value of, the copyrighted Work favored fair use, because “there is no evidence that the new work usurps any potential market for Chapman”.

Considering the factors together, the Court found that Minaj’s use was fair and granted partial summary judgment in her favor, holding that her use did not infringe Chapman’s right to create derivative works. The Court determined that Chapman’s distribution claim had to be tried and resolved by a jury, but a settlement eliminated the need for a trial: Minaj paid a significant sum (USD 450,000) to settle and avoid the risk of trial. If, on the one hand, this case confirms that private sampling may be protected as fair use, on the other hand it sounds like a warning for artists on sampling matters. Obtaining a preliminary license – even in the land of fair use – remains the best practice, although creativity and experimentation need – in the opinion of the writer – to be protected, so as to empower the spread of different music genres and contribute to a cultural renaissance, especially in hip hop, a genre historically based on sampling.

The decision offers an interesting comparison with the Pelham case (CJEU – C-476/17, Pelham GmbH and others), which helps to analyze how the two different systems are evolving on sampling. Actually, the agreement between these two decisions is only partial.

Indeed, in Pelham the CJEU recognized the admissibility of the “unrecognizable sample”. According to the CJEU, “where a user, in exercising the freedom of the arts, takes a sound sample from a phonogram in order to use it, in a modified form unrecognizable to the ear, in a new work, it must be held that such use does not constitute ‘reproduction’ within the meaning of Article 2(c) of Directive 2001/29.”

Furthermore, in Pelham the CJEU argued that the reproduction of a sound sample, even a very short one, constitutes a reproduction falling within the exclusive rights granted to the phonogram producer. Considering that the US Court stressed that “not only (…) the quantity of the materials used, but about their quality and importance, too” has to be considered (according to Campbell, 510 U.S. at 587), this is probably one of the main gaps between the two decisions.

Indeed, the logical-argumentative process of the US Judge moves from a deep context analysis that implies an interpretation of sampling based on the purpose and character of the uses, in line with the common-law tradition of fair use adjudication, which has always preferred case-by-case analysis over bright-line rules.

The CJEU, instead, chose a different approach, arguing that “free use” is a derogation not provided for by the InfoSoc Directive, so that any act of reproduction is subject to the notion of reproduction rights in Article 2 of the Directive. This “static” approach also (and especially?) depends on the pending – and unresolved – harmonization of the European system of exceptions and limitations provided by the InfoSoc Directive.

The US Court, rather than relying on a parameter of appreciation such as “recognizability to the ear”, reaches the balance through a context analysis aimed at preserving artists’ freedom to experiment, demonstrating – even with (apparently) identical results – more courage than the pragmatic approach of the European Court of Justice. The CJEU has not – in the opinion of the writer – taken the opportunity to move more decisively towards a greater balance between exclusive rights and fundamental freedoms, among which artists’ freedom to experiment should be counted.

Matteo Falcolini

Chapman v. Maraj No. 2:18-cv-09088-VAP-SS (C.D. Cal. Sept. 16, 2020)