A PROPOSAL FOR (AI) CHANGE? A succinct overview of the Proposal for Regulation laying down harmonised rules on Artificial Intelligence.

INTRODUCTION

The benefits for citizens of the implementation of Artificial Intelligence (“AI“) systems are clear. However, the European Commission (“EC“) seems to be conscious of the risks arising from their use in terms of both safety and human rights. In this context, as part of an ambitious European Strategy for AI, the European Commission has published the proposal for a Regulation on a European approach for AI (the “Proposal“), where AI is not conceived as an end in itself, but as a tool to serve people with the ultimate aim of increasing human well-being.

The Proposal does not focus on technology itself, but on the potential use that different stakeholders could make of AI systems and, as a result, on the potential damages arising from such use. To address those potential damages while capturing the full potential of AI-related technologies, the Proposal, following a horizontal approach, is based on four building blocks: (i) measures establishing a defined risk-based approach; (ii) measures in support of innovation; (iii) measures facilitating the setting up of voluntary codes of conduct; and (iv) a governance framework supporting the implementation of the Proposal at EU and national level and its adaptation as appropriate.

One may wonder why the Proposal is needed. The functioning of AI systems may be challenging due to their complexity, autonomy, unpredictability, opacity and the role of data within this equation. Such characteristics are not selected in an arbitrary manner but purposely identified by the regulator as areas of concern in terms of: (i) safety; (ii) fundamental rights; (iii) enforcement of rights; (iv) legal uncertainty; (v) mistrust in technology; and (vi) fragmentation within the EU.

The Proposal is not a surprise for many legal experts in the field and, as expected, builds on, inter alia, the work carried out by the High-Level Expert Group on Artificial Intelligence (the Ethics Guidelines for Trustworthy AI and the Policy and Investment Recommendations for Trustworthy AI), the Communication from the EC on Building Trust in Human-Centric Artificial Intelligence, the White Paper on Artificial Intelligence, as well as the Data Governance Act, the Open Data Directive and other legislative initiatives covered under the European Data Strategy.

The Proposal introduces many aspects that might deserve further clarification as the legislative process goes on. These include: (i) the scope of application of the Proposal; (ii) the definition of AI systems; (iii) the AI risk-based approach; (iv) the role of standards (e.g. the conformity assessment and CE marking process); (v) the role of data, the obligations on data quality and the interaction of the Proposal with the legislation governing both personal and non-personal data; and (vi) the governance structure and the role of the European Commission, the (new) Artificial Intelligence Board, the AI Expert Group and, at a national level, the national competent authorities. The following paragraphs explore some of the above-mentioned aspects.

1.- AI Definition

Finding an AI definition seemed to be a challenge for the EC and, yet, the current definition is not exempt from controversy due to its broadness. As a matter of fact, AI is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Art. 3 (1) of the Proposal), where Annex I lays down a list of AI techniques and approaches currently comprising machine learning, logic- and knowledge-based, and statistical approaches.

Although the aim of the EC was to provide a neutral definition in order to cover current and future AI techniques, many stakeholders have already voiced concerns with regard to the comprehensiveness of this definition. This is because, while it is convenient to promote flexible legislation, such a broad definition, including a referral to Annex I that is potentially subject to periodic amendments, may raise some legal uncertainty concerns in the industry.

2.- An (overreaching) scope?

To ensure the horizontal application of key requirements developed by the High-Level Expert Group on Artificial Intelligence, the Proposal aims at harmonising certain rules concerning the placing on the market, putting into service and use of AI systems that create a high risk to the health and safety or fundamental rights of natural persons (“high-risk AI systems“) in the EU.

Based on the intended purpose of the AI system and following a risk based approach, the Proposal: (i) prohibits certain AI practices; (ii) establishes requirements and obligations for high-risk AI systems – both ex-ante and ex-post; and (iii) sets forth limited transparency obligations for certain AI systems.

Despite the intention to establish a common normative standard for all high-risk AI systems, the application of the Proposal is limited when it comes to: (i) AI systems intended to be used as safety components of products or systems, or which are themselves products or systems, covered by certain legislation applying to the aviation, railway, motor vehicle and marine equipment sectors (Art. 2.2 of the Proposal); and (ii) AI systems used for military purposes.

One may also ask which stakeholders are affected by the Proposal considering, in particular, the complexity and comprehensiveness of the AI ecosystem. Here, the European Union legislator seems to have been inspired by Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (“GDPR“), establishing that the Proposal shall apply to:

  • Providers of AI systems irrespective of whether they are established within the EU or in a third country outside the EU;
  • Users of AI systems established within the EU;
  • Providers and users of AI systems that are established in a third country outside the EU, to the extent the AI system affects persons located in the EU;
  • EU institutions, Offices and Bodies.

On the one hand, Art. 3 (2) of the Proposal defines “provider” as the one who “develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge“. On the other hand, according to Art. 3 (4), “user” means any “natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity“.

Hence, apart from suggesting a broad territorial scope of application, affecting providers and users located outside the EU, the Proposal seems to bring different obligations down the supply chain, placing on the ultimate provider and professional user of the AI system much of the legal burden coming from the Proposal.

Considering the multiplicity of stakeholders intervening in the AI system lifecycle (e.g. data providers, third-party assessment entities, integrators, software developers, hardware developers, telecom operators, over-the-top service providers, etc.) and the recommendations of the High-Level Expert Group on Artificial Intelligence Guidelines on inclusive and multidisciplinary teams for the development of AI systems, the Proposal, in general, fails to provide guidance on how the interaction between AI system supply chain stakeholders should be structured.

Therefore, legal issues amongst stakeholders may arise, such as potential contractual derogations and attributions of contractual and non-contractual liability, and their validity (and compatibility) according to the principle of accountability inspiring the whole Proposal. In addition, it is worth mentioning that consistency with other legal frameworks, such as defective product legislation, is fundamental in order to ensure cohesion and legal certainty – see, to this end, the EC Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.

3.- And the Commission said “risk-based approach”

The Proposal provides four non-mutually exclusive categories of risk: (i) unacceptable risk; (ii) high-risk; (iii) other risk – AI with specific transparency obligations; and (iv) low or no risk. Depending on the category of risk, the obligations of providers, users and other stakeholders will vary, from complete prohibition to permission with no restrictions.

  • Prohibition

In this context, under the unacceptable-risk category, the EC proposes to prohibit, mainly, the following AI practices:

  1. AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  2. AI systems that exploit people’s vulnerabilities due to their age, physical or mental disability, in order to distort the behaviour of a person in a manner that causes or is likely to cause harm to that person;
  3. AI systems, used by public authorities or on their behalf, for the evaluation or classification of the trustworthiness of natural persons when the social scoring may lead to detrimental or unfavourable treatment: (a) in social contexts which are unrelated to the contexts in which the data was originally generated or collected; or (b) that is unjustified or disproportionate to their social behaviour or its gravity; and
  4. the use of ‘real time’ remote biometric identification systems in publicly available spaces for law enforcement, unless and in as far as such use is strictly necessary for certain objectives (e.g. the targeted search for specific potential victims of crime; the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack, etc.).

While the Proposal prohibits some AI practices that were already under the spotlight of different Member States, such as facial recognition systems (see for instance the decision from the Italian Data Protection Authority –Garante per la Protezione dei Dati Personali– with regards to the Sari Real Time system), other cases leave room for interpretation, such as “AI systems that exploit people’s vulnerabilities” or “AI systems that deploy subliminal techniques”. Therefore, the scope of the prohibition of certain AI practices under the Proposal would be broader in comparison with the specific use cases banned so far within the European Union.

  • High-level risk

The proposed regulation establishes quite a broad list of sectors and uses potentially falling within the high-level risk category that could be amended from time to time by the EC. In particular, according to Art. 6 of the Proposal, an AI system shall be classified as high-risk in the following scenarios:

  1. In cases where the following two conditions are met:
    a. the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II of the Proposal; and
    b. the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment in order to be placed on the market or put into service pursuant to the legislation contained in Annex II.
  2. For the AI systems listed in Annex III.

Therefore, the current list of high-level risk AI systems is contained in Annexes II and III of the Proposal. While Annex II covers a wide range of products or safety components of products governed by sectorial European Union law, such as machinery, transport, medical devices or radio equipment, Annex III defines some high-risk applications such as biometric identification and categorisation of natural persons, management and operation of critical infrastructures, AI for recruitment purposes, law enforcement, or education and vocational training.

High-level risk AI systems are at the centre of the Proposal, which establishes a set of obligations covering the entire AI lifecycle – from design to implementation. Such requirements include: (i) carrying out a conformity assessment and subsequent CE marking; (ii) transparency and information obligations; (iii) registration in the EU database for high-risk AI practices; (iv) logging of activities; (v) human oversight; (vi) record-keeping and documentation obligations; (vii) establishment of risk and quality management systems; (viii) robustness, accuracy and cybersecurity obligations; and (ix) use of high-quality datasets for training, validation and testing. The most relevant obligations, both for AI providers and users, are the following:

Provider obligations:

  • Establish and implement a quality management system.
  • Elaborate and keep up to date the technical documentation.
  • Logging obligations to enable users to monitor the operation of the high-risk AI system.
  • Conduct conformity assessments, and potentially re-assess the system in case of significant modifications.
  • Conduct post-market monitoring.
  • Collaborate with market surveillance authorities.

User obligations:

  • Operate AI systems in accordance with the instructions of use.
  • Ensure human oversight when using the AI system.
  • Monitor operation for possible risks.
  • Inform the provider or distributor about any serious incident or any malfunctioning.
  • Comply with existing legal obligations (e.g. GDPR).
Source: L. SIOLI, CEPS webinar -European approach to the regulation of artificial intelligence (April 2021).

As suggested, high-risk AI systems concentrate the bulk of requirements established in the Proposal, lacking guidance in some aspects and introducing some caveats that, hopefully, will be clarified during the legislative process.

  • Other risk

Operators placing on the market, putting into service or using AI systems posing a lesser risk than high-risk AI systems shall still have to comply with transparency obligations vis-à-vis users and implementers, such as: (i) notification to humans that they are interacting with an AI system, unless such interaction is evident; (ii) notification to humans that they are being subject to emotion recognition or biometric categorisation systems; and (iii) application of labels to ‘deep fakes’, unless the use of ‘deep fakes’ becomes necessary for public interest reasons (e.g. criminal offences) or is necessary for the exercise of fundamental rights (Art. 52 of the Proposal).

This category of risk also comes with different caveats that prevent the application of the different notification obligations. As anticipated, some clarification could also be needed here. When does the interaction with an AI system become evident? How should the ‘notifications’ be carried out?

  • Limited or no risk

Although AI systems under this category do not result in mandatory obligations for their providers and users, the Proposal requires the EC and the European AI Board to encourage the development of codes of conduct to enhance transparency and information about such “low or no risk” AI systems (Art. 69 of the Proposal).

In light of the above, although the EC understands that, in general, most AI systems will not entail a high risk, one may observe that a set of rules is widely applicable to all categories of risk in order to enhance, inter alia, transparency, safety and accountability within the AI ecosystem.

4.- Measures in support of innovation

A very welcome mechanism is the AI regulatory sandbox. Regulatory sandboxes represent a regulatory concept based on “experimental legislation”, where technology companies can test and develop their innovations benefiting, for instance, from exemptions from certain specific rules or legal regimes in a controlled environment. In particular, AI regulatory sandboxes provide a controlled environment in which the development, testing and validation of innovative AI systems are facilitated for a period of time before they come onto the market, under the supervision of Member State authorities or the European Data Protection Supervisor.

Although the modalities, conditions and other criteria shall be governed by the corresponding implementing acts, the Proposal seems to introduce some flexibilities when it comes to further data processing in this context (Arts. 53 and 54 of the Proposal).

The EC has proved particularly sensitive to small-scale providers and start-ups, providing some advantages through the Proposal in order to enable greater access to available resources and establish a level playing field regardless of the size and scope of the company. Remarkable, for instance, is the differentiation made from the very beginning between “providers” and “small-scale providers” (Art. 3 of the Proposal) in an attempt to foster the creation of a level playing field also adapted to micro or small enterprises.

For instance, the Proposal foresees that priority access to AI regulatory sandboxes shall be granted to small-scale providers and start-ups (Art. 55 of the Proposal); the organisation of awareness raising activities about the Proposal (Art. 55 of the Proposal); and the consideration – by the EC and the European AI Board – of the specific interests and needs of small-scale providers and start-ups when encouraging and drawing up codes of conduct (Art. 69.4 of the Proposal).

5.- Governance structure and enforcement

The governance structure and enforcement seem to bear some similarities with the GDPR. As such, the Proposal creates the European Artificial Intelligence Board, which shall coordinate its activities with the corresponding national competent authorities. This is not the only novelty: at the European level, the EC is expected to act as secretariat, and a supporting AI Expert Group (potentially equivalent to the High-Level Expert Group on Artificial Intelligence) is to be created in the future.

In addition, administrative sanctions mirror those established in the GDPR, broken down into different scales depending on the severity of the infringement and amounting to up to EUR 30 million or 6% of the total worldwide annual turnover of the preceding financial year for the most severe infringements (Art. 71 of the Proposal). The EC itself does not seem entitled to impose sanctions, since this task has been attributed to Member States’ national authorities.

Some questions are still left open, such as the coordination mechanisms between authorities, in particular with regard to cross-border infringements of the Proposal. In addition, there is still some lack of clarity as to which specific authorities are expected to have competence at a national level. The current local Data Protection Authorities? Brand-new national AI authorities?

Finally, the proposed administrative procedures, mimicking to a great extent the current EU competition law system and the GDPR, also risk leading to fragmentation and heterogeneity between Member States. In particular, the lack of a clear decision review mechanism at the European level (i.e. administrative sanctions are only reviewed at a national level) and the entitlement of authorities to decide on an infringement having an impact in more than one Member State remain unclear.

6.- “What’s in” for Intellectual Property?

As one may understand, the main focus of the Proposal is not Intellectual Property (“IP“) but a horizontal approach to AI. Still, even though IP is only referred to twice throughout the Proposal, some questions remain unanswered, in particular how to ensure compliance with the obligations set forth in the Proposal while protecting IP rights and trade secrets. Here, the dichotomy between access and protection to ensure easy implementation, safety and interoperability of AI systems may need to be revisited.

Regarding transparency and information to users when using high-risk AI systems, several points can be made. What does it mean that the “operation is sufficiently transparent to enable users to interpret the system’s output” foreseen in Art. 13 of the Proposal? Does IP act as a facilitator or as a barrier to this transparency requirement? Does the current IP legal system foresee appropriate mechanisms in order to access necessary information?

Apart from specific transparency obligations, some other legal requirements set forth in the Proposal may entail the communication of different business and technology related information to other operators. For instance, how can appropriate risk and quality assessment systems be ensured when information is protected vis-à-vis the stakeholders having to implement such systems? Do different stakeholders along the AI supply chain need to access IP and trade secret protected information or data? What are the IP barriers to ensuring data interoperability?

One may argue, for instance, from a copyright perspective, that the entitlement to “observe, study or test the functioning of the program in order to determine the ideas and principles which underlie any element of the program” (Art. 5.3 of the Software Directive) or the possibility to decompile the software (Art. 6 of the Software Directive) may not be of great use in cases where the software is being periodically modified, considering also that observing, studying, testing or decompiling such software is, most of the time, a costly process.

With regard to patent law, can the current “sufficient disclosure” standard (see, for instance, Art. 83 of the European Patent Convention) be enough to ensure a “sufficiently transparent” operation? Could Art. 27 (k) of the Agreement on a Unified Patent Court (providing that acts covered under Art. 6 of the Software Directive do not constitute an infringement, in particular with regard to de-compilation and interoperability) inspire reverse engineering exceptions for the purpose of obtaining the information allowed under the previously mentioned Art. 5 of the Software Directive?

The Trade Secrets Directive regime is even more restrictive than the above “de-compilation” exception and only allows reverse engineering or access to information where the acquirer of the trade secret is free from any legally valid duty to limit the acquisition of the trade secret (Art. 3.1 (b) of the Trade Secrets Directive). Nevertheless, the novelty of the Trade Secrets Directive and the lack of case law cause some legal uncertainty.

Moreover, the Proposal establishes the obligation to disclose the AI source code to enforcement authorities. Although this practice could be covered under specific or general exemptions provisions currently in force based on the principle of public interest (e.g. Art. 1.2 (b) and Recital 11 of the Trade Secrets Directive), how is access to the source code useful for enforcement authorities? What is the scope of source code to be disclosed? That of the trained AI model? The validated AI model? The code to build the AI model? In practice, the above could lead to divergent practices between national authorities, which could be requesting slightly different types of information, in particular when it comes to AI systems using machine learning approaches.

Also in this context, Art. 70 of the Proposal, with regard to disclosure obligations, particularly protects the confidentiality of information and data communicated to national competent authorities and notified bodies involved in the application of the Proposal. In this line, authorities shall carry out “their tasks and activities in such a manner as to protect, in particular: (i) intellectual property rights, and confidential business information or trade secrets of a natural or legal person, including source code, except the cases referred to in Article 5 [of the Trade Secrets Directive]”, where the latter provision foresees four exceptions to trade secrets rights. At this stage, although the protection of IP rights, confidential information and trade secrets was addressed by the legislator when drafting the Proposal, the text appears to leave room for authorities to decide what the concrete measures for their protection shall be, without providing further guidance (i.e. “carrying out activities in such a manner as to protect”).

In addition, the Proposal provides that “the increased transparency obligations will also not disproportionately affect the right to protection of intellectual property (Article 17(2) [EU Charter of Fundamental Rights]), since they will be limited only to the minimum necessary information for individuals to exercise their right to an effective remedy and to the necessary transparency towards supervision and enforcement authorities” and that “when public authorities and notified bodies need to be given access to confidential information or source code to examine compliance with substantial obligations, they are placed under binding confidentiality obligations“. Therefore, the purpose of setting a proper balance between IP and trade secret rights, on the one hand, and access to information and data, on the other, seems clear. However, guidance on the concrete measures to be taken by national authorities and notified bodies to protect IP, confidential information and trade secrets is desirable.

An additional effort must be made in connection with the above-mentioned aspects in order to build, consistently with sectorial regulation having an impact on access to (and protection of) information covered under IP, a congruent system that ensures the appropriate trade-off between access and protection, not only from a theoretical perspective, but following a pragmatic approach.

CONCLUSION

The Proposal is very much needed in order to ensure the “human-centred approach” to AI underlined on numerous occasions by different EU institutions, and comes after a long process that, as can be appreciated, has led to what could be considered a well-structured and timely Proposal. Albeit an unprecedented piece of legislation, European Union institutions must ensure that the final outcome does not lead to a burdensome regulation that, in connection with, inter alia, legislation governing data protection, digital services and sectorial regulation, becomes a complex regulatory maze for companies to navigate, resulting in a chilling effect on innovation and, as a result, in issues connected to the development of the hoped-for strong digital and AI ecosystem.

Notwithstanding the above, the clarification of certain provisions, consistency with the current IP and data protection legal frameworks, and the application of lessons learnt from the GDPR could make the Proposal “future-proof”, also considering the current business, geopolitical and societal context. Also, an appropriate vacatio legis term between the adoption of the final text and its entry into force, in order to allow adaptation, together with additional guidance on the governance and enforcement structures, shall be fundamental.

The text now enters “inter-institutional” negotiations, going to the European Parliament and the Council for further debate, where different public and private stakeholders shall have the opportunity to get involved.

Rubén Cano

Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)

Another Italian take on the Fulvestrant Saga. The Court of Milan on technical prejudice, plausibility and off-label use.

AstraZeneca has fought for its fulvestrant patent portfolio all across Europe, including Italy, for quite some time (see for instance here, here, here and here). Fulvestrant is an oncological product used for the treatment of breast cancer and is marketed by AstraZeneca as Faslodex.

The Court of Milan recently chipped in again with its decision of 3 December 2020 (No. 7930/2020, Judge Rapporteur Ms Alima Zana, AstraZeneca v. Teva, here), delivering a number of interesting points on validity and infringement of second medical use patents.

In particular, the Court held that the Italian arm of AstraZeneca’s European patent No. EP 1 272 195 (“EP195”) lacked inventive step as, among other things, it did not overcome a technical prejudice; that the claimed therapeutic indication was not plausible; and that – in any case – AstraZeneca did not offer sufficient evidence of Teva’s off-label infringement.


Procedural background

In 2018, AstraZeneca filed an action against Teva before the Court of Milan for the alleged infringement of the EP195 patent, titled “use of fulvestrant in the treatment of resistant breast cancer”. Teva counter-claimed arguing that EP195 was invalid.

In May 2020, as Teva was about to launch Fulvestrant Teva into the Italian market, AstraZeneca filed a petition for preliminary injunction based on EP195. The Court refused the PI in August (here) as it deemed that AstraZeneca’s action did not meet the “likelihood of success on the merits” requirement (so-called fumus boni iuris). The Court thus went on to decide the case on the merits.

The patent and the allegedly infringing product

Claim 1 of EP195 reads: “Use of fulvestrant in the preparation of a medicament for the treatment of a patient with breast cancer who previously has been treated with an aromatase inhibitor and tamoxifen and has failed with such previous treatment” – a typical “Swiss-type” claim (for additional information on “Swiss-type” claims, see EPO’s Guidelines, here).

We gather that the patented use of fulvestrant (in “the treatment of a patient with breast cancer who previously has been treated with an aromatase inhibitor and tamoxifen and has failed with such previous treatment”) was not indicated in the SmPC or package leaflet of Fulvestrant Teva. The product’s marketing authorization (“MA”) would confirm this (see the 2016 version of the MA, here, and further updates here, here, here and here).

Expert opinions and foreign decisions

The Court of Milan deemed that EP195 was invalid, going against the opinion of its own Court-appointed expert. This is rather uncommon in Italian patent proceedings, since judges have no technical background and necessarily rely on their experts’ reports. Here, however, the Court was convinced by the opinion issued by a different expert in the parallel proceedings brought by AstraZeneca against another generic company on EP195 (Docket No. 37181/2018 – this case is referred to in the judgement, but we are not aware of a published decision yet).

Also, the Court relied on the foreign decisions issued on EP195 by the German Bundespatentgericht (here), the Swiss Bundespatentgericht and the Court of Barcelona (order of 18 July 2018, confirmed in 2019, here).

In doing so, the Court stressed that judges can base their decisions also on evidence that is acquired in different proceedings between the same parties, or even different parties. The general rule of Article 116 of the Civil Procedure Code (CPC) – according to which “the judge must assess the evidence according to his or her prudent judgment, unless the law provides otherwise” – fully applies to the Expert Report issued in the parallel proceedings brought by AstraZeneca on EP195. Besides, the “parallel” Expert Report was also filed by the parties to these proceedings and heavily discussed in their briefs, so that there was no surprise argument.

Lack of inventive step

The Court-appointed Expert deemed that the closest prior art to EP195 was the “Gale” article. The Court picked the “Vogel” article instead as the closest prior art and determined the objective technical problem accordingly. Applying the EPO’s “could-would approach”, the Court of Milan concluded that the solution disclosed in EP195 (i.e., to use fulvestrant as a line of treatment for breast cancer after aromatase inhibitors and tamoxifen) lacked inventive step.

According to the Court, the person skilled in the art would have found in “Vogel” an incentive to use fulvestrant as a third-line treatment, after tamoxifen and aromatase inhibitors, since (i) “Vogel” mentioned pure anti-estrogens as emerging candidates to be used in the second-to-fourth therapeutic line; (ii) fulvestrant was identified as a pure anti-estrogen in a publication (“DeFriend”) referenced in “Vogel”; and (iii) “Vogel” specifically taught that pure anti-estrogens inhibit the growth of a tumor already treated with tamoxifen.

The Court also denied that the use of fulvestrant after tamoxifen overcame a technical prejudice. AstraZeneca argued that, because tamoxifen and fulvestrant have a similar mechanism of action, the person skilled in the art would have feared that the breast cancer might have developed resistance and mutations to fulvestrant after the administration of tamoxifen. The Court, however, found that tamoxifen and fulvestrant have substantial differences, which were known in the art, and therefore no technical prejudice could arise. On this point, the Court directly referenced the German Bundespatentgericht decision (here), which found that there was no technical prejudice against the choice of fulvestrant as a pure anti-estrogen since:

“the person skilled in the art associated the choice of fulvestrant with a reasonable expectation of success because Vogel considers it to be an advantageous example of a pure anti-estrogen, as it does not show any relevant side effects despite its high efficacy and shows fewer side effects than other drugs known to be effective for the endocrine treatment of breast cancer”.

Plausibility (?)

As an additional ground of invalidity, the Court of Milan stressed that the patent would be valid only in relation to a purely palliative treatment (and not as an adjuvant therapy) in a patient affected by breast cancer previously treated with an aromatase inhibitor and tamoxifen, where said treatments were unsuccessful. This is because the experimental data provided in the patent and in the “Perey” document – published in 2007, after the filing – allegedly “refer only to a palliative use of fulvestrant”, showing that a “plausible solution to the technical problem was found only in relation to the palliative use” of the active ingredient. The Court thus held that, in any case, AstraZeneca would have had to submit a limited wording of claim 1 covering the palliative use alone.

The Court, however, did not precisely identify which patentability requirement would have been affected by this “plausibility” issue, although novelty, inventive step and sufficiency of disclosure can all be theoretically impacted by a (lack of) plausibility argument (on the “nebulous” plausibility requirement, see a recent article by Rt. Hon. Professor Sir Robin Jacob, here). The Court was arguably hinting at an insufficiency of disclosure in relation to the therapeutic indication. It should be stressed, however, that the EPO does not expect applicants to submit clinical trials for each therapeutic indication. In particular, the EPO Guidelines (here) suggest that:

“Either the application must provide suitable evidence for the claimed therapeutic effect or it must be derivable from the prior art or common general knowledge. The disclosure of experimental results in the application is not always required to establish sufficiency, in particular if the application discloses a plausible technical concept and there are no substantiated doubts that the claimed concept can be put into practice (T 950/13 citing T 578/06). Any kind of experimental data have been accepted by the boards. It has also been repeatedly emphasised that ‘it is not always necessary that results of applying the claimed composition in clinical trials, or at least to animals are reported’ (T 1273/09 citing T 609/02)”.

Besides, in an earlier case on fulvestrant (Decision of 24 July 2019, Actavis v. AstraZeneca, concerning the Italian portion of EP 2 266 573: here), the Court of Milan had rejected an insufficiency/plausibility argument by stating that: “[t]he inclusion of examples in the patent specification is not mandatory nor is it a requirement for the sufficiency of the disclosure the completion of a clinical trial at the date of the filing, as it is possible to rely on subsequent trials (decision T 433/05), provided that the effect was plausible at the date of filing” (for a comment on this case see here). As the argument was not fully developed in the decision by reference to the “Perey” document, it could seem that the Court of Milan is adopting diverging approaches to plausibility, and further clarifications would be welcome.

Lack of infringement

To seal the deal, the Court of Milan then went on to say that, in any case, Teva’s product would not have infringed EP195. The Court’s argument goes as follows:

I. The asserted patent is a patent for a second medical use, the patent being directed at the protection of a known substance, limited to the use claimed;

II. In such a case the infringement cannot be found if the generic medicine in its marketing authorisation contains a clear limitation to uses that do not violate patent rights […]

III. In order for there to be infringement […] it is necessary that the [competitor’s] product is not only abstractly suitable for the claimed use, but that the package leaflet indicates the use of the product for the intended purpose.

This assessment is supported by the numerous proceedings in other jurisdictions that have ruled in favour of non-infringement [e.g. the decision of the Oberlandesgericht Düsseldorf, 9 January 2019, mentioned in the footnote].

The defendant’s argument that Teva’s drug is not ‘suitable’ for the claimed use is therefore correct, as this is not the case from a regulatory (since the product is not authorised for the claimed use) nor from a practical point of view (since the claimed use is not foreseen or included in the oncological practice), nor has the [plaintiff] offered any evidence to the contrary.

Due to the finding of invalidity of EP195, the Court’s non-infringement argument is made ad abundantiam and is consequently rather short. However, some passages are worth discussing.

First, as to point (II), it is not entirely clear what the Court is referring to when it says that the MA of Fulvestrant Teva “contains a clear limitation to uses that do not violate patent rights”. Assuming that Fulvestrant Teva’s MA does not explicitly include the therapeutic indication covered by EP195, the Court could be referring to the following passage in the authorization (see here and here):

“The holder of the marketing authorization for the generic drug is solely responsible for full compliance with the industrial property rights of the reference medicinal product and of the patent legislation in force.

The holder of the MA of the equivalent medicine is also responsible for ensuring full compliance with the provisions of Article 14(2) of Legislative Decree 219/2006, which requires that those parts of the summary of product characteristics of the reference medicine which refer to indications or dosages still covered by a patent at the time the medicine is placed on the market are not included in the leaflets”.

If this is the case, the Court’s argument on the MA is not particularly convincing. Indeed, this wording – which is boilerplate and often included in MAs – can hardly be read as a “clear limitation to uses that do not violate patent rights”. Infringing uses are technically neither excluded nor forbidden by it, at least under patent law. The Italian Pharmaceutical Authority simply stresses that it is the generics’ responsibility to steer clear of patent rights.

Second, as to point (III), the Court would seem to suggest that patent infringement of a second medical use patent can only occur if the package leaflet “indicates the [claimed] use“. However, the fact that a specific therapeutic indication is missing from the MA, SmPC or package leaflet does not exclude per se that the drug may be used and prescribed for the patented use.

Besides, the Court’s argument is not fully supported by the case law of the Oberlandesgericht Düsseldorf referred to in the decision (see here and here). Although in the parallel AstraZeneca v. Teva proceedings the Düsseldorf Court found that EP195 was not infringed on the merits of the case, German case law has established that patent infringement can occur also in “off-label” cases, where the pharmaceutical product is (i) suitable to be used according to the patented indication and (ii) the manufacturer or distributor takes advantage of circumstances that ensure that the product is sold and/or is used for the intended purpose. This, in turn, requires a sufficient extent of use in accordance with the patent as well as knowledge (or at least willful blindness) of this use on the part of the manufacturer or distributor (see Oberlandesgericht Düsseldorf, 5 May 2017, I-2 W 6/17, Estrogen Blocker, § 85, another case on EP195, here).

In any event, the Court of Milan somewhat softens its argument by adding, in the last paragraph, that AstraZeneca did not provide any evidence of infringement by Teva, or that the use of fulvestrant according to EP195 was current oncological practice in Italy. Conversely – one may wonder – if AstraZeneca had provided sufficient evidence of the off-label use, a finding of infringement could have been possible.

For further answers we now look forward to the decision of the Court of Milan in the parallel case (Docket No. 37181/2018) and will report on any interesting development.

Giovanni Trabucco

Court of Milan, 3 December 2020, Decision No. 7930/2020, AstraZeneca v. Teva

What happens when copyright protection expires? The trade mark/copyright intersection and George Orwell’s ‘Animal Farm’ and ‘1984’

The 1st of January of each calendar year marks not only days full of New Year’s hope, resolutions, and promises, but also Public Domain Day. 1 January 2021, a moment full of hope with the anti-COVID vaccines rolling out, was no different for copyright law purposes. On that day, many copyright-protected works fell into the public domain.

This year marked the falling into the public domain of the works of George Orwell. Born Eric Arthur Blair in 1903, he authored masterpieces such as ‘1984’ and ‘Animal Farm’, which have become extremely topical and relevant in recent years. In early 2017, sales of ‘1984’ rose so sharply that the book became a bestseller once again. Some have suggested this was a direct response to the US Presidency of Donald Trump at the time.

Orwell passed away in 1950, which means that, following the ‘life of the author plus seventy years’ rule in Article 1 of the Term Directive, copyright in Orwell’s works expired on 1 January 2021. Despite this, in the last several years an interesting trend has prominently emerged. Once copyright in famous works, such as those at issue, has expired, the body managing the author’s IPRs has often sought to extend IP protection in the titles by resorting to trade mark applications. At the EUIPO, this has been successful for ‘Le journal d’Anne Frank’ (31/08/2015, R 2401/2014-4, Le journal d’Anne Frank), but not for ‘The Jungle Book’ (18/03/2015, R 118/2014-1, THE JUNGLE BOOK) or ‘Pinocchio’ (25/02/2015, R 1856/2013-2, PINOCCHIO).

This is the path that the ‘GEORGE ORWELL’, ‘ANIMAL FARM’ and ‘1984’ signs are now following. The question of their registrability as trade marks is currently pending before the EUIPO’s Grand Board of Appeal. This post will only focus on the literary work titles – ‘ANIMAL FARM’ and ‘1984’, as the trade mark protection of famous authors’ personal names is a minefield of its own, deserving a separate post.

The first instance refusal

In March 2018, the Estate of the Late Sonia Brownell Orwell sought to register ‘ANIMAL FARM’ and ‘1984’ as EU trade marks for various goods and services, including books, publications, digital media, recordings, games, board games and toys, as well as entertainment, cultural activities and education services. The Estate of the Late Sonia Brownell Orwell manages the IPRs of George Orwell and is named after his second and late wife – Sonia Mary Brownell.

The first instance refused registration of the signs, as each was considered a “famous title of an artistic work” and consequently “perceived by the public as such title and not as a mark indicating the origin of the goods and services at hand”. The grounds were Articles 7(1)(b) and 7(2) EUTMR – lack of distinctive character.

The Boards of Appeal

The Estate was not satisfied with this result and filed an appeal. Having dealt with some preliminary issues relating to the potential link of ‘ANIMAL FARM’ to board games simulating farm life, the Board turned to the thorny issue of registering titles of literary works as trade marks. The Board pointed out that, while this is not the very first case of its kind, the practice of the Office and the Boards of Appeal has been diverging. Some applications consisting of titles of books or names of well-known characters have been registered as marks, since they may, even if well known, still be perceived by the public as an indicator of source for printed matter or education services. This was the case with ‘Le journal d’Anne Frank’. At other times, a famous title has been seen as information about the content or subject matter of the goods and services, and thus as non-distinctive and descriptive within the meaning of Article 7(1)(b) and (c) EUTMR. This was the case for ‘THE JUNGLE BOOK’, ‘PINOCCHIO’ and ‘WINNETOU’. The EUIPO guidelines on this matter are not entirely clear. With that in mind, on 20 June 2020 the case was referred to the Grand Board of Appeal at the EUIPO. Pursuant to Article 37(1) of Delegated Regulation 2017/1430, a Board can refer a case to the Grand Board if it observes that the Boards of Appeal have issued diverging decisions on a point of law which is liable to affect the outcome of the case. This seems to be precisely the situation here. The Grand Board has not yet taken its decision.

Comment

The EUIPO’s Grand Board is a bit like the CJEU’s Grand Chamber – cases of particular importance, where no harmonised practice exists, are referred to it (Article 60, Rules of Procedure, CJEU). The Grand Board has one specific feature which the other five ‘traditional’ Boards lack: interested parties can submit written observations, otherwise known as ‘amicus curiae’ briefs (Article 23, Rules of Procedure of the EUIPO Boards of Appeal). In fact, in this specific instance, INTA has already expressed its position in support of the registration of the titles.

It must be observed that, when the trade marks were filed back in March 2018, ‘Animal Farm’ and ‘1984’ were still within copyright protection (but not for long). Indeed, the Estate underlined in one of its statements from July 2019 that “George Orwell’s work 1984 is still subject to copyright protection. The EUIPO Work Manual (ie, the EUIPO guidelines) specifically states that where copyright is still running there is a presumption of good faith and the mark should be registered”. While this is perfectly true, it is also important to mention that trade mark registration does not take place overnight, especially when it comes to controversial issues such as the registration of book titles as trade marks.

The topic of extending the IP life of works in which copyright has expired has seen several other examples, and it often brings prominent IP professors, as well as trade mark examiners and Board members, to the edge of their seats. Several years ago, the EFTA Court considered the registrability as trade marks of many visual works and sculptures by the Norwegian sculptor Gustav Vigeland (see the author’s photo below).

‘Angry Boy’ by Gustav Vigeland, Vigeland park, Oslo, photo by Alina Trapova

At stake here was potential trade mark protection for artistic works in which copyright had expired. One of these is the famous ‘Angry Boy’ sculpture shown here. The Municipality of Oslo had sought trade mark registration of approximately 90 of Vigeland’s works. The applications were rejected. The grounds included not only the well-known descriptiveness and non-distinctiveness objections, but also an additional objection on the grounds of public policy and morality. Eventually, the case went all the way up to the EFTA Court. The final decision concludes that it “may be contrary to public policy in certain circumstances, to proceed to register a trade mark in respect of a well-known copyright work of art, where the copyright protection in that work has expired or is about to expire. The status of that well known work of art including the cultural status in the perception of the general public for that work of art may be taken into account”. This approach makes good sense: it focuses on the specific peculiarities of copyright law (e.g., unlike trade mark protection, copyright protection cannot be renewed), but it also treats the re-appropriation of cultural expression as an aggressive technique for artificially prolonging IP protection – something Justice Scalia of the US Supreme Court labelled a “mutant copyright” in Dastar v. Twentieth Century Fox in 2003.

In the EU, the public policy/morality ground has traditionally been relied on to object to obscene expression. The focus has been on whether the sign offends, thus tying morality to public policy (see also another Grand Board decision on public policy and morality: 30/01/2019, R958/2017-G, ‘BREXIT’). A notable example is the attempt to register the ‘Mona Lisa’ painting as a trade mark in Germany. The sign was eventually not registered, but not due to a clash with public policy and the potential artificial extension of copyright through the backdoor of trade marks – rather, the sign lacked distinctiveness. Consequently, the EU understands the public policy and morality ground in a rather narrow and limited manner, namely as linked only to offensive use.

Overlap of IPRs happens all the time: a patented item may attract a copyright claim (C-310/17 – Levola Hengelo and C-833/18 – Brompton Bicycle), whereas design and copyright may co-habit in the same item (C-683/17 – Cofemel). All would agree that an unequivocal law prohibiting overlaps of IPRs is not desirable from a policy perspective. Yet, you may feel somewhat cheated when your favourite novel finally enters the public domain upon copyright expiring (which, by the way, already lasts for the life of the author plus seventy years), only for protection to be “revived” through a trade mark. Trade marks initially last for 10 years, but they can be renewed, so protection potentially lasts forever (what a revival!). Therefore, there may be some very strong policy reasons against this specific type of IP overlap and extension of IPRs. As the European Copyright Society said in relation to Vigeland, “the registration of signs of cultural significance for products or services that are directly related to the cultural domain may seriously impede the free use of works which ought to be in the public domain. For example, the registration of the title of a book for the class of products including books, theatre plays and films, would render meaningless the freedom to use public domain works in new forms of exploitation”. To that end, Martin Senftleben’s new book turns to this tragic clash between culture and commerce. He vocally criticises the “corrosive effect of indefinitely renewable trademark rights” with regard to cultural creativity.

As for ‘Animal Farm’ and ‘1984’, the EUIPO’s first instance did not go down the public policy road. This is unfortunate, as lack of distinctiveness for well-known titles such as ‘Animal Farm’ and ‘1984’ is difficult to articulate – as the Board of Appeal itself underlines, the Office’s guidelines are confusing (some titles have been protected and others not). Besides, a lack of inherent distinctiveness can be cured by acquired distinctiveness (something the Estate of the Late Sonia Brownell Orwell has explicitly stated it will be able to prove, should it need to). Considering that the rightholder here would be the Estate of the Orwell family, proving acquired distinctiveness of the titles for the requested goods and services would not be particularly difficult. A rejection on the ground of public policy/morality, by contrast, can never be remedied through acquired distinctiveness. Thus, it would perhaps have been more suitable to rely on public policy and establish that artificially extending the IP life of these cultural works is not desirable.

Well, the jury is out. One thing is for sure – the discussion at the Grand Board will be heated.

Alina Trapova