For centuries, patent law has rewarded those who devise new and inventive solutions to technical problems. As AI systems become more sophisticated, patent offices and courts around the world now face a challenge: what if the “inventor” is not human?
The patent system rewards the development of new and useful inventions with exclusive rights to exploit these inventions for a certain period of time. In return, it requires the inventor to publicly disclose their invention, so that other people can learn from it and continuously build upon the state of the art. However, this process of iterative innovation is no longer the sole domain of humans. Just as we noted in part 3 of this series that AI systems are now generating “creative” outputs, AI systems such as IBM’s Watson and Google’s DeepMind have already been deployed to solve some of the great technical, scientific, and medical challenges of our time.1
In a world where machines play an increasingly important role in innovation, questions have arisen about how the patent system will protect AI-generated inventions. Is a system designed to foster human ingenuity and innovation sufficiently flexible to accommodate non-human inventors? Should it be?
Can an AI system be an “inventor”?
What is the role of the “inventor” in patent law?
As we explained in part 3, under copyright law, the existence of a (human) author is directly relevant to the question of whether a work will attract copyright protection at all. Patent law is different. While the identity of the inventor has certain legal implications, it does not affect the patentability of the invention itself. This is because the patentability of an invention is assessed objectively, rather than by reference to the subjective “inventive” or other mental process of the inventor. As the High Court of Australia has observed, a valid patent may be obtained for something “stumbled across by accident” or “remembered from a dream”, so long as it otherwise meets the criteria for patentability.2
So, why does it matter who the inventor is? The answer is ownership. In most jurisdictions, a patent can only be granted to the inventor (ie the person – or people – responsible for the “inventive concept”) or to someone who derives title from the inventor. Title might be derived, for example, through an assignment of rights under contract, through an invention being developed in the course of employment, or by acting as the legal representative of a deceased inventor. Given that the role of patents is to encourage innovation, ensuring that there is clarity over who will be entitled to a patent stemming from inventive activities is critical.
A global test: DABUS
The question of whether an AI system is capable of being named as an inventor of patentable subject matter is being tested around the world as part of The Artificial Inventor Project. The project comprises a series of “test” patent applications filed by Dr Stephen Thaler in respect of inventions generated by his AI system, ‘DABUS’.
DABUS (or ‘Device for the Autonomous Bootstrapping of Unified Sentience’) is a type of ‘connectionist AI’, which uses multiple neural networks to generate new ideas, the novelty of which is then assessed by a second system of neural networks. By this process, DABUS has autonomously generated two “inventions” in respect of which Dr Thaler has applied for patent protection: the fractal container (a food container) and the neural flame (a search and rescue beacon).
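The two-stage architecture described above – one set of networks proposing candidates and a second assessing their novelty – can be sketched in highly simplified form. The following toy loop is an illustration of that generate-and-assess pattern only; the function names and the distance-based novelty score are our assumptions, not a description of how DABUS actually works.

```python
import random
import math

# Toy generate-and-assess loop, loosely inspired by the generator/critic
# split described above. All names and the scoring rule are illustrative
# assumptions, not DABUS's actual design.

def generate_candidate(dim=4):
    """'Generator' stand-in: propose a random idea vector."""
    return [random.uniform(-1, 1) for _ in range(dim)]

def novelty(candidate, memory):
    """'Critic' stand-in: novelty = distance to the nearest known idea."""
    if not memory:
        return float("inf")
    return min(math.dist(candidate, known) for known in memory)

def invent(rounds=100, threshold=0.5):
    memory = []    # every idea generated so far
    accepted = []  # ideas judged sufficiently novel
    for _ in range(rounds):
        idea = generate_candidate()
        if novelty(idea, memory) > threshold:
            accepted.append(idea)
        memory.append(idea)
    return accepted

random.seed(0)
ideas = invent()
print(len(ideas), "novel candidates retained")
```

The design point the sketch illustrates is the separation of concerns: the proposing component need not know what counts as “new”, because that judgment is delegated to a second, independent assessor.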
So far, with very few exceptions, the global trend in courts and patent offices has been to reject these applications, on the basis that an AI system cannot be regarded as an inventor for the purposes of patent law. Although the reasoning has varied from country to country, the emerging global consensus is that an inventor of a patented invention must be a human or a person with legal capacity.
We have previously reported on outcomes in:
- Australia, where a five-judge bench of the Full Court of the Federal Court unanimously held that only a natural person can be an inventor, overturning an earlier ruling of a single judge that the meaning of “inventor” could evolve as technology develops and included an AI system;3
- the United Kingdom, where both the High Court and Court of Appeal agreed that the UK Comptroller of Patents was correct to refuse Dr Thaler’s patent applications, on the basis that the inventor had to be a natural person and Dr Thaler had not shown a sufficient derivation of rights from an inventor. However, one of the three Court of Appeal judges, Lord Justice Birss, while agreeing with the majority that, under the Patents Act 1977, the term “inventor” meant a natural person and could not include an AI, disagreed over the correct course of action for the Comptroller. Birss LJ considered that the application should have been allowed to proceed to examination and, if valid, to grant, subject to any challenges by third parties. An appeal from the Court of Appeal decision was heard by the UK Supreme Court in March this year and we await a decision; and
- the European Patent Office, which held that an inventor must be a person with legal capacity, and that Dr Thaler could not be granted a patent as the inventor’s successor in title (on account of creating and owning DABUS) because an AI system is incapable of transferring rights.4
The position in the United States is similar. The US Patent and Trademark Office also rejected Dr Thaler’s patent applications, stating that the explicit statutory language used by Congress to define the term ‘inventor’ (such as ‘individual’ and ‘himself or herself’) was uniquely directed to human beings.5
Dr Thaler was unsuccessful in challenging this decision in both the District Court for the Eastern District of Virginia and the Court of Appeals for the Federal Circuit,6 and the US Supreme Court has recently declined to hear Dr Thaler’s appeal.7
Dr Thaler’s patent applications have also been unsuccessful in New Zealand,8 Taiwan,9 Israel,10 the Republic of Korea,11 Canada,12 Brazil,13 and India.14
In Germany, one of Dr Thaler's appeals contained three auxiliary requests: (1) to grant the patent without any inventor designation, (2) to include a paragraph in the description clarifying that DABUS created the invention, and (3) to designate the inventor as "Stephen L. Thaler, PhD who prompted the artificial intelligence DABUS to create the invention". The 11th Senate of the Federal Patent Court granted the third auxiliary request, stating that while the role of AI in the creation of the invention may or may not be mentioned, in any case a natural person must be named as the inventor.15
In a parallel proceeding relating to another DABUS patent, however, the 18th Senate of the Federal Patent Court decided that a patent on AI-generated inventions cannot be granted, unless the applicant omits the reference to the AI in the inventor designation. Both decisions are subject to appeal to the Federal Court of Justice, which is expected to provide clarification.
To date, South Africa16 and Saudi Arabia17 are the only exceptions, although in both of those jurisdictions the patents have not yet undergone substantive examination.
Dr Thaler has also filed applications in jurisdictions including China, Japan, Singapore, and Switzerland, so it remains to be seen whether further exceptions to the global trend will emerge.
If not AI, then who?
Because of the way Dr Thaler argued the cases, none of the Australian, UK or US courts nor the EPO was required to answer who, if not DABUS, should have been named as the inventor of the relevant patents. However, the Full Court of the Australian Federal Court suggested a number of options, namely:
- the owner of the machine upon which the AI software runs;
- the developer of the AI software;
- the owner of the copyright in its source code; and
- the person who inputs the data used by the AI to develop its output.
As noted above, due to the auxiliary requests filed in the German proceedings, the 11th Senate of the German Federal Patent Court considered that the appropriate inventor designation was “Stephen L. Thaler, PhD who prompted the artificial intelligence DABUS to create the invention”.
Determining who should be named as the “inventor” of an invention devised using (or by) an AI system is likely to be fact-specific. In many cases this question is likely to be academic, since regardless of which individual(s) are considered to be the “inventors” it will be clear that the same person or organisation will ultimately be entitled to the patent (for example, the potential inventors’ employers or the owners of the AI system). Nonetheless, until an appropriate case is litigated in which that question is required to be considered, some uncertainty will remain as to who should be regarded as the “inventor” of such an invention.
Implications for patent law
Beyond inventors: the impact of AI on patentability
In determining whether a patent should be granted, patent offices and courts are tasked with assessing the novelty, inventiveness, and utility of the claimed invention, as well as ensuring that it is clearly and comprehensively described in the patent’s specification. Central to many of these assessments is a fictitious, yet centrally important figure: the “person skilled in the art” (PSA).
The PSA is the hypothetical person to whom the claimed invention is assumed to be addressed. They have the ordinary level of skill and perception of those working in the relevant field at the time, but they are not particularly inventive or creative. They are also armed with what is called the “common general knowledge”: the knowledge assimilated and accepted by the bulk of people working in the relevant field at the time.
For hundreds of years, the PSA has only ever been human, and the law has only attributed to them knowledge they are likely to already possess or that which is readily at their fingertips. What happens when AI is thrown into the mix?
One of the key criteria of patentability in most jurisdictions is that the claimed invention must involve an “inventive step”. This is assessed by determining whether the invention, when compared with the prior art, would have been obvious to the PSA in light of the common general knowledge.
Given the rapid rate at which AI systems are being applied to new contexts, in the Australian DABUS litigation, both the primary judge18 and the Full Court on appeal raised the question whether the standard of inventiveness needed to be recalibrated if, for example, the PSA were considered also to have access to AI systems.19
The Full Court considered that the issue should be dealt with urgently, but that courts should be cautious about stretching the interpretation of existing legislation to give it meaning the legislature did not intend.
Historically, the law of obviousness has been sufficiently flexible to adapt to the changing nature of innovation. For example, the hypothetical PSA will often be an interdisciplinary team, not just a single person, reflecting the way research and development is actually conducted. The common general knowledge has also evolved to account for reference material that a PSA would routinely consult or have access to, notably including the proliferation of online resources, and, at least in the UK, courts have been willing to find that the outcome of routine processes will be obvious even if it would not have been predictable in advance.20
In principle, there is therefore no reason these concepts could not include AI systems, if they would be routinely used in the relevant field.
In practice, however, such developments may raise evidentiary challenges. It may be difficult to establish what kind of AI is part of the PSA’s normal toolkit, given the range of functionality, complexity, and sophistication in AI systems. This is further compounded by the lack of predictability of AI outputs given matters such as the “black box” nature of AI systems and the dependence on the datasets on which they are trained. As ever, therefore, answering these questions will critically depend on the quality of evidence that can be adduced, including the opinions of skilled experts in the field.
The fundamental bargain of patent law is that patentees get a monopoly over their invention in exchange for publicly disclosing it. In many jurisdictions, this is reflected in threshold requirements of ‘sufficiency’ or ‘enablement’. To obtain exclusive rights, the patent application must disclose the invention clearly, completely, and in enough detail for it to be carried out by the PSA without them having to undertake undue work or experimentation.
Where an AI system is used to create an invention, a unique practical challenge for sufficiency is that the “black box” nature of many such systems means that humans are unable to understand or access the functions or processes by which an AI system arrived at the final output. If the person preparing the patent specification cannot properly understand how the invention was derived or how it is performed, it may not be possible to make a full disclosure enabling others to carry out the invention.21
The “black box” problem is not unique to patents: it has also been raised with the growing use of AI in other fields, for example ensuring that decision-making processes do not unlawfully discriminate, or that medical or diagnostic models are capable of verification by clinicians.22
A practical solution, in each case, may be the development of “explainable AI”: models capable of explaining or providing insights into how they arrived at their outputs. While some AI experts say that transparency comes at the cost of accuracy,23 AI researchers and several institutions have been making significant inroads in the development of explainable AI.24
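One simple, widely used explainability technique is permutation feature importance: shuffle one input at a time and measure how much the model’s error grows, revealing which inputs the model actually relies on. The sketch below is a minimal illustration of that idea on toy data of our own invention; it is not a description of any particular explainable-AI system.

```python
import random

# Minimal sketch of permutation feature importance on toy data.
# The "model" and dataset are illustrative assumptions only.

random.seed(1)

# Toy dataset: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
data = [[random.random() for _ in range(3)] for _ in range(200)]
target = [3.0 * x[0] + 0.5 * x[1] for x in data]

def model(x):
    """Stand-in 'black box' (here, secretly the true function)."""
    return 3.0 * x[0] + 0.5 * x[1]

def mse(xs, ys):
    """Mean squared error of the model over a dataset."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def permutation_importance(xs, ys, feature):
    """How much the error increases when one feature is shuffled."""
    baseline = mse(xs, ys)
    shuffled = [row[:] for row in xs]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)
    for row, value in zip(shuffled, column):
        row[feature] = value
    return mse(shuffled, ys) - baseline

scores = [permutation_importance(data, target, f) for f in range(3)]
```

On this toy data, shuffling feature 0 degrades the model most, feature 1 only slightly, and feature 2 not at all – a rough, model-agnostic “explanation” of which inputs drive the output, of the kind explainable-AI research seeks to provide for genuinely opaque models.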
The road ahead
To date, the vast majority of patent offices and courts around the world have declined to recognise AI as an inventor. It is important to note that, for the most part, they have been addressing the relatively narrow question of whether Dr Thaler’s patent applications met the formal requirements of existing legislative frameworks. However, those decisions have raised the broader question of whether, and how, the patent system should accommodate inventive contributions made by AI systems.
That question is the subject of active consideration by governments. For example, a UK Intellectual Property Office consultation, reporting in 2022, identified several options for reform, including:
- expanding the definition of ‘inventor’ to include the humans responsible for an AI system that generates inventions;
- allowing AI to be identified as the inventor; or
- protecting AI-devised inventions by means other than the patent system.25
Perhaps surprisingly, the UKIPO reported that the majority of respondents preferred the option of not making any change to UK law, for the time being, since most respondents felt that AI was not yet advanced enough to invent without human intervention. This was the approach ultimately adopted by the UK Government.
Although applications like Dr Thaler’s remain something of a novelty or anomaly, they have exposed a tension between existing patent laws and the practical realities of modern innovation. Inventive AI sits uncomfortably within a system where inventors are presumed to have legal personality and the capacity to enjoy and assign rights. It also raises challenging questions about the appropriate benchmarks for assessing inventiveness and sufficiency of disclosure.
Nonetheless, modern patent law has evolved over centuries to adapt and remain relevant through countless industrial and technological advances. AI systems are, in that sense, no different from what has come before. As ever, courts and legislatures will continue to adapt the core principles of patent law, based on evidence as to the actual practices of inventors, to ensure that the patent system continues to incentivise, not hinder, innovation—in whatever form it may take.
- Ryan Abbott, ‘Everything is Obvious’ (2018) 66 UCLA Law Review 2 at 22–6.
- Wellcome Foundation Ltd v VR Laboratories (Aust) Pty Ltd [1981] HCA 12.
- Dr Thaler sought leave to appeal from the Full Court decision to the High Court of Australia, which was refused: Thaler v Commissioner of Patents [2022] HCATrans 199.
- An appeal from this decision was dismissed by a Legal Board of Appeal in decision J 8/20. Note that Dr Thaler has filed a divisional application in the EPO: see EP 4 067 251 A1.
- Univ. of Utah v Max-Planck-Gesellschaft, 734 F.3d 1315, 1323 (Fed Cir, 2013); Beech Aircraft Corp v EDO Corp, 990 F.2d 1237, 1248 (Fed Cir, 1993).
- Thaler v Hirshfeld, 558 F. Supp. 3d 238, 249 (ED Va, 2021); Thaler v Vidal, 43 F.4th 1207 (Fed Cir, 2022).
- Thaler v Vidal, case number 22-919 before the Supreme Court of the United States.
- Decision of the Intellectual Property Office of New Zealand affirmed in the High Court of New Zealand on 17 March 2023: Thaler v Commissioner of Patents [2023] NZHC 554.
- Decision of the Taiwanese Patent Office affirmed by the Intellectual Property and Commercial Court of Taiwan: Thaler v Taiwan IP Office (TIPO), 110 Xing Zhuan Su 3 (August 2021).
- Decision of the Israeli Commissioner of Patents on 29 March 2023: Lexology, AI as an Inventor (online, 19 March 2023).
- Decision of the Korean Intellectual Property Office on 4 October 2022.
- The Canadian Intellectual Property Office has deemed Dr Thaler’s patent application ‘PCT non-compliant’: patent application 3137161.
- On 6 September 2022, the Brazilian PTO issued its opinion that it is not possible to name an AI system as an inventor in a patent application, and Dr Thaler’s application has been withdrawn as a result: Lexology, Brazilian PTO issues an Opinion Declaring that Artificial Intelligence Cannot be Indicated as an Inventor in Patent Application (online, 13 October 2022).
- Decision of the Controller General of Patents; but note that a Parliamentary Standing Committee under the Department of Commerce in India has recommended legislative change to The Patents Act 1970 and the Copyright Act 1957 to accommodate the emerging technologies of AI and AI related inventions: Review of the Intellectual Property Rights Regime in India, Parliament of India (Report No 161, 23 July 2021) [8.5]: Lexology, Inventions by Artificial Intelligence: Patentable or Not? (online, 22 August 2022).
- Federal Patent Court (Bundespatentgericht), decision of 11 November 2021 – 11 W (pat) 5/21. Written opinion delivered 31 March 2022 and published 19 April 2022.
- Patent No ZA2021/03242 in South Africa.
- Application No 521422019.
- Thaler v Commissioner of Patents (2021) 160 IPR 72.
- Commissioner of Patents v Thaler (2022) 289 FCR 45.
- See eg Hospira UK Limited v Genentech, Inc [2016] EWCA Civ 780.
- Steven Baldwin and Gabriella Bornstein, ‘Asking AI to Explain Itself - A Problem of Sufficiency’ (2020) 285 Managing Intellectual Property 35 at 36.
- Neil Savage, ‘Breaking into the Black Box of Artificial Intelligence’, Nature Outlook: Robotics and artificial intelligence (29 March 2022).
- Neil Savage, ‘Breaking into the Black Box of Artificial Intelligence’, Nature Outlook: Robotics and artificial intelligence (29 March 2022).
- Steven Baldwin and Gabriella Bornstein, ‘Asking AI to Explain Itself - A Problem of Sufficiency’ (2020) 285 Managing Intellectual Property 35 at 36–7; Neil Savage, ‘Breaking into the Black Box of Artificial Intelligence’, Nature Outlook: Robotics and artificial intelligence (29 March 2022).
- Intellectual Property Office, Artificial Intelligence and Intellectual Property: Copyright and Patents: Government Response to Consultation (28 June 2022).