In late 2023, the UK Supreme Court confirmed in its judgment that the UKIPO was correct to find that two patent applications designating an artificial intelligence (AI), known as DABUS, as the sole inventor should be taken to be withdrawn for failing to identify a person as an inventor. This judgment upheld the earlier decisions of the UK Court of Appeal, the High Court and the UKIPO Hearing Officer. Related cases have also been working their way through the legal systems of Germany, South Korea, Japan, New Zealand and the US, and even the EPO's Boards of Appeal, with various outcomes.

Professor Ryan Abbott is the driving force behind the DABUS test cases, collectively known as the "Artificial Inventor Project" (AIP), through which he documents this story as it unfolds with the assistance of a global team of lawyers, all of whom provide their services pro bono.

Much of the public discussion surrounding the DABUS patent applications has focused on whether an AI may be listed as an inventor.  However, Professor Abbott argues the more important legal issue was always whether IP rights can be granted for AI-generated output.

Professor Abbott has an impressive and diverse career. On top of his work with DABUS, he is Professor of Law at the University of Surrey, Associate Professor of Medicine at the University of California, Los Angeles, a licensed physician and attorney in the US, and a solicitor advocate in England and Wales. Mason Birch and Ian Jones caught up with Professor Abbott to find out more about his motivation for pursuing the DABUS cases and what he is hoping to achieve. Here's what he had to say:

Your academic research is mainly focused on the interplay between artificial intelligence and intellectual property.  What is your motivation behind such research?  To what extent, if any, does the research of Dr Stephen Thaler (the creator of DABUS) play into that motivation?

At one point, I was working as outside general counsel at a biotech company and also teaching patent law.  At the biotech, I came across a vendor saying essentially that if we gave them our therapeutic target they’d have a computer system go through a large antibody library and identify one best suited to be our lead drug candidate.

I thought to myself, "Well, that's interesting, because when we have a person do that, they're named as an inventor on the patent you get for that antibody, but here we might have a computer doing a lot of the heavy lifting." So I wondered whether, if you have a computer invent something, you can get a patent on that thing. It seemed like an interesting academic idea, and I wondered whether anyone else had thought of it before. Lo and behold, lots of people had! In fact, people were writing about AI-generated inventions in the '60s! But the literature argued that if an AI invents something, you wouldn't need a patent on it because the AI doesn't care about getting a patent.

Obviously, that’s true – an AI doesn’t care about getting a patent.  But this isn’t the right question or issue, as it is the pharma company using the computer, not the computer itself, that cares about getting a patent.

So I decided to write an article on AI-generated inventions, and as part of that I read up on what people had written previously. Lots of people claimed to have been using AI to invent things in the early 2010s, and even as far back as the '70s and '80s! Dr Stephen Thaler was one of those people – he had developed some advanced computer architectures involving machine learning. Others also claimed to have machines doing inventive work, albeit with different sorts of computer architectures from DABUS – for instance, genetic programming and expert systems – and some built inventive machines with the express purpose of showing that machines could invent things autonomously.

I interviewed Dr Thaler and some of the other innovators in this space, and published an article arguing that it's important to protect AI-generated inventions because doing so promotes all the purposes of the patent system. While there's a romanticised concept of the solo inventor tinkering away in their shed, most inventions are owned by companies, and it takes a lot of investment to create these inventions, especially in biopharma. Using AI enables a more cost-efficient way of creating socially valuable inventions (new drugs and treatments, for example), and so you'd want to encourage companies to create and use AI to generate inventions. Ensuring that AI-generated inventions can be validly patented provides that incentive, as companies would also be encouraged to disclose trade secrets and commercialise inventions in the knowledge that such information is patent-protected. So AI-generated inventions are really something you want to protect for their social benefits.

The article got more attention than anything I had written previously, and over the course of a few years people shifted from saying "this is kind of vaguely interesting" to "we have some concerns about subsistence". For example, companies were asking, "What do we do if an AI invents something? What does a person have to do to be a legal inventor? How do different jurisdictions deal with this?" And my response was, "I can tell you what I think will happen, but there's never been a case on this."

You created the Artificial Inventor Project (AIP), which oversees “a series of pro bono legal test cases seeking intellectual property rights for AI-generated output in the absence of a traditional human inventor or author”.  These test cases centre on whether patent applications designating DABUS as inventor should be granted.  What were the circumstances which led to the creation of the AIP?  To what extent, if any, did the initial refusal of these patent applications by various patent offices play a part?

The circumstances were that stakeholders were becoming increasingly concerned about how to protect AI-generated inventions, but there had never been a case on this before, so it was difficult to provide definitive answers. A test case would provide much-needed guidance, promote a discussion about what legal rules should apply to the use of AI, and advance the position that AI-generated inventions should be protectable. I asked Dr Thaler if he would be interested in having DABUS invent something for a test case. He said sure, DABUS generated a couple of inventions, and we were off to the races!

The creation of the AIP was organic and informal – at first, it was myself and a group of like-minded patent attorneys and advocates who saw this test case as important, all of whom worked pro bono on it. I managed the project and was directly responsible for the US litigation, jointly responsible for the UK litigation, and oversaw the litigation in other jurisdictions. There was an initial core team, which expanded as the cases spread into other jurisdictions. None of us would have predicted the amount of attention these cases garnered! There was a view that we might submit a patent application to the UKIPO, it would get rejected, and somewhere in my next IP casebook there would be a footnote about how interesting it was that someone had filed a patent application designating an AI as inventor and the UKIPO rejected it. But the public reaction to this case motivated us – we began to increasingly appreciate how important the test cases were. Because of the public reaction, external attorneys approached me and asked to join the team. We ended up with big law firms, such as in Australia, making huge pro bono contributions, which led to tangible outputs like winning the Federal Court of Australia case at first instance.

It got easier to sign people up for the AIP a couple of years in, but initially there was scepticism on a couple of different levels. One level was that reporters wouldn't stop saying (inaccurately) that we were trying to get AI to own patents – no matter how many times we explained that inventorship does not equate to ownership. On another level, there was scepticism as to the factual question of whether AI could invent something at all.

Over the past couple of years, with the release of large foundation models like ChatGPT for public use, there is no longer much pushback about whether AI can invent something. Ten years ago there were a number of companies working to pioneer AI-related drug discovery and repurposing, but now the largest pharma companies are developing AI for these purposes or partnering with AI developers. So, the timing was right.

So the AIP very much just "built itself up" – initially the case was only going to be filed in a few places, and then, as more firms and attorneys signed up and took an interest, it ultimately went to 17 jurisdictions.

What does the AIP aim to achieve by carrying out these test cases?

These test cases, which are still largely ongoing, achieved a lot. We were able to raise the issue of whether you can name an AI as an inventor if it invents something, and more generally ask whether AI-generated inventions are patentable. Inventorship is relevant to subsistence, because most jurisdictions (not all) require an inventor to be named, and inaccurate inventorship can in some jurisdictions (like the US) render a patent invalid. We listed DABUS as the inventor because it was actually inventing the products, but also to see how this would be handled by the patent offices and courts and, if you don't have a traditional human inventor, what a person would need to do to qualify as a legal inventor for an AI-generated invention.

The three goals we had in mind for these test cases were:

  1. to generate guidance for stakeholders;
  2. to promote a public discourse on how the law should deal with the use of AI in invention; and
  3. to promote the view that AI-generated inventions are beneficial to protect.

Of course, an AI isn’t listed as an inventor for the AI’s sake or to give an AI any kind of right. But if you have an AI invent something, you should disclose that it is an AI-generated invention, as it protects the moral rights of human inventors.  For example, if I get an AI to cure cancer, it would give me false credit to list myself as the inventor of the cure for cancer.  Also, if ownership if flowing from inventorship, then listing the AI provides a clear chain of title so that I, as owner of the AI, have the right to the patent for the AI’s inventions.

You can't have an AI own property because it isn't a legal person, and it wouldn't make sense for it to do so. We argue the owner of the AI should own the AI's output and IP in the same way that people own property made by their property. For example, a person owns the lemons grown by their lemon tree, a painting made by their 3D printer, the cryptocurrency mined by their computers, and the interest on their principal.

To what extent do you feel that the AIP is achieving its aims despite the numerous setbacks from the courts around the world?

A key part of these test cases was to raise awareness of the issues surrounding AI-generated inventions, to create public discourse about what the law should be and how the law should deal with this sort of thing, and to create discourse among stakeholders and policymakers.  On that metric alone, the cases have been wildly successful.

As to the courts’ opinions, patent laws predate the widespread use of AI, so in many jurisdictions we had the harder textualist argument to make.  Yet the more we worked on it, the more we came to the view that we did have the better purposive argument. 

I was disappointed with the UK Supreme Court judgment, which avoided, in large part, the purposive issues. And our position was not only, in my view, the right outcome; it was also consistent with the text of the UK Patents Act. For example, in the Act, an inventor is defined as a deviser, but it does not say that the deviser is a human being – if an AI devises something, it would literally fall within the statutory definition. Also, the Patents Act states that, in preference to the inventor obtaining rights to a patent, anyone who by any rule of law has rights to the invention may obtain grant of the patent. In my view, this accommodates concepts like ownership of AI-generated inventions by possession or accession. Otherwise, you're either encouraging applicants to lie about who invented something, or losing patent protection.

The Supreme Court noted that the Act elsewhere assumes an inventor is a natural person. That may be true, but if you take a step back and think "what was the Patents Act really designed to do?", then you might come to a different answer. That is what Justice Beach did in Australia at first instance. But in the UK Supreme Court, Lord Kitchin cited Lady Justice Laing in the Court of Appeal judgment saying that Parliament was not thinking about protecting AI-generated inventions in 1977. Also true, but it is something the Act could, in my view, accommodate. Of course, I'm not biased at all!

If you look at the US case, the Federal Circuit ruled even more textually, which was easier for that court given that US law states that an inventor is an individual. The court then held that an individual is a natural person. But even then, the first thing the Federal Circuit does is say "look at a dictionary – it says that an individual is a person". But dictionaries also define an individual as "a thing", and so we thought that, actually, a textualist argument for AI as inventor works under US law.

The USPTO acknowledged some of the challenges with the case in oral arguments, noting this was denying patents to otherwise patentable inventions, but ultimately argued the law needs to be interpreted as it is, not as it should be. Off the back of all this, we've now got dramatically revised guidance from the USPTO saying that a significant contribution by a natural person is needed to obtain a patent, which is a major departure from its prior guidance. Also, the US Congress has looked at changing patent law (and copyright law) to address the protection of AI-generated output. So, I think the test cases have been tremendously successful.

What are the AIP’s next steps in the wake of the courtroom setbacks? 

We have had some successes, and the test cases are still alive in most of the jurisdictions we filed in. In the US, the die has been cast, but the issue has now moved on to policymakers – there's a vibrant public discourse about this issue, and we are optimistic that Congress is going to (eventually) change the law. In other jurisdictions, the test cases remain with patent offices or in court.

For example, the EPO rejected the test cases but then said that the owner/user of the AI could be listed as an inventor. So, based on those findings, we filed a divisional application with Dr Thaler listed as the inventor, noting in the specification that it is an application for an AI-generated invention, and that application remains before the formalities division.

Similarly, the Comptroller-General's counsel argued to the UK Supreme Court that, contrary to what the initial UKIPO hearing officer held, the UKIPO would simply grant the patents if we had listed Dr Thaler as the inventor, regardless of what he actually did. So we've now done that, by filing divisional applications with Dr Thaler as the inventor at the UKIPO, but the UKIPO is objecting to these divisionals on the basis that the Supreme Court has said you can't do that! So these test cases are still going in the UK.

Also, I think that as jurisdictions become more aware of the importance of protecting AI-generated inventions, we may see some of them decide differently on the matter – the test cases are only now really entering the courts in China, Japan and South Korea, for example, and so we may get some different rulings out of this.

As you mentioned, AI and AI-generated inventions have been around for decades. Why is it only in recent years that discourse on protecting AI-generated inventions has captivated the mainstream?

I think it’s largely down to technical advances – AI has been generating things for a long time, but it’s gotten better at doing it, and so industry adoption is increasing.  Also, popular AI tools like ChatGPT have shown people that AI can generate creative content, and so the notion of AI being able to generate inventive concepts is more readily appreciated by the public.

Coming back to issues of ownership – if we were to get to the stage of artificial general intelligence (AGI), would you see such an AGI becoming an owner of IP (and other legal) rights then?

It depends what you mean by AGI.  If you mean it in the way that people tend to use it, which is a machine that could engage in any intellectual task a person could do, then no, I certainly wouldn’t think it should have rights.  Just because it has certain functional capabilities doesn’t mean that it would be a legal person – it doesn’t mean it would make much sense to give it legal personality.  Let’s say I invented an AGI and it created a new disc brake – what would it do with a patent for the disc brake?  How would it commercialise and enforce the patent?  It doesn’t really care about having a patent.

Now, I'm not saying that you can't construct a very convoluted legal system to accommodate that – we provide legal personality to all sorts of things that don't live, like companies and governments. Maybe we could figure out some way to do this for AGI – for example, on a blockchain, to create a decentralised autonomous organisation with legal personality – but I think it would be somewhere between needlessly and hopelessly complex, and counterproductive in any event.

Now, if we someday live in a future world in which there's not only AGI, but also computers that are sentient, self-aware, and have phenomenal experiences in the same way that human beings do, then we've got some real ethical issues that we are completely unprepared for. Hopefully, Elon Musk is wrong about the timeline for AGI and I have a few more years to write a book on the subject.

Lastly, and staying on the topic of AGI, when do you think we could make our own Westworld, with these conscious computers controlling androids?

This is a complicated issue in ways that may not be immediately apparent, starting with: what do you mean by consciousness? There's a thin concept of consciousness that could equate to a machine simply being aware of, or acting in accordance with an understanding of, what it is, what it is doing, and its environment. On the other hand, there's consciousness in the sense we associate with a human being. Alan Turing addressed this in a paper in 1950 where he pondered whether a machine could think. He ultimately decided the question was irrelevant, as we should care more about behaviour than mental processes – a machine that behaves like it thinks is a thinking machine. Some people think he took that too far, but I do think it is the right legal test for these things. For example, we want laws that encourage people to make and use inventive machines, not laws that only reward machines that invent in a particular way that might be more like a natural person's mental process.

A few years ago, one of my colleagues took me to a sushi bar that had robot waiters. She's an AI sceptic and was gleefully pointing out how terrible the robots were as servers – as indeed they were. But I now routinely see robot waiters, and they are no longer terrible. So who knows – maybe android-staffed theme parks are not that far off. The androids may not think in exactly the same way that a person thinks, but they may behave like they do!