Technology and practices in the healthcare field are increasingly being augmented by AI. As the use of AI tools in this field has grown, many developers have begun to distribute their tools under the software as a service (SaaS) model – in other words, offering use of the tool on a subscription basis, with the tool running on cloud infrastructure rather than being installed on the customer’s own system.
The SaaS model is popular with both developers and customers across the software industry, for reasons familiar to those working in it. From the customer’s point of view, cloud deployment gives access to more computing power than their own systems provide, and the subscription-based charging model tends to mean lower upfront costs than traditional software licences. For developers, SaaS offers a steady stream of revenue, easy deployment of updates and, because access to the software does not depend on the end user’s hardware, a wider range of potential customers.
These advantages are as relevant in healthcare as in other areas. However, the use of AI tools deployed as SaaS in the medical context raises some particular concerns that could limit the adoption of SaaS AI tools in this field.
A first concern (not strictly limited to the healthcare field) is that sending patient data to a SaaS AI model hosted on third-party cloud infrastructure may conflict with the obligations of practitioners and researchers regarding patient confidentiality and the sharing of data with third parties. These concerns are typically less pertinent when the AI tool is hosted in-house, for example on the servers of the hospital or research institution using the patient data, since the tool and the data then remain under the control of the organisation handling them. Besides posing a challenge to the processing of data using an established AI tool, data-sharing rules may also limit developers’ access to suitable training data. As governments continue to debate how AI should be regulated, it seems reasonable to expect that future AI regulation will impose new requirements governing this kind of data sharing.
A second potential challenge for the deployment of AI under the SaaS model in healthcare lies in the requirements that medical regulation may impose on AI tools in the future. For example, developers of AI tools that claim to provide diagnostic or therapeutic benefits may be required to demonstrate the efficacy of those tools before they can be used in frontline healthcare. Transparency rules may also oblige developers to disclose details of how their AI tools work. For instance, the UK’s medical regulator, the MHRA, recently published a policy paper on its planned approach to AI, which indicated that, for AI devices, the principles of transparency and explainability should require disclosure of “how the device works”. Some developers of AI tools may be reluctant to disclose such information – indeed, one attraction of the SaaS model is that keeping the AI tool on the provider’s server helps to keep its inner workings secret from competitors.
If future regulations do indeed require AI developers to share detailed information on the effectiveness and inner workings of their models, prudent companies will take measures to protect any technological know-how that they might be required to disclose. Patents offer a powerful form of protection in this respect: a patent covering an AI tool enables its proprietor to prevent competitors from using the tool even when those competitors have the know-how to do so (for example, from information made public by the regulatory process). In this sense, patents protect AI tools more robustly than approaches such as keeping the software secret behind the API exposed to users under the SaaS model. Crucially, however, a patent is only valid if the idea it protects was not made public before the patent application was filed. Innovators concerned that future AI regulation might require the disclosure of sensitive technical details should therefore consider filing patent applications protecting the functionality of their tools before new regulation forces the issue (and indeed before making any other non-confidential disclosure).
Patenting AI in the healthcare field poses some unique challenges owing to the specific rules that the patent system applies to software and medical technologies. We have previously explored how some of these provisions can be navigated. While these features of the patent system mean that drafting patent applications for AI in healthcare requires some care, your patent attorney will be able to advise you on whether a patent is the appropriate form of protection for your idea and, if so, prepare an application that accounts for these provisions to give your invention the most robust protection.
If you would like to discuss strategies for protecting your AI innovations, please get in touch with our specialist digital health patent attorneys by emailing gje@gje.com.