Applying AI in the field of medicine: a legal and ethical perspective
AI: the two magical letters of our century. A couple of years ago, artificial intelligence was confined to the GPS in our cars and the Google Translate service. Today, algorithms based on machine learning are disrupting sectors from personalised education to medicine.
Using AI to revolutionise the efficiency of the healthcare system sounds tempting. Currently, AI-enabled software systems are widely applied in medical diagnosis. Pattern recognition programmes in fields like radiology and pathology can interpret radiographic images and analyse patient data faster and with greater accuracy than a human clinician.
However, questions of liability arise, particularly in relation to negligence. To be held liable for negligence, a number of elements must be satisfied: there must be a breach of the clinician’s duty of care, the breach must have caused harm, and the harm must not be too remote from the breach. These requirements are well established in common law.
While clinicians can justify their diagnoses, the autonomous nature of AI solutions makes it nearly impossible to trace the steps taken to reach a particular decision. So who is liable? The software vendor? The healthcare professional? Can a patient sue an algorithm or a robot for malpractice? The law is currently silent and the topic is terra incognita. Policymakers therefore have to strike a balance between protecting the public and not stifling innovation.
And the future is already here. AI is already being used to predict various medical conditions from unlikely sources, such as Facebook. The social media giant employs an algorithm that makes suicide predictions based on posts containing phrases like “Are you okay?” and “Please don’t do this.” This practice raises numerous ethical concerns, and the consequences could be serious: selling medical predictions to third parties such as employers, life insurers or other interested parties could lead to bias and discrimination.
Data privacy and security
This brings us to a further challenge raised by AI in medicine: data privacy and security.
Article 22 of the EU General Data Protection Regulation (GDPR) states that a data subject:
“shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
The imprecise language of the regulation creates more questions than answers. What is the actual protection afforded to data subjects? Terms like “profiling”, “solely” and “legal effect” are not self-explanatory and the absence of clarification leaves too much room for judicial interpretation.
Another set of challenges in the sector is created by intellectual property (IP).
IP relates to intangible assets such as inventions and new technologies. The most relevant forms of intellectual property for medical AI are patents and trade secrets. A key issue when considering patent protection is the “abstract idea” exception: patent claims based on abstract ideas are ineligible under patent law. There is currently little case law on the topic. However, it is notable that one district court in the USA has recently ruled that “to the extent artificial intelligence inventions…involve an inventive concept, they could be patentable even if they have, at their core, an abstract concept.” (Blue Spike, LLC v. Google Inc., Case No. 14-cv-01650-YGR, 2015 WL 5260506, *6 (N.D. Cal. Sep. 8, 2015)).
There are further issues around trade secrets. A trade secret constitutes “information that provides a competitive advantage because it is not known to others, and for which reasonable safeguards are maintained to protect its secrecy”. While keeping their data and methods secret can be an effective way for medical AI companies to stay competitive, such practices can create problems. Clinicians, physicians and patients may be reluctant to take part in a project they know nothing about, especially if they don’t know how it was developed or what its side effects might be.
As with any emerging technology, AI can be protected in various ways. However, the legal framework around AI is currently not capable of addressing all the issues arising from it, and this should change in the near future.
The healthcare artificial intelligence market is expected to grow by 41.8% from 2019 to 2025, reaching 13 billion dollars in revenue by 2025. Legal clarity and certainty will be essential if the industry is to realise the full potential of this emerging technology. AI, like the stethoscope, will become a fundamental part of our lives, but only a multi-disciplinary approach can address all the issues it raises.
By Slavina Petrova