Authors – Kristina Astromskė, Dr. Eimantas Peičius, Dr. Paulius Astromskis
This paper inquires into the complex issue of informed consent when applying artificial intelligence in medical diagnostic consultations. The aim is to expose the main ethical and legal concerns of the New Health phenomenon, powered by intelligent machines. To achieve this objective, the first part of the paper analyzes the ethical aspects of the alleged right to explanation, privacy, and informed consent when applying artificial intelligence in medical diagnostic consultations. This is followed by a legal analysis of the limits and requirements for the explainability of artificial intelligence. Finally, recommendations for action are offered in the concluding remarks of the paper.
Keywords – Medical diagnostics, informed consent, opacity, trust, artificial intelligence, medical ethics, right to explanation
More than 50 years ago, Isaac Asimov predicted with remarkable accuracy that, in 2014, robots would be neither common nor very good, but they would be in existence (Asimov 1964). Indeed, artificial intelligence and robotics are no longer science fiction, as they are present in households and workplaces throughout the world. Technologies fundamentally alter the healthcare sector too, exponentially enhancing physicians’ abilities to deliver better and faster results than ever before. The explosion in the amount of healthcare data, information technology development, democratization of access to healthcare, and the willingness of the general public to be more active participants in their own health are identified as the major trends that transform and define the “New Health” phenomenon, in which robotics and artificial intelligence increasingly become a part of our healthcare ecosystem (PwC 2017).
The converging trends in artificial intelligence and robotics development that define the abovementioned “New Health” transformations are visible everywhere – from surgery, drug discovery, patient care, and information management, to diagnostics and beyond (Feldman et al. 2019; PwC 2017). Artificial intelligence systems, backed by increasing computational power, can analyze personal characteristics, medical records, large amounts of literature, and other medical data with high accuracy, at a fraction of the time and cost required for a human counterpart to complete the same task. There are strong incentives to use these intelligent machines to support physicians’ diagnostic decisions, since reduced time, cost, and medical errors increase the access to and quality of healthcare (Sion 2019). It is predicted that machines will eventually have demonstrably better success rates than human physicians in providing the most accurate and highest-quality diagnoses, or in making the best decisions when choosing the most beneficial therapy (Bachle 2019; Froomkin et al. 2019). Even though artificial intelligence tools bear an inherent risk of making incorrect determinations, they make fewer mistakes than humans, thus reducing the number of medical errors (Bachle 2019; Meskó, Hetényi, and Győrffy 2018). Indeed, machines do not get tired, do not allow emotion to influence their judgment, make decisions faster, and can be programmed to learn more readily than humans (AoRMC 2019).
Historically, one of the most common obstacles to a precise and clear diagnosis (along with visually observed symptoms) was the lack of medical data. Nowadays, this gap has been largely compensated by the use of a plethora of powerful, data-driven devices. For instance, the Watson Oncology Advisor (IBM) uses almost 15 million pages of medical literature to advise oncologists on cancer diagnoses and chemotherapy plans, with an accuracy of 90% (Hoeren and Niehoff 2018). Additionally, artificial intelligence enables mammograms to be reviewed and translated 30 times faster with 99% accuracy, and correctly detects 92.4% of breast cancer tumors, compared to the 73.2% detected correctly by human doctors. The recent study by McKinney et al. (2020) revealed that an artificial intelligence system applied to breast cancer screening outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the artificial intelligence system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%; the artificial intelligence system reduced the workload of the second reader by 88% and resulted in absolute reductions of 5.7% and 1.2% in false positives and 9.4% and 2.7% in false negatives (in the USA and UK, respectively). Thus, AI could significantly reduce the need for unnecessary biopsies, as well as decrease the uncertainty and stress of a misdiagnosis (Griffiths 2016; Liu et al. 2017; PwC 2017). IBM’s Watson for Health is working on cognitive technology that can process medical information exponentially faster than any human, providing findings and decisions free of cognitive biases or overconfidence, thus reducing misdiagnosis (PwC 2017).
Google DeepMind learning algorithms can quickly interpret eye scans from routine clinical practice and correctly recommend where patients should be referred for treatment of over 50 sight-threatening eye diseases, as accurately as any world-leading expert doctor (De Fauw et al. 2018; Gulshan et al. 2016). Wearables such as Cyrcadia’s iTBra for the detection of breast cancer or Cardio Diagnostics for cardiac monitoring and rhythm management (PwC 2017), chatbots such as Wysa or Woebot for the detection and treatment of depression and other mental health issues (Stiefel 2019), and many other technological developments in radiology, ophthalmology, dermatology, pathology, and other medical specialties are already saving lives, disrupting early diagnostic practices, and attracting the attention of policymakers and investors (see, e.g., Konnangath 2019; Liang et al. 2019; Yu et al. 2018; and others).
Although the goal of applying artificial intelligence systems in medicine is to assist humans in executing their tasks more efficiently, medical ethics has begun to highlight certain concerns regarding algorithmic bias, transparency, the dehumanization of physician-patient relationships, the decline of practical physician skills, and others (see, e.g., Char et al. 2018; Mittelstadt et al. 2016; Obermeyer and Emanuel 2016; Schiff 2019). Schönberger (2019) has identified biased training data, inconclusive correlations, intelligibility, inaccuracy, unfair outcomes, and other key concerns related to the decision-making capacity of artificial intelligence systems. Indeed, machines could be poorly programmed, poorly trained, used inappropriately, contain incomplete or biased data, and could be misled or hacked (AoRMC 2019). However, the analysis of Ienca et al. (2018) revealed that privacy and confidentiality are by far the dominant concerns in the ethical domain, followed by informed consent, while the important issues of fairness, discrimination, trust, transparency, responsibility, and others compose a relatively small portion of the spectrum of current ethical and legal issues discussed in the literature.
Thus, together with the “New Health” concept, new types of unprecedented risks emerge which need to be considered. Due to data dependency, there is a risk that patients may no longer be regarded holistically but as mere bearers of medically (ir)relevant data. This may lead to an increased de-personalization of individual patients by reducing the quantity and quality of human contact in doctor-patient relationships. It also entails much less visible consequences, such as a slow change in our understanding of what it means to be healthy and a general tendency to over-diagnose or over-treat (Bachle 2019). Therefore, informed consent requires a debate on its core purposes, content, design, and form in situations where intelligent machines are used in diagnostics or other interventions into health (Wolf et al. 2018).
Moreover, throughout the history of humanity, decision making was a fundamental human process, for the assistance of which conventional tools were created. Never before have humans built machines able to mimic the human decision process in ways their creators sometimes do not understand and cannot explain, nor have machines been created that would warrant the call to explore the possibility of applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently (European Parliament 2017). This also raises the question of whether existing legal systems and rules, originally intended and designed for human-to-human (in personam) and human-to-machine (in rem) processes, can work well in machine-to-human and machine-to-machine environments (Fomin 2018). Due to the novelty and challenges of applying technology in healthcare, the emerging intelligent machines may lack normative rule sets regarding how they can or should be used, or what their use means.
Even considering that current laws and ethical concepts are largely suited to deal with the majority of concerns, particularly those related to bias, opacity, and failure to model the real world accurately, some prompt clarifications are desirable, especially in the fields of responsibility, fairness, and others (Schönberger 2019). Once artificial intelligence-based medical diagnostics are shown to be superior to humans, the risk of over-reliance emerges, requiring a revision of existing medical malpractice law, the standard of care, and liability rules in clinical settings (Froomkin et al. 2019). However, over-regulation could arbitrarily diminish the value of artificial intelligence systems in healthcare. More control means less freedom, and hence over-protectionism might unnecessarily hinder welfare-enhancing innovations. Indeed, as artificial intelligence is still a nascent field in healthcare, hard and premature regulation could stifle many useful innovations that could be beneficial for patients and the health system (Schönberger 2019). On the other hand, liberal and innovation-fostering regulation must not lead to a loss of trust between patient and physician or to violations of fundamental human rights. Therefore, the main challenge for regulators is to balance freedom and control in a way that maximizes the personal and overall well-being of society, thus promoting the benefits and inhibiting the harmful effects (den Hertog 2010).
Little ethical and legal work related to artificial intelligence in healthcare exists so far (Schönberger 2019), although the body of literature in this field is diverse and rapidly growing (Ienca et al. 2018). Still, there is a lack of empirical studies to reveal whether patients directly benefit from the use of artificial intelligence systems. Although deep learning will not be a panacea, it has huge potential in many clinical areas where datasets are potentially stable over extended periods and high-dimensional data is mapped to a simple classification. As such, it will be incumbent on healthcare professionals to become more familiar with this and other artificial intelligence technologies in the coming years to ensure that they are used appropriately (Keane and Topol 2018).
Accordingly, this article contributes to the ongoing scientific discussion on the risks and benefits of artificial intelligence use in healthcare and aims to expose the contemporary legal and ethical challenges of informed consent when applying artificial intelligence in medical diagnostic consultations. As a first step, the article exposes the ethical aspects of the right to explanation, privacy, and informed consent when applying artificial intelligence in medical diagnostic consultations, followed by a legal analysis of the limits and requirements for it. This analysis is complemented by conceptual recommendations for action in the concluding part of this article.