
Artificial Intelligence in Health: Ethical Considerations for Research and Practice

The concept and application of artificial intelligence (AI) technology are top of mind for health researchers, nurses, physicians, and informaticists today as the amount of data and information continues to increase at a rapid pace. AI was initially proposed in 1956 by Dartmouth Professor John McCarthy and has recently experienced a rebirth in health with the era of “big data”.1 AI is defined as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”2 The use of AI technology in healthcare inherently raises ethical questions surrounding the vast amount of data involved and its management. This viewpoint examines four core principles that should guide the design, development and use of AI technology, along with ethical considerations for research and practice.3

Privacy and Security

AI technology used in research and clinical practice should adhere to the privacy and security requirements for patient data. Both privacy and security of data are critical to complying with the law and acting ethically. An AI technology system should require both, as it will access the massive amounts of protected health information and data that will ultimately improve human health and well-being. The National Institutes of Health (NIH) Data Sharing Policy and Implementation Guidelines state that data should be widely and freely available, yet simultaneously safeguard the privacy of participants and protect confidential data.5 The Common Rule, updated in 2017, allows researchers to obtain broad consent from individuals, meaning the individuals “agree to researchers’ using their identifiable biospecimens, originally obtained for other purposes such as clinical care, for future yet-to-be specified research.”6 The privacy and security of patient data is thus an even more important ethical consideration in the use of AI technology under these regulations. When considering AI use in health, it is important to use technology that employs strategies such as homomorphic encryption, techniques that separate data from identifying information about individuals, and techniques that protect against tampering, misuse or hacking.3 Ultimately, these protection techniques available today will enhance the privacy and security of a patient’s data while enabling actionable insights for the researcher and clinician.
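
As a concrete illustration of one of these strategies, the brief sketch below (a hypothetical Python example with made-up field names, not drawn from any cited system) separates clinical data from directly identifying information by replacing identifiers with salted pseudonyms and keeping the linkage table under separate custody.

```python
import hashlib
import secrets

# Hypothetical patient record combining identifiers and clinical data.
record = {"name": "Jane Doe", "mrn": "12345678", "glucose_mg_dl": 142}

# Salt kept under separate, restricted custody (e.g., by an honest broker).
SALT = secrets.token_hex(16)

def pseudonymize(mrn: str, salt: str) -> str:
    """Derive a stable pseudonym from the medical record number."""
    return hashlib.sha256((salt + mrn).encode()).hexdigest()[:12]

# De-identified dataset suitable for analysis or model training.
deidentified = {
    "pseudo_id": pseudonymize(record["mrn"], SALT),
    "glucose_mg_dl": record["glucose_mg_dl"],
}

# Linkage table stored separately, under access controls, to permit
# authorized re-identification only.
linkage = {deidentified["pseudo_id"]: {"name": record["name"], "mrn": record["mrn"]}}

print(deidentified)
```

Homomorphic encryption goes a step further by allowing computation to be performed on data that remains encrypted throughout.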

Reliability and Safety

Another essential ethical issue for consideration is the reliability and safety of AI technology, as it may affect research and clinical decision making, including differential diagnosis. For example, AI use in emergency departments may include critical and time-sensitive applications such as clinical image analysis, intelligent clinical monitoring, algorithms for clinical outcome prediction, and population and social media analysis for public health and disease surveillance.2 However, additional research studies on the use of AI are needed to support widespread adoption of technology that may lead to improved patient outcomes. In addition, the lack of publication and reporting guidelines for AI in health further complicates the evaluation and adoption of such technology.2 Research and collaboration among industry, government and academia are needed to develop guidelines for reliable and safe AI technology systems and applications. More importantly, AI systems should draw on human subject matter experts and maintain situational awareness in critical circumstances to further promote reliability and safety.3 Fundamentally, these two components will significantly influence whether a researcher or clinician develops trust in a technology. Finally, regardless of the technique used, an AI system depends on reliable data collected through reliable methods.5
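
To make the human-in-the-loop point above concrete, the short sketch below (a hypothetical illustration; the threshold and labels are assumed and in practice would be set through validation studies) routes low-confidence outputs to a clinician instead of reporting them automatically.

```python
# Hypothetical human-in-the-loop gate: predictions below a confidence
# threshold are escalated to a clinician rather than auto-reported.
CONFIDENCE_THRESHOLD = 0.90  # assumed value; would be set by validation studies

def triage(prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Auto-report: {prediction} (confidence {confidence:.2f})"
    return f"Escalate to clinician review: {prediction} (confidence {confidence:.2f})"

print(triage("no fracture detected", 0.97))
print(triage("possible wrist fracture", 0.72))
```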

Fairness and Inclusivity

AI systems should not only treat patient data in a balanced and fair way, but also avoid affecting similar groups of people in different ways.3 In addition, to eliminate bias in research and clinical practice, inclusivity is another concept to incorporate into the design of AI systems. To promote fairness and inclusivity, engineers, developers and coders should not only practice inclusive behaviors, but also come from diverse backgrounds with a varied set of experiences. The design and development of ethical AI technology systems must include input and feedback from people with research, clinical, and administrative/operational backgrounds, as this benefits patients and, ultimately, the adoption of such technology. In addition, AI systems can increase access to clinical trial information, education, government services, and social and economic opportunities, thereby increasing the potential for more inclusive and diverse research studies and clinical best practices. One proposed application of AI in imaging suggests that an individual in a remote geographic location could have an ultrasound performed by an unskilled worker, be diagnosed earlier and accurately with AI, and then be referred to an expert.1 This AI example is potentially transformative for the patient, clinician and health system.
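
One simple way to check whether a model affects similar groups of people in different ways is to compare its decision rates across groups. The sketch below (a hypothetical example with made-up data, shown only to illustrate the idea) computes a basic demographic-parity style disparity.

```python
from collections import defaultdict

# Hypothetical model outputs: (group label, did the model recommend intervention?)
predictions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

# Positive-recommendation rate per group (a simple demographic-parity check).
totals, positives = defaultdict(int), defaultdict(int)
for group, recommended in predictions:
    totals[group] += 1
    positives[group] += int(recommended)

rates = {g: positives[g] / totals[g] for g in totals}
print("Recommendation rates by group:", rates)
print("Largest disparity:", max(rates.values()) - min(rates.values()))
```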

Transparency and Accountability

The last elements to consider ethically are the transparency and accountability of the people who design, deploy and use AI systems, whether in industry, government, the private sector, or academia. If AI systems are used to help make decisions that impact patients’ lives and health, understanding how those decisions were made must be transparent to researchers, clinicians and patients. Information on how AI systems reached their decisions may make it easier to identify and raise awareness of potential bias, errors and unintended outcomes.3 While no clear rules and regulations governing these two factors are in place today, development is underway. In May 2018, the Food and Drug Administration (FDA) authorized AI diagnostic software to detect wrist fractures, based on a retrospective study of 1,100 adult images.4 One last significant area of accountability to consider is the institutional review board (IRB). Federally funded research involving human subjects is required to be approved by an IRB.5 While IRB members may not necessarily be AI subject matter experts, the IRB can provide oversight and guidance on questions regarding the development and deployment of AI systems.3
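
As a small illustration of what transparency and accountability can look like in code (a hypothetical sketch with an assumed log format, not a regulatory requirement), the example below records the inputs, model version and output of each AI-assisted decision so that researchers, clinicians and an IRB could later review how a decision was made.

```python
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str,
                 log_path: str = "audit_log.jsonl") -> None:
    """Append a traceable record of an AI-assisted decision (hypothetical schema)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record a hypothetical image-analysis result so that clinicians,
# researchers and auditors can later see what the system was given and returned.
log_decision("wrist-fracture-detector v0.1", {"image_id": "IMG-001"}, "possible fracture")
```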

Summary

As the pace of technology continues to drive transformation in health, one must consider the ethical implications of AI systems for research and practice. Current AI techniques and examples have the potential to improve patient outcomes, benefit clinicians, and provide the public with higher quality care.2 The four core principles discussed herein should be considered in the design, development and deployment of AI systems to advance adoption. As the amount of health data generated continues to grow exponentially, the ability of AI systems to complement research and practice should not be underestimated.

The views and opinions expressed in this content or by commenters are those of the author and do not necessarily reflect the official policy or position of HIMSS or its affiliates.


References

Jones LD, Golan D, Hanna SA, Ramachandran M. Artificial intelligence, machine learning and the evolution of healthcare: a bright future or cause for concern? Bone Joint Res. 2018;7:223-225.

Menikoff J, Kaneshiro J, Pritchard I. The Common Rule, updated. N Engl J Med. 2017;376(7):613-615.

Stewart J, Sprivulis P, Dwivedi G. Artificial intelligence and machine learning in emergency medicine. Emerg Med Australas. 2018;30:870-874.

Smith B, Shum H. The Future Computed: Artificial Intelligence and Its Role in Society. Microsoft; 2018. https://3er1viui9wo30pkxh1v2nh4w-wpengine.netdna-ssl.com/wp-content/uploads/2018/02/The-Future-Computed_2.8.18.pdf. Accessed April 13, 2019.

Steneck NH. Introduction to the Responsible Conduct of Research. US Department of Health and Human Services, Office of Research Integrity; 2007.