This post comes from Fred Nelson, MD, an orthopaedic surgeon in the Department of Orthopedics at Henry Ford Hospital and a clinical associate professor at Wayne State Medical School. Some of Dr. Nelson’s tips go out weekly to more than 3,000 members of the Orthopaedic Research Society (ORS), and all are distributed to more than 30 orthopaedic residency programs. Those not sent to the ORS are periodically reposted in OrthoBuzz with the permission of Dr. Nelson.
Artificial intelligence (AI) is no longer on the horizon; it is here, and the number of its medical applications, such as radiographic interpretation, is growing. Given the spectrum of potential uses of AI in medical decision making, consideration of medical ethics is essential, says Alan M. Reznik, MD, MBA, in a recent AAOS Now article (see link below).
First, Dr. Reznik reviews the four basic elements of medical ethics:
- Autonomy—coercion-free independence of thought and decision making
- Justice—the assurance that the burdens and benefits of new or experimental treatments are distributed fairly across all groups
- Beneficence—the intent of doing good for the patient
- Non-maleficence—the goal of doing no harm to the patient or society as a whole
Dr. Reznik goes on to observe that neural networks, the brains behind AI, have no inherent ethical reasoning. With the ability of neural networks to process massive amounts of human data, AI can and will “find and reinforce all preexisting biases in the dataset being used to ‘train’ it,” writes Dr. Reznik.
Here are four examples of why AI must conform to the four basic elements of medical ethics:
Autonomy: The use of AI by insurance companies might yield fewer surgical approvals—saving carriers money, but denying individuals appropriate care. If that happens, “patient and physician autonomy will continue to be lost,” writes Dr. Reznik.
Justice: In AI-based epidemiology, the use of zip codes may introduce and/or amplify a wide range of socioeconomic, religious, and racial biases. AI applications that use addresses or zip codes “may need to be justified and checked for unethical bias each time they are used,” cautions Dr. Reznik.
Beneficence: Although “justice” might dictate decreased use of addresses, zip codes, and genetic information in AI-based medical applications, Dr. Reznik points out that to protect “beneficence” for individuals, some of that sensitive data will have to be included.
Non-maleficence: The question here, Dr. Reznik writes, is “how AI will balance individual needs versus society and differing cultures in daily medical care.”