The Ethics of Augmented Intelligence in Dentistry: Making the Patient the Priority

Over the past decade, the potential of integrating Artificial Intelligence or Augmented Intelligence (AI) as a way to assist healthcare professionals in delivering more efficient and precise care has become less of a futuristic fantasy and more of a reality. Nonetheless, AI continues to be what Robert Baker would describe as a “morally disruptive technology.” Baker defines “morally disruptive” technologies as those which “undermine established moral norms or ethical codes.”1 While AI seems to have a stronger hold on the imagination in the field of medicine, many of the same technologies employed in healthcare have relevance for dentistry. Incorporating AI into dentistry is a foregone conclusion, but the question remains how to do so in a way that has “the benefit of the patient as [the] primary goal.”2 This essay identifies opportunities and challenges of incorporating AI into dentistry and draws on the American Dental Association Principles of Ethics & Code of Professional Conduct (the ADA Code) to examine the ethical implications of these technologies.


Promises of AI
AI, in essence, is a computational, data-driven process that allows non-human technology to sense, reason, act, and adapt in a way that simulates human intelligence.3 The development of AI relies on a variety of subsidiary functions that enhance what the technology can do. Two key functions that contribute to the capacity of any AI are machine learning (ML) and deep learning. ML utilizes algorithms designed to process the large volumes of data to which they are exposed over time, finding patterns and generating predictions when given new data to process.4,5 The refinement of the algorithm to make increasingly accurate predictions takes place through deep learning. Deep learning is a subset of ML by which the system itself learns from and processes the high volume of data input into the system and refines the outputs generated in ML.4,5 The successful application of AI relies on the capacity to access and interpret large amounts of data. Given the large amount of demographic, personal, and diagnostic information generated when caring for patients, AI processes can be utilized to generate better care that focuses on improvements in population health, care team well-being, patient experience, equity and inclusion, diagnostic accuracy, and cost reduction.6
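To make the pattern described above concrete, the following is a minimal, illustrative sketch (in Python, using the widely available scikit-learn library) of how a supervised ML model finds patterns in historical data and then generates predictions for new data. The prediction task, feature names, and values are hypothetical and do not represent any actual clinical tool.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical, illustrative records: each row is one patient
    # (age, number of prior restorations, days since last dental visit),
    # labeled 1 if untreated decay was later found and 0 if not.
    X = np.array([[62, 4, 400], [35, 1, 180], [48, 3, 720], [29, 0, 150],
                  [71, 6, 900], [55, 2, 365], [40, 1, 200], [66, 5, 820]])
    y = np.array([1, 0, 1, 0, 1, 0, 0, 1])

    # Hold out a few records to stand in for "new data the model has not seen."
    X_train, X_new, y_train, y_new = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # "Learning": the algorithm searches the historical data for patterns.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # "Prediction": given new records, the model generates outputs that a
    # clinician would then interpret in the context of the individual patient.
    print(model.predict(X_new))        # predicted labels
    print(model.predict_proba(X_new))  # predicted probabilities

In a real system, the historical data would comprise thousands of records and many more variables, and more complex models, including deep learning, would typically replace the simple classifier shown here, but the underlying pattern of learning from past data and predicting on new data is the same.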

However, given the complexity of how ML and deep learning contribute to patient-specific information, one of the challenges presented in patient care is explaining how a particular treatment recommendation or diagnosis was generated. While there might be self-driving cars, there will not be fully automated dentists. However, the way in which dentists incorporate technology into practice may allow for augmented dentistry. “Augmented intelligence is an alternative conceptualization that focuses on AI’s assistive role, emphasizing a design approach and implementation that enhances human intelligence rather than replaces it.”7 The technology is likely always to depend on the intelligence of health professionals in both development and application. In dentistry, as in healthcare generally, a professional is needed to work with the augmented intelligence and to explain to the patient how the AI recommendation is generated. AI as augmented intelligence, rather than artificial intelligence, highlights the fact that its use is primarily an enhancement tool for the clinician rather than a replacement for the clinician’s expertise, training, and interpersonal skills. AI should be a tool to improve the clinician-patient relationship, not a tool to replace or diminish that relationship.

Research on the uses of AI in healthcare focuses on opportunities to increase both the efficiency (faster care, greater accuracy, and a presumed lower cost) and the effectiveness (improving health and maintaining life) of care for those with access to healthcare. AI technologies can be used for radiology scans, for example, to assist in more accurate diagnosis of malignancies.8 From a health outcomes perspective, these developments hold great promise for delivering more efficient and accurate care.9 In many ways, dentistry is still in the early stages of using AI, but it can utilize AI in a similar way. While AI holds great potential for use in healthcare and dentistry settings, various ethical questions arise.10

Ethical Questions
The opportunities offered by AI hold benefits for patients and providers alike, but implementing these technology-driven solutions requires thoughtful consideration with respect to privacy and confidentiality, patient autonomy (including informed consent), system reliability, and cultural competency and representation.11 Privacy is always a concern when dealing with sensitive information. How is the data being stored? What is it being utilized for? Who is benefiting from the information generated? Who has access to it? An initial and important concern is establishing a transparent process that ensures deidentification occurs and that privacy is sufficiently protected. If patient data is being used for research purposes, should patients be able to control how their information is used, even if it is deidentified? What happens if this information is not sufficiently deidentified? Is it necessary for patients to approve the use of their deidentified data, and under what circumstances?

In addition to deidentification and approval of data use, questions of bias have been ongoing in the development of AI and its application in healthcare.12,13 Within the US, health inequities have been well documented and have resulted in non-white patients delaying access to healthcare services for a variety of reasons, contributing to poorer health outcomes.14 Dentistry also has some striking examples of health disparities. For example, “Among older adults who were non-Hispanic black, Mexican American, poor, near-poor, or current smokers, the prevalence of untreated decay was about 2 to 3 times that of those who were non-Hispanic white or not-poor or who never smoked.”15 While the reality of the social determinants of health is not new, it does raise questions when compiling patient data for the development of AI tools. Analysis of the patient data is likely to reveal poorer outcomes and higher healthcare costs associated with Black and Latinx communities. While the data might reveal a known bias (health disparities), it raises a social ethics question for providers. Will they use that information to solidify a healthy payer mix by offering more advanced, more personalized choices to their insured patients, or to target health improvement measures toward those who are more vulnerable in the community? Or is there a way to draw on the efficiency of newer technologies to do both?

While one might hope the answer is the latter, the business models proposed for AI-driven solutions tend to lead with the economic value of the technology to providers, which may deliver higher quality care for some patients while making access more complicated for others.16 Thus, if the data going in is known to target outputs that generate an economic benefit for providers, biases baked into dental care that can be linked to socioeconomic status are likely to yield disproportionate benefit to an already healthier subset of predominantly white patients. Sociologist Ruha Benjamin asks: if bias exists within the algorithm at the outset (which, given the reality of health disparities, one could reasonably assume), who bears responsibility for any errors that develop from the biased intelligence? Does this become the responsibility of the AI programmer or the provider of care?12 When selecting AI-based systems, dentists should bear in mind that it is “widely accepted today that health AI models can unintentionally replicate and encode existing structural biases into practice.”17 Such errors can stem from reliance on historical data,18 from health disparities reflected in the data,19 and from human judgment reflected in outcomes,20 among other causes. To mitigate possible biases, dentists can select systems created by multidisciplinary teams,21 with trained experts in equity,17 and with transparent logic models. Dentists, like other healthcare providers, are bound by their social contract to put patient needs ahead of financial gain. While bias highlights social responsibilities, the most basic questions of trust between provider and patient remain fundamental.

Concerns about privacy and bias certainly intersect, so they cannot and should not be considered separate issues. As AI becomes more robust with better and more comprehensive data, new and more complex questions about privacy will arise. Data-sharing requirements across organizations can create both ethical and security issues.22 Sometimes measures used to ensure privacy and equity, such as excluding protected attributes like race, gender, and socioeconomic status, can have the opposite effect at scale.23,24 Privacy issues can be addressed, in part, with fair information practices, such as collecting only the data that is needed, storing it at the local level, and using it only for the purpose for which it was collected.17 In addition to privacy, transparency is also needed to ensure that patients are fully informed and understand information that will be generated by AI to guide treatment decisions or support diagnoses.
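As a concrete illustration of the “collect only what is needed” practice just mentioned, the sketch below shows one simple way a practice might strip fields that are not required for a stated analytic purpose before records leave the office. It is a sketch, not a definitive implementation: the field names, allow-list, and values are hypothetical, and it is not a substitute for formal deidentification under applicable privacy standards.

    # Hypothetical allow-list: only the fields needed for the stated analytic purpose.
    ALLOWED_FIELDS = {"age_band", "tooth_number", "diagnosis", "treatment"}

    def minimize(record: dict) -> dict:
        """Drop every field not on the allow-list before the record is shared."""
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    raw_record = {
        "name": "Jane Doe",             # direct identifier: not retained
        "date_of_birth": "1961-04-02",  # direct identifier: not retained
        "age_band": "60-69",
        "tooth_number": 19,
        "diagnosis": "untreated caries",
        "treatment": "two-surface composite restoration",
    }

    print(minimize(raw_record))
    # {'age_band': '60-69', 'tooth_number': 19, 'diagnosis': 'untreated caries',
    #  'treatment': 'two-surface composite restoration'}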
Given that much of the information generated for use in patient care stems from complex algorithms that can be difficult to understand, when these algorithms produce a diagnosis or treatment recommendation, the provider must be capable of explaining how the AI output was generated. Providers are obligated to explain the diagnosis clearly, so that both provider and patient understand how the recommendations were generated, what alternatives exist, and how decisions can be made with the patient’s best interest in focus. While this approach works well when the outcome is positive, what happens when the AI-generated recommendation leads to an adverse outcome? Who bears responsibility? At this point, AI in many ways generates more questions than answers; however, the ethical framework provided by the ADA Code offers insights that prioritize the good of patients and delineate the ethical responsibilities of providers.
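To suggest what such an explanation might rest on in practice, the sketch below shows how, for a simple linear model, a clinician-facing tool could surface which inputs most influenced a given recommendation. All names, values, and the prediction task are hypothetical; more complex deep learning models require dedicated explanation methods that this sketch does not attempt.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data, as in the earlier sketch:
    # (age, prior restorations, days since last visit) -> untreated decay found (1/0).
    feature_names = ["age", "prior_restorations", "days_since_last_visit"]
    X = np.array([[62, 4, 400], [35, 1, 180], [48, 3, 720], [29, 0, 150],
                  [71, 6, 900], [55, 2, 365], [40, 1, 200], [66, 5, 820]])
    y = np.array([1, 0, 1, 0, 1, 0, 0, 1])

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # A new patient the model is asked about (values are illustrative).
    new_patient = np.array([[58, 3, 500]])
    probability = model.predict_proba(new_patient)[0, 1]

    # For a linear model, each feature's contribution to the score is simply
    # coefficient * value, which a clinician can translate into plain language.
    contributions = model.coef_[0] * new_patient[0]
    for name, value in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
        print(f"{name}: {value:+.3f}")
    print(f"Predicted probability of untreated decay: {probability:.2f}")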

Drawing on ADA Principles
Contextualizing each of these issues within the existing framework of the ADA Code offers some guidance about how dentistry can ethically integrate AI into practice in a way that maintains the integrity of the dentist-patient relationship and the integrity of the profession as a whole. Each of the five principles (Autonomy, Nonmaleficence, Beneficence, Justice, and Veracity) needs to be considered when implementing AI in both administrative and treatment settings in the dental office.

Autonomy
The Principle of Autonomy addresses the need for patient involvement in treatment decisions, protection of patient privacy and confidentiality, and consideration of the patient’s goals and values within the bounds of acceptable treatment.2 AI raises some potential challenges to adherence to these principles, given the current complexity of the algorithms that generate treatment recommendations. To truly satisfy informed consent, dentists will need to explain to patients, in an understandable way, why a particular treatment is being recommended and what alternatives may exist to relying on the algorithmic determination. Additionally, patients will need to comprehend the information provided.

Privacy may also be an issue in AI applications in dentistry, as development of AI tools is largely dependent upon the accumulation of patient data. Even if a patient’s data is going to be aggregated, concerns about privacy and security still exist, and current “practices of notifying patients and obtaining consent for data use are not adequate, nor are strategies for de-identifying data effective in the context of large, complex data sets when machine learning algorithms can re-identify a record from as few as 3 data points.”7 Despite these risks, AI still offers patient benefits, including increasing the options available to patients and offering more targeted, evidence-based treatments.

Nonmaleficence & Beneficence
In general, the Principle of Nonmaleficence obligates the dentist to refrain from harming the patient, while Beneficence obligates the dentist to promote the patient’s welfare.2 More specifically, and with particular relevance to AI, these principles require dentists to “keep their knowledge and skill current.”2 If the dentist does not have a sufficient understanding of the AI tool being used, the dentist must either become educated about it or, in the interest of patient safety, make a referral to one who does have the necessary expertise.2 Harm may also occur if existing health disparities are further exacerbated by the use of AI developed from a homogeneous data set. Sher et al. acknowledge this risk of “the creation of suboptimal models that fail to generalize to other data sets as a result of the poor quality of the data fed into the initial algorithmic model. Such technical issues can lead to the introduction of bias into these models, which can have adverse consequences for the society at large.”25 Dentists, however, can avoid some of these ethical risks by becoming actively involved in the development of AI tools, offering insight and expertise to those building the technology. Dentists know and understand the patient-clinician dyad in a way that must factor into the development of AI for dentistry if harm to patients is to be avoided. Practitioner involvement may be particularly important in reducing the exacerbation of disparities in dental care.

Justice
The concerns about bias touch on another important principle of the ADA Code, Justice. “[T]his principle expresses the concept that the dental profession should actively seek allies throughout society on specific activities that will help improve access to care for all.”2 While written long before the advent of AI, this principle has direct applicability to one of the greatest risks, and one of the greatest benefits, of the continued development and adoption of AI in healthcare. The intersection of technological innovation, an emphasis on economic impact, and the reality of health disparities raises important questions about implementing AI in dentistry. Collaborations between community members, practitioners, and computer scientists may generate more ethical approaches. At the least, collaborations between dentists and computer scientists will yield a deeper understanding of, and capacity to communicate, the underlying processes behind recommendations generated by augmented intelligence.

Veracity
Despite the enormous potential of AI, dentists have an obligation to be truthful and honest.2 A clear benefit, if the full potential of AI is realized, would be a reduction in the risk of overbilling and/or overtreatment. Dentists utilizing AI tools, however, must be careful not to overpromise or oversell the technology to peers or patients. Additionally, dentists need to be honest with themselves about who benefits most from the utilization of AI and ensure that the focus is kept first and foremost on those receiving (or in need of receiving) dental care.

Conclusion
The ethical issues of AI in dentistry are similar to those faced in all facets of healthcare. AI holds the potential to improve accuracy in diagnostics and treatment recommendations, but it requires a large amount of data to generate these potential solutions. Caution should be exercised to ensure that all patients’ data remains private and that any known health inequities are considered in a way that mitigates the potential for bias. The principles of ethics and professionalism of the field demand, as part of the social contract, that “dentists should possess not only knowledge, skill and technical competence but also those traits of character…honesty, compassion, kindness, integrity, fairness and charity are part of the ethical education of a dentist and practice of dentistry and help to define the true professional.”2 Just as the potential applications of AI within dentistry will always rely on the intelligence of dentists, the ethical implementation likewise requires collaborative thinking to generate socially responsible and patient-focused care.



Biographies:

Erin D. Williams, JD, Managing Director, Biomedical Innovation, The Health FFRDC, operated by the MITRE Corporation

Michael McCarthy, PhD, HEC-C, Associate Professor, Neiswanger Institute for Bioethics, Loyola University Chicago

Nanette Elster, JD, MPH, Associate Professor, Neiswanger Institute for Bioethics, Loyola University Chicago. Content Director and Editor, American College of Dentists

 



 References:

  1. Baker R. Before Bioethics: A History of American Medical Ethics from the Colonial Period to the Bioethics Revolution; 2013.
  2. American Dental Association. Principles of Ethics and Code of Professional Conduct. November 2020. Accessed June 24, 2021. https://www.ada.org/~/media/ADA/Member%20Center/Ethics/ADA_Code_Of_Ethics_November_2020.pdf?la=en
  3. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care. 2019;8(7):2328-2331. doi:10.4103/jfmpc.jfmpc_440_19
  4. Esteva A, Robicquet A, Ramsundar B, et al. A guide to deep learning in healthcare. Nat Med. 2019;25(1):24-29. doi:10.1038/s41591-018-0316-z
  5. Mishra A. Machine Learning for iOS Developers. John Wiley and Sons, Inc; 2020. Accessed June 25, 2021. https://onlinelibrary.wiley.com/doi/book/10.1002/9781119602927
  6. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril: A Special Publication from the National Academy of Medicine. The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. Accessed June 24, 2021. https://petrieflom.law.harvard.edu/resources/article/artificial-intelligence-in-health-care-the-hope
  7. Crigger E, Khoury C. Making Policy on Augmented Intelligence in Health Care. AMA Journal of Ethics. 2019;21(2):188-191. doi:10.1001/amajethics.2019.188
  8. Senbekov M, Saliev T, Bukeyeva Z, et al. The recent progress and applications of digital technologies in healthcare: A review. International Journal of Telemedicine and Applications. 2020;2020. doi:10.1155/2020/8830200
  9. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books; 2019. Accessed June 24, 2021. https://www.amazon.com/Deep-Medicine-Artificial-Intelligence-Healthcare/dp/1541644638
  10. Morley J, Machado CCV, Burr C, et al. The ethics of AI in health care: A mapping review. Social Science and Medicine. 2020;260. doi:10.1016/j.socscimed.2020.113172
  11. Machine Learning in Healthcare: Examples, Tips & Resources. UIC Online Health Informatics. Published November 13, 2020. Accessed June 24, 2021. https://healthinformatics.uic.edu/blog/machine-learning-in-healthcare/
  12. Benjamin R. Race after Technology: Abolitionist Tools for the New Jim Code. Polity; 2019.
  13. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare. Published online 2020:295-336. doi:10.1016/B978-0-12-818438-7.00012-5
  14. Key Facts on Health and Health Care by Race and Ethnicity: Coverage, Access to, and Use of Care. KFF. Published November 12, 2019. Accessed June 24, 2021. https://www.kff.org/report-section/key-facts-on-health-and-health-care-by-race-and-ethnicity-coverage-access-to-and-use-of-care/
  15. Centers for Disease Control and Prevention. Oral Health Surveillance Report, 2019. Published December 15, 2020. Accessed June 24, 2021. https://www.cdc.gov/oralhealth/publications/OHSR-2019-index.html
  16. Nordling L. A fairer way forward for AI in health care. Nature. 2019;573(7775):S103-S105. doi:10.1038/d41586-019-02872-2
  17. MITRE Corporation. Artificial Intelligence in Health: A Pulse of the Future. 2020:1-12.
  18. DHS. Developing Predictive Risk Models to Support Child Maltreatment Hotline Screening Decisions. Allegheny County Analytics: Reports, Visualizations and Datasets. Published May 1, 2019. Accessed June 24, 2021. https://www.alleghenycountyanalytics.us/index.php/2019/05/01/developing-predictive-risk-models-support-child-maltreatment-hotline-screening-decisions/
  19. Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring Fairness in Machine Learning to Advance Health Equity. Annals of internal medicine. 2018;169(12):866-872. doi:10.7326/M18-1990
  20. Panch T, Mattie H, Atun R. Artificial intelligence and algorithmic bias: implications for health systems. Journal of Global Health. 2019;9(2):010318. doi:10.7189/jogh.09.020318
  21. Casner SM, Hutchins EL. What Do We Tell the Drivers? Toward Minimum Driver Training Standards for Partially Automated Cars. 2019. Accessed June 24, 2021. https://journals.sagepub.com/doi/10.1177/1555343419830901
  22. Tom E, Keane PA, Blazes M, et al. Protecting Data Privacy in the Age of AI-Enabled Ophthalmology. Trans Vis Sci Tech. 2020;9(2):36-36. doi:10.1167/tvst.9.2.36
  23. McCradden MD, Joshi S, Mazwi M, Anderson JA. Ethical limitations of algorithmic fairness solutions in health care machine learning. The Lancet Digital Health. 2020;2(5):e221-e223. doi:10.1016/S2589-7500(20)30065-0
  24. Corbett-Davies S, Goel S. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. arXiv:1808.00023 [cs]. Published online August 14, 2018. Accessed June 24, 2021. http://arxiv.org/abs/1808.00023
  25. Sher T, Sharp R, Wright RS. Algorithms and Bioethics. Mayo Clinic Proceedings. 2020;95(5):843. doi:10.1016/j.mayocp.2020.04.020