Ethical Issues May Surround Clinician Use of AI in Cancer Care

Findings from a survey of practicing oncologists in the United States suggest that clinicians may have ethical concerns around the use of artificial intelligence in cancer care.


Ethical issues surrounding the implementation of artificial intelligence (AI) in cancer care may concern clinicians in the US and may, in turn, disrupt optimal use of the technology, according to findings from a cross-sectional survey published in JAMA Network Open.

Survey respondents (n = 204) overwhelmingly reported that they would benefit from dedicated AI training (93.1%), although 75% did not know of appropriate resources for such training. Only 13.8% and 7.8% reported that AI prognostic and clinical decision models, respectively, could be used clinically when only researchers could explain them. Additionally, 81.3% and 84.8%, respectively, reported that these models needed to be explainable by oncologists, and 13.8% and 23.0%, respectively, stated that they needed to be explainable by patients. When clinicians were presented with a scenario in which an FDA-approved AI decision model selected a different regimen than the one the oncologist initially planned to recommend, the most common response was to present both options and let the patient decide (36.8%).

Moreover, most respondents indicated that patients should consent to the use of AI tools in treatment decisions (81.4%), and 56.4% said that consent was needed for diagnostic decisions. Most respondents (90.7%) said that AI developers should be responsible for legal problems caused by the technology, and a majority (76.5%) responded that oncologists should protect patients from biased AI. However, just 27.9% of respondents indicated that they were confident in their ability to identify how representative the data used in an AI model were; among those who thought it was the clinician's responsibility to protect patients from biased AI tools, 66.0% lacked confidence in their ability to do so.

“US oncologists reported that AI needs to be explainable by oncologists but not necessarily patients, and that patients should consent to AI use for cancer treatment decisions,” study authors wrote. “Less than half of oncologists viewed medico-legal problems from AI use as physicians’ responsibility, and although most reported feeling responsible for protecting patients from biased AI, few reported feeling confident in their ability to do so.”

Investigators conducted a population-based survey from November 15, 2022, to July 31, 2023, of practicing oncologists in the United States. Study authors created a survey instrument with 24 questions in domains such as AI familiarity, predictions, explainability, bias, deference, and responsibilities. Clinicians were mailed paper surveys, followed by reminder letters that included an electronic survey option and phone calls for non-responders.

The objective of the survey was to “evaluate oncologists’ views on the ethical domains of the use of AI in clinical care, including familiarity, predictions, explainability, bias, deference, and responsibilities.” The primary outcome was respondent views on the need for patients to provide informed consent for the use of an AI model during treatment decision-making.

Additional results indicated that the survey response rate was 52.7% (n = 204/387). Respondents hailed from 37 states; most were male (63.7%), non-Hispanic white (62.7%), and medical oncologists (61.8%), had no prior AI training (53.4%), and were familiar with at least 2 AI models (69.1%). Among 202 clinicians who indicated their practice setting, 60 practiced in an academic setting and 142 practiced in another setting.

Additional data from the survey showed that, when posed with the conflicting recommendation scenario, respondents from academic practices were more likely to choose the AI’s recommendation over their initial recommendation (OR, 2.99; 95% CI, 1.39-6.47; P = .004) or defer the decision to the patient (OR, 2.56; 95% CI, 1.19-5.51; P = .02). Moreover, findings from a multivariable logistic regression model revealed that clinicians outside of the academic setting (OR, 1.72; 95% CI, 0.77-3.82; P = .19) and those without prior AI training (OR, 2.62; 95% CI, 1.15-1.15; P = .02) were more likely than their counterparts to prefer patient consent when using an AI treatment decision model. Compared with those in other settings, clinicians in academic practices were more likely to report that they could explain AI models (OR, 2.08; 95% CI, 1.06-4.12) and to predict that AI would improve adverse effect management (OR, 1.93; 95% CI, 1.01-3.73) and end-of-life decision-making (OR, 2.06; 95% CI, 1.11-3.84).

“Ethical AI in cancer care requires accounting for stakeholder positions,” study authors wrote in conclusion. “This cross-sectional survey study highlights potential issues related to accountability and deference to AI as well as associations with practice setting. Our findings suggest that the implementation of AI in the field of oncology must include rigorous assessments of its effect on care decisions and decisional responsibility when problems related to AI use arise.”

Reference

Hantel A, Walsh TP, Marron JM, et al. Perspectives of oncologists on the ethical implications of using artificial intelligence for cancer care. JAMA Netw Open. 2024;7(3):e244077. doi:10.1001/jamanetworkopen.2024.4077
