A mixed-methods investigation of clinician attitudes towards explainable AI in medical decision making

ORLOWSKA, MARTA ANNA
2023/2024

Abstract

This study explores human-XAI (Explainable Artificial Intelligence) collaboration in the medical setting, focusing on clinicians’ perceptions and preferences. Ten clinicians from the I.R.C.C.S. Policlinico San Matteo Foundation in Pavia, Italy, participated in the survey, and two of them also took part in a think-aloud session. The aim of the study was to assess and compare clinicians’ perceptions of three tools: an explainable-by-design Bayesian network and two local XAI methods, Shapley values (Shap) and the Araucana tree. The explanations were designed as an extension of the ALFABETO project, which classifies COVID-19 patients as candidates for either discharge or hospitalization. Perceptions were assessed along three usability dimensions: self-reported helpfulness, comprehensibility, and cognitive load. Sentiment analysis was also used to gauge emotional tone. Results show that clinicians generally trusted the XAI explanations, with a high compliance rate of 86%, even though only 50% of the presented cases were classified correctly, indicating potential over-reliance. Compliance correlated with clinical experience and survey completion time. Shap was perceived as the most comprehensible and helpful of the three tools, and as the one requiring the least cognitive effort, owing to its additive nature. Araucana required a higher cognitive load and received slightly lower scores, mirroring its greater complexity. The Bayesian network was rated neither comprehensible nor helpful, and it demanded too much cognitive effort. Sentiment analysis mirrored the survey results, but more data are needed for conclusive findings. Tool preferences differed significantly between the ER (Emergency Room) and ID (Infectious Diseases) departments: ID clinicians preferred Shap, whereas ER clinicians favored both Shap and Araucana. The study highlights the value of running theoretical and empirical work in tandem by fitting the empirical results into a four-dimensional explainability framework. Overall, with cognitive load and usability fine-tuned to specific user needs, Shap and Araucana emerge as strong candidates for effective human-XAI collaboration in healthcare.

Keywords: Explainable AI (XAI), Human-XAI collaboration, Medical decision-making, Usability, Shapley values, Araucana tree, Bayesian network.
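For readers unfamiliar with the method, the “additive nature” credited to Shap above refers to the local-accuracy property of Shapley-based explanations (a standard formulation from the SHAP literature, not a result of this thesis): the model’s prediction for a given patient decomposes exactly into a baseline plus one signed contribution per feature,

f(x) = \phi_0 + \sum_{i=1}^{M} \phi_i(x),

where \phi_0 is the expected model output over the data and \phi_i(x) is the Shapley value of feature i for instance x. This exact decomposition is what lets a clinician read off, at a glance, how much each feature pushed the prediction toward discharge or hospitalization.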
Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14239/26594