Prediction of Lung Diseases from X-Ray Images Using Domain-Adversarial Neural Networks

TATLICI, ISMAIL KEREM
2023/2024

Abstract

Artificial intelligence (AI) algorithms are increasingly being employed in the medical field, playing a pivotal role in decision-making processes related to diagnosis, treatment planning, and healthcare resource management. While deep learning (DL) models were initially regarded as neutral and objective tools, recent discussions have highlighted their susceptibility to biases that can inadvertently reinforce existing disparities in medical diagnosis and treatment. This issue is particularly relevant in the field of radiology, where AI-driven models are widely used to interpret medical imaging data. In response to concerns about algorithmic bias in medical AI applications, a growing body of research is dedicated to investigating and mitigating these challenges. In this context, we explore potential biases in the prediction of lung diseases from chest X-ray images. We begin with a theoretical overview of deep learning models and introduce key concepts related to algorithmic bias and fairness. We then present our study, which utilizes publicly available datasets of chest X-ray images containing demographic metadata such as age, gender, and clinical history. Our approach is divided into two phases: first, we compare a model trained solely on image data with a model that incorporates both image features and demographic information to examine whether the inclusion of sensitive variables influences prediction outcomes, potentially leading to biased decision-making. We then analyze the methods used to detect these biases and investigate their possible sources. Finally, we propose a domain-adversarial neural network (DANN) model incorporating a gradient reversal layer to address and mitigate bias in lung disease prediction. We discuss the effectiveness of this approach in making AI-based diagnostic tools more robust and equitable, thereby ensuring fair and reliable healthcare decisions across diverse patient populations. 
By developing strategies for bias correction, we aim to contribute to the advancement of AI-driven medical technologies that promote high-quality, unbiased healthcare for all individuals.
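The core bias-mitigation idea above is the gradient reversal layer (GRL) of a domain-adversarial neural network: during the forward pass the layer is the identity, but during backpropagation it negates (and scales) the gradient flowing from the domain classifier, so the shared feature extractor learns representations that predict the disease while being uninformative about the sensitive attribute. A minimal NumPy sketch of just this mechanism follows; the class name, the scaling factor `lam`, and the toy tensors are illustrative assumptions, not the thesis implementation, which would use an automatic-differentiation framework.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lam in the
    backward pass, so the shared feature extractor is pushed to *confuse*
    the domain head (e.g. a sex or age-group classifier) while still
    serving the disease classifier. (Illustrative sketch, not the
    thesis code.)"""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Features reach the domain head unchanged.
        return x

    def backward(self, grad_output):
        # The reversed, scaled gradient flows back to the extractor.
        return -self.lam * grad_output

# Toy demonstration on a batch of shared features.
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))   # 4 samples, 3-dim features
grl = GradientReversal(lam=0.5)

# Forward through the GRL is a no-op.
domain_input = grl.forward(features)
assert np.allclose(domain_input, features)

# Suppose the domain head sends this gradient back toward the extractor:
grad_from_domain_head = np.ones((4, 3))
grad_to_extractor = grl.backward(grad_from_domain_head)
# The extractor receives the negated, scaled gradient, steering it
# toward domain-invariant (and hence less biased) features.
assert np.allclose(grad_to_extractor, -0.5 * np.ones((4, 3)))
```

In a full DANN, the disease-classification loss backpropagates normally through the extractor, while the domain-classification loss passes through the GRL, so the two objectives pull the shared features in opposite directions with respect to the sensitive attribute.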
Users may download and share the full-text documents available in UNITESI UNIPV in accordance with the Creative Commons CC BY-NC-ND license.
For further information, or to check the availability of the file, write to: unitesi@unipv.it.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14239/33397