Sex and Gender Bias in Medical Artificial Intelligence
Artificial intelligence (AI) has become a major part of nearly every aspect of life, including medicine. AI as a diagnostic tool is a revolutionary step in healthcare because it enables advances in precision medicine, a healthcare approach that emphasizes personalized treatment. In contrast to the currently dominant one-size-fits-all approach to healthcare, precision medicine accounts for genetic, environmental, and lifestyle differences when diagnosing conditions and designing treatment plans. However, because technology is commonly viewed as neutral and objective, people often fail to recognize its inherent biases. These biases can be harmful, especially in healthcare, where they not only amplify existing inequalities but also produce tangible disparities in patient outcomes. In a 2020 review, Davide Cirillo and his team of researchers from the Barcelona Supercomputing Center discuss the various ways that sex and gender biases can manifest in healthcare technology and AI, as well as their implications for health outcomes.
To use an AI technology, researchers must first train its algorithm on a dataset so that it can learn patterns and make predictions from them. If this data contains biases, the AI will incorporate them into its predictions. Datasets typically reflect society: if a bias exists in the overall population, it will likely be present in the data. Additionally, the devices and methods used to collect data may magnify societal biases.
According to Cirillo and his team, studies have historically suffered from overrepresentation of men: male mouse models are favored in preclinical research, and the participation of women of childbearing age is restricted due to ethical considerations. This overrepresentation not only results in less effective treatments but also provides faulty training data for healthcare AI, contributing to worse health outcomes for women. For instance, according to the researchers, cardiologists are more likely to identify symptoms of heart attacks in men than in women and often under-diagnose women with coronary artery disease. If an algorithm is trained on data from such sources, the diagnoses it produces could reflect these biases and negatively impact women.
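To make this failure mode concrete, here is a minimal sketch that simulates it with synthetic data: a simple diagnostic classifier is trained on a cohort that is roughly 80% male, and because the symptom signal is (hypothetically) shifted for women, the model’s sensitivity drops for them. The cohort sizes, effect sizes, and model choice are illustrative assumptions, not figures from the Cirillo review.

```python
# Illustrative simulation with synthetic data (not from the Cirillo review):
# a classifier trained on male-dominated data loses sensitivity for women
# whose symptom profile differs from men's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_patients(n, is_female, disease_rate=0.5):
    """Synthetic cohort: two symptom features whose disease-related
    shift differs by sex (hypothetical effect sizes)."""
    y = rng.binomial(1, disease_rate, n)
    shift = np.array([0.5, -1.0]) if is_female else np.array([1.0, 0.0])
    X = rng.normal(0.0, 1.0, (n, 2)) + y[:, None] * shift
    return X, y

# Training set skewed toward men, echoing historical overrepresentation.
X_m, y_m = make_patients(8000, is_female=False)
X_f, y_f = make_patients(2000, is_female=True)
model = LogisticRegression().fit(
    np.vstack([X_m, X_f]), np.concatenate([y_m, y_f]))

# Evaluate sensitivity (recall) separately for each sex on fresh data.
for is_female, label in [(False, "men"), (True, "women")]:
    X_test, y_test = make_patients(5000, is_female)
    print(f"Sensitivity for {label}: "
          f"{recall_score(y_test, model.predict(X_test)):.2f}")
```

Because the pooled model mostly learns the male symptom pattern, the printed sensitivity for women comes out lower even though the disease rate is identical in both groups; only the representation and the symptom profile differ.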
Sex and gender biases can also manifest in healthcare technology that utilizes digital biomarkers, such as pulse, blood pressure, or number of steps, to continuously monitor health and diagnose conditions. This type of data can be especially useful for conditions where small changes in symptoms can indicate significant progression, such as Alzheimer’s disease, and can be used to train healthcare AI. According to Cirillo’s review, studies that analyze digital biomarkers often have sample sizes that are too small (ranging from tens to hundreds of people) and lack balanced sex and gender representation. For instance, in a 2018 study that examined digital biomarkers for Parkinson’s disease, only 18.6% of the participants were women. If an AI algorithm is trained on such skewed data, it will detect symptoms more common in men and overlook symptoms that commonly appear in women, making it less likely to accurately diagnose or treat women. The devices that collect biomarkers may also introduce errors into their measurements. For example, pulse oximetry devices, which measure the amount of oxygen in the blood, have been found to misestimate blood oxygen concentration depending on sex and skin tone. As a result, the data collected from these devices represents white men more accurately than other groups.
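A practical response to such findings is to audit both group representation and per-group error rates before trusting a model. The sketch below shows one hypothetical way to run that check; the function name and toy arrays are assumptions for illustration, not a procedure from the review. It reports each group’s share of the data and its false negative rate, that is, the rate of missed diagnoses that the skewed Parkinson’s and pulse oximetry examples make worrying.

```python
# Hypothetical audit sketch: check sex representation and missed-diagnosis
# rates per group before deploying a diagnostic model.
import numpy as np
from sklearn.metrics import confusion_matrix

def audit_by_group(y_true, y_pred, groups):
    """Print each group's share of the data and its false negative
    rate (the fraction of true cases the model missed)."""
    groups = np.asarray(groups)
    for g in np.unique(groups):
        mask = groups == g
        tn, fp, fn, tp = confusion_matrix(
            y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
        fnr = fn / (fn + tp) if (fn + tp) else float("nan")
        print(f"{g}: {mask.mean():.0%} of data, "
              f"false negative rate = {fnr:.2f}")

# Toy labels and predictions; real use would take held-out clinical data.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])
groups = np.array(["M", "M", "M", "M", "F", "F", "F", "F"])
audit_by_group(y_true, y_pred, groups)
```

A large gap between groups’ false negative rates, especially alongside a lopsided data share, is exactly the warning sign the review’s examples suggest looking for.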
According to the researchers, if AI is used as a tool in healthcare without expert oversight and careful design, it can propagate and amplify various biases in medicine, including sex- and gender-based prejudice. Thus, they highlight the importance of considering how societal structures can shape data instead of accepting it at face value.