Making decisions: Bias in artificial intelligence and data‑driven diagnostic tools
Australian Journal of General Practice
BACKGROUND: Although numerous studies have shown the potential of artificial intelligence (AI) systems to drastically improve clinical practice, there are concerns that these systems could replicate existing biases. OBJECTIVE: This paper provides a brief overview of 'algorithmic bias', the tendency of some AI systems to perform poorly for disadvantaged or marginalised groups. DISCUSSION: AI relies on data generated, collected, recorded and labelled by humans. If AI systems remain unchecked, any biases that exist in the real world and are embedded in data will be incorporated into the AI algorithms. Algorithmic bias can be considered an extension, if not a new manifestation, of existing social biases, understood as negative attitudes towards, or the discriminatory treatment of, some groups. In medicine, algorithmic bias can compromise patient safety and risks perpetuating disparities in care and outcomes. Clinicians should therefore consider the risk of bias when deploying AI-enabled tools in their practice.