“In this issue of The Lancet Digital Health, Xiaoxuan Liu and colleagues give their perspective on global auditing of medical artificial intelligence (AI). They call for the focus to shift from demonstrating the strengths of AI in health care to proactively discovering its weaknesses.
Machines make unpredictable mistakes in medicine, which differ markedly from those made by humans. Liu and colleagues state that errors made by AI tools can have far-reaching consequences because of the complex and opaque relationships between the analysis and the clinical output. Given that there is little human control over how an AI generates its results and that clinical knowledge is not a prerequisite for AI development, there is a risk of an AI learning spurious correlations that seem valid during training but prove unreliable when applied in real-world settings.”
Read more in Holding Artificial Intelligence to Account via The Lancet.