In this must-read piece, Solaiman & Malik (2024) dissect the evolving EU legal landscape for algorithmic care, where the Artificial Intelligence Act meets real-world clinical complexity. Doctors are no longer the only ones with decision-making power—AI systems are being trained, deployed, and (occasionally) caught hallucinating diagnoses with remarkable confidence.

But when the AI makes a mistake… who takes the fall? The doctor? The developer? The data itself?

This paper explores how regulatory frameworks are shifting the traditional doctor–patient model, nudging us into a new triangle: doctor–patient–algorithm. Spoiler alert: Only one of them has a CE marking.

The EU’s AI Act is more than just red tape—it’s an attempt to ensure transparency, accountability, and safety in algorithmic care. And if you’re in healthcare or med ed, this isn’t just legalese—it’s your future.

Favourite quote?
“AI’s growing sophistication presents unique challenges that threaten to erode the autonomy gained by disempowering patients and doctors alike and shifting controls to external market forces. Although AI’s potential to enhance diagnostic accuracy and support informed decision-making seems promising, it risks over-reliance by doctors, diminished personal interaction with patients, and raises concerns about data privacy, opacity, and accountability.”

Doctors may need to add “algorithm whisperer” to their CVs.

Read the full article here: https://academic.oup.com/medlaw/article/33/1/fwae033/7754853?login=false