FPH: Fair Predictions in Health Care

Summary of the context and overall objectives of the project

1) Problem/Issue Being Addressed 

Diagnostic disparities can arise when predictive algorithms are used in medical contexts. For instance, a single decision threshold applied uniformly to all patients may misdiagnose a cancer at different rates in men and women if the underlying risk-score distributions differ between the groups. Similarly, when training data under-represent ethnic minorities, diagnoses can be skewed in favor of the dominant ethnic group.
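A hedged illustration of the first point (the numbers and distributions here are hypothetical, chosen only to make the mechanism visible): when the model scores one group's true positives systematically lower, a single shared threshold produces a higher missed-diagnosis rate for that group.

```python
# Hypothetical sketch: one shared decision threshold applied to two groups
# whose risk scores are distributed differently yields unequal miss rates.
import random

random.seed(0)

def miss_rate(mean_score, threshold, n=10_000):
    """Fraction of truly positive cases whose score falls below the threshold."""
    scores = [random.gauss(mean_score, 0.1) for _ in range(n)]
    return sum(s < threshold for s in scores) / n

# Suppose the model scores group B's true positives lower on average.
rate_a = miss_rate(0.65, threshold=0.5)  # group A
rate_b = miss_rate(0.55, threshold=0.5)  # group B: more missed diagnoses
```

The threshold itself is "consistent", yet the false-negative burden falls unequally, which is exactly the kind of disparity the project examines.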

2) Importance to Society

With the advent of AI in healthcare, ethical dilemmas have emerged, especially concerning potential biases in diagnostic tools. Ensuring AI-driven tools are unbiased is crucial as these decisions directly impact individuals’ well-being. It’s not just about using unbiased data but truly understanding what “fairness” means in this context.

3) Overall Objectives

The research project focuses on reviewing philosophical theories related to justice, fairness, and discrimination, understanding the intersections between probability theory and ethical considerations, exploring the epistemology of causality, and assessing real-world case studies to uncover biases in AI diagnostics in healthcare.

Project’s Conclusions

Drawing on the “fair equality of chances” principle that I developed with colleagues (forthcoming in Economics and Philosophy), the research presents a fresh perspective on its relevance to healthcare. We identify intricate algorithmic biases that are not only discriminatory in themselves but can also mirror broader societal disparities. The findings underscore the need for a layered understanding of fairness in healthcare algorithms, one that recognizes the depth of bias and the ethical repercussions of preserving status-quo inequalities.

Work Performed and Main Results Achieved

The FPH project investigated what fairness requires of prediction-based decisions in healthcare — a domain where algorithmic disparities carry immediate consequences for patients. Hosted at the Department of Mathematics of Politecnico di Milano and funded by a Marie Skłodowska-Curie Individual Fellowship (Horizon 2020), the project ran from September 2021 to August 2023 and produced a sustained body of work that continues to develop.

Publications

The research spans moral philosophy, decision theory, and the formal analysis of fairness criteria. Publications resulting from the project include:

Fairness criteria and their moral foundations

Causality, discrimination and counterfactual fairness

Working papers


Dissemination and teaching

Research from the project was presented at ACM FAccT (2021, 2022), the European Workshop on Algorithmic Fairness (EWAF), and through invited talks at Johannes Gutenberg University Mainz, Eindhoven University of Technology, and the IFIP Summer School on Privacy and Identity Management. At Politecnico di Milano, the project contributed to course modules on algorithmic discrimination and to the development of open-source teaching materials on inclusive machine learning, distributed through the Responsible Innovators of Tomorrow programme.

Broader engagement

The project fostered collaboration between philosophers and computer scientists — an approach that proved productive in both directions. Findings were communicated beyond academia through a partnership with AlgorithmWatch, podcast appearances (including Ethical Machines and The ReadME Project), and teaching materials designed for high-school educators.

Progress Beyond State of Art and Potential Impacts

The project advanced the philosophical foundations of algorithmic fairness in several directions. It showed that standard group fairness criteria — statistical parity, calibration, equalised odds — cannot be straightforwardly derived from established theories of distributive justice. This result, established formally in a mathematical proof, reframes the debate: rather than asking which fairness metric is “correct,” the question becomes what moral principles justify choosing one criterion over another in a given decision context.
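The three criteria named above have standard statistical definitions, which the following sketch computes from binary predictions and outcomes (the data and group labels are hypothetical, purely for illustration; this is not the project's formal apparatus):

```python
# Minimal sketch of the three group fairness criteria:
# statistical parity compares P(pred=1) across groups,
# equalised odds compares TPR and FPR across groups,
# calibration (sufficiency) compares PPV across groups.

def rates(y_true, y_pred):
    """Return (positive-prediction rate, TPR, FPR, PPV) for one group."""
    pos_pred = sum(y_pred) / len(y_pred)                  # P(pred=1)
    tp = sum(p for t, p in zip(y_true, y_pred) if t == 1)
    fp = sum(p for t, p in zip(y_true, y_pred) if t == 0)
    tpr = tp / max(sum(y_true), 1)                        # P(pred=1 | y=1)
    fpr = fp / max(len(y_true) - sum(y_true), 1)          # P(pred=1 | y=0)
    ppv = tp / max(sum(y_pred), 1)                        # P(y=1 | pred=1)
    return pos_pred, tpr, fpr, ppv

# Hypothetical outcomes and predictions for two groups
group_a = ([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0])
parity_a, tpr_a, fpr_a, ppv_a = rates(*group_a)
```

Comparing these quantities across groups makes the impossibility-style tensions concrete: one generally cannot equalize all of them at once when base rates differ, which is why the choice among criteria needs a moral justification rather than a purely technical one.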

The principle of fair equality of chances, developed across several of the project’s publications, offers one such justification. Applied to healthcare, it provides a framework for evaluating when prediction-based triage, screening, or resource allocation treats patients fairly — not merely accurately. The subsequent work on counterfactual fairness and causal equal protection extended this line of inquiry, connecting the philosophical analysis to the technical machinery of causal inference.
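The core counterfactual-fairness test can be sketched in a toy structural causal model (the model, variables, and numbers below are illustrative assumptions, not the project's own formalism): a prediction is counterfactually fair for an individual if it would not change under an intervention that flips the protected attribute while holding the background noise fixed.

```python
# Hedged sketch of the counterfactual fairness idea in a toy structural
# causal model: group membership causally shifts income, and the
# prediction depends only on income.

def predict(income):
    return income > 50

def income(group, noise):
    # Structural equation: group membership shifts income by a fixed amount.
    base = 40 if group == "A" else 55
    return base + noise

def counterfactually_fair(group, noise):
    """True iff intervening on group, holding noise fixed, leaves the prediction unchanged."""
    actual = predict(income(group, noise))
    counterfactual = predict(income("B" if group == "A" else "A", noise))
    return actual == counterfactual

result = counterfactually_fair("A", noise=5)  # flipping group flips this prediction
```

Because group membership influences income, which the predictor uses, the prediction changes under the counterfactual for some individuals, so the predictor fails the test even though it never reads the group variable directly.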

At Politecnico di Milano, the project created a productive bridge between the mathematics department and the philosophy of science group, contributing to a research environment where formal methods and normative reasoning inform each other. Several of the collaborations initiated during the fellowship — with Nicolò Cangiotti, Marcello Di Bello, Francesco Nappo, and Clinton Castro — continue to produce joint work.