A vast number of systems across the world currently use algorithmic decision making (ADM) to augment human decision making or even automate decisions that were previously made by humans. When designed well, these systems promise more accurate and more efficient decisions while saving substantial resources and freeing up human time. When ADM systems are not designed well, however, they can lead to unfair algorithms that discriminate against societal groups under the guise of objectivity and legitimacy. Whether systems are ultimately fair typically depends on the decisions made during the systems’ design. It is therefore important to properly understand the decisions that go into the design of ADM systems and how these decisions affect the fairness of the resulting system. To study this, we introduce the method of multiverse analysis for algorithmic fairness.
During the creation and design of an ADM system, one needs to make a multitude of different decisions. Many of these decisions are made implicitly, without knowing exactly how they will impact the final system and whether they will lead to fair outcomes. In our proposed adaptation of multiverse analysis for ADM, we turn these implicit decisions made during the design of an ADM system into explicit ones. Using the resulting decision space, we create a grid of all possible “universes” of decision combinations. For each of these universes, a fairness metric is computed. The resulting dataset of decision combinations and their fairness values reveals which decisions impact fairness, and how.
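The core mechanics of this procedure can be sketched in a few lines. The decision names, their options, and the fairness metric below are purely illustrative placeholders, not the actual decision space or metric used in the study; a real analysis would train and evaluate the ADM system specified by each universe.

```python
import itertools

# Hypothetical decision space: each explicit design decision
# mapped to its possible options (illustrative names only).
decision_space = {
    "preprocessing": ["drop_missing", "impute_mean"],
    "model": ["logistic_regression", "decision_tree"],
    "threshold": [0.4, 0.5, 0.6],
}

def fairness_metric(universe):
    """Toy deterministic stand-in so the sketch runs end to end.
    A real analysis would fit the system defined by `universe` and
    compute a fairness metric such as a demographic parity difference."""
    penalty = 0.1 if universe["model"] == "decision_tree" else 0.0
    return round(abs(universe["threshold"] - 0.5) + penalty, 2)

# Grid of all possible "universes" of decision combinations.
universes = [
    dict(zip(decision_space, combo))
    for combo in itertools.product(*decision_space.values())
]

# One fairness value per universe yields a dataset linking
# design decisions to fairness outcomes.
results = [{**u, "fairness": fairness_metric(u)} for u in universes]
print(len(results))  # 2 * 2 * 3 = 12 universes
```

The full Cartesian product grows multiplicatively with each added decision, which is why making decisions explicit, and pruning implausible options, matters in practice.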
We demonstrate how multiverse analyses can be used to better understand the variability and robustness of algorithmic fairness using an exemplary case study of predicting public health coverage. We show preliminary results illustrating how small decisions during the design of an ADM system can have surprising effects on its fairness, and how multiverse analysis can detect them.