The explosion of interest in exploiting machine learning techniques in healthcare has brought the issue of inferring causation from observational data to centre stage. In our work in supporting the health decisions of the individual person/patient-as-person at the point of care, we cannot avoid making decisions about which options are to be included or excluded in a decision support tool. Should the researcher’s routine injunction to use their findings ‘with caution’, because of methodological limitations, lead to inclusion or exclusion? The task is one of deciding, first on causal plausibility, and then on causality. Like all decisions, these are both sensitive to error preferences (trade-offs). We engage selectively with the Artificial Intelligence (AI) literature on the causality challenge and on the closely associated issue of the ‘explainability’ now demanded of ‘black box’ AI. Our commitment to embracing ‘lifestyle’ as well as ‘medical’ options for the individual person leads us to highlight the key issue as that of who is to make the preference-sensitive decisions on causal plausibility and causality.