An Incremental Explanation of Inference in Bayesian Networks for Increasing Model Trustworthiness and Supporting Clinical Decision Making

Publication date: Available online 31 January 2020
Source: Artificial Intelligence in Medicine
Author(s): Evangelia Kyrimi, Somayyeh Mossadegh, Nigel Tai, William Marsh

Abstract
Various AI models are increasingly being considered as part of clinical decision-support tools. However, the trustworthiness of such models is rarely considered. Clinicians are more likely to use a model if they can understand and trust its predictions. Key to this is whether its underlying reasoning can be explained. A Bayesian network (BN) model has the advantage that it is not a black box and its reasoning can be explained. In this paper, we propose an incremental explanation of inference that can be applied to 'hybrid' BNs, i.e. those that contain both discrete and continuous nodes. The key questions that we answer are: (1) which important evidence supports or contradicts the prediction, and (2) through which intermediate variables does the information flow. The explanation is illustrated using a real clinical case study. A small evaluation study is also conducted.
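To make the two key questions concrete, the sketch below shows one simple way they could be probed in code; it is not the authors' incremental explanation method. It uses a toy discrete BN built with the pgmpy library (the paper targets hybrid BNs with continuous nodes as well), with invented node names and probabilities. Question (1) is approximated by comparing the posterior of the target with and without each finding, and question (2) by listing which nodes an observed finding can still reach along an active trail.

# Illustrative sketch only: a toy discrete BN and two crude proxies for the
# paper's questions. All names and numbers are invented for this example.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy structure: Bleeding -> Shock <- Dehydration, Shock -> LowBP
model = BayesianNetwork([
    ("Bleeding", "Shock"),
    ("Dehydration", "Shock"),
    ("Shock", "LowBP"),
])
model.add_cpds(
    TabularCPD("Bleeding", 2, [[0.9], [0.1]]),
    TabularCPD("Dehydration", 2, [[0.8], [0.2]]),
    TabularCPD("Shock", 2,
               [[0.99, 0.6, 0.4, 0.05],   # P(Shock=0 | Bleeding, Dehydration)
                [0.01, 0.4, 0.6, 0.95]],  # P(Shock=1 | Bleeding, Dehydration)
               evidence=["Bleeding", "Dehydration"], evidence_card=[2, 2]),
    TabularCPD("LowBP", 2, [[0.9, 0.2], [0.1, 0.8]],
               evidence=["Shock"], evidence_card=[2]),
)
model.check_model()

infer = VariableElimination(model)
evidence = {"Bleeding": 1, "LowBP": 1}   # observed findings
target = "Shock"

full = infer.query([target], evidence=evidence, show_progress=False).values[1]
print(f"P({target}=1 | all evidence) = {full:.3f}")

# Question (1), loosely: remove one finding at a time and see how the posterior
# moves. Findings whose removal lowers the probability support the prediction;
# findings whose removal raises it contradict it.
for name in evidence:
    rest = {k: v for k, v in evidence.items() if k != name}
    partial = infer.query([target], evidence=rest, show_progress=False).values[1]
    print(f"without {name}: P({target}=1) = {partial:.3f}  (impact {full - partial:+.3f})")

# Question (2), loosely: which nodes can information from a finding reach
# (an active trail), given the remaining evidence?
print(model.active_trail_nodes("Bleeding", observed=["LowBP"]))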