A taxonomy of explainable Bayesian networks

dc.contributor.authorDerks, Iena Petronella
dc.contributor.authorDe Waal, Alta
dc.date.accessioned2021-05-04T07:06:49Z
dc.date.available2021-05-04T07:06:49Z
dc.date.issued2020-12
dc.description.abstractArtificial Intelligence (AI), and in particular the explainability thereof, has gained phenomenal attention over the last few years. Whilst we usually do not question the decision-making process of these systems where only the outcome is of interest, we pay close attention when they are applied in areas where the decisions directly influence human lives. In particular, noisy and uncertain observations close to the decision boundary result in predictions that cannot readily be explained, which may foster mistrust among end-users. This has drawn attention to AI methods whose outcomes can be explained. Bayesian networks are probabilistic graphical models that can be used as a tool to manage uncertainty. The probabilistic framework of a Bayesian network allows for explainability in the model, reasoning and evidence. The use of these methods is, however, mostly ad hoc and not as well organised as explainability methods in the wider AI research field. As such, we introduce a taxonomy of explainability in Bayesian networks. We extend the existing categorisation of explainability in the model, reasoning or evidence to include explanation of decisions. The explanations obtained from these explainability methods are illustrated by means of a simple medical diagnostic scenario. The taxonomy introduced in this paper has the potential not only to encourage end-users to communicate outcomes efficiently, but also to support their understanding of how and, more importantly, why certain predictions were made.en_ZA
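To make the medical diagnostic scenario mentioned in the abstract concrete, the sketch below builds a two-node Bayesian network (Disease -> Test) and queries the posterior probability of disease given a positive test, i.e. reasoning with evidence. This is an illustrative sketch only, assuming the pgmpy library; the structure, variable names and probabilities are hypothetical and not taken from the paper.

# Minimal Bayesian network sketch for a medical diagnostic scenario.
# Illustrative only: pgmpy is assumed, and the network structure,
# names and probabilities are hypothetical, not from the paper.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Disease -> Test: the test outcome depends on the disease state.
model = BayesianNetwork([("Disease", "Test")])

# P(Disease): hypothetical 1% prior prevalence.
cpd_disease = TabularCPD("Disease", 2, [[0.99], [0.01]])

# P(Test | Disease): rows are Test=negative/positive,
# columns are Disease=absent/present.
# Hypothetical 90% sensitivity and 95% specificity.
cpd_test = TabularCPD(
    "Test", 2,
    [[0.95, 0.10],   # Test = negative
     [0.05, 0.90]],  # Test = positive
    evidence=["Disease"], evidence_card=[2],
)
model.add_cpds(cpd_disease, cpd_test)
assert model.check_model()

# Reasoning with evidence: posterior P(Disease | Test = positive).
infer = VariableElimination(model)
print(infer.query(["Disease"], evidence={"Test": 1}))

With these hypothetical numbers the posterior probability of disease given a positive test is only about 15%, which illustrates why explaining the reasoning behind such a prediction, rather than reporting the outcome alone, matters to end-users.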
dc.description.departmentStatisticsen_ZA
dc.description.librarianhj2021en_ZA
dc.description.urihttp://www.springer.com/series/7899en_ZA
dc.identifier.citationDerks I.P., de Waal A. (2020) A Taxonomy of Explainable Bayesian Networks. In: Gerber A. (eds) Artificial Intelligence Research. SACAIR 2021. Communications in Computer and Information Science, vol 1342. Springer, Cham. https://doi.org/10.1007/978-3-030-66151-9_14.en_ZA
dc.identifier.isbn978-3-030-66150-2 (print)
dc.identifier.isbn978-3-030-66151-9 (online)
dc.identifier.issn1865-0929
dc.identifier.other10.1007/978-3-030-66151-9_14
dc.identifier.urihttp://hdl.handle.net/2263/79751
dc.language.isoenen_ZA
dc.publisherSpringeren_ZA
dc.rights© Springer Nature Switzerland AG 2020. The original publication is available at: http://www.springer.com/series/7899.en_ZA
dc.subjectArtificial intelligence (AI)en_ZA
dc.subjectBayesian networken_ZA
dc.subjectReasoningen_ZA
dc.subjectExplainabilityen_ZA
dc.titleA taxonomy of explainable Bayesian networksen_ZA
dc.typePostprint Articleen_ZA

Files

Original bundle

Now showing 1 - 1 of 1
Name: Derks_Taxonomy_2020.pdf
Size: 416.93 KB
Format: Adobe Portable Document Format
Description: Postprint Article

License bundle

Now showing 1 - 1 of 1
Name: license.txt
Size: 1.75 KB
Description: Item-specific license agreed upon to submission