Explainable Bayesian networks : taxonomy, properties and approximation methods

dc.contributor.advisor De Waal, Alta
dc.contributor.postgraduate Derks, Iena Petronella
dc.date.accessioned 2024-07-30T13:11:52Z
dc.date.available 2024-07-30T13:11:52Z
dc.date.created 2024-09-03
dc.date.issued 2024-07-22
dc.description Thesis (PhD (Mathematical Statistics))--University of Pretoria, 2024. en_US
dc.description.abstract Technological advances have integrated artificial intelligence (AI) into various scientific fields, necessitating an understanding of AI-derived decisions. The field of explainable artificial intelligence (XAI) has emerged to address transparency concerns, offering both transparent models and post-hoc explanation techniques. Recent research emphasises the importance of developing transparent models, with a focus on enhancing the interpretability of these models. An example of a transparent model that would benefit from enhanced post-hoc explainability is the Bayesian network. This research investigates the current state of explainability in Bayesian networks. The literature identifies three categories of explanation: explanation of the model, of the reasoning, and of the evidence. Drawing upon these categories, we formulate a taxonomy of explainable Bayesian networks. Following this, we extend the taxonomy to include explanation of decisions, an area recognised as neglected within the broader XAI research field. This includes using the same-decision probability, a threshold-based confidence measure, as a stopping and selection criterion for decision-making. Additionally, acknowledging computational efficiency as a concern in XAI, we introduce an approximate forward-gLasso algorithm for efficiently computing the most relevant explanation. We compare the proposed algorithm with a local, exhaustive forward search. The forward-gLasso algorithm demonstrates accuracy comparable to the forward search while reducing the average neighbourhood size, leading to computationally efficient explanations. All coding was done in R, building on existing packages for Bayesian networks. As a result, we develop an open-source R package capable of generating explanations of evidence for Bayesian networks. Lastly, we demonstrate the practical insights gained from applying post-hoc explanations to real-world data, such as the South African Victims of Crime Survey 2016 - 2017. en_US
dc.description.availability Unrestricted en_US
dc.description.degree PhD (Mathematical Statistics) en_US
dc.description.department Statistics en_US
dc.description.faculty Faculty of Economic and Management Sciences en_US
dc.identifier.citation * en_US
dc.identifier.doi 10.25403/UPresearchdata.26403883 en_US
dc.identifier.other S2024
dc.identifier.uri http://hdl.handle.net/2263/97333
dc.identifier.uri DOI: https://doi.org/10.25403/UPresearchdata.26403883.v1
dc.language.iso en en_US
dc.publisher University of Pretoria
dc.rights © 2023 University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria.
dc.subject UCTD en_US
dc.subject Sustainable Development Goals (SDGs) en_US
dc.subject Explainable artificial intelligence en_US
dc.subject Bayesian networks
dc.subject Post-hoc explanation
dc.subject Same-decision probability
dc.subject Most relevant explanation
dc.title Explainable Bayesian networks : taxonomy, properties and approximation methods en_US
dc.type Thesis en_US