Trust in artificial intelligence for its adoption and use in organisational decision-making

dc.contributor.advisor Bogie, Jill
dc.contributor.postgraduate Hulme-Jones, Graeme Edward
dc.date.accessioned 2025-03-25T08:13:51Z
dc.date.available 2025-03-25T08:13:51Z
dc.date.created 2025-05-05
dc.date.issued 2024-11
dc.description Mini Dissertation (MPhil (Corporate Strategy))--University of Pretoria, 2024. en_US
dc.description.abstract Considering the recent developments and mainstream attention on Artificial Intelligence, organisations are facing increased pressure to realise the potential benefits which this new generation of tools and techniques seeks to unlock. However, to responsibly leverage the advantages which AI promises to offer, those responsible for decision-making in organisations need to be willing to trust the technology. For these reasons, this qualitative research study focused on trust in artificial intelligence for its adoption and use in organisational decision-making, and the key related concepts of explainable AI (xAI) and transparency. The theoretical relevance of this research was to develop insights into, and new understanding of, how trust in AI is formed for decision-making in organisations, and to reveal new understanding of the relationship between xAI and transparency that leads to trust in AI. The study followed a qualitative, exploratory design with a phenomenological approach to explore the lived experiences of the research phenomena from the perspective of individuals responsible for organisational decision-making. A total of 19 semi-structured interviews were conducted with participants who had been exposed to, or had experience of, Artificial Intelligence and its impact on their organisations. The participants were drawn from organisations worldwide, across 16 diverse sectors, from healthcare and financial services to defence and aviation. Rich insights and understanding of the main theoretical concepts and research phenomena were revealed through a systematic, thematic analysis. Several similarities were identified between the findings of the study and the literature, adding to the theoretical body of knowledge, while eight nuances of difference provided potential refinements, and three distinct differences highlighted potential extensions. Lastly, a conceptual framework was developed and refined through each stage of the research, culminating in a view of the potential research contributions in relation to extant theory. The research outcomes extended the theoretical understanding of trust formation in AI, for its adoption and use in organisational decision-making environments, while offering actionable insights for organisations aiming to build trust in AI technologies. en_US
dc.description.availability Unrestricted en_US
dc.description.degree MPhil (Corporate Strategy) en_US
dc.description.department Gordon Institute of Business Science (GIBS) en_US
dc.description.faculty Gordon Institute of Business Science (GIBS) en_US
dc.description.sdg SDG-09: Industry, innovation and infrastructure en_US
dc.identifier.citation * en_US
dc.identifier.other A2025 en_US
dc.identifier.uri http://hdl.handle.net/2263/101685
dc.language.iso en en_US
dc.publisher University of Pretoria
dc.rights © 2024 University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria.
dc.subject UCTD en_US
dc.subject Artificial Intelligence en_US
dc.subject Trust in AI en_US
dc.subject Explainable AI en_US
dc.subject xAI en_US
dc.subject Transparency en_US
dc.subject Adoption and Use en_US
dc.title Trust in artificial intelligence for its adoption and use in organisational decision-making en_US
dc.type Mini Dissertation en_US

