Towards a Deep Reinforcement Learning based approach for real-time decision making and resource allocation for Prognostics and Health Management applications

dc.contributor.advisor: Heyns, P.S. (Philippus Stephanus)
dc.contributor.email: rpjludeke@gmail.com
dc.contributor.postgraduate: Ludeke, Ricardo Pedro João
dc.date.accessioned: 2021-02-12T10:06:02Z
dc.date.available: 2021-02-12T10:06:02Z
dc.date.created: 2021
dc.date.issued: 2020
dc.description: Dissertation (MEng (Mechanical Engineering))--University of Pretoria, 2020.
dc.description.abstract: Industrial operational environments are stochastic and can have complex system dynamics which introduce multiple levels of uncertainty. This uncertainty leads to sub-optimal decision making and resource allocation. Digitalisation and automation of production equipment and the maintenance environment enable predictive maintenance, meaning that equipment can be stopped for maintenance at the optimal time. Resource constraints in maintenance capacity could however result in further undesired downtime if maintenance cannot be performed when scheduled. In this dissertation, the applicability of a Multi-Agent Deep Reinforcement Learning based approach to decision making is investigated, with the aim of determining the optimal maintenance scheduling policy for a fleet of assets subject to maintenance resource constraints. By considering the underlying system dynamics of maintenance capacity, as well as the health state of individual assets, a near-optimal decision making policy is found that increases equipment availability while also maximising the use of the available maintenance capacity. The implemented solution is compared to a run-to-failure corrective maintenance strategy, a constant interval preventive maintenance strategy, and a condition based predictive maintenance strategy. The proposed approach outperformed traditional maintenance strategies across several asset and operational maintenance performance metrics. It is concluded that Deep Reinforcement Learning based decision making for asset health management and resource allocation is more effective than human based decision making.
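Note: the abstract describes a multi-agent formulation in which an agent per asset decides when to request maintenance while a shared maintenance capacity limits how many assets can be serviced at once. The minimal Python sketch below (not part of the original record or the dissertation) illustrates one way such a resource-constrained fleet environment and a condition-based baseline policy could be set up; all class names, degradation dynamics, rewards and parameter values are illustrative assumptions and do not reproduce the dissertation's model.

import random

# Illustrative sketch only: a fleet of assets that degrade stochastically,
# with a shared maintenance capacity limiting how many can be serviced per step.
class FleetMaintenanceEnv:
    def __init__(self, n_assets=5, capacity=2, max_health=10):
        self.n_assets = n_assets          # number of assets (one agent each)
        self.capacity = capacity          # maintenance slots available per time step
        self.max_health = max_health
        self.reset()

    def reset(self):
        self.health = [self.max_health] * self.n_assets
        return list(self.health)

    def step(self, actions):
        """actions[i] == 1 means agent i requests maintenance this step."""
        reward = 0.0
        requests = [i for i, a in enumerate(actions) if a == 1]
        served = requests[: self.capacity]        # capacity constraint

        for i in range(self.n_assets):
            if i in served:
                reward -= 1.0                      # cost of planned maintenance
                self.health[i] = self.max_health   # asset restored
            else:
                self.health[i] -= random.choice([0, 1, 2])  # stochastic degradation
                if self.health[i] <= 0:
                    reward -= 10.0                 # unplanned failure penalty
                    self.health[i] = self.max_health
                else:
                    reward += 1.0                  # production while available

        return list(self.health), reward

# Condition-based baseline policy: request maintenance once health drops below a threshold.
def condition_based_policy(health, threshold=3):
    return [1 if h <= threshold else 0 for h in health]

if __name__ == "__main__":
    env = FleetMaintenanceEnv()
    obs = env.reset()
    total = 0.0
    for _ in range(100):
        obs, r = env.step(condition_based_policy(obs))
        total += r
    print("episode return under condition-based baseline:", total)

A learned multi-agent policy would replace condition_based_policy, with each agent observing its own health state (and possibly the fleet's) and being trained on the same reward signal, so that maintenance requests are coordinated around the capacity constraint rather than issued independently.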
dc.description.availability: Unrestricted
dc.description.degree: MEng (Mechanical Engineering)
dc.description.department: Mechanical and Aeronautical Engineering
dc.description.librarian: mi2025
dc.description.sdg: SDG-09: Industry, innovation and infrastructure
dc.description.sdg: SDG-12: Responsible consumption and production
dc.description.sdg: SDG-11: Sustainable cities and communities
dc.identifier.citation: *
dc.identifier.other: A2021
dc.identifier.uri: http://hdl.handle.net/2263/78533
dc.language.iso: en
dc.publisher: University of Pretoria
dc.rights: © 2019 University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria.
dc.subject: UCTD
dc.subject: Maintenance Policy Optimisation
dc.subject: Deep Reinforcement Learning
dc.subject: Multi-agent Reinforcement Learning
dc.subject.other: Engineering, built environment and information technology theses SDG-09
dc.subject.other: SDG-09: Industry, innovation and infrastructure
dc.subject.other: Engineering, built environment and information technology theses SDG-12
dc.subject.other: SDG-12: Responsible consumption and production
dc.subject.other: Engineering, built environment and information technology theses SDG-11
dc.subject.other: SDG-11: Sustainable cities and communities
dc.title: Towards a Deep Reinforcement Learning based approach for real-time decision making and resource allocation for Prognostics and Health Management applications
dc.type: Dissertation

Files

Original bundle

Name: RPJ_Ludeke_10177303_MEng_dissertation.pdf
Size: 2.13 MB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 1.75 KB
Format: Item-specific license agreed upon to submission