Training Feedforward Neural Networks with Bayesian Hyper-Heuristics

dc.contributor.advisor: Bosman, Anna
dc.contributor.coadvisor: Engelbrecht, Andries
dc.contributor.coadvisor: Cleghorn, Christopher
dc.contributor.email: an.schreuder@up.ac.za
dc.contributor.postgraduate: Schreuder, Arné
dc.date.accessioned: 2023-02-17T12:40:45Z
dc.date.available: 2023-02-17T12:40:45Z
dc.date.created: 2023-04
dc.date.issued: 2023
dc.description: Dissertation (MSc (Computer Science))--University of Pretoria, 2023.
dc.description.abstract: Many different heuristics have been developed and used to train feedforward neural networks (FFNNs). However, selecting the best heuristic to train an FFNN is a time-consuming, non-trivial exercise, and careful, systematic comparison is required to ensure that the best heuristic is chosen. In the past, selection was done by trial and error; a modern approach is to automate the heuristic selection process. Since a single heuristic is often insufficient, research has proposed the hybridisation of heuristics. One such approach is hyper-heuristics (HHs), which dynamically find the best heuristic, or combination of heuristics, in heuristic-space by making use of heuristic performance information. One implementation of an HH is a population-based approach that guides the search process by dynamically selecting heuristics from a heuristic pool and applying them to different entities; each entity represents a candidate solution in the problem-space, and the entities work together to find good solutions. This dissertation introduces a novel population-based Bayesian hyper-heuristic (BHH). An empirical study is conducted in which the BHH is used to train FFNNs. An in-depth behaviour analysis is performed, and the performance of the BHH is compared to that of ten popular low-level heuristics, each with different search behaviours. The chosen heuristic pool consists of classic gradient-based heuristics as well as meta-heuristics. The empirical process is executed on fourteen datasets comprising classification and regression problems with varying characteristics. Results are analysed for statistical significance, and the BHH is shown to train FFNNs well, providing an automated method for finding the best heuristic to train FFNNs at various stages of the training process.
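To illustrate the selection mechanism the abstract describes — using heuristic performance information to probabilistically pick heuristics from a pool — the following is a minimal sketch of one common Bayesian scheme (beta-Bernoulli Thompson sampling). The heuristic names, the `BayesianSelector` class, and the beta-Bernoulli model are illustrative assumptions for this sketch, not the dissertation's actual BHH.

```python
import random

# Hypothetical heuristic pool: placeholder names standing in for the
# gradient-based heuristics and meta-heuristics mentioned in the abstract.
HEURISTIC_POOL = ["sgd", "adam", "rmsprop", "pso", "de", "ga"]

class BayesianSelector:
    """Illustrative Bayesian selection over a heuristic pool.

    Each heuristic keeps a Beta(alpha, beta) posterior over its probability
    of improving a candidate solution. This is a generic Thompson-sampling
    sketch, NOT the dissertation's actual BHH model.
    """

    def __init__(self, pool):
        # Start every heuristic with an uninformative Beta(1, 1) prior.
        self.posterior = {h: [1.0, 1.0] for h in pool}

    def select(self):
        # Draw one success-probability sample per heuristic from its
        # posterior and choose the heuristic with the largest sample.
        samples = {h: random.betavariate(a, b)
                   for h, (a, b) in self.posterior.items()}
        return max(samples, key=samples.get)

    def update(self, heuristic, improved):
        # Conjugate update: a success increments alpha, a failure beta.
        if improved:
            self.posterior[heuristic][0] += 1.0
        else:
            self.posterior[heuristic][1] += 1.0

# Usage: each entity in the population asks the selector which heuristic
# to apply next, then reports whether the step improved its solution.
selector = BayesianSelector(HEURISTIC_POOL)
chosen = selector.select()
selector.update(chosen, improved=True)
```

Over many such select/update cycles, heuristics that frequently improve solutions accumulate posterior mass and are selected more often, which is the general idea behind selecting heuristics from performance information at different stages of training.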
dc.description.availability: Unrestricted
dc.description.degree: MSc (Computer Science)
dc.description.department: Computer Science
dc.identifier.citation: *
dc.identifier.doi: 10.25403/UPresearchdata.22116878
dc.identifier.other: A2023
dc.identifier.uri: https://repository.up.ac.za/handle/2263/89677
dc.language.iso: en
dc.publisher: University of Pretoria
dc.rights: © 2022 University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria.
dc.subject: Hyper-heuristics
dc.subject: Feedforward neural networks
dc.subject: Bayesian statistics
dc.subject: Meta-learning
dc.subject: Supervised learning
dc.subject: UCTD
dc.title: Training Feedforward Neural Networks with Bayesian Hyper-Heuristics
dc.type: Dissertation

Files

Original bundle

Name: Schreuder_Training_2023.pdf
Size: 42.79 MB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 1.75 KB
Description: Item-specific license agreed to upon submission