Approximation approaches for training neural network problems with dynamic mini-batch sub-sampled losses

dc.contributor.advisor: Wilke, Daniel Nicolas
dc.contributor.email: u11085160@tuks.co.za
dc.contributor.postgraduate: Chae, Younghwan
dc.date.accessioned: 2022-02-25T12:13:06Z
dc.date.available: 2022-02-25T12:13:06Z
dc.date.created: 2022-05-13
dc.date.issued: 2021
dc.description: Thesis (PhD (Mechanical Engineering))--University of Pretoria, 2021.
dc.description.abstract: The learning rate schedule is a sensitive and challenging hyperparameter to resolve in machine learning. It must be re-resolved whenever the model, the data, the data preprocessing, or the data batching changes. Poorly resolved learning rates result in poor models, high computing cost, excessive training time, and an excessive carbon footprint. In addition, deep neural network (DNN) architectures routinely require billions of parameters; GPT-3 uses 175 billion parameters and cost an estimated 12 million USD to train. Mini-batch sub-sampling introduces bias and variance that can manifest in several ways. Considering a line search along a descent direction, the implication is either a smooth loss function with large bias (static sub-sampling) or a point-wise discontinuous loss function with low bias but high variance in the function response (dynamic sub-sampling). Two previous studies demonstrated that line searches have the potential to automate learning rate selection. In both cases, learning rates were resolved for point-wise discontinuous functions, using approaches that include Bayesian regression and direct optimization with a gradient-only line search (GOLS). This explorative study investigates the potential of surrogates to resolve learning rates instead of direct optimization of the loss function. We aim to identify domains that warrant further investigation, for which purpose we introduce a new robustness measure to compare algorithms more sensibly. We therefore start our surrogate investigation at the fundamental level, considering the most basic form of each approach. This isolates the essence and removes unnecessary complexity, while retaining selected complexity that is deemed crucial, such as dynamic sub-sampling. Hence, this is an explorative study, and not yet another study proposing a state-of-the-art (SOTA) algorithm on a carefully curated dataset with carefully curated baseline algorithms against which to compare. The three fundamentally different approaches to resolving learning rates using surrogates are:
1. the construction of one-dimensional quadratic surrogates for point-wise discontinuous functions, resolving learning rates by minimization;
2. the construction of one-dimensional classifiers, resolving learning rates from a gradient-only perspective using classification;
3. sub-dimensional surrogates (higher than 1D) on smooth loss functions, isolating the identification of appropriate bases on simple test problems.
This study concludes that approaches 1 and 2 warrant further investigation, with the longer-term goal of extending them to sub-dimensional surrogates to enhance efficiency.
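For orientation, a minimal sketch (not from the thesis) of the first two approaches, in Python with NumPy: the noisy quadratic toy loss, the noise magnitudes, and the function names batch_loss and batch_dloss are hypothetical stand-ins for a dynamically mini-batch sub-sampled loss along a descent direction. Under those assumptions, the sketch fits a one-dimensional quadratic surrogate by least squares and minimizes it analytically (approach 1), then resolves the step gradient-only by bracketing a sign change of the directional derivative while ignoring function values, which is the idea behind GOLS and SNN-GPP (approach 2).

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mini-batch sub-sampled loss along a descent direction:
# a smooth quadratic with minimizer at alpha = 0.6, plus sampling noise
# mimicking dynamic sub-sampling (point-wise discontinuous response).
def batch_loss(alpha):
    return (alpha - 0.6) ** 2 + 0.05 * rng.standard_normal()

# Corresponding noisy directional derivative along the same direction.
def batch_dloss(alpha):
    return 2.0 * (alpha - 0.6) + 0.1 * rng.standard_normal()

alphas = np.linspace(0.0, 2.0, 15)          # candidate step sizes

# Approach 1: least-squares quadratic surrogate f(a) ~ c2*a^2 + c1*a + c0,
# minimized analytically at a* = -c1 / (2*c2) when convex (c2 > 0).
losses = np.array([batch_loss(a) for a in alphas])
A = np.vander(alphas, 3)                    # columns: a^2, a, 1
(c2, c1, c0), *_ = np.linalg.lstsq(A, losses, rcond=None)
alpha_quad = -c1 / (2.0 * c2) if c2 > 0 else alphas[np.argmin(losses)]

# Approach 2 (gradient-only): ignore function values and bracket the first
# negative-to-positive sign change of the directional derivative.
grads = np.array([batch_dloss(a) for a in alphas])
flips = np.where(np.diff(np.sign(grads)) > 0)[0]
alpha_go = (0.5 * (alphas[flips[0]] + alphas[flips[0] + 1])
            if flips.size else alphas[-1])

print(f"quadratic-surrogate learning rate: {alpha_quad:.3f}")
print(f"gradient-only learning rate:       {alpha_go:.3f}")

The gradient-only criterion is attractive in this setting because dynamic sub-sampling perturbs function values far more disruptively than it flips the sign of the directional derivative near a minimizer, which is the motivation the abstract gives for working with point-wise discontinuous losses.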
dc.description.availability: Unrestricted
dc.description.degree: PhD (Mechanical Engineering)
dc.description.department: Mechanical and Aeronautical Engineering
dc.description.sponsorship: Department of Mechanical and Aeronautical Engineering
dc.identifier.citation: *
dc.identifier.other: A2022
dc.identifier.uri: http://hdl.handle.net/2263/84238
dc.language.iso: en
dc.publisher: University of Pretoria
dc.rights: © 2022 University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria.
dc.subject: UCTD
dc.subject: Line search
dc.subject: SNN-GPP
dc.subject: Gradient only
dc.subject: Neural network
dc.subject: Approximation
dc.title: Approximation approaches for training neural network problems with dynamic mini-batch sub-sampled losses
dc.type: Thesis

Files

Original bundle

Name: Chae_Approximation_2021.pdf
Size: 12.69 MB
Format: Adobe Portable Document Format
Description: Thesis

License bundle

Name: license.txt
Size: 1.75 KB
Description: Item-specific license agreed upon to submission