dc.contributor.author | Chae, Younghwan | |
dc.contributor.author | Wilke, Daniel Nicolas | |
dc.contributor.author | Kafka, Dominic | |
dc.date.accessioned | 2024-07-19T08:16:44Z | |
dc.date.available | 2024-07-19T08:16:44Z | |
dc.date.issued | 2023-06 | |
dc.description.abstract | Mini-batch sub-sampling (MBSS) is favored in deep neural network training to reduce the computational cost. Still, it introduces an inherent sampling error, making the selection of appropriate learning rates challenging. The sampling error can manifest as either bias or variance in a line search. Dynamic MBSS re-samples a mini-batch at every function evaluation. Hence, dynamic MBSS results in point-wise discontinuous loss functions with smaller bias but larger variance than statically sampled loss functions. However, dynamic MBSS has the advantage of larger data throughput during training, but requires resolving the complexity introduced by the discontinuities. This study extends the vanilla gradient-only surrogate line search (GOS-LS), a line search method that uses quadratic approximation models built from only directional derivative information, for dynamic MBSS loss functions. We propose a conservative gradient-only surrogate line search (GOS-LSC) with strong convergence characteristics and a defined optimality criterion. For the first time, we investigate the performance of both GOS-LS and GOS-LSC with various optimizers, including SGD, RMSPROP, and ADAM, on ResNet-18 and EfficientNet-B0. We also compare GOS-LS and GOS-LSC against other existing learning rate methods. We quantify both the best-performing and the most robust algorithms. For the latter, we introduce a relative robustness criterion that quantifies the difference between an algorithm and the best-performing algorithm for a given problem. The results show that training a model with the recommended learning rate for a class of search directions helps to reduce model errors in multimodal cases. The results also show that GOS-LS ranked first in both training and test results, while GOS-LSC ranked third in training and second in test results, among nine other learning rate strategies. | en_US |
dc.description.department | Mechanical and Aeronautical Engineering | en_US |
dc.description.librarian | hj2024 | en_US |
dc.description.sdg | SDG-09: Industry, innovation and infrastructure | en_US |
dc.description.sponsorship | The National Research Foundation (NRF), South Africa, and the Center for Asset Integrity Management (C-AIM), Department of Mechanical and Aeronautical Engineering, University of Pretoria, Pretoria, South Africa. | en_US |
dc.description.uri | https://link.springer.com/journal/10489 | en_US |
dc.identifier.citation | Chae, Y., Wilke, D.N. & Kafka, D. Gradient-only surrogate to resolve learning rates for robust and consistent training of deep neural networks. Applied Intelligence 53, 13741–13762 (2023). https://doi.org/10.1007/s10489-022-04206-8. | en_US |
dc.identifier.issn | 0924-669X (print) | |
dc.identifier.issn | 1573-7497 (online) | |
dc.identifier.other | 10.1007/s10489-022-04206-8 | |
dc.identifier.uri | http://hdl.handle.net/2263/97123 | |
dc.language.iso | en | en_US |
dc.publisher | Springer | en_US |
dc.rights | © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. The original publication is available at http://link.springer.com/journal/10489. | en_US |
dc.subject | Mini-batch sub-sampling (MBSS) | en_US |
dc.subject | Gradient-only surrogate line search (GOS-LS) | en_US |
dc.subject | Conservative gradient-only surrogate line search (GOS-LSC) | en_US |
dc.subject | Line search | en_US |
dc.subject | Learning rate | en_US |
dc.subject | Approximation model | en_US |
dc.subject | Stochastic gradient | en_US |
dc.subject | Stochastic non-negative gradient projection points (SNN-GPP) | en_US |
dc.subject | SDG-09: Industry, innovation and infrastructure | en_US |
dc.title | Gradient-only surrogate to resolve learning rates for robust and consistent training of deep neural networks | en_US |
dc.type | Postprint Article | en_US |