Abstract:
The phenomenon of overfitting, where a feed-forward neural network (FFNN) overtrains
on the training data at the cost of generalisation accuracy, is known to be specific to the
training algorithm used. This study investigates overfitting within the context of particle
swarm optimised (PSO) FFNNs. Two of the most widely used PSO algorithms are
compared in terms of FFNN accuracy, and a description of the overfitting behaviour is
established. Each of the PSO components is in turn investigated to determine its
effect on FFNN overfitting. A study of the maximum velocity (Vmax) parameter is
performed and it is found that smaller Vmax values are optimal for FFNN training. The
analysis is extended to the inertia and acceleration coefficient parameters, where it is
shown that specific interactions among the parameters have a dominant effect on the
resultant FFNN accuracy and may be used to reduce overfitting. Further, the significant
effect of the swarm size on network accuracy is also shown, with a critical range of
swarm sizes identified for effective training. The study is concluded with an
investigation into the effect of different activation functions. Given strong empirical
evidence, a hypothesis is made that the gradient of the activation function
significantly affects the convergence of the PSO. Lastly, the PSO is shown to be a very
effective algorithm for the training of self-adaptive FFNNs, capable of learning from
unscaled data.
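
For reference, the parameters examined above (inertia, acceleration coefficients, and Vmax) appear in the canonical inertia-weight PSO velocity and position updates; the notation below is assumed for illustration and the exact variants compared in the study may differ:

\begin{align}
  v_{ij}(t+1) &= w\, v_{ij}(t)
      + c_1 r_{1j}(t)\bigl[y_{ij}(t) - x_{ij}(t)\bigr]
      + c_2 r_{2j}(t)\bigl[\hat{y}_{j}(t) - x_{ij}(t)\bigr], \\
  x_{ij}(t+1) &= x_{ij}(t) + v_{ij}(t+1),
\end{align}

where $w$ is the inertia weight, $c_1$ and $c_2$ are the acceleration coefficients, $r_{1j}, r_{2j} \sim U(0,1)$, $y_i$ and $\hat{y}$ denote the personal and neighbourhood best positions, each component of $v_i(t+1)$ is clamped to $[-V_{\max}, V_{\max}]$ before the position update, and each particle position $x_i$ encodes one candidate FFNN weight vector.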