Abstract:
Neural Networks (NNs) play an integral role in modern machine learning development. Recent advances
in NN research have led to a wide array of applications, ranging from medical diagnosis [1] to complex
problems such as facial and object recognition [2, 3]. However, despite the increasingly powerful predictive
capabilities of NNs, several limitations remain that can make more traditional methods the
preferred alternative. Most of these limitations stem from the "black box" nature of NNs, in which
the estimated model parameters are not interpretable. The output of a traditional NN also contains no
measure of uncertainty in its predictions, which makes decision-making challenging when NN output
plays a critical role, as in automated medical imaging and autonomous vehicles. To address these
challenges, we investigate a probabilistic approach to NNs through Bayesian inference and discuss different
methods for approximating the posterior distributions of NN parameters. We examine results when
extending the network structure to deeper architectures such as Convolutional Neural Networks, and we discuss
the advantage of extracting additional information from the posterior predictive distribution to measure
prediction uncertainty.