Abstract:
Many posterior distributions take intractable forms for which analytical
solutions cannot be found and thus require approximate inference.
Variational Inference (VI) and Markov Chain Monte Carlo (MCMC) are
established mechanisms for approximating these intractable posteriors. An
alternative to sampling and optimisation is to learn a direct mapping
between the data and the posterior distribution, made possible by recent
advances in deep learning. Latent Dirichlet Allocation (LDA) is a model
which offers an intractable posterior of this nature. In LDA, latent
topics are learnt over unlabelled documents to softly cluster them. This
paper assesses the viability of learning latent topics by leveraging an
autoencoder (in the form of Autoencoding Variational Bayes, AEVB) and
compares the mimicked posterior distributions to those achieved by VI.
Across various experiments, the proposed AEVB delivers inadequate
performance; comparable conclusions are reached only under utopian
conditions that are generally unattainable. Further, model specification
becomes increasingly complex and deeply dependent on circumstance, which
is not in itself a deterrent but does warrant consideration. A recent
study highlighted these concerns and discussed them theoretically; we
confirm the argument empirically by dissecting the autoencoder's iterative
process. In investigating the autoencoder, we see performance degrade as
models grow in dimensionality, and visualisation of the autoencoder
reveals a bias towards the initial randomised topics.