Autoencoding variational Bayes for latent Dirichlet allocation
dc.contributor.author | Wolpe, Zach | |
dc.contributor.author | De Waal, Alta | |
dc.date.accessioned | 2020-07-22T11:06:16Z | |
dc.date.available | 2020-07-22T11:06:16Z | |
dc.date.issued | 2019 | |
dc.description.abstract | Many posterior distributions take intractable forms and thus require approximation where analytical solutions cannot be found. Variational inference (VI) and Markov chain Monte Carlo (MCMC) are established mechanisms to approximate these intractable values. An alternative approach to sampling and optimisation for approximation is a direct mapping between the data and the posterior distribution, made possible by recent advances in deep learning methods. Latent Dirichlet Allocation (LDA) is a model which offers an intractable posterior of this nature. In LDA, latent topics are learnt over unlabelled documents to soft-cluster the documents. This paper assesses the viability of learning latent topics by leveraging an autoencoder (in the form of Autoencoding Variational Bayes, AEVB) and compares the mimicked posterior distributions to those achieved by VI. After conducting various experiments, the proposed AEVB delivers inadequate performance. Comparable conclusions are achieved only under Utopian conditions, which are generally unattainable. Further, model specification becomes increasingly complex and deeply circumstantially dependent, which is in itself not a deterrent but does warrant consideration. In a recent study, these concerns were highlighted and discussed theoretically. We confirm the argument empirically by dissecting the autoencoder's iterative process. In investigating the autoencoder, we see performance degrade as models grow in dimensionality. Visualization of the autoencoder reveals a bias towards the initial randomised topics. | en_ZA
dc.description.department | Statistics | en_ZA |
dc.description.librarian | am2020 | en_ZA |
dc.description.uri | http://ceur-ws.org | en_ZA |
dc.identifier.citation | Wolpe, Z. & De Waal, A. 2019, 'Autoencoding variational Bayes for latent Dirichlet allocation', CEUR Workshop Proceedings, vol. 2540, pp. 1-12. | en_ZA
dc.identifier.issn | 1613-0073 | |
dc.identifier.uri | http://hdl.handle.net/2263/75390 | |
dc.language.iso | en | en_ZA |
dc.publisher | CEUR Workshop Proceedings | en_ZA |
dc.rights | © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). | en_ZA |
dc.subject | Autoencoders | en_ZA |
dc.subject | Natural language processing (NLP) | en_ZA |
dc.subject | Deep learning | en_ZA |
dc.subject | Variational inference | en_ZA |
dc.subject | Markov chain Monte Carlo (MCMC) | en_ZA
dc.subject | Latent Dirichlet allocation (LDA) | en_ZA |
dc.title | Autoencoding variational Bayes for latent Dirichlet allocation | en_ZA |
dc.type | Article | en_ZA |