Abstract:
South Africa has eleven official languages, of which nine are indigenous, low-resourced languages. As a result, it is essential to build the
resources for these languages so that they can benefit from advances in the field of natural
language processing. This project focused on creating annotated datasets for the isiZulu and siSwati languages for a news topic classification task, and on presenting findings from baseline classification models trained on these datasets. Due to the shortage of data for
these local South African languages, the created datasets were augmented and oversampled to increase their size and mitigate class imbalance. In total,
four classification models were used, namely logistic regression, naive Bayes, XGBoost, and an LSTM. These models were trained on three different text representations, namely count vectorization, TF-IDF vectorization, and word2vec embeddings. The results of this study
showed that XGBoost, logistic regression, and the LSTM, when trained on word2vec embeddings, performed better than the other combinations.
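For illustration only, the sketch below shows one of the baseline combinations named above (TF-IDF features with logistic regression) using scikit-learn. This is not the authors' code, and the example headlines and topic labels are hypothetical placeholders, not the project's actual dataset.

```python
# Minimal sketch of a TF-IDF + logistic regression baseline, one of the
# model/representation combinations described in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical isiZulu news headlines with placeholder topic labels.
texts = [
    "iqembu lebhola linqobile emdlalweni",     # placeholder: sports
    "uhulumeni umemezele isabelomali esisha",  # placeholder: politics
]
labels = ["sports", "politics"]

# Vectorize the text with TF-IDF, then classify with logistic regression.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(texts, labels)

# Predict the topic of an unseen (placeholder) headline.
print(pipeline.predict(["umdlalo webhola uqale ngehora lesine"]))
```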