Abstract:
The University of Pretoria has experienced a significant increase in student numbers in recent years. This growth has inevitably affected the Department of Mathematics and Applied Mathematics. The department is understaffed in terms of lecturing staff, which negatively affects postgraduate study and research outputs. The imbalance between the number of teaching staff and the combined lecturing load and research demands has placed an excessive grading and administrative burden on staff. The department therefore decided to use multiple-choice questions in assessments that can be graded by computer software. Responses to the multiple-choice questions are captured on optical reader forms that are processed centrally. Multiple-choice questions are combined with constructed-response questions (written questions) in semester tests and end-of-term examinations. The quality of these multiple-choice questions has not previously been evaluated. This research project asks the research question: How well do the multiple-choice questions in mathematics, as posed to first-year engineering students at the University of Pretoria, comply with the principles of good assessment for determining quality? A quantitative secondary analysis is performed on data sourced from the first-year engineering calculus module WTW 158 for the years 2015, 2016 and 2017. The study shows that, in most cases, the questions are of commendable quality, with well-balanced discrimination and difficulty indices and well-chosen, functional distractors. The item analysis also included determining the cognitive level of each multiple-choice question. Problematic questions are highlighted, and recommendations are made for improving or revising them for future use.
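As context for the indices mentioned above, the difficulty and discrimination of an item are commonly computed as follows (a sketch of the standard classical test theory definitions; the abstract does not state the exact formulas used in the study):

\[
p = \frac{N_{\text{correct}}}{N},
\qquad
D = \frac{U_{\text{correct}}}{n_U} - \frac{L_{\text{correct}}}{n_L},
\]

where \(p\) is the difficulty (facility) index, the proportion of all \(N\) candidates who answer the item correctly, and \(D\) is the discrimination index, the difference in facility between the upper and lower scoring groups (often the top and bottom 27% of candidates), with \(U_{\text{correct}}\) and \(L_{\text{correct}}\) the numbers of correct answers in groups of sizes \(n_U\) and \(n_L\). A distractor is usually regarded as functional when it is an incorrect option chosen by a non-negligible proportion of candidates (for example, at least 5%).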