This research is important because it identifies a gap in the existing knowledge base. A term is therefore coined to label a computer-based test mode effect, the Item Randomisation Effect, which is discussed in detail in this thesis. The Item Randomisation Effect is a test mode effect that occurs in computer-based testing contexts and is especially noticeable in test-takers who may be susceptible to test anxiety. The practice of randomising multiple-choice items in computer-based test venues is commonplace, mainly as a deterrent to cheating. Previous research attempted to determine the degree of equivalence of a given test across testing modalities; the aim was to ensure that test-takers sitting a paper-based test would have no advantage or disadvantage over test-takers given the same test in a computer-based mode. Such studies take a nomothetic perspective. This research contrasts with those earlier studies in that it takes an idiographic perspective: it is concerned with the performance of individuals taking a test in the computer-based modality. This subtle difference in perspective may account for the apparent gap in the existing educational research literature. Evidence of the Item Randomisation Effect was found in this study, but further research into this test mode effect is necessary.
Dissertation (MEd (Computer-Integrated Education))--University of Pretoria, 2008.