In this paper we explore the use of the Rasch model as a transparent, systematic and theoretically
underpinned response to quality issues that are widely recognised as problematic in the refinement of
Likert scale questionnaires. Key issues are the choice of scale length, the pursuit of a favourable
estimate of Cronbach's alpha at the possible expense of construct validity, and the fact that total raw
scores arise from ordinal data but are used and interpreted as if interval-level measurement had
occurred. We use a questionnaire under development for measuring first-year chemistry students'
perceptions of demonstrator effectiveness to illustrate the process of Rasch analysis and instrument refinement.
This process involves investigation of the fit of the data to the model, possible violations of the
assumption of local independence, and several aspects of item functioning. We identify disordered
response categories as the probable cause of misfit in this data set and propose strategies for
modifying items so that they can be retained rather than rejected.
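As a minimal sketch of the polytomous Rasch (rating scale) model underlying this kind of analysis, the following illustrative Python function computes category response probabilities from a person location, an item location, and a set of category thresholds. The function name, parameter names, and the example threshold values are our own for illustration and are not taken from the paper; disordered thresholds of the kind diagnosed above would appear here as thresholds that are not in increasing order.

```python
import math

def rsm_probs(theta, delta, thresholds):
    """Category probabilities under the rating scale model.

    theta: person location (ability) on the logit scale.
    delta: item location (difficulty) on the logit scale.
    thresholds: category thresholds tau_1..tau_m (m + 1 categories).
    Returns a list of probabilities for categories 0..m summing to 1.
    """
    # Category 0 corresponds to an empty sum, i.e. a cumulative logit of 0.
    logits = [0.0]
    total = 0.0
    for tau in thresholds:
        total += theta - delta - tau
        logits.append(total)
    exps = [math.exp(logit) for logit in logits]
    denom = sum(exps)
    return [e / denom for e in exps]

# Ordered thresholds (-1.0 < 1.0): the middle category is most probable
# for a person located at the item (theta == delta).
probs = rsm_probs(theta=0.0, delta=0.0, thresholds=[-1.0, 1.0])
```

Plotting these probabilities against theta for each category gives the category probability curves used to detect disordered categories: with well-functioning categories, each category is the most probable response somewhere along the latent continuum.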