Usability is a critical aspect of the success of any application. It can be the deciding factor in which application is chosen and can have a dramatic effect on the productivity of its users. Eye tracking has been successfully utilised as a usability evaluation tool because of the strong link between where a person is looking and their cognitive activity. Currently, eye tracking usability evaluation is a time-intensive process that requires extensive analysis by a human expert, and it is therefore only feasible for small-scale usability testing.
This study developed a method to reduce the time expert analysts spend interpreting
eye tracking results, by automating part of the analysis process. This was accomplished
by comparing the visual strategy of a benchmark user against the visual strategies of the
remaining participants. A comparative study demonstrates that the resulting metrics highlight the same tasks as those an expert analyst identified as having usability issues. The
method also produces visualisations to assist the expert in identifying problem areas on
the user interface.
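To make the comparison step concrete, the sketch below shows one plausible way to score each participant's visual strategy against a benchmark user's. The specific metric (Levenshtein edit distance over fixation sequences encoded as Area-of-Interest labels) is an assumption for illustration, not necessarily the measure used in this study.

```python
# Hedged sketch: comparing a benchmark user's scanpath against other
# participants'. The metric (edit distance over AOI-label sequences)
# is an assumption, not the study's confirmed method.

def edit_distance(a, b):
    """Levenshtein distance between two AOI-label sequences."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))          # dp[j] = distance for prefixes a[:i], b[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i       # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,                        # deletion
                dp[j - 1] + 1,                    # insertion
                prev + (a[i - 1] != b[j - 1]),    # substitution / match
            )
    return dp[n]

def similarity(benchmark, participant):
    """Normalised similarity in [0, 1]; 1 means identical scanpaths."""
    d = edit_distance(benchmark, participant)
    longest = max(len(benchmark), len(participant)) or 1
    return 1 - d / longest

# Example: fixation sequences encoded as hypothetical AOI labels.
benchmark = ["menu", "search", "results", "item"]
participants = {
    "p1": ["menu", "search", "results", "item"],           # matches benchmark
    "p2": ["menu", "ad", "menu", "search", "ad", "item"],  # detours via "ad"
}
for name, path in participants.items():
    print(name, round(similarity(benchmark, path), 2))
```

Participants whose scores fall well below the benchmark on a given task can then be flagged for expert review, which is how an automated metric could narrow down where the analyst's attention is needed.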
Eye trackers are now available for various mobile devices, providing the opportunity to perform large-scale, remote eye tracking usability studies. The proposed approach makes it feasible to analyse these extensive eye tracking datasets and improve the usability of applications.