Abstract:
INTRODUCTION: Remote interpretation of automated audiometry could enable asynchronous tele-audiology assessment and diagnosis in areas where synchronous tele-audiometry may not be possible or practical. The aim of this study was to compare remote interpretation of manual and automated audiometry.
METHODS: Five audiologists each interpreted manual and automated audiograms obtained from 42 patients. The main outcome variable was the audiologist's patient-management recommendation (treatment, referral, or discharge), compared between the manual and automated tests. Cohen's Kappa was used to quantify intra-observer agreement, Krippendorff's Alpha to quantify inter-observer agreement, and McNemar's test to compare audiologist-rated accuracy of the audiograms. Audiograms were presented in randomised order, and audiologists were blinded to whether they were interpreting a manual or an automated audiogram.
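As an illustration only, the following is a minimal Python sketch of how these three agreement statistics can be computed. It uses hypothetical ratings rather than the study's data, and assumes the scikit-learn, statsmodels, and krippendorff packages; the coding of management decisions and the 2x2 accuracy table below are invented for the example.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score             # intra-observer agreement
from statsmodels.stats.contingency_tables import mcnemar  # paired accuracy ratings
import krippendorff                                       # inter-observer agreement

rng = np.random.default_rng(0)

# Hypothetical management decisions (0=discharge, 1=treat, 2=refer) made by
# one audiologist for the same 42 patients, once per test method.
manual_calls = rng.integers(0, 3, size=42)
auto_calls = rng.integers(0, 3, size=42)

# Intra-observer agreement: one rater, manual vs automated audiograms.
kappa = cohen_kappa_score(manual_calls, auto_calls)

# Inter-observer agreement: rows = 5 audiologists, columns = 42 patients.
ratings = rng.integers(0, 3, size=(5, 42))
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")

# McNemar's test on paired accuracy judgements ("accurate" yes/no for the
# manual vs automated audiogram of the same patient); counts are invented.
table = [[20, 5],   # manual accurate:   auto accurate / auto questioned
         [2, 15]]   # manual questioned: auto accurate / auto questioned
result = mcnemar(table, exact=True)

print(f"Cohen's kappa: {kappa:.2f}")
print(f"Krippendorff's alpha: {alpha:.2f}")
print(f"McNemar p-value: {result.pvalue:.3f}")
```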
RESULTS: Intra-observer agreement between interpretations of manual and automated audiograms was substantial for management outcomes. Inter-observer agreement between clinicians on management decisions was moderate for both manual and automated audiograms. Audiologists were 2.8 times more likely to question the accuracy of an automated audiogram than that of a manual audiogram.
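The abstract does not state how the 2.8 figure was derived; one plausible reading, given the paired McNemar design, is the discordant-pair odds ratio

\[ \mathrm{OR} = \frac{n_{10}}{n_{01}} \approx 2.8, \]

where \(n_{10}\) counts patients whose automated audiogram was questioned but whose manual audiogram was not, and \(n_{01}\) the reverse.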
DISCUSSION: Audiologists show a lack of agreement when interpreting audiograms, whether these are recorded with manual or automated audiometry. The main source of variability in remote audiogram interpretation is therefore likely to be individual clinician variation rather than automation.