Asynchronous interpretation of manual and automated audiometry : agreement and reliability


dc.contributor.author Brennan-Jones, Christopher G.
dc.contributor.author Eikelboom, Robert H.
dc.contributor.author Bennett, Rebecca J.
dc.contributor.author Tao, Karina F.M.
dc.contributor.author Swanepoel, De Wet
dc.date.accessioned 2019-05-15T08:45:46Z
dc.date.available 2019-05-15T08:45:46Z
dc.date.issued 2018-01
dc.description.abstract INTRODUCTION: Remote interpretation of automated audiometry offers the potential to enable asynchronous tele-audiology assessment and diagnosis in areas where synchronous tele-audiometry may not be possible or practical. The aim of this study was to compare remote interpretation of manual and automated audiometry. METHODS: Five audiologists each interpreted manual and automated audiograms obtained from 42 patients. The main outcome variable was the audiologist’s recommendation for patient management (treatment recommendation, referral or discharge), compared between the manual and automated audiometry tests. Audiograms were randomised and audiologists were blinded as to whether they were interpreting a manual or automated audiogram. Cohen’s Kappa and Krippendorff’s Alpha were used to quantify intra- and inter-observer agreement, respectively, and McNemar’s test was used to assess the audiologist-rated accuracy of audiograms. RESULTS: Intra-observer agreement between manual and automated audiogram interpretations was substantial for management outcomes. Inter-observer agreement between clinicians on management decisions was moderate for both manual and automated audiograms. Audiologists were 2.8 times more likely to question the accuracy of an automated audiogram than that of a manual audiogram. DISCUSSION: There is a lack of agreement between audiologists when interpreting audiograms, whether recorded with automated or manual audiometry. The main source of variability in remote audiogram interpretation is likely to be individual clinician variation rather than automation. en_ZA
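The abstract names Cohen’s Kappa as the intra-observer agreement statistic. As a minimal sketch of how such a statistic is computed from paired categorical ratings (the rating data and function below are invented for illustration and are not the authors’ analysis code):

```python
# Illustrative sketch (not the study's actual analysis): Cohen's Kappa
# measures agreement between two sets of categorical ratings, corrected
# for the agreement expected by chance.
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's Kappa for two equal-length lists of categorical ratings."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement, from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    if p_e == 1.0:  # both raters used a single identical label throughout
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical management decisions for six audiograms, rated twice
# (e.g. once from a manual and once from an automated audiogram).
manual = ["treat", "refer", "discharge", "treat", "refer", "treat"]
automated = ["treat", "refer", "treat", "treat", "refer", "discharge"]
print(round(cohen_kappa(manual, automated), 3))  # -> 0.455
```

Values near 0.41–0.60 are conventionally read as "moderate" agreement and 0.61–0.80 as "substantial", the bands referenced in the abstract’s results.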
dc.description.department Speech-Language Pathology and Audiology en_ZA
dc.description.librarian hj2019 en_ZA
dc.description.uri http://jtt.sagepub.com en_ZA
dc.identifier.citation Brennan-Jones, C.G., Eikelboom, R.H., Bennett, R.J. et al. 2018, 'Asynchronous interpretation of manual and automated audiometry: agreement and reliability', Journal of Telemedicine and Telecare, vol. 24, no. 1, pp. 37-43. en_ZA
dc.identifier.issn 1357-633X (print)
dc.identifier.issn 1758-1109 (online)
dc.identifier.other 10.1177/1357633X16669899
dc.identifier.uri http://hdl.handle.net/2263/69132
dc.language.iso en en_ZA
dc.publisher Sage en_ZA
dc.rights © The Author(s) 2016 en_ZA
dc.subject Automated audiometry en_ZA
dc.subject Audiometry en_ZA
dc.subject Audiology en_ZA
dc.subject eHealth en_ZA
dc.subject Hearing loss en_ZA
dc.subject Tele-audiology en_ZA
dc.subject Telehealth en_ZA
dc.subject Need en_ZA
dc.subject Impact en_ZA
dc.subject Accuracy en_ZA
dc.subject Hearing assessment en_ZA
dc.subject Clinical validation en_ZA
dc.subject Pure-tone audiometry en_ZA
dc.title Asynchronous interpretation of manual and automated audiometry : agreement and reliability en_ZA
dc.type Postprint Article en_ZA

