Active transfer learning for audiogram estimation

dc.contributor.author Twinomurinzi, Hossana
dc.contributor.author Myburgh, Hermanus Carel
dc.contributor.author Barbour, Dennis L.
dc.date.accessioned 2024-12-06T04:59:11Z
dc.date.available 2024-12-06T04:59:11Z
dc.date.issued 2024-03
dc.description DATA AVAILABILITY STATEMENT: The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author. en_US
dc.description.abstract Computational audiology (CA) has grown over the last few years with the improvement of computing power and the growth of machine learning (ML) models. Several audiogram databases are now available and have been used to improve the accuracy of CA models as well as to reduce testing time and diagnostic complexity. However, these CA models have mainly been trained on single populations. This study integrated contextual and prior knowledge from audiogram databases of multiple populations as informative priors to estimate audiograms more precisely, using two mechanisms: (1) a mapping function drawn from feature-based homogeneous Transfer Learning (TL), also known as Domain Adaptation (DA), and (2) Active Learning (Uncertainty Sampling) using a stream-based query mechanism. Simulations of the Active Transfer Learning (ATL) model were tested against a traditional adaptive staircase method akin to the Hughson-Westlake (HW) method for the left ear at frequencies v = 0.25, 0.5, 1, 2, 4, and 8 kHz, resulting in accuracy and reliability improvements. ATL improved on the HW tests from a mean of 41.3 sound stimulus presentations with a reliability of ±9.02 dB down to 25.3 presentations with a reliability of ±1.04 dB. Integrating multiple databases also resulted in classifying the audiograms into 18 phenotypes, which suggests that with increasingly data-driven CA, higher precision is achievable and a re-conceptualisation of the notion of phenotype classifications may be required. The study contributes to CA by identifying an ATL mechanism that leverages existing audiogram databases and CA models across different population groups. Further studies could apply ATL to other psychophysical phenomena. en_US
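The abstract above combines a threshold prior transferred from population audiogram databases with stream-based uncertainty sampling. As an illustration only, the minimal Python sketch below shows that combination for one simulated listener: a Gaussian prior over each frequency's threshold stands in for the transferred population knowledge, and each query presents the stimulus level whose detection outcome is most uncertain under the current posterior. The prior means, psychometric slope, grids, and all function names are hypothetical assumptions for this sketch, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

FREQS_KHZ = [0.25, 0.5, 1, 2, 4, 8]
LEVELS_DB = np.arange(-10, 101, 5)    # candidate stimulus levels (assumed range)
GRID = np.arange(-10, 101, 1.0)       # hypothesis grid for the true threshold
SLOPE = 0.1                           # psychometric slope (assumed)

def psychometric(level, threshold, slope=SLOPE):
    """P(detect | level, threshold): logistic psychometric function."""
    return 1.0 / (1.0 + np.exp(-slope * (level - threshold)))

def make_prior(mu, sigma):
    """Gaussian prior over the threshold, transferred from a source population."""
    p = np.exp(-0.5 * ((GRID - mu) / sigma) ** 2)
    return p / p.sum()

def estimate_threshold(true_threshold, prior, n_queries=8):
    """Stream-based uncertainty sampling at one frequency."""
    post = prior.copy()
    for _ in range(n_queries):
        # Predictive detection probability for every candidate level.
        p_detect = psychometric(LEVELS_DB[:, None], GRID[None, :]) @ post
        # Query the level whose outcome is most uncertain (closest to 0.5).
        level = LEVELS_DB[np.argmin(np.abs(p_detect - 0.5))]
        heard = rng.random() < psychometric(level, true_threshold)
        like = psychometric(level, GRID)
        post = post * (like if heard else 1.0 - like)
        post /= post.sum()
    return GRID @ post  # posterior-mean threshold estimate

# Hypothetical population prior (transfer step) and a simulated listener.
prior_mu = {0.25: 15, 0.5: 15, 1: 20, 2: 25, 4: 35, 8: 45}
for f in FREQS_KHZ:
    true_t = prior_mu[f] + rng.normal(0, 8)
    est = estimate_threshold(true_t, make_prior(prior_mu[f], 15.0))
    print(f"{f:>5} kHz: true {true_t:6.1f} dB, estimate {est:6.1f} dB")

The design point the sketch illustrates is the one the abstract makes: an informative prior shifts the posterior toward plausible thresholds before any stimulus is presented, so far fewer uncertainty-driven queries are needed than a fixed staircase such as Hughson-Westlake would use.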
dc.description.department Electrical, Electronic and Computer Engineering en_US
dc.description.sdg SDG-03: Good health and well-being en_US
dc.description.sdg SDG-09: Industry, innovation and infrastructure en_US
dc.description.uri https://www.frontiersin.org/journals/digital-health en_US
dc.identifier.citation Twinomurinzi, H., Myburgh, H. & Barbour, D.L. (2024) Active transfer learning for audiogram estimation. Frontiers in Digital Health 6:1267799. doi: 10.3389/fdgth.2024.1267799. en_US
dc.identifier.issn 2673-253X (online)
dc.identifier.other 10.3389/fdgth.2024.1267799
dc.identifier.uri http://hdl.handle.net/2263/99785
dc.language.iso en en_US
dc.publisher Frontiers Media en_US
dc.rights © 2024 Twinomurinzi, Myburgh and Barbour. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). en_US
dc.subject Active learning en_US
dc.subject Active transfer learning en_US
dc.subject Audiogram estimation en_US
dc.subject Audiology en_US
dc.subject Audiometry en_US
dc.subject Transfer learning en_US
dc.subject SDG-03: Good health and well-being en_US
dc.subject SDG-09: Industry, innovation and infrastructure en_US
dc.subject Computational audiology en_US
dc.subject Machine learning en_US
dc.title Active transfer learning for audiogram estimation en_US
dc.type Article en_US

