Active transfer learning for audiogram estimation

dc.contributor.authorTwinomurinzi, Hossana
dc.contributor.authorMyburgh, Hermanus Carel
dc.contributor.authorBarbour, Dennis L.
dc.date.accessioned2024-12-06T04:59:11Z
dc.date.available2024-12-06T04:59:11Z
dc.date.issued2024-03
dc.descriptionDATA AVAILABILITY STATEMENT: The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.en_US
dc.description.abstractComputational audiology (CA) has grown over the last few years with the improvement of computing power and the growth of machine learning (ML) models. Several audiogram databases exist today which have been used to improve the accuracy of CA models as well as to reduce testing time and diagnostic complexity. However, these CA models have mainly been trained on single populations. This study integrated contextual and prior knowledge from audiogram databases of multiple populations as informative priors to estimate audiograms more precisely using two mechanisms: (1) a mapping function drawn from feature-based homogeneous Transfer Learning (TL), also known as Domain Adaptation (DA), and (2) Active Learning (Uncertainty Sampling) using a stream-based query mechanism. Simulations of the Active Transfer Learning (ATL) model were tested against a traditional adaptive staircase method akin to the Hughson-Westlake (HW) method for the left ear at frequencies ν = 0.25, 0.5, 1, 2, 4, 8 kHz, resulting in accuracy and reliability improvements. ATL improved HW tests from a mean of 41.3 sound stimuli presentations and a reliability of ±9.02 dB down to 25.3 presentations and ±1.04 dB. Integrating multiple databases also resulted in classifying the audiograms into 18 phenotypes, which means that with increasing data-driven CA, higher precision is achievable, and a possible re-conceptualisation of the notion of phenotype classifications might be required. The study contributes to CA in identifying an ATL mechanism to leverage existing audiogram databases and CA models across different population groups. Further studies can be done for other psychophysical phenomena using ATL.en_US
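The abstract's second mechanism, stream-based Active Learning via Uncertainty Sampling, can be illustrated with a minimal sketch. This is not the authors' implementation: the psychometric slope, the candidate-threshold grid, and all function names are illustrative assumptions. The sketch estimates a single-frequency hearing threshold by repeatedly presenting the stimulus level whose predicted response probability is closest to 0.5 (maximum uncertainty) and updating a discrete Bayesian posterior; a transfer-learning prior from another population could be supplied in place of the flat prior.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_heard(level, threshold, slope=5.0):
    # Psychometric function: probability the listener reports hearing the tone
    # at a given level (dB) for a given true threshold. Slope is assumed.
    return sigmoid((level - threshold) / slope)

def estimate_threshold(true_threshold, n_trials=25, prior=None, seed=0):
    rng = random.Random(seed)
    grid = list(range(-10, 101, 5))        # candidate thresholds, dB
    levels = list(range(-10, 101, 5))      # presentable stimulus levels, dB
    # Informative prior (e.g. mapped from another population's database)
    # can be passed in; otherwise start flat.
    post = prior[:] if prior else [1.0 / len(grid)] * len(grid)
    for _ in range(n_trials):
        # Uncertainty sampling: pick the level whose predicted response
        # probability under the current posterior-mean threshold is
        # closest to 0.5.
        mean_t = sum(t * w for t, w in zip(grid, post))
        level = min(levels, key=lambda x: abs(p_heard(x, mean_t) - 0.5))
        # Simulated listener response (stream-based query).
        heard = rng.random() < p_heard(level, true_threshold)
        # Bayesian update of the posterior over candidate thresholds.
        like = [p_heard(level, t) if heard else 1.0 - p_heard(level, t)
                for t in grid]
        post = [w * l for w, l in zip(post, like)]
        s = sum(post)
        post = [w / s for w in post]
    return sum(t * w for t, w in zip(grid, post))

estimate = estimate_threshold(true_threshold=40)
```

In this toy setting the posterior mean typically settles near the true threshold in a few dozen trials, which is the intuition behind ATL needing fewer stimulus presentations than a fixed staircase.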
dc.description.departmentElectrical, Electronic and Computer Engineeringen_US
dc.description.sdgSDG-03: Good health and well-beingen_US
dc.description.sdgSDG-09: Industry, innovation and infrastructureen_US
dc.description.urihttps://www.frontiersin.org/journals/digital-healthen_US
dc.identifier.citationTwinomurinzi, H., Myburgh, H. & Barbour, D.L. (2024) Active transfer learning for audiogram estimation. Frontiers in Digital Health 6:1267799. doi: 10.3389/fdgth.2024.1267799.en_US
dc.identifier.issn2673-253X (online)
dc.identifier.other10.3389/fdgth.2024.1267799
dc.identifier.urihttp://hdl.handle.net/2263/99785
dc.language.isoenen_US
dc.publisherFrontiers Mediaen_US
dc.rights© 2024 Twinomurinzi, Myburgh and Barbour. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).en_US
dc.subjectActive learningen_US
dc.subjectActive transfer learningen_US
dc.subjectAudiogram estimationen_US
dc.subjectAudiologyen_US
dc.subjectAudiometryen_US
dc.subjectTransfer learningen_US
dc.subjectSDG-03: Good health and well-beingen_US
dc.subjectSDG-09: Industry, innovation and infrastructureen_US
dc.subjectComputational audiologyen_US
dc.subjectMachine learningen_US
dc.titleActive transfer learning for audiogram estimationen_US
dc.typeArticleen_US

Files

Original bundle

Name:
Twinomurinzi_Active_2024.pdf
Size:
1.73 MB
Format:
Adobe Portable Document Format
Description:
Article

License bundle

Name:
license.txt
Size:
1.71 KB
Format:
Description:
Item-specific license agreed upon to submission