Abstract:
Computational audiology (CA) has grown over the last few years with improvements in computing power and advances in machine learning (ML) models. Several audiogram databases now exist and have been used to improve the accuracy of CA models as well as to reduce testing time and diagnostic complexity. However, these CA models have mainly been trained on single populations. This study integrated contextual and prior knowledge
from audiogram databases of multiple populations as informative priors to
estimate audiograms more precisely using two mechanisms: (1) a mapping
function drawn from feature-based homogeneous Transfer Learning (TL), also known as Domain Adaptation (DA), and (2) Active Learning (Uncertainty
Sampling) using a stream-based query mechanism. In simulations, the Active Transfer Learning (ATL) model was tested against a traditional adaptive staircase method akin to the Hughson-Westlake (HW) method for the left ear at frequencies of 0.25, 0.5, 1, 2, 4, and 8 kHz, resulting in accuracy and reliability improvements. ATL improved on HW tests, reducing them from a mean of 41.3 sound stimulus presentations with a reliability of ±9.02 dB to 25.3 presentations with ±1.04 dB. Integrating
multiple databases also resulted in the audiograms being classified into 18 phenotypes, which suggests that as CA becomes increasingly data-driven, higher precision is achievable and a re-conceptualisation of the notion of phenotype classifications might be required. The study contributes to CA by
identifying an ATL mechanism to leverage existing audiogram databases and
CA models across different population groups. Further studies could apply ATL to other psychophysical phenomena.
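
For illustration, here is a minimal sketch of the HW-style adaptive staircase baseline referenced above, assuming a simulated listener whose responses follow a logistic psychometric function. The names (hughson_westlake, slope_db) and the simplified 2-ascending-hits stopping rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def hughson_westlake(true_threshold_db, start_db=40, slope_db=2.0,
                     max_presentations=50, rng=None):
    """'Down 10, up 5' staircase for a single frequency.  The simulated
    listener responds stochastically via a logistic psychometric
    function; threshold is the first level heard twice on ascending
    presentations (a simplified 2-of-3-style criterion)."""
    rng = rng or np.random.default_rng()
    level, ascending, hits = start_db, False, {}
    for n in range(1, max_presentations + 1):
        p_heard = 1.0 / (1.0 + np.exp(-(level - true_threshold_db) / slope_db))
        heard = rng.random() < p_heard
        if ascending and heard:
            hits[level] = hits.get(level, 0) + 1
            if hits[level] >= 2:           # heard twice on ascent -> threshold
                return level, n
        if heard:
            level, ascending = level - 10, False   # down 10 dB after response
        else:
            level, ascending = level + 5, True     # up 5 dB after no response
    return level, max_presentations
```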
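And a sketch of the ATL idea the abstract describes: an informative prior built from pooled multi-population audiogram data (standing in for the feature-based homogeneous TL/DA mapping step) combined with stream-based uncertainty sampling, where candidate stimuli arrive one at a time and the listener is queried only when the model's predicted response is sufficiently uncertain. This is a simplified per-frequency grid-posterior sketch under assumed names (ATLAudiogram, mapped_db, run_stream), not the paper's implementation.

```python
import numpy as np

FREQS_KHZ = [0.25, 0.5, 1, 2, 4, 8]
GRID_DB = np.arange(-10, 101, 5)           # candidate threshold levels

def p_respond(level_db, threshold_db, slope_db=2.0):
    """Logistic psychometric function: P(response | level, threshold)."""
    return 1.0 / (1.0 + np.exp(-(level_db - threshold_db) / slope_db))

class ATLAudiogram:
    """Independent grid posterior over the threshold at each frequency.
    The prior is a histogram of thresholds from a pooled multi-population
    database assumed to be already mapped into the target domain (the TL
    step); `mapped_db` is a hypothetical (n_subjects, n_freqs) array."""
    def __init__(self, mapped_db):
        self.post = np.stack([
            np.histogram(mapped_db[:, j], bins=len(GRID_DB),
                         range=(GRID_DB[0], GRID_DB[-1]))[0] + 1.0
            for j in range(len(FREQS_KHZ))]).astype(float)
        self.post /= self.post.sum(axis=1, keepdims=True)

    def predictive_p(self, j, level_db):
        """Marginal predicted probability of a response to a stimulus."""
        return float((self.post[j] * p_respond(level_db, GRID_DB)).sum())

    def update(self, j, level_db, heard):
        """Bayesian update of the threshold posterior at frequency j."""
        like = p_respond(level_db, GRID_DB)
        self.post[j] *= like if heard else 1.0 - like
        self.post[j] /= self.post[j].sum()

def run_stream(model, true_thresholds, n_queries=25, band=0.15, rng=None):
    """Stream-based uncertainty sampling: candidate stimuli arrive one
    at a time and are presented only when the predicted response falls
    inside the uncertain band around 0.5; confident candidates are
    discarded without querying the listener."""
    rng = rng or np.random.default_rng(0)
    queried = 0
    for _ in range(10_000):                # cap on streamed candidates
        if queried >= n_queries:
            break
        j = int(rng.integers(len(FREQS_KHZ)))
        level = float(rng.choice(GRID_DB))
        if abs(model.predictive_p(j, level) - 0.5) > 0.5 - band:
            continue                       # model already confident: skip
        heard = rng.random() < p_respond(level, true_thresholds[j])
        model.update(j, level, heard)
        queried += 1
    return (model.post * GRID_DB).sum(axis=1)   # posterior-mean audiogram
```

The design intuition matches the reported numbers: a database-derived prior concentrates the posterior before any stimulus is played, so fewer candidate stimuli fall in the uncertain band and fewer presentations are needed than in a staircase that starts from scratch.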