Abstract:
Any newly developed model or framework requires validation, because it is intended to serve real-life applications. The investment made in e-learning in medical education is substantial, as is the expectation of a positive return on that investment. The medical education domain therefore requires a data-informed implementation of e-learning, as the debate about the fitness of e-learning for medical education continues. The domain seldom employs frameworks or models to evaluate students' performance in e-learning contexts; when one is used, the Kirkpatrick evaluation model is a common choice.
This model has faced significant criticism for its failure to incorporate constructs that assess technology and its influence on learning. This paper assesses the validity of a model developed to determine the effectiveness of e-learning in medical education, with a specific focus on student performance. The model was validated through a Delphi-based Expert Judgement Technique (EJT), and Cronbach's alpha was used to determine its reliability.
Simple Correspondence Analysis (SCA) was used to determine whether stability had been reached among the experts' responses. Fourteen experts (professors, senior lecturers, and researchers), with an average of 12 years of experience in designing and evaluating students' performance in e-learning in medical education, evaluated the model through two rounds of questionnaires developed to operationalize its constructs.
In the first round, the model received 64% agreement from the experts; after the second round, 100% agreement was achieved, with every statement receiving an average of 52% strong agreement and 48% agreement from the 14 experts. The evaluation dimension drew the strongest agreement, followed by the design dimension. The results suggest that the model is valid and may be applied as Key Performance Metrics when designing and evaluating e-learning courses in medical education.
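For context, the reliability measure named above has a standard form: for a questionnaire of $k$ items, Cronbach's alpha is

\[
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the total score across respondents; a value of roughly 0.7 or higher is conventionally taken to indicate acceptable internal consistency (the threshold applied in this study is not specified here).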