Operationalizing a weighted performance scoring model for sustainable e-learning in medical education : insights from expert judgement

dc.contributor.author Oluwadele, Deborah
dc.contributor.author Singh, Yashik
dc.contributor.author Adeliyi, Timothy
dc.date.accessioned 2025-01-22T07:56:02Z
dc.date.available 2025-01-22T07:56:02Z
dc.date.issued 2024-07
dc.description.abstract Any newly developed model or framework needs validation, since its worth is established only through repeated real-life application. The investment made in e-learning in medical education is substantial, as is the expectation of a positive return on that investment. The domain therefore needs data-informed implementation of e-learning, particularly while debate continues about the fitness of e-learning for medical education. Frameworks or models are seldom used to evaluate students' performance in e-learning contexts; when one is used, the Kirkpatrick evaluation model is a common choice, yet it has been widely criticized for lacking constructs that assess technology and its influence on learning. This paper assesses the validity of a model developed to determine the effectiveness of e-learning in medical education, specifically with respect to student performance. The model was validated through a Delphi-based Expert Judgement Technique (EJT); Cronbach's alpha was used to determine the reliability of the proposed model (a brief computational sketch of this check follows the record below), and Simple Correspondence Analysis (SCA) was used to determine whether stability had been reached among the experts. Fourteen experts (professors, senior lecturers, and researchers) with an average of 12 years of experience in designing and evaluating students' performance in e-learning in medical education evaluated the model over two rounds of questionnaires developed to operationalize its constructs. In the first round the model received 64% agreement from the experts; after the second round, 100% agreement was achieved, with each statement averaging 52% strong agreement and 48% agreement across all 14 experts. The evaluation dimension drew the strongest agreement, followed by the design dimension. The results suggest that the model is valid and may be applied as key performance metrics when designing and evaluating e-learning courses in medical education. en_US
dc.description.department Informatics en_US
dc.description.sdg SDG-03: Good health and well-being en_US
dc.description.sdg SDG-04: Quality education en_US
dc.description.uri https://academic-publishing.org/index.php/ejel en_US
dc.identifier.citation Oluwadele, D., Singh, Y. and Adeliyi, T. 2024. "Operationalizing a Weighted Performance Scoring Model for Sustainable e-Learning in Medical Education: Insights from Expert Judgement", Electronic Journal of e-Learning, 22(8), pp. 24-40. https://doi.org/10.34190/ejel.22.8.3427. en_US
dc.identifier.issn 1479-4403 (online)
dc.identifier.other 10.34190/ejel.22.8.3427
dc.identifier.uri http://hdl.handle.net/2263/100235
dc.language.iso en en_US
dc.publisher Academic Publishing International Limited en_US
dc.rights © 2024 Deborah Oluwadele, Yashik Singh, Timothy Adeliyi. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. en_US
dc.subject E-learning evaluation model en_US
dc.subject Medical education en_US
dc.subject Content validation en_US
dc.subject Performance optimization en_US
dc.subject Expert judgment technique en_US
dc.subject SDG-03: Good health and well-being en_US
dc.subject SDG-04: Quality education en_US
dc.title Operationalizing a weighted performance scoring model for sustainable e-learning in medical education : insights from expert judgement en_US
dc.type Article en_US
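
The abstract above notes that Cronbach's alpha was used to check the reliability of the experts' questionnaire responses. The following is a minimal sketch of that kind of reliability check, not the authors' own code: it assumes an experts-by-items matrix of Likert ratings, and the data shown is hypothetical rather than taken from the study.

import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) ratings matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
    """
    k = ratings.shape[1]                         # number of questionnaire items
    item_vars = ratings.var(axis=0, ddof=1)      # sample variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of each expert's total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 14 experts rating 6 statements on a 5-point Likert scale.
rng = np.random.default_rng(42)
ratings = rng.integers(3, 6, size=(14, 6)).astype(float)  # integer ratings in 3..5
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.3f}")

By convention, alpha values above about 0.7 are read as acceptable internal consistency; the paper itself should be consulted for the threshold the authors applied.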

