dc.contributor.author | Oluwadele, Deborah |
dc.contributor.author | Singh, Yashik |
dc.contributor.author | Adeliyi, Timothy |
dc.date.accessioned | 2024-05-22T04:51:55Z |
dc.date.available | 2024-05-22T04:51:55Z |
dc.date.issued | 2023 |
dc.description.abstract | The performance evaluation of e-learning in medical education has been the subject of much recent research. Researchers have yet to reach a consensus on the definition of performance or on the constructs, metrics, models, and methods best suited to understanding student performance. Through a systematic review, this study puts forward a working definition of what constitutes performance evaluation to reduce the ambiguity, arbitrariness, and multiplicity surrounding the performance evaluation of e-learning in medical education. A systematic review of published articles on performance evaluation of e-learning in medical education was performed on the Scopus, Web of Science, PubMed, and EBSCOhost databases using search terms derived from the PICOS model. Following the PRISMA guidelines, relevant published papers were identified and exported to EndNote, and screening and quality appraisal were conducted in Rayyan. A total of 3,439 published studies were retrieved and screened against predetermined inclusion and exclusion criteria; 103 studies met all the criteria and were reviewed. The reviewed literature used 30 constructs to operationalize performance, with knowledge and effectiveness the leading constructs: both were used by 60% of the authors of the reviewed studies to define student performance. Knowledge gain, satisfaction, and learning outcome are the most common metrics, used by 81%, 26%, and 15% of the reviewed studies, respectively, to measure student performance. The study found that most researchers neglect the “e”, or electronic, component of e-learning when evaluating performance: the constructs operationalized and the metrics measured focused primarily on learning outcomes, with minimal attention to technology-related metrics or to the influence of the electronic mode of delivery on the learning process or the evaluation outcome. Only 6% of the reviewed studies applied evaluation models, most often the Kirkpatrick model, to guide their evaluation process. Most of the included studies used randomization as an experimental control method, mainly with pre- and post-test surveys; modern evaluation methods were rarely used, with only 1% of the reviewed studies using Google Analytics and 2% using data from a learning management system. This study extends the existing body of knowledge on the performance evaluation of e-learning in medical education by providing a convergence of constructs, metrics, models, and methods, and by proposing a roadmap, synthesized from the findings and the gaps identified in the review, to guide the student performance evaluation process. The roadmap will help alert researchers to grey areas to consider when evaluating performance, supporting higher-quality research outputs in the domain. | en_US
dc.description.department | Informatics | en_US
dc.description.librarian | am2024 | en_US
dc.description.sdg | SDG-04: Quality Education | en_US
dc.description.uri | https://academic-publishing.org/index.php/ejel/index | en_US
dc.identifier.citation | Oluwadele, D., Singh, Y. & Adeliyi, T. 2023, 'An explorative review of the constructs, metrics, models, and methods for evaluating e-learning performance in medical education', The Electronic Journal of e-Learning, vol. 21, no. 5, pp. 394-412, doi: 10.34190/ejel.21.5.3089. | en_US
dc.identifier.issn | 1479-4403 (online) |
dc.identifier.other | 10.34190/ejel.21.5.3089 |
dc.identifier.uri | http://hdl.handle.net/2263/96129 |
dc.language.iso | en | en_US
dc.publisher | Academic Publishing Limited | en_US
dc.rights | © 2023 Oluwaseun Oluwadele, Yashik Singh, Timothy Adeliyi. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | en_US
dc.subject | e-Learning | en_US
dc.subject | e-Learning evaluation | en_US
dc.subject | Factors | en_US
dc.subject | e-Learning performance | en_US
dc.subject | Medical education | en_US
dc.subject | Roadmap | en_US
dc.subject | Systematic literature review | en_US
dc.subject | SDG-04: Quality Education | en_US
dc.title | An explorative review of the constructs, metrics, models, and methods for evaluating e-learning performance in medical education | en_US
dc.type | Article | en_US