Massive Open Online Course (MOOC) Evaluation Methods: A Systematic Review
Alturkistani A., Lam C., Foley K., Stenfors T., Blum E., Van Velthoven M., Meinert E.
Background: Massive open online courses (MOOCs) have the potential for broad educational impact due to the large number of learners undertaking these courses. Despite their reach, little is known about which methods are used to evaluate these courses.

Objective: This review aims to identify current MOOC evaluation methods in order to inform future study designs.

Methods: We systematically searched the following databases for studies published between January 2008 and October 2018: (1) Scopus; (2) Education Resources Information Center (ERIC); (3) IEEE Xplore; (4) MEDLINE/PubMed; (5) Web of Science; (6) British Education Index; and (7) the Google Scholar search engine. Two reviewers independently screened the titles and abstracts of the studies. Studies published in English that evaluated MOOCs were included. The study designs, underlying motivations, data collection methods, and data analysis methods of the evaluations were analyzed quantitatively and qualitatively. The quality of the included studies was appraised using the Cochrane Collaboration Risk of Bias Tool for randomized controlled trials (RCTs) and the National Institutes of Health (NIH) National Heart, Lung, and Blood Institute quality assessment tools for observational cohort studies and for before-after (pre-post) studies with no control group.

Results: The initial search yielded 3275 studies, of which 33 eligible studies were included in this review. Sixteen studies used a quantitative study design, 11 a qualitative design, and 6 a mixed-methods design. Eighteen studies examined learner characteristics and behavior, and 23 examined learning outcomes and experiences. Twelve studies used 1 data source, 11 used 2, seven used 3, two used 4, and one used 5; thus, three studies used more than three data sources in their evaluation. In terms of data analysis, quantitative methods were the most prominent, with descriptive and inferential statistics the two most frequently used.
The 26 studies with a cross-sectional design received low quality assessments, whereas randomized controlled trials and quasi-experimental studies received higher quality assessments.

Conclusions: MOOC evaluation data collection and data analysis methods should be chosen carefully based on the aim of the evaluation. MOOC evaluations are subject to bias, which could be reduced by using pre-MOOC measures for comparison or by controlling for confounding variables. Future MOOC evaluations should consider using more diverse data sources and data analysis methods.