
BACKGROUND

Massive open online courses (MOOCs) have potential for broad educational impact due to the large number of learners undertaking these courses. Despite their reach, there is a lack of knowledge about which methods are used for evaluating these courses.

OBJECTIVE

This review aims to identify current MOOC evaluation methods in order to inform future study designs.

METHODS

We systematically searched the following databases for studies published from January 2008 until October 2018: (1) SCOPUS; (2) Education Resources Information Center (ERIC); (3) IEEE Xplore; (4) Medline/PubMed; (5) Web of Science; (6) British Education Index; and (7) the Google Scholar search engine. Two independent reviewers screened the titles and abstracts of the studies. Studies published in English that evaluated MOOCs were included. The study design of the evaluations, the underlying motivation for the evaluation studies, and the data collection and data analysis methods were quantitatively and qualitatively analysed. The quality of the included studies was appraised using the Cochrane Collaboration Risk of Bias Tool for randomised controlled trials (RCTs) and the NIH National Heart, Lung and Blood Institute quality assessment tools for observational cohort studies and for before-after (pre-post) studies with no control group.

RESULTS

The initial search yielded 3275 studies, 776 of which were duplicates. Thirty-three eligible studies were included in this review. Studies mostly had a cross-sectional design evaluating one version of a MOOC, and most had a learner-focused, teaching-focused, or platform-focused motivation for evaluating the MOOC. The most commonly used data collection methods were surveys, learning management system data, and quiz grades; the most commonly used data analysis methods were descriptive and inferential statistics. The methods for evaluating the outcomes of these courses are diverse and unstructured. Most cross-sectional studies received a low quality assessment, whereas the randomised controlled trial and the quasi-experimental studies received better quality assessments.

CONCLUSIONS

MOOC evaluation data collection and data analysis methods should be chosen carefully based on the aim of the evaluation. Currently available MOOC evaluations are subject to methodological bias, which should be taken into account to reduce its effects on evaluation findings. Studies could reduce bias in several ways, for example by using pre-MOOC measures for comparison or by controlling for confounding variables. Future MOOC evaluations should consider using more diverse data sources and data analysis methods.

CLINICALTRIAL

N/A

Original publication

DOI

10.2196/preprints.13851

Type

Journal article

Publisher

JMIR Publications Inc.

Publication Date

27/02/2019