Data Collection Approaches to Enable Evaluation of a Massive Open Online Course About Data Science for Continuing Education in Health Care: Case Study (Preprint)
Alturkistani A., Majeed A., Car J., Brindley D., Wells G., Meinert E.
<sec> <title>BACKGROUND</title> <p>This study presents learner perceptions of a pilot massive open online course (MOOC).</p> </sec> <sec> <title>OBJECTIVE</title> <p>The objective of this study was to explore data collection approaches, based on semistructured interviews and the Kirkpatrick evaluation model, to help inform future MOOC evaluations.</p> </sec> <sec> <title>METHODS</title> <p>A total of 191 learners joined 2 course runs of a limited trial of the MOOC, and 7 learners volunteered to be interviewed for the study. The study design drew on semistructured interviews of 2 learners, transcribed and analyzed using Braun and Clarke’s method for thematic coding. This limited participant set was used to identify how the Kirkpatrick evaluation model could be applied to evaluate further implementations of the course at scale.</p> </sec> <sec> <title>RESULTS</title> <p>The study identified several themes and subthemes that could inform further analysis: learner background (educational, professional, and topic significance), MOOC learning (learning achievement and MOOC application), and MOOC features (MOOC positives, MOOC negatives, and networking). There were insufficient data points to perform a Kirkpatrick evaluation.</p> </sec> <sec> <title>CONCLUSIONS</title> <p>Semistructured interviews can provide a valuable in-depth analysis of learners’ experience of a MOOC. However, completing a Kirkpatrick evaluation requires sufficient data sources to enable triangulation; for example, data from precourse and postcourse surveys, quizzes, and test results could strengthen the evaluation methodology.</p> </sec>