Effectiveness of e-Learning in a Medical School 2.0 Model: Comparison of Item Analysis for Student-Generated vs. Faculty-Generated Multiple-Choice Questions
Bryan W. Janzen, Connor Sommerfeld, Adrian C.C. Gooi
Abstract
Background: Early reports in the literature describe the use of student-generated questions both as a method of student learning and as a means of augmenting exam question banks. Reports on the performance of student-generated questions versus faculty-generated questions, however, remain limited. This study aims to compare the question performance of student-generated versus faculty-generated multiple-choice questions (MCQs).
Objectives: To determine whether student-generated questions, created using mobile audience response systems and online discussion boards, have item discrimination scores similar to those of faculty-generated questions.
Methods: A team-based learning session was used to create 113 student-generated multiple-choice questions (SGQs). A 20-question MCQ quiz, composed of 10 randomly selected SGQs and 10 randomly selected faculty-generated multiple-choice questions (FGQs), was administered to a second-year medical school class. Item analysis was performed on the test results.
Results: The data showed no statistically significant difference in point-biserial scores between the two groups (average point-biserial 0.31 for SGQs vs 0.36 for FGQs, p=0.14), with 90% of student-generated and 100% of faculty-generated questions meeting a point-biserial cut-off of >0.2. Interestingly, the student-generated questions were significantly more difficult than the faculty-generated questions (item difficulty score 0.46 for SGQs vs 0.69 for FGQs, p=0.003).
Conclusions: This study suggests that student-generated MCQs have item discrimination scores similar to those of faculty-generated MCQs, but may be more difficult questions.
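As context for the two metrics reported above, the following is a minimal sketch of how item difficulty (proportion of examinees answering correctly) and point-biserial discrimination (correlation between item correctness and total test score) are conventionally computed. The formulas are the standard ones; the example data are illustrative only and are not from this study.

```python
import statistics
from math import sqrt

def item_difficulty(item_scores):
    """Proportion of examinees who answered the item correctly (0 to 1).

    item_scores: list of 0/1 values, one per examinee.
    """
    return sum(item_scores) / len(item_scores)

def point_biserial(item_scores, total_scores):
    """Point-biserial correlation between item correctness and total score.

    r_pb = (M1 - M0) / s * sqrt(p * q), where M1 and M0 are the mean total
    scores of examinees who got the item right and wrong, s is the
    population SD of total scores, p is item difficulty, and q = 1 - p.
    """
    p = item_difficulty(item_scores)
    q = 1 - p
    m1 = statistics.mean(t for s, t in zip(item_scores, total_scores) if s == 1)
    m0 = statistics.mean(t for s, t in zip(item_scores, total_scores) if s == 0)
    s = statistics.pstdev(total_scores)  # population standard deviation
    return (m1 - m0) / s * sqrt(p * q)

# Illustrative data: 5 examinees, one item, and their total test scores
item = [1, 1, 1, 0, 0]
totals = [18, 16, 14, 12, 10]
print(item_difficulty(item))        # 0.6
print(point_biserial(item, totals)) # about 0.87, above the >0.2 cut-off
```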