Publication Details

Kaltoft, M. K., Nielsen, J. B., Salkeld, G., Lander, J. & Dowie, J. (2015). Bringing Feedback in From the Outback via a Generic and Preference-Sensitive Instrument for Course Quality Assessment. JMIR Research Protocols, 4 (1), e15-1-e15-7.


Background: Much effort and many resources have been put into developing ways of eliciting valid and informative student feedback on courses in medical, nursing, and other health professional schools. Whatever their motivation, items, and setting, the response rates have usually been disappointingly low, and there seems to be an acceptance that the results are potentially biased.

Objective: The objective of the study was to look at an innovative approach to course assessment by students in the health professions. This approach was designed to make it an integral part of their educational experience, rather than a marginal, terminal, and optional add-on as "feedback". It becomes a weighted, but ungraded, part of the course assignment requirements.

Methods: A ten-item, two-part Internet instrument, MyCourseQuality (MCQ-10D), was developed following a purposive review of previous instruments. Shorthand labels for the criteria are: Content, Organization, Perspective, Presentations, Materials, Relevance, Workload, Support, Interactivity, and Assessment. The assessment is unique in being dually personalized. In part 1, at the beginning of the course, the student enters their importance weights for the ten criteria. In part 2, at the completion of the course, they rate the course on the same criteria. Their ratings and weightings are combined in a simple expected-value calculation to produce their dually personalized and decomposable MCQ score. Satisfactory (technical) completion of both parts contributes 10% of the marks available in the course. Providers are required to make the relevant characteristics of the course fully transparent at enrollment, and the course is to be rated as offered. A separate item appended to the survey allows students to suggest changes to what is offered. Students also complete (anonymously) the standard feedback form in the setting concerned.
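The "simple expected-value calculation" described above can be sketched as a weighted average of the ten criterion ratings. The details below are assumptions for illustration only (the abstract does not specify the rating scale or weight normalization): here weights are normalized to sum to 1, and ratings are taken on a 0-100 scale.

```python
# Illustrative sketch of a dually personalized MCQ score: each student's
# own importance weights (part 1) are combined with their own ratings
# (part 2) in an expected-value (weighted-average) calculation.

CRITERIA = ["Content", "Organization", "Perspective", "Presentations",
            "Materials", "Relevance", "Workload", "Support",
            "Interactivity", "Assessment"]

def mcq_score(weights, ratings):
    """Return the overall MCQ score and its per-criterion breakdown.

    weights -- dict mapping each criterion to that student's importance weight
    ratings -- dict mapping each criterion to that student's course rating
    """
    total_weight = sum(weights[c] for c in CRITERIA)
    # Normalize weights so the score is a weighted average of the ratings;
    # keeping per-criterion contributions makes the score decomposable.
    contributions = {c: (weights[c] / total_weight) * ratings[c]
                     for c in CRITERIA}
    return sum(contributions.values()), contributions

# Example: a student who weights all criteria equally and rates each at 70
# obtains an overall score of 70, with each criterion contributing 7.
weights = {c: 1 for c in CRITERIA}
ratings = {c: 70 for c in CRITERIA}
score, parts = mcq_score(weights, ratings)
```

Because the per-criterion contributions are retained, the score can be decomposed to show which criteria most influenced an individual (or aggregated group) result.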
Results: Piloting in a medical school and health professional school will establish the organizational feasibility and acceptability of the approach (a version of which has been employed in one medical school previously), as well as its impact on provider behavior and intentions, and on student engagement and responsiveness. The priorities for future improvements in terms of the specified criteria are identified at both the individual and group level. The group results from MCQ will be compared with those from the standard feedback questionnaire, which will also be completed anonymously by the same students (or some percentage of them).

Conclusions: We present a protocol for the piloting of a student-centered, dually personalized course quality instrument that forms part of the assignment requirements and is therefore an integral part of the course. If, and how, such an essentially formative Student-Reported Outcome or Experience Measure can be used summatively, at unit or program level, remains to be determined, and is not our concern here.


