Abstract

The paramedic profession is evolving rapidly and has witnessed significant expansion in the scope of practice and in public expectations of the paramedic role in recent years. Increasing demands for greater paramedic knowledge and skills have implications for the university programs tasked with pre-employment training. The certification of paramedic student knowledge typically occurs incrementally across degree programs, with aggregate results used to determine student qualification. There are concerns regarding the learning sustainability of this approach: the narrowed focus of assessment practices within siloed subjects often neglects the more holistic and integrated knowledge requirements of paramedic practice. Programmatic assessment is becoming increasingly common within medical education, offering more comprehensive, longitudinal information about student knowledge, ability and progress, obtained across an entire program of study. A common instrument of programmatic assessment is the progress test, which evaluates student understanding against the full breadth of expectations of the discipline and is administered repeatedly across an entire curriculum, regardless of student year level. Our project explores the development, implementation and evaluation of modified progress testing approaches within a single-semester capstone undergraduate paramedic topic. We describe the first reported approaches to interpreting the breadth of knowledge requirements for the discipline and preparing and validating these as a multiple-choice question (MCQ) test instrument. We examined students at three points across the semester: twice with an identical MCQ test spaced 10 weeks apart, and finally with an oral assessment informed by each student's individual results on the second test. We evaluated the changes in student performance between the two MCQ tests and the results of the final oral assessment, and analysed student feedback relating to their perceptions and experiences. Mean correct responses increased by 65 percent between the first and second tests, with substantial declines in the numbers of incorrect and 'don't know' responses. Students also achieved a high mean score in the oral assessment (viva), and feedback showed broad agreement about the significant impact these approaches have had on learning growth.