Our initial intention was to explore the potential for implementing outcome-based assessment practices in a first-year statistics unit. This evolved into a search for a pedagogy that enables clear definition of the desired learning outcomes for our students, and alignment of instruction, student resources and assessment with these outcomes. Previous learning and assessment materials were used as starting points for the pilot study. The aim was to develop a model that could feasibly be repeated in our subject and related ones, and potentially applied in other disciplines. A recent revision of Bloom’s Taxonomy was chosen to facilitate classification of an intended hierarchy of desired knowledge, processes and skills. Unit objectives were framed behaviourally to provide identifiable indicators of student achievement of the outcomes. Assessment was classified in the same way to align the desired and observable outcomes. Instructional techniques and authentic assessment were chosen commensurate with student interest and the development of higher-order cognitive skills. An evaluation rubric was used to clarify the requirements of the assessment task and provided focus material to foster the organisation of responses. To prevent students from privileging one outcome above another, no marking schema was included. Students worked independently in teams of two to complete complementary sections of the assignment; however, discussion of their work with their partner was not only expected but encouraged. The team solution fostered cooperative learning without providing opportunities for plagiarism. Reflective practice and feedback from students and staff throughout the study led to refinements of the procedures. Summative evaluation encompassed the results of student assessment, together with surveys and interviews of both staff and students. Semantic problems made it difficult to use the revised Bloom’s Taxonomy to classify assessment questions.
The word ‘explain’, for example, at times required demonstration with a simple diagram, at others identification of the elements of a formula, or even analysis to determine the relevance of a particular model. There is thus an obvious potential for variability in interpreting the skills required by some questions, and hence the possibility of unreliable classification of objectives and assessment items. A need therefore arises to refine the taxonomy for more specific use within the discipline.
Morris, M., Porter, A., & Griffiths, D. (2004). Assessment is bloomin' luverly: Developing assessment that enhances learning. Journal of University Teaching & Learning Practice, 1(2), 46-62.