Journal of Student Engagement: Education Matters

Abstract

A central objective of educational assessment is to maximise the accuracy (validity) and consistency (reliability) of the methods used to assess students’ competencies. Different tests, however, often employ different methods of assessing the same domain-specific skills (e.g., spelling). As a result, questions have arisen concerning the legitimacy of using these various modes interchangeably as a proxy for students’ abilities. To investigate the merit of these contentions, this study examined university students’ spelling performance across three commonly employed test modalities (i.e., dictation, error correction, proofreading). To further examine whether these test types vary in the cognitive load they place on test takers, correlations between working memory and spelling scores were also examined. Results indicated that the modes of assessment were not equivalent indices of individuals’ orthographic knowledge. Specifically, performance in the dictation and error correction conditions was superior to that in the proofreading condition. Moreover, correlational analyses revealed that working memory accounted for significant variance in performance in the dictation and error correction conditions (but not in the proofreading condition). These findings suggest that not all standardised assessment methods accurately capture student competencies and that these domain-specific assessments should seek to minimise the domain-general cognitive demands placed on test takers.
