Student evaluation surveys provide instructors with feedback on development opportunities, and they form the basis of promotion and tenure decisions. Student evaluations have been studied extensively, but one hitherto neglected dimension is the measurement itself: which questions to ask, how to ask them, and what answer options to offer students to obtain the most valid results. This study investigates whether cross-cultural response styles affect the validity of student evaluations. If they do, the student mix in a class can affect an instructor's evaluation, potentially producing biased feedback and prompting inappropriate decisions by university committees. This article discusses two main response styles, demonstrates the nature of the bias they can cause in student evaluation surveys using simulated data, and illustrates three cases, based on real student evaluation data, in which marketing instructors' teaching-quality assessments may be heavily biased by response styles. The authors propose a simple method to check for response-style contamination in student evaluation data and discuss practical implications.
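The bias mechanism the abstract describes can be illustrated with a minimal simulation sketch. The parameters below (the latent attitude value, the style-distortion rules, the 5-point scale) are illustrative assumptions for exposition only, not the article's actual simulation design: every student perceives the same teaching quality, yet a class of extreme-style responders and a class of midpoint-style responders produce very different mean ratings.

```python
# Hedged sketch (assumed parameters, not the article's simulation):
# show how response styles alone can shift a class-mean rating
# even when the underlying perceived teaching quality is identical.
import random

def rate(latent, style, rng):
    """Map a latent attitude in [0, 1] to a 1-5 rating, distorted by response style."""
    if style == "extreme":      # extreme response style: push answers toward 1 or 5
        latent = 0.0 if latent < 0.5 else 1.0
    elif style == "midpoint":   # midpoint response style: compress answers toward 3
        latent = 0.5 + (latent - 0.5) * 0.3
    # add small response noise, clamp, then discretize to a 5-point scale
    noisy = min(max(latent + rng.uniform(-0.05, 0.05), 0.0), 1.0)
    return 1 + round(noisy * 4)

rng = random.Random(42)
latent_quality = 0.7  # identical perceived teaching quality for all students

n = 1000
mean_ers = sum(rate(latent_quality, "extreme", rng) for _ in range(n)) / n
mean_mrs = sum(rate(latent_quality, "midpoint", rng) for _ in range(n)) / n
print(f"class of extreme-style responders:  mean = {mean_ers:.2f}")  # 5.00
print(f"class of midpoint-style responders: mean = {mean_mrs:.2f}")  # 3.00
```

Under these assumptions, a class dominated by extreme-style responders rates the same instructor a full two scale points higher than a midpoint-style class, which is the kind of student-mix contamination the article warns about.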