IZA Discussion Paper No. 5620
April 2011
Evaluating Students' Evaluations of Professors

published in: Economics of Education Review, 2014, 41, 71-88

This paper contrasts measures of teacher effectiveness with students' evaluations of the same teachers, using administrative data from Bocconi University (Italy). The effectiveness measures are estimated by comparing the subsequent performance in follow-on coursework of students who are randomly assigned to teachers in each of their compulsory courses. We find that, even in a setting where syllabuses are fixed and all teachers in the same course present exactly the same material, teachers still matter substantially. The average difference in subsequent performance between students assigned to the best and worst teacher (on the effectiveness scale) is approximately 43% of a standard deviation in the distribution of exam grades, corresponding to about 5.6% of the average grade. Additionally, we find that our measure of teacher effectiveness is negatively correlated with students' evaluations: in other words, teachers who are associated with better subsequent performance receive worse evaluations from their students. We rationalize these results with a simple model in which teachers can engage either in real teaching or in teaching-to-the-test, the former requiring greater effort from students than the latter. Teaching-to-the-test guarantees high grades in the current course but does not improve future outcomes. Hence, if students are myopic and evaluate more favorably the teachers from whom they derive higher utility in a static framework, the model predicts our empirical finding that good teachers receive bad evaluations, especially when teaching-to-the-test is very effective (for example, with multiple-choice tests). Consistent with the predictions of the model, we also find that classes in which high-skill students are over-represented produce evaluations that are less at odds with estimated teacher effectiveness.
