Interpreting SIRS
Interpreting Student Instructional Rating Survey (SIRS) data and other student feedback
Student feedback on end-of-course surveys should not be interpreted as evaluations of teaching or as direct measures of student learning, since students have limited expertise in education (Boring et al., 2016; Linse, 2017). Students are, however, important stakeholders within the university and are uniquely qualified to comment on certain aspects of the course.
Although student ratings are not comprehensive evaluations, they can contribute to a holistic understanding of teaching effectiveness when interpreted carefully. The Rutgers Guidelines for the Evaluation of Effective Teaching, based on the Teaching Quality Framework (Finkelstein et al., 2015), outline seven critical areas: instructional modality, preparation, teaching practices, presentation and engagement, student learning outcomes, professional development and service, and mentoring. Student feedback can provide relevant evidence within each of these competency domains.
To interpret student ratings carefully and equitably, evaluators should treat student feedback the way a researcher treats any other survey data. The following strategies are both well defined in the literature and used by Rutgers department and school administrators when reviewing student feedback:
- Disregard Outliers. When there are a few extremely negative or critical responses that are not in line with feedback from a majority of students, these responses should be discounted.
- Consider Response Rate. When the response rate for a survey is very low and a few negative or critical responses are received, the entire survey may be discounted, and evaluators should focus on other evidence.
- Seek Evidence. When the response rate is strong and a substantial number of students offer negative or critical feedback, an evaluator should seek additional evidence to determine whether it corroborates the students' concerns, including by giving instructors opportunities to respond and, where possible, to adjust their instruction. Even when substantial student dissatisfaction is expressed, additional evidence may lead an evaluator to conclude that the teaching methods and approach meet the standards of the department.
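The first two strategies above can be thought of as simple screening rules applied before any averages are reported. The sketch below illustrates that logic in Python; the specific thresholds (a 40% minimum response rate, a two-standard-deviation outlier cutoff) are hypothetical values chosen for illustration, not official Rutgers or OTEAR policy.

```python
from statistics import mean, stdev

def summarize_ratings(ratings, enrolled, min_response_rate=0.4, z_cutoff=2.0):
    """Illustrative screening of course ratings.

    `min_response_rate` and `z_cutoff` are hypothetical thresholds used
    only to demonstrate the strategies; they are not institutional policy.
    """
    response_rate = len(ratings) / enrolled

    # Consider Response Rate: with too few responses, recommend
    # relying on other evidence rather than the survey itself.
    if response_rate < min_response_rate or len(ratings) < 3:
        return {"response_rate": response_rate,
                "recommendation": "low response rate; seek other evidence"}

    # Disregard Outliers: discount isolated ratings far out of line
    # with the majority (here, beyond z_cutoff standard deviations).
    m, s = mean(ratings), stdev(ratings)
    kept = [r for r in ratings if s == 0 or abs(r - m) / s <= z_cutoff]

    return {"response_rate": response_rate,
            "trimmed_mean": mean(kept),
            "discounted": len(ratings) - len(kept)}
```

The point of the sketch is the order of operations: response rate is checked before any rating is interpreted, and a lone extreme rating is discounted only when it diverges sharply from the rest. The third strategy, seeking corroborating evidence, is deliberately absent, since it requires human judgment rather than arithmetic.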
By using these strategies when evaluating teaching for personnel decisions, undergraduate directors, department chairs, and others involved in evaluating teaching can ensure that student feedback is used most appropriately as part of the holistic evaluation of teaching. See Drue & Bifulco (2025) for examples of how these principles promote more equitable evaluation of teaching.
OTEAR regularly runs an “Interpreting SIRS and other Forms of Student Feedback” workshop to educate and support our community. This workshop is especially beneficial for those who regularly engage with SIRS, but we encourage anyone from the university community interested in learning about best practices in interpreting student feedback to join us. Materials and a past recording can be found on this Canvas page. Please sign up for the OTEAR email listserv to receive notifications about upcoming sessions.
References:
Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research, 1–11.
Drue, C. R., & Bifulco, C. A. (2025). Beyond the numbers: How directors and chairs interpret student feedback to equitably evaluate teaching. Journal of Academic Ethics. https://doi.org/10.1007/s10805-025-09611-5
Finkelstein, N., Reinholz, D. L., Corbo, J. C., & Bernstein, D. J. (2015). Towards a teaching framework for assessing and promoting teaching quality at CU-Boulder (Report from the STEM Institutional Transformation Action Research [SITAR] Project). Boulder, CO: Center for STEM Learning.
Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94–106.