Fresno State Student Ratings of Instruction (FSSRI)
Frequently Asked Questions
First, make sure that opting out is permissible; each department has its own policy about this. Check Departmental Policies. When you opt out, your department chair will be notified so that they can verify the opt-out is allowed.
Please consider opting out of labs or activity classes that are attached directly to a lecture taught by the same instructor, and of independent study or thesis units with very few students.
Navigate to the For Faculty/Instructions page to view a video walk-through of opting out of student ratings for a specific class.
No. The dates are set each semester to run the last two weeks of instruction.
No. SmartEvals does not offer this function.
The survey consists of 12 required areas (four related to instructional design, four related to instructional delivery, and four related to assessment). These are specified and required by APM 322, so you must have at least one item in each of the 12 categories. Each category, however, includes multiple options, and you may have up to 24 total questions. You may also add written questions in addition to the standard items.
Navigate to the For Faculty/Instructions page to view a video walk-through of selecting/adding questions.
If you do nothing, your course will have a survey consisting of the 12 default items. Afterward, the report will be generated and made available to you and your Dean’s Office through SmartEvals, to be placed in your personnel file.
Potential Problems with Student Ratings of Instruction
There is very strong evidence from randomized controlled trials that there is a gender bias in student ratings. Within the same course, if students think their instructor is female (a perception that can be randomly assigned in online classes), they rate the instructor more poorly than if they think the instructor is male. The discrepancy can be as large as half a point on a 5-point scale. (e.g., https://thekeep.eiu.edu/cgi/viewcontent.cgi?article=1509&context=jcba)
Race has been more difficult to study. There are more racial/ethnic groups than gender groups, and it is confounded with issues such as immigration status and language skills. A small replication of the design described above with the perception of an instructor's race in an online class found evidence of race bias with a much smaller effect size than gender bias. A very thoughtful discussion of this study, and this entire body of literature, can be found here: https://www.cambridge.org/core/journals/ps-political-science-and-politics/article/exploring-bias-in-student-evaluations-gender-race-and-ethnicity/91670F6003965C5646680D314CF02FA4
That said, both gender bias and race bias are rampant within the written comments on student ratings. This is why the comments are not included in personnel files, but are instead seen only by the instructor and department chair. In addition, gender and race bias are much bigger problems when ratings are generated by low-quality instruments that have not been created by social scientists and vetted for reliability and validity. RateMyProfessor ratings are a classic example of ratings that are neither reliable nor valid, and therefore quite vulnerable to bias. The Fresno State SRI Questionnaire was created by scholars with expertise in survey construction and tested thoroughly for reliability and validity. We have not found evidence of gender or race bias on our own campus using our new instrument, but we continue to review institutional data for these problems.
There are some calls within academia to abolish the use of student ratings altogether because of this evidence of bias. In a recent survey of Fresno State faculty, we found that only 15% of respondents take this position. Furthermore, 95% of respondents report that they have used information from student ratings to improve their own teaching, and the faculty union has never taken a position against student ratings. Therefore, we think it is unlikely that student ratings are going away on the Fresno State campus. Abolishing them would remove students' voices entirely from the evaluation of instruction and from the process of improving instruction. The Student Ratings Committee hopes to generate discussion on the Fresno State campus about how we want to address this issue, while continuing to honor the voices of our students.
Some student ratings are surely just popularity contests. When the question is totally vague (e.g., "how satisfied are you with this instructor?"), the answer is bound to be a general impression, because that is what was asked for. Ratings based on items like this are not related to how much students actually learn (e.g., https://www.sciencedirect.com/science/article/abs/pii/S0191491X16300323).
Our Student Ratings Committee strove to create an instrument for student ratings that would be different from this. The Fresno State Student Ratings of Instruction instrument is based on the following principles:
Students cannot accurately report:
- how much they learned, because human beings are not good reporters of this, especially when the learning is new. To learn more about why, read about the Dunning-Kruger effect: https://www.nytimes.com/2020/05/07/learning/the-dunning-kruger-effect-why-incompetence-begets-confidence.html
- invisible things, like how much their faculty care about students or how knowledgeable faculty are in their fields.
Therefore, the FSSRI does not include items such as these.
But students can report:
- Whether or not THEY understood things like the purpose of their assignments, how they would be graded, that their questions were welcome, etc. Their understanding tells us if our efforts at conveying these things were successful.
- About our directly observable behaviors. If those behaviors are known to produce learning, as demonstrated in published empirical research, then it is worth asking students whether faculty did those things.
Therefore, the FSSRI includes only items such as these.
While some student ratings instruments may be popularity contests, ours is not.