Section 5.4 Interpreting Feedback


For many staff, the output from SET is not developmentally useful. One reason is the reliance on the survey approach (see Section 4.6.1), which yields predominantly quantitative data. Another is the often contradictory nature of the written comments, which reflect the diverse preferences and expectations of students in large HE classes.


An example provided by Hendry and Dean (2002) asks how a lecturer should respond to the finding that 40% of their students perceived the lecturer to be poorly organised. The figure alone cannot explain why those 40% were disgruntled, or why the remaining 60% perceived the same classes as well organised.


Palmer (2011) draws attention to the problems of using and interpreting mean scores when the data do not come from a large, normally distributed population. Without such a sample it is difficult to calculate accurate confidence intervals, and Palmer urges caution in using these scores in decision making.
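To make the uncertainty concrete, the sketch below (an illustration only, not a method from Palmer 2011) computes a percentile bootstrap confidence interval for the mean of a hypothetical set of 5-point Likert ratings; the bootstrap makes no normality assumption, and for a small class the resulting interval is wide.

```python
# A minimal sketch (an illustration, not Palmer's method): a percentile
# bootstrap confidence interval for the mean of 5-point Likert ratings,
# which avoids assuming a large, normally distributed sample.
import random

def bootstrap_ci(ratings, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean rating."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(ratings) for _ in ratings]
        means.append(sum(resample) / len(resample))
    means.sort()
    lower = means[int((alpha / 2) * n_resamples)]
    upper = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lower, upper

# Hypothetical ratings from a small class of 12 respondents.
ratings = [5, 4, 4, 5, 3, 2, 5, 4, 1, 5, 4, 3]
print(f"mean = {sum(ratings) / len(ratings):.2f}")
print("95% CI = (%.2f, %.2f)" % bootstrap_ci(ratings))
```

For a class of this size the interval typically spans well over half a scale point, which underlines why small differences between unit mean scores should not be over-interpreted.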


Echoing Cashin’s (1990) recommendation that academics should have a clear written guide to interpreting their results, Palmer (2011) suggests the Rating Interpretation Guides (RIGs) system (Lemos et al., 2010; Neumann, 2000; Santhanam et al., 2000; Smith, 2008). Although the specifics of RIGs-style systems vary, the core element is a norm-based set of benchmarks for ranking or comparing SET results, drawn from units of study that are similar in relevant respects (e.g., class size, year or level, or discipline grouping) to the target unit; the sketch below illustrates the underlying idea.
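A minimal, hypothetical sketch of that norm-referencing idea (the data and the comparator definition are invented for illustration; actual RIGs systems are more elaborate):

```python
# A hypothetical sketch of the norm-referencing idea behind RIGs:
# benchmark a unit's mean SET score against comparator units that are
# similar on relevant attributes (class-size band, level, discipline).

def comparator_percentile(target_score, comparator_scores):
    """Percentage of comparator unit means at or below the target mean."""
    at_or_below = sum(1 for s in comparator_scores if s <= target_score)
    return 100.0 * at_or_below / len(comparator_scores)

# Hypothetical mean scores for ten similar units.
comparators = [3.9, 4.1, 3.7, 4.4, 4.0, 3.8, 4.2, 3.6, 4.3, 4.0]
pct = comparator_percentile(4.1, comparators)
print(f"A mean of 4.1 sits at roughly the {pct:.0f}th percentile of its group")
```

Reporting a score as, say, the 70th percentile of genuinely comparable units gives staff a more interpretable anchor than a raw mean in isolation.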


Cashin’s second suggestion was for academics to discuss their feedback with an appointed ‘instructional consultant’ within their own School or College, who can assist with interpretation and with developing an action plan for implementation.


Wongsurawat (2011) discusses some important considerations when interpreting qualitative feedback, including the problem of ‘under-determination’ created by the necessarily anonymous nature of the SET process: a written comment gives no indication of how widely the view it expresses is shared. One suggestion for working around this is to ask students, before they supply written comments, to rate various attributes of the course on a Likert scale, and then to examine the correlation of each student’s ratings with the class mean. On this basis, Wongsurawat (2011) proposes a conceptual framework for judging whether any given comment is likely to represent a majority sentiment or a minority concern; the correlation step is sketched below.
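A minimal sketch of that correlation step, with invented data (this illustrates the idea only, not Wongsurawat’s published framework):

```python
# A minimal sketch, loosely based on the idea Wongsurawat (2011) describes
# (data and details are illustrative): correlate each student's item
# ratings with the class-mean profile. A comment from a student whose
# ratings track the class profile is more plausibly a majority sentiment;
# a low or negative correlation suggests a minority concern.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    ss_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    ss_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (ss_x * ss_y) if ss_x and ss_y else 0.0

# Hypothetical Likert ratings: rows are students, columns are attributes.
ratings = {
    "A": [5, 4, 3, 5],
    "B": [4, 4, 3, 4],
    "C": [5, 3, 2, 4],
    "D": [1, 4, 5, 1],  # ratings diverge sharply from the class profile
}
class_profile = [mean(col) for col in zip(*ratings.values())]
for student, scores in ratings.items():
    print(f"student {student}: r = {pearson(scores, class_profile):+.2f}")
```

In this toy example, students A to C correlate positively with the class profile while student D does not, so under the framework’s logic a critical comment from D would more plausibly be read as a minority concern.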


Activity 5.4

Looking at last semester’s SFM report, what mean scores did you receive on the core questions?

How did you interpret these?

What did those scores mean for you?



Resources

Wongsurawat, W. (2011). What’s a comment worth? How to better understand student evaluations of teaching. Quality Assurance in Education, 19(1), 67–83.

Interpreting UCD SFM scores: http://www.ucd.ie/t4cms/Student%20Feedback%20Responding%20Constructively.pdf

Interpreting feedback: http://www.stanford.edu/dept/CTL/cgi-bin/docs/newsletter/student_evaluations.pdf

