Since the results of the National Student Survey (NSS) for 2015 were released a few days ago, I have tried to see how NSS scores relate to research performance, as measured by GPA in the 2014 Research Excellence Framework (REF), for Politics departments. The graph below plots UK Politics departments by overall satisfaction among undergraduate students in the NSS (question 22), as taken from here, against their GPA score (the average number of stars for research output, impact and “environment”), as taken from here. Here’s how it looks (you can access an interactive version by clicking on the graph):
Does research performance improve the quality of teaching, or does it hamper it? Many people argue that there is a positive relationship between research performance and teaching quality: research should feed back into teaching and vice versa. Following this logic, there should be a clearly positive relationship between student satisfaction (with the obvious problems of taking this as a measure of teaching quality) and research output. However, there are also obvious trade-offs between the time needed to deliver good teaching, supervise students, etc., and the time needed to do the things that count in research rankings: writing articles for high-ranking journals, or preparing research grants.
If you draw a regression line, it actually slopes downwards: departments with high research scores have, on average, lower student satisfaction scores. However, the relationship is not statistically significant and the correlation is weak (R-squared ≈ 0.05). It may be driven by the London factor: UCL, LSE and King’s have low student satisfaction scores but high GPAs. At the other end, Liverpool Hope and Surrey did extremely well on student satisfaction but didn’t fare very well in the REF. Essex, Oxford and Warwick do well on both counts. As a whole, this casts some doubt on the idea that better research means better teaching, with the caveats linked to measurement in both areas.