
The government’s response to the Pearce review of TEF doesn’t do it justice

The government's failure to seriously engage with the Pearce review's proposals sets the TEF on a course towards irrelevance, argues Paul Ashwin.

Paul Ashwin is Professor of Higher Education at Lancaster University.

Make no mistake, the Pearce review of the TEF is a serious piece of work.

It takes an evidence-informed look at the purposes and design of the TEF and fully engages with the HE sector’s concerns about the exercise. It develops a holistic and systematic set of proposals for the future development of the TEF that even-handedly take account of the strengths and limitations of its current design.

In contrast, the DfE response picks and chooses the elements it likes and dislikes while ignoring the underlying logic of the review. As a result, it offers an incoherent set of proposals that largely end up mumbling uncertainly about what it will ask the Office for Students (OfS) to do in the future.

Despite having had the report since August 2019, DfE’s response feels like a piece of homework it forgot about and had to cobble together on the bus while chatting to its mates. It completely misses (avoids?) the key conclusions of the review. In particular, while the Pearce review tries to shift the TEF to a sustainable focus on enhancement, the DfE response still seems obsessed with a metrics-based exercise that, if pursued, is most likely to result in the TEF’s demise.

Metric weakness meets imperial response

In its independent review, the Pearce review straightforwardly addressed the central weaknesses of the TEF. It asked the Office for National Statistics (ONS) to examine the TEF metrics. While the ONS was very polite about the effort that went into developing them, its conclusions are pretty damning: the metrics do not meet the National Audit Office’s guidelines for the development of performance measures.

In particular, as highlighted by many commentators during the development and operation of the TEF, the metrics are open to institutional gaming and many do not measure elements that are under the control of the institutions being assessed.

More damagingly, the ONS suggests that the metrics should not be set by the government but should instead be identified by those with expertise in the area. The Pearce review also found, to the surprise, surely, of absolutely nobody, that applicants, careers advisers and employers are not greatly interested in TEF outcomes and would rather rely on other measures of educational quality.

In the face of these findings, the Pearce review inevitably concludes that the purposes of the TEF should be to identify instances of excellence and to support enhancement. The review also recommends that, as the TEF does not directly measure teaching, it should be re-titled the Educational Excellence Framework.

The weakness of the metrics and the need for the TEF to focus on enhancement are crucial to understanding the coherent set of proposals offered by the Pearce review. Given that the TEF can no longer pretend to precisely measure excellence, it is instead re-designed to offer a way of supporting enhancement.

Similarly, since the TEF cannot work as a subject-level assessment exercise, subject-level data is elegantly incorporated into the institutional exercise, both as metrics to inform institutional enhancement and as public information for interested applicants.

The dog ate my homework

The DfE response reads like a hurried note from a Department that had an independent review forced on it as a condition for getting a higher education bill passed. It completely misses or ignores the strong reasons why the Pearce review re-focuses the TEF on enhancement.

While the department accepts that the primary purpose should be enhancement, it insists that the TEF should also inform applicant choice, even though institutional measures of quality are completely useless for those applying to particular degree programmes, the quality of which will vary within a single institution.

DfE wants to keep the name TEF because of its “brand value”, even though it appears to be completely lacking in such value for the groups it is most bothered about: applicants and employers.

It completely ignores the ONS conclusion that government agencies should not be identifying the metrics used for the TEF, and even seems to imply that the OfS should look to develop more national metrics. The challenge of developing such metrics is why the Pearce review concluded that institutions should use their own evidence for the Teaching and Learning Environment and Educational Gains sections of the proposed new TEF.

However, while this made complete sense in the enhancement world of the Pearce review, where institutions could show how this data informed improvement, it makes no sense at all in the metrics world of the DfE, with its obsession with comparing the incomparable.

The sense of incoherence in the DfE response is strengthened by the tensions between different parts of its conclusions. For example, the TEF is supposed to inform applicant choice, but it only measures institutional quality every four to five years, which means it cannot provide the timely information about particular degree programmes that applicants need.

The excellent work of the Pearce review offers a sustainable future for the TEF that integrates it into the quality architecture of English higher education and positions it as a key support for enhancing educational quality across the sector.

The great pity is that the DfE response suggests that its fixation with measurement will result in the TEF simply being a very expensive and relatively short-lived footnote in history.
