Friday week (October 4) is the deadline for providing input to the consultation on changes in the PBRF process (for foreigners: the national research evaluation program that allocates a chunk of long-term research funding to universities). Here’s the consultation document, if you haven’t read it yet.
This is what I’m planning to say. The consultation is also open for public feedback, so you can make a submission of your own.
Background: I was a member of the PBRF MIST review panel, and also submitted a portfolio. I moved to NZ from the USA in 2010, so this was my first involvement with the PBRF process.
Observations.
0. Block funding that is not based on specific grant applications is tremendously beneficial to research in New Zealand. Evaluation and feedback are somewhat valuable.
1. In my (admittedly biased) opinion the PBRF process, at least in my field, produced good ratings of research. Before the panel process began, I was concerned that applied statistics, in particular, might be under-rated by the panel. This was not the case.
I’m not claiming the ratings are perfect – there is not a well-defined ‘right answer’ and there may well have been some individuals who were misjudged – but I cannot envisage a system that would do significantly better, and it is easy to see how it could be done much worse. Interdisciplinary fields such as statistics, in particular, can fare very badly under a bibliometric approach.
2. The system is very expensive. The consultation document quotes a figure of 4% of the total PBRF funds for the six-year period, but since most of the effort fell in a single half-year period, that’s nearly 50% of the funding flowing in those 6 months: six years contain twelve such half-years, and 4% × 12 = 48% (a back-of-the-envelope sketch after these observations spells out the arithmetic).
The effort of compiling and optimising a research portfolio was required even for people who had no realistic prospect of a fundable rating. There was also a large burden on senior researchers both within the institutions and on the panels – and if you believe the PBRF funding formula, their time is especially valuable.
3. The funding is tilted very strongly towards ’A’s, a rating that even the best junior researchers will not be able to attain. This has implications for developing research excellence in New Zealand, and also for equity – if our equity initiatives are working at all, top younger researchers will be more diverse than their seniors.
4. The main impact of the Christchurch earthquakes will be in the next round, not the last round. The quakes happened late enough that most research disrupted by them would not have been published in time to be eligible.
5. While ‘Peer Esteem’ could be conceptually distinct from ‘Contribution to the Research Environment,’ evidence of the latter is largely based on the former.
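Here is the back-of-the-envelope arithmetic behind observation 2, as a minimal Python sketch. The 4% figure is from the consultation document; the assumptions that essentially all the effort lands in one six-month period and that funding flows evenly over the cycle are mine.

```python
# PBRF evaluation cost, expressed against the funding that flows during
# the half-year in which the work is actually done.

YEARS_PER_CYCLE = 6
HALF_YEARS_PER_CYCLE = 2 * YEARS_PER_CYCLE     # 12 half-year periods
cost_as_share_of_cycle = 0.04                  # the consultation document's figure

# If funding flows evenly, one half-year carries 1/12 of the cycle's funds,
# and (by assumption) the entire cost lands in that one half-year.
cost_as_share_of_half_year = cost_as_share_of_cycle * HALF_YEARS_PER_CYCLE

print(f"{cost_as_share_of_half_year:.0%}")     # 48% -- i.e. 'nearly 50%'
```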
Suggestions.
There should be a subcategory of the B grade for excellent but relatively junior researchers (eg, 5 or fewer years’ experience) which explicitly requires less substance beyond the Nominated Research Outputs and provides more funding than the standard B category.
Combine the ‘Peer Esteem’ and ‘Contribution to the Research Environment’ categories and reduce the number of examples allowed to, say, 20 for the combined category. Reducing to 8, as in the consultation document, is going too far, both in reducing information and in making researchers and institutions second-guess the panel’s opinions.
Removing the Preliminary/Preparatory scores seems a serious mistake to me, though I expect panel chairs would ask panel members to pre-score applications and compare scores in any case. Committing to Independent scores before any discussion takes place, and requiring separate scoring for outputs and PE/CRE, are important in ensuring careful evaluation, especially of senior people whose productivity has declined. Once the component scores exist, I can see no reason not to communicate them to the researcher.
Allow science outreach/science communication to qualify in the combined PE/CRE category. It is entirely appropriate for research funding to support science communication, and science communication should be recognised as a valuable product of the research process.
The 5:3:1 weighting for A:B:C grades is, in my opinion, excessive (and I’m one of the beneficiaries of it); a small sketch after these suggestions illustrates the effect. It should be reduced either explicitly or by introducing extra grades for less senior researchers.
The TEC should consider carefully how to take into account the disruption in research in the Christchurch area for at least the first two years of the next PBRF cycle.
I do not think that drastically reducing the number of ‘Other Research Outputs’, as suggested in the consultation document, is helpful. This will slightly increase the workload on academics to select the appropriate subset, for relatively little benefit to the panels. There would be some reduction in validation effort for institutions and TEC, but the validation should be easier for new outputs than it has been for older ones.
Mechanisms should be investigated for simply not submitting portfolios for staff who are clearly likely to receive ‘R’ grades. Compiling these portfolios is a pointless effort for the staff involved and evaluating them is a waste of time for the panels.
Special circumstances, in particular part-time work, should be allowed for even if they rarely make a difference. They will sometimes matter, and it is important that a researcher has some way to describe the problem and that the panel is allowed to take it into account.
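To illustrate the point about the 5:3:1 weighting, here is a minimal sketch of how the weights split a fixed funding pool between two departments of equal size. The grade mixes are invented purely for illustration; they are not real data.

```python
# How the 5:3:1 A:B:C weights split a fixed pool between two hypothetical
# departments of equal size. The grade counts are invented for illustration.

WEIGHTS = {"A": 5, "B": 3, "C": 1}

def weighted_volume(grade_counts):
    """Total funding weight implied by a department's grade counts."""
    return sum(WEIGHTS[g] * n for g, n in grade_counts.items())

senior_heavy = {"A": 4, "B": 6, "C": 10}   # 20 staff, more established
junior_heavy = {"A": 1, "B": 9, "C": 10}   # 20 staff, more early-career

total = weighted_volume(senior_heavy) + weighted_volume(junior_heavy)
for name, dept in [("senior-heavy", senior_heavy), ("junior-heavy", junior_heavy)]:
    print(f"{name}: {weighted_volume(dept) / total:.0%} of the pool")
# senior-heavy: 53% of the pool
# junior-heavy: 47% of the pool
# Swapping just three B grades for A grades moves about seven points of a
# shared pool: the top grade dominates the formula.
```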
Unrealistic dreams.
A bibliometric system (including alternative metrics) combined with panel evaluation of a stratified probability sample of research portfolios could give similar accuracy for funding allocations at the level of academic units, with vastly less work.
I am not under the delusion that this has any chance of happening.
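Still, for the record, here is a minimal sketch of what the sampling idea could look like, in Python. Everything in it (the bibliometric scores, the stratum cutoffs, the sampling fraction, the relationship between bibliometrics and panel scores) is simulated for illustration; it shows the shape of the idea, not a worked-out methodology.

```python
# Minimal sketch of the 'unrealistic dream': estimate a unit-level average
# quality score from panel review of a stratified probability sample of
# portfolios, rather than a full census. All data here are simulated.
import random

random.seed(1)

# A simulated academic unit: bibliometric scores for 200 portfolios.
portfolios = [{"id": i, "biblio": random.gauss(50, 15)} for i in range(200)]

# Stratify on the bibliometric score so the sample covers the whole range.
CUTOFFS = (40, 60)

def stratum(p):
    return sum(p["biblio"] > c for c in CUTOFFS)   # stratum 0, 1, or 2

strata = {}
for p in portfolios:
    strata.setdefault(stratum(p), []).append(p)

def panel_score(p):
    # Stand-in for a real panel evaluation; the relationship to the
    # bibliometric score is hypothetical.
    return 0.1 * p["biblio"] + random.gauss(0, 1)

# Review a fixed fraction of each stratum and combine the stratum means,
# weighted by stratum size: the standard stratified estimator.
SAMPLING_FRACTION = 0.15
N = len(portfolios)
estimate = 0.0
for members in strata.values():
    sample = random.sample(members, max(1, round(SAMPLING_FRACTION * len(members))))
    stratum_mean = sum(panel_score(p) for p in sample) / len(sample)
    estimate += (len(members) / N) * stratum_mean

print(f"Estimated unit mean from reviewing ~15% of portfolios: {estimate:.2f}")
```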