Reporter 418, 6 April 1998


Research Assessment: consultation by the Funding Councils (RAE 2/97)

Response from the University of Leeds

Preamble

A major purpose of Research Assessment Exercises (RAE) is to enhance the quality of research in the United Kingdom, and exercises should be conducted in ways which achieve this purpose. We are therefore concerned about a number of negative consequences of successive RAEs.

The University’s response below highlights the areas where we believe change in the exercise is most needed to address our concerns.

Question 1: Should the funding bodies use a form of research assessment exercise similar to previous exercises to make judgements of research quality for allocating research funding? What other approach might we take which would be as robust, effective and acceptable to HEIs as a form of RAE?

YES: We propose no other approach, but we would highlight the concerns set out in the preamble.

Question 2: Should research assessment be concerned only with the question of research quality as opposed to, for example, its value for money or its relevance to wealth creation and the quality of life?

NO: The University is committed to producing the highest quality research to international standards. It is also committed to contributing to wealth creation and quality of life, through its research activity, at local, regional, national and international levels. The University believes that the emphasis on wealth creation, and on applied research for business and industry, is sometimes seen as being in conflict with the criteria for determining quality in research assessment exercises (RAE). The Funding Councils need to work hard to allay fears about this perceived conflict, so that researchers can work effectively both to achieve excellence and to meet the needs of the nation, as appropriate to their fields.

There are a number of ways in which the Funding Councils could act to allay fears. One would be to include consideration of both excellence and relevance explicitly in the RAE. This would have to be done very sensitively, to reflect the different balances between pure research and application in different disciplines. Disciplinary differences could be addressed by allowing individual RAE panels to determine the weight they will give to excellence, and to relevance, and to announce this as part of criteria-setting. Proper regard would need to be given to achievement of quality of life objectives as an important component of relevance, as well as contribution to wealth creation.

Another option would be for the Funding Councils to introduce a separate stream of funding for Applications, which would reflect the importance of this aspect of research. Added to teaching and research funding streams, this would provide a wider range of rewards and incentives for the higher education system, reflecting its diverse mission.

The discussion on relevance above should not be taken as an argument for allocating the related funds as research grants through the Research Councils. The University remains fully committed to the dual support system of funding and, in particular, to a block grant distributed by the Funding Councils.

Question 3: Should research assessment cover all academic research, and adopt a broad and inclusive definition of research?

YES: The University supports the current definition of research and scope of the exercise. The Councils need to ensure that the definition of research used in the exercise is appropriate to subjects in which performance is a component of research activity.

Question 4: Should there be a single UK-wide exercise rather than separate exercises conducted by each of the funding bodies?

NO VIEW: Enough institutions need to be involved in any exercise to ensure that there are sufficient submissions in a unit of assessment (UOA) to allow robust judgements to be made against national and international standards.

Question 5: Should the method of assessment continue to be based primarily on peer review?

YES: with the proviso, following from the University’s answer to Q2, that users will need to be involved appropriately. The method of assessment might then be more appropriately termed “merit review”.

Question 6: Should the exercise continue to be based only on single-discipline units of assessment (UOAs)?

YES, in the main, but the University is particularly concerned that improvement is made in the treatment of interdisciplinary research, and provides proposals on this under Q27, which have implications for the construction of UOAs. See also our answer to Q23. (We also note that the current UOAs are not all single disciplines.)

Question 7: Should all submissions for assessment conform to a common framework and common data definitions?

YES: The current approach, which combines a common framework with scope for individual panels to act in ways appropriate to their disciplines, is the only realistic way of running an exercise which needs to be demonstrably fair while also reflecting disciplinary differences. The common framework and data definitions are also essential to the assessment of interdisciplinary research. The University has some concerns over the balance between common and subject-specific elements (discussed further under Q25-26).

Question 8: Should the exercise continue to be a retrospective one, based on past performance?

YES: However, the University is concerned that the single census date for the exercise may lead to recruitment practices which are focused artificially around that date rather than on real strategic timing and objectives. Consideration should be given to allowing outputs to be divided between institutions if research active staff move within the relevant period. However, it is recognised that attributing outputs will not properly reflect the distribution of effort, and that it would be very difficult to attribute the latter in any easily administered way.

Question 9: Should ratings of research quality continue to be the only output of a future assessment exercise, or should we consider adding a greater formative element? If so, how much developmental feedback should the exercise provide at the subject level and on submissions?

NO: Developmental feedback is an important part of the process given that, in our view, the main purpose of RAEs is to improve research. Developmental feedback should be provided, particularly, for new subjects (such as nursing).

The Funding Councils should provide feedback to inform universities and their staff in the management and conduct of research; the Councils themselves should guard against any suggestion that they are seeking to form or manage research.

On a lesser note, as one of the tools in its management of research, this University conducts strategic research reviews of departments. It uses external consultants, sometimes drawn from RAE panels, for this purpose, and has found this very valuable. The funding bodies should ensure that members of panels may act in this way, which was not always the case after the 1996 exercise.

Question 10: What additional financial cost would be acceptable to include a greater developmental aspect in the RAE?

The University believes that a limit of no more than 1% of the funds to be distributed should be set as the maximum cost of the exercise. If the periods between exercises are extended (for example, to every five years), then more could be spent on each exercise and the cost could still stay comfortably within the limit imposed. We believe that developmental feedback, improvements in the treatment of interdisciplinary research and greater attention to relevance could all be delivered within this resource limit.

On a lesser note, improvements in the software used in the exercise could assist in delivering it more efficiently, possibly through the use of proprietary software.

Question 11: Are there activities essential to research which the present definition excludes but where there is a strong case for taking these into account in assessing the full range of a department's or unit's research?

NO

The University would stress above all that panels should make clear their definitions, and what they regard as research, in their criteria, and act consistently with these in assessment.

Question 12: What should be the interval between the 1996 RAE and the next and subsequent exercises?

Four or five years between exercises is acceptable. (It is assumed that the period from which outputs must be drawn to contribute to an exercise will continue to be six years in the humanities, and four years in other subjects. However, there should be no “gap” year for outputs. Hence, if the next RAE is held in 2001, on the five year cycle, the period for outputs for the sciences and engineering will also need to be five years.)

If the next exercise were held in 2001, we believe it would be valuable to constitute the panels to establish and announce criteria earlier than was the pattern for the 1996 exercise. This would assist institutions in preparing for the exercise.

Question 13: Would it be practicable in administrative and funding terms to assess only some subjects in any one year? Would this approach reduce the overall burden of research assessment on higher education institutions and researchers and be more attractive to them?

NO VIEW: The University believes that staggering the exercise would be valuable to spread the burden of the assessment process, and also to gain greater stability in funding (as QR changes would not all be made in one year). However, it recognises the difficulty in achieving a sensible split between UOAs, and especially the problems posed by this for the proper treatment of interdisciplinary research. The University believes that further detailed investigation is required by the Councils on the practicability of a split in UOAs. This issue might then be raised again in the second consultation to be conducted by the Councils.

Question 14: Should we consider further an interim, opt-in exercise? If so, what kind of exercise would be robust and equitable in principle, workable in practice and attractive to HEIs?

NO: The costs of such an exercise would be disproportionate to the funds likely to be redistributed as a result (as primarily those with lower ratings would apply for reassessment). There might be an argument for an interim exercise for new and emerging fields.

Question 15: Should we implement the Dearing Committee's proposal to provide an incentive not to participate in the RAE? How else could we meet the need to strike a balance between teaching and research activity?

YES: This proposal is worth developing further. However, proper, direct encouragement for quality and innovation in teaching is also required.

Question 16: If peer review is retained as the primary method of assessment, should this be supplemented by quantitative methods, and if so, how?

MAYBE: Panels should have the opportunity to use quantitative techniques if they so wish. This should be declared clearly in criteria-setting. However, quantitative methods will be wholly inappropriate in many subjects, and so no standardisation of approach across all UOAs should be attempted. If the Councils decide to set up a “watchdog panel”, referred to in Q26, then it might be valuable to include an expert in scientometrics, who could advise panels on the reliability of any techniques they might wish to employ.

Question 17: Should an element of self-assessment be introduced to the assessment process, and if so what should this comprise?

NO: This is unlikely to be sufficiently reliable to be used in an exercise upon which resource allocation hangs, and hence will not lessen any of the burden on panels.

Question 18: How could we discourage HEIs from exaggerating their achievements? Would publishing whole or parts of submissions bring value to the RAE? Should we consider a deposit which would be forfeited if the actual grade fell significantly below the grade claimed?

The University believes publishing the submissions would assist in encouraging honesty.

Question 19: Would an element of visiting improve the RAE?

On balance, NO. Visiting might be valuable in borderline cases, or at the extremes of the scale, but we see this as of only marginal value, and it would add considerably to the cost of the exercise.

Question 20: Could it be achieved in a cost-effective way?

Question 21: Should we explore the possibility of conducting visits for submissions in certain categories, and if so which ones?

See Q19.

Question 22: Should we consider reconfiguring large units of assessment, and in particular medical subjects, into better defined units?

The University believes that serious consideration should be given to the reconfiguration of the medicine and health units of assessment. The size of these UOAs leads to both funding and assessment problems. Consistent with our other responses to this consultation, we believe that the structure of these UOAs needs to reflect the multidisciplinary and application-based nature of research in modern medicine and health. We also believe that the number and diversity of submissions made to UOAs 1-3 in the 1996 RAE meant that critical rating decisions were sometimes made by a very limited subset of a panel (sometimes even an individual) which was informed about the topic. This is not a robust method for the future.

Detailed analysis is required to reconfigure UOAs, and this analysis also needs to take into consideration the proper configuration of UOAs 10 and 11 and the biology UOAs. Our speculative proposals (and these would need to be tested further) are:

a. If UOA 1 is retained, then the placing of medical/clinical physics should be considered given that such research can be returned in other UOAs. The main areas embraced in UOA 1 are different aspects of laboratory medicine, and consideration might be given to dividing the UOA, so that the workload is not so heavy, and more specialist sub-panels can be constructed.

b. If UOA 2 is retained, the inclusion of psychiatry in UOA 2 is inappropriate, and this area might better be included elsewhere. The residual of UOA 2 might then more appropriately be termed population-based studies, embracing epidemiology and primary and community care.

c. UOA 10, Nursing, should be disbanded, and this, with psychiatry, should be included with other clinical research mainly returned in UOA 3. UOA 3 should then be disaggregated into 8-10 UOAs, or possibly sub-panels, which represent areas of clinical research practice (focusing on topics such as: neurology and mental health, cancer, cardio-vascular, muscular/skeletal, reproduction, child health, etc). Similarly UOA 11 could also be incorporated into this schema.

d. The whole should be overseen by an overarching “super-panel”.

Question 23: Apart from medical subjects, was the number, division and coverage of the 1996 subject-based UOAs and assessment panels appropriate? Are there instances where a strong case can be made for combining or subdividing UOAs or panel remits, on which we might further consult subject communities?

This University believes that one of the most significant and important challenges for the Funding Councils is to improve the treatment of interdisciplinary research. The “map” of research is continually changing, and what were once interdisciplinary groupings sometimes emerge as strong fields which have sufficient critical mass to justify their own UOAs or panels. As a result, the Funding Councils need to facilitate a wide debate at the outset of preparing for any RAE on the proper UOA and panel construction, so that these capture change and give proper attention to new and emerging fields.

The University has not conducted an extensive survey of views on the construction of individual UOAs and panels, which may be better facilitated through subject associations. However, strong arguments have been made in the University for the splitting of the Education UOA, which had a large number of submissions in the last exercise, into initial education and post-compulsory education and training.

Question 24: In addition to suggestions in questions 22 and 23 above, could the use of sub-panels ensure coverage of broadly defined subjects without requiring additional panels?

POSSIBLY: A range of techniques may be necessary to capture the full spectrum of research and to assess it all properly, including super- and sub-panels (see Q7) and more attention to individuals who represent emerging fields and interdisciplinary areas (see Q27).

Question 25: What is the desirable balance between maintaining some common criteria and working methods for different panels, and allowing panels to set criteria appropriate to the subject area?

Question 26: Should formal mechanisms be introduced to ensure greater interaction and comparability between panels? If so, how far should we strive to set limits to moderation and conformity?

The University is aware of concerns in some subjects that there were surprising variations in the criteria of individual UOAs within a broad cognate group (for example, engineering). The University also understands that the specificity of the criteria for individual UOAs made it harder to cross-refer submissions to other panels with different criteria, and so made the assessment of interdisciplinary research more difficult. Improvements need to be made in these areas for the next exercise. One option would be more extensive use of over-arching “super-panels” which would ensure that differences between panels in broad cognate groups were justifiable. The University understands that the Councils are also considering the creation of a “watchdog panel” to oversee processes (for example, cross-referral), and it would support further investigation of this.

It is most important that researchers and institutions are clear about how they are being assessed; that is, there must be clarity and openness in criteria-setting.

Question 27: Could the measures outlined in paragraphs 48-52 above be combined with a broadly discipline-based framework to improve the assessment of interdisciplinary research? What other measures would be both feasible and effective?

We are doubtful that interdisciplinary research is a single “animal” that can be treated in any one way. A number of solutions are required to respond to the different problems posed in assessing different types of inter-, cross- and multi-disciplinary work. Part of the solution to improving the treatment of interdisciplinary research will be to ensure that UOAs and panels do capture emerging fields when these have gained critical mass (see our answer to Q23). Part of the solution may lie in better use of “super-panels” to ensure appropriate levels of conformity, which can assist in cross-referring submissions (see our answer to Q7). A process panel may also assist (see Q7) where cross-referral across broad cognate groups is required, or where there are concerns that a minority interest within a UOA may not be given proper attention.

Part of the answer may also lie in changing the overall conception of the exercise, from seeking to impose divisions and boundaries on research (the definition of UOAs and panels) to attempting to describe a seamless web of activity. A practical way of doing this would be for the Councils to consult not just on the definitions of UOAs, but also on emerging fields. These would be defined as groupings of research activity which were not likely to fall squarely within the remit of a UOA, and/or which were unlikely to have sufficient critical mass to justify an entire UOA, or which might be missed when composing a panel. The Councils could then ensure that each such emerging field was always represented by an individual on some panel; submissions could then be flagged as requiring consideration from that individual, whatever the UOA to which they were submitted.

Question 28: For a future exercise to assess research quality, should we continue to appoint panel members as outlined above, or should we cast the net more widely? In particular:
a. Should panel chairs be appointed by the funding bodies on the basis of nominations from outgoing chairs alone? What alternatives are possible (for example, the new chair to be elected by the outgoing panel)?
b. Should we continue to seek a degree of continuity in membership of panels, and to what extent?

The University is generally supportive of the methods used previously to appoint panel members and chairs. Our answer to Q2 will mean that more attention will need to be given to user representation and inclusion of those who can give a view on relevance. Further to the points we have made on interdisciplinary research and emerging fields, it may be necessary to turn over the memberships of panels faster than one-third per exercise, as is currently the practice. It is important that the Councils look widely for nominations to panels.

Question 29: How can we attract and induct appropriately qualified users and in particular people from industry into the process of academic peer review?

Question 30: If the problem is the inability of those concerned to devote the necessary time to the process, is there a way to involve them which would not require so much time?

If the exercise is changed to give more emphasis to wealth creation and quality of life, then users may have more interest in being involved; users have, for example, played an active part in Technology Foresight. The Funding Councils should begin by targeting users who already have strong alliances with higher education. It may be most effective to obtain user input through innovative methods, such as email networks, rather than to expect users to play a conventional part in committee working. More strategic use might also be made of users’ time, perhaps by panels asking them for specific inputs, such as reviewing the relevance of a particular submission.

Question 31: To what extent should we seek to appoint overseas academics to assessment panels? Would it be possible to involve overseas academics short of full membership of panels?

Question 32: Should we seek other ways of adding an international dimension to the process, for example by requiring international moderation for top grades?

Overseas assessment will not be appropriate or useful in all subjects. In some, international moderation might be valuable if it can be achieved in a cost-effective way, for example through email or inputs by correspondence.

Question 33: In a future exercise to assess research quality, should we adopt a rating scale that makes no reference to sub-areas of research activity?

NO

Question 34: Should HEIs still be able to identify within submissions the sub-areas of research conducted in the unit ?

YES

The University is in favour of retaining sub-areas in the scale, and of allowing institutions to cite sub-areas in submissions.

Question 35: Should decimal scores be allowed, based on averaging scores awarded by individual panel members?

The University can see the attractions of a more differentiated scale to ease fluctuations in funding. However, it is understood that not all panels worked in ways which would generate decimal scores.

Question 36: Should the present seven-point scale be retained, and, if so, should it be numbered from 1-7?

YES: numbered 1-7.

Question 37: Should the rating be expressed as the percentage of work of national excellence and the percentage of work of international excellence?

NO: The current definitions are workable and the community is used to operating them.

We are also concerned that greater discrimination in the rating, such as that proposed in Q37, might single out new researchers, to their detriment.

Question 38: Is there a case for panels identifying a minimum number of staff required for the attainment of the highest grade?

Question 39: Should such decisions be made through consultation with subject communities?

Question 40: Should the rating scale be modified to reflect more directly the proportion of staff submitted in highly rated departments bearing in mind the potential consequences for submission patterns and for the calculation of funding?

The University is concerned that the reputation of research in higher education may be driven down by a few who would trade on the false currency of a high rating, which actually applies to only a very small proportion of staff. The current system for designating rating and proportion does lend itself to this abuse.

A radical option to avoid this false trading would be to require 100% submission of academic staff as a condition of entry into research assessment exercises. This would ensure that the rating was truly representative of the research culture of the department. If the Councils stop short of this, we believe that the award of 5, and certainly of 5*, should be conditional on submission of a substantial majority of staff (90% plus).

Question 41: Are there additional issues concerning the assessment of research which we should consider over the longer term?

Question 42: Are there other issues or concerns relevant to the assessment of research which this paper does not cover?

The University would stress the importance of announcing the date and framework of the next exercise as early as possible. We also consider it essential that the Councils make an early announcement of the funding consequences of the next exercise.

The University believes that indications from the funding bodies on the likely future pattern of RAEs beyond the next exercise would assist in longer-term planning, which is essential in research. Long-term planning would also be assisted if there were as much consistency as possible in the framework and processes used in successive RAEs.

The University considers that improvement in the treatment of new researchers in the exercise is needed. We recognise the difficulties faced by panels in attempting to assess potential rather than achievement. However, many panels were able to make very positive comments in the last RAE about the importance of the inclusion of new researchers in vigorous research groups and departments, and to assess submissions taking due account of relative immaturity. The Funding Councils should ensure that such good practices are adopted by all panels. Again, this is critical to ensure the long-term health of research, and to avoid any distortion in the academic labour market.

The Funding Councils also have to ensure that panels act truly as peers, putting to one side their roles as competitors for scarce research resources. The Councils need to ensure that the work of panels is not compromised by fears that those on panels might gain directly from the process. Reassurance on this might be achieved through the work of the “watchdog panel” referred to earlier in this response. This panel might oversee such matters as declarations of interest by members of panels, and other processes adopted by panels to ensure that peers act in ways that transcend their institutional allegiances.
