Assessment, evaluations, and definitions of research impact: A review
Teresa Penfield, Matthew J. Baker, Rosa Scoble, Michael C. Wykes, Assessment, evaluations, and definitions of research impact: A review, Research Evaluation, Volume 23, Issue 1, January 2014, Pages 21–32, https://doi.org/10.1093/reseval/rvt021
This article aims to explore what is understood by the term ‘research impact’ and to provide a comprehensive assimilation of the available literature and information, drawing on global experience to understand the potential for methods and frameworks of impact assessment to be implemented in the UK. We then take a more focused look at the impact component of the UK Research Excellence Framework (REF) taking place in 2014, at some of the challenges of evaluating impact, and at the role that systems might play in future in capturing the links between research and impact, together with the requirements we have for such systems.
When considering the impact that is generated as a result of research, a number of authors and government recommendations have advised that a clear definition of impact is required ( Duryea, Hochman, and Parfitt 2007 ; Grant et al. 2009 ; Russell Group 2009 ). From the outset, we note that the understanding of the term impact differs between users and audiences. There is a distinction between ‘academic impact’ understood as the intellectual contribution to one’s field of study within academia and ‘external socio-economic impact’ beyond academia. In the UK, evaluation of academic and broader socio-economic impact takes place separately. ‘Impact’ has become the term of choice in the UK for research influence beyond academia. This distinction is not so clear in impact assessments outside of the UK, where academic outputs and socio-economic impacts are often viewed as one, to give an overall assessment of value and change created through research.
For the purposes of the REF, impact is defined as:
‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’
Impact is assessed alongside research outputs and environment to provide an evaluation of the research taking place within an institution. As such, research outputs (for example, knowledge generated and publications) can be translated into outcomes (for example, new products and services) and then into impacts or added value ( Duryea et al. 2007 ). Although some might find the distinction marginal or even confusing, this differentiation between outputs, outcomes, and impacts is important and has been highlighted not only for the impacts derived from university research ( Kelly and McNicoll 2011 ) but also for work done in the charitable sector ( Ebrahim and Rangan 2010 ; Berg and Månsson 2011 ; Kelly and McNicoll 2011 ). The Social Return on Investment (SROI) guide ( The SROI Network 2012 ) suggests that ‘The language varies “impact”, “returns”, “benefits”, “value” but the questions around what sort of difference and how much of a difference we are making are the same’. It is perhaps assumed here that a positive or beneficial effect will be considered an impact, but what about changes that are perceived to be negative? When modifying the Payback Framework, developed for the health and biomedical sciences, for use in the social sciences, Wooding et al. (2007) changed the terminology from ‘benefit’ to ‘impact’, arguing that whether a change is positive or negative is subjective and can itself change with time. This is commonly illustrated with the drug thalidomide, which was introduced in the 1950s to treat, among other things, morning sickness but was withdrawn in the early 1960s because its teratogenic effects resulted in birth defects. Thalidomide has since been found to have beneficial effects in the treatment of certain types of cancer. Clearly, the impact of thalidomide would have been viewed very differently in the 1950s compared with the 1960s or today.
In viewing impact evaluations it is important to consider not only who has evaluated the work but the purpose of the evaluation to determine the limits and relevance of an assessment exercise. In this article, we draw on a broad range of examples with a focus on methods of evaluation for research impact within Higher Education Institutions (HEIs). As part of this review, we aim to explore the following questions:
What are the reasons behind trying to understand and evaluate research impact?
What are the methodologies and frameworks that have been employed globally to assess research impact and how do these compare?
What are the challenges associated with understanding and evaluating research impact?
What indicators, evidence, and impacts need to be captured within developing systems?
What are the reasons behind trying to understand and evaluate research impact?
Throughout history, the activities of a university have been to provide both education and research, but the fundamental purpose of a university was perhaps best described in the writings of the mathematician and philosopher Alfred North Whitehead (1929).
‘The justification for a university is that it preserves the connection between knowledge and the zest of life, by uniting the young and the old in the imaginative consideration of learning. The university imparts information, but it imparts it imaginatively. At least, this is the function which it should perform for society. A university which fails in this respect has no reason for existence. This atmosphere of excitement, arising from imaginative consideration transforms knowledge.’
In undertaking excellent research, we anticipate that great things will follow; indeed, one of the fundamental reasons for undertaking research is to generate and transform knowledge in ways that benefit society as a whole.
One might assume that by funding excellent research, impacts (including those that are unforeseen) will follow, and traditionally the assessment of university research has focused on academic quality and productivity. Some aspects of impact, such as the value of Intellectual Property, are already recorded by UK universities through their Higher Education Business and Community Interaction Survey return to the Higher Education Statistics Agency; moreover, as with other public and charitable sector organizations, showcasing impact is an important part of attracting and retaining donors and support ( Kelly and McNicoll 2011 ).
The reasoning behind the move towards assessing research impact is undoubtedly complex, involving both political and socio-economic factors, but, nevertheless, we can differentiate between four primary purposes.
HEIs overview. To enable research organizations including HEIs to monitor and manage their performance and understand and disseminate the contribution that they are making to local, national, and international communities.
Accountability. To demonstrate to government, stakeholders, and the wider public the value of research. There has been a drive from the UK government, through the Higher Education Funding Council for England (HEFCE) and the Research Councils ( HM Treasury 2004 ), to account for the spending of public money by demonstrating the value of research to tax payers, voters, and the public in terms of socio-economic benefits ( European Science Foundation 2009 ), in effect justifying this expenditure ( Davies, Nutley, and Walter 2005 ; Hanney and González-Block 2011 ).
Inform funding. To understand the socio-economic value of research and subsequently inform funding decisions. By evaluating the contribution that research makes to society and the economy, future funding can be allocated where it is perceived to bring about the desired impact. As Donovan (2011) comments, ‘Impact is a strong weapon for making an evidence based case to governments for enhanced research support’.
Understand. To understand the methods and routes by which research leads to impacts, in order to make the most of research findings and develop better ways of delivering impact.
The growing trend for accountability within the university system is not limited to research and is mirrored in assessments of teaching quality, which now feed into evaluation of universities to ensure fee-paying students’ satisfaction. In demonstrating research impact, we can provide accountability upwards to funders and downwards to users on a project and strategic basis ( Kelly and McNicoll 2011 ). Organizations may be interested in reviewing and assessing research impact for one or more of the aforementioned purposes and this will influence the way in which evaluation is approached.
It is important to emphasize that ‘Not everyone within the higher education sector itself is convinced that evaluation of higher education activity is a worthwhile task’ ( Kelly and McNicoll 2011 ). Once plans for the new assessment of university research were released, the University and College Union ( University and College Union 2011 ) organized a petition calling on the UK funding councils to withdraw the inclusion of impact assessment from the REF proposals. This petition was signed by 17,570 academics (52,409 academics were returned to the 2008 Research Assessment Exercise), including Nobel laureates and Fellows of the Royal Society ( University and College Union 2011 ). Impact assessments raise concerns that research will be steered towards disciplines and topics in which impact is more easily evidenced and that provide economic impacts, which could subsequently lead to a devaluation of ‘blue skies’ research. Johnston ( Johnston 1995 ) notes that by developing relationships between researchers and industry, new research strategies can be developed. This raises two questions: should UK business and industry not invest in the research that will deliver them impacts, and who will fund basic research if not the government? Donovan (2011) asserts that there should be no disincentive for conducting basic research. By asking academics to consider the impact of the research they undertake, and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge. Professor James Ladyman, at the University of Bristol, a vocal opponent of awarding funding based on the assessment of research impact, has been quoted as saying that ‘…inclusion of impact in the REF will create “selection pressure,” promoting academic research that has “more direct economic impact” or which is easier to explain to the public’ ( Corbyn 2009 ).
Despite the concerns raised, the broader socio-economic impacts of research will be included in the REF in 2014 and will count for 20% of the overall research assessment. From an international perspective, this represents a step change in the comprehensiveness with which impact will be assessed within universities and research institutes, incorporating impact from across all research disciplines. Understanding what impact looks like across the various strands of research, and the variety of indicators and proxies used to evidence it, will be important in developing a meaningful assessment.
What are the methodologies and frameworks that have been employed globally to evaluate research impact and how do these compare?
The traditional form of evaluation of university research in the UK was based on measuring academic impact and quality through a process of peer review ( Grant 2006 ). Evidence of academic impact may be derived through various bibliometric methods, one example of which is the H index, which incorporates factors such as the number of publications and the citations they receive. These metrics may be used in the UK to understand the benefits of research within academia and are often incorporated into the broader perspective of impact seen internationally, for example within Excellence in Research for Australia and STAR Metrics in the USA, in which quantitative measures such as publications, citations, and research income are used to assess impact. These ‘traditional’ bibliometric techniques can be regarded as giving only a partial picture of full impact ( Bornmann and Marx 2013 ), with no link to causality. Standard approaches actively used in programme evaluation, such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgment, are each considered by some ( Vonortas and Link 2012 ) to have shortcomings when used to measure impacts.
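As an aside on how simple such bibliometric indicators are to compute, and how little of the route from research to impact they capture, the following is a minimal sketch of the H index calculation (an author has index h if h of their papers have at least h citations each); the citation counts used here are purely hypothetical.

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    # Rank papers by citation count (highest first) and find the last rank
    # at which the citation count still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's publications
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
```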
Incorporating assessment of the wider socio-economic impact began using metrics-based indicators such as Intellectual Property registered and commercial income generated ( Australian Research Council 2008 ). In the UK, more sophisticated assessments of impact incorporating wider socio-economic benefits were first investigated within the fields of Biomedical and Health Sciences ( Grant 2006 ), an area of research that wanted to be able to justify the significant investment it received. Frameworks for assessing impact have been designed and are employed at an organizational level addressing the specific requirements of the organization and stakeholders. As a result, numerous and widely varying models and frameworks for assessing impact exist. Here we outline a few of the most notable models that demonstrate the contrast in approaches available.
The Payback Framework is possibly the most widely used and adapted model for impact assessment ( Wooding et al. 2007 ; Nason et al. 2008 ). Developed during the mid-1990s by Buxton and Hanney, working at Brunel University, it incorporates both academic outputs and wider societal benefits ( Donovan and Hanney 2011 ) to assess the outcomes of health sciences research. The Payback Framework systematically links research with the associated benefits ( Scoble et al. 2010 ; Hanney and González-Block 2011 ) and can be thought of in two parts: first, a model that breaks the research and subsequent dissemination process into specific components within which the benefits of research can be studied; and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed ( Hanney and González-Block 2011 ). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institutes of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong ( Bernstein et al. 2006 ; Nason et al. 2008 ; CAHS 2009 ; Spaapen et al. n.d. ). The Payback Framework enables health and medical research and impact to be linked, and the process by which impact occurs to be traced. For more extensive reviews of the Payback Framework, see Davies et al. (2005) , Wooding et al. (2007) , Nason et al. (2008) , and Hanney and González-Block (2011) .
A very different approach, known as Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI), was developed from the Dutch project Evaluating Research in Context and has as its central theme the capture of ‘productive interactions’ between researchers and stakeholders by analysing the networks that evolve during research programmes ( Spaapen and Drooge 2011 ; Spaapen et al. n.d. ). SIAMPI is based on the widely held assumption that interactions between researchers and stakeholders are an important pre-requisite to achieving impact ( Donovan 2011 ; Hughes and Martin 2012 ; Spaapen et al. n.d. ). This framework is intended to be used as a learning tool to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research. SIAMPI has been used within the Netherlands Institute for Health Services Research ( SIAMPI n.d. ). ‘Productive interactions’, which can perhaps be viewed as instances of knowledge exchange, are widely valued and supported internationally as mechanisms for enabling impact; they are often supported financially, for example by Canada’s Social Sciences and Humanities Research Council, which funds knowledge exchange with a view to enabling long-term impact. In the UK, the Department for Business, Innovation and Skills provided funding of £150 million for knowledge exchange in 2011–12 to ‘help universities and colleges support the economic recovery and growth, and contribute to wider society’ ( Department for Business, Innovation and Skills 2012 ). While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed. One of the advantages of this method is that less input is required compared with capturing the full route from research to impact. A comprehensive assessment of impact itself is not undertaken with SIAMPI, which makes it a less suitable method where showcasing the benefits of research is desirable or where justification of funding based on impact is required.
The first attempt globally to comprehensively capture the socio-economic impact of research across all disciplines was undertaken for the Australian Research Quality Framework (RQF), using a case study approach. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework, a pilot assessment was undertaken by the Australian Technology Network. Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories; their submissions were then verified by an expert panel, which concluded that researchers and case studies could provide enough qualitative and quantitative evidence for reviewers to assess the impact arising from research ( Duryea et al. 2007 ). To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007, the framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF.
In developing the UK REF, HEFCE commissioned a report from RAND in 2009 to review international practice for assessing research impact and to provide recommendations to inform the development of the REF. RAND selected four frameworks to represent the international arena ( Grant et al. 2009 ). One of these, the RQF, was identified as providing a ‘promising basis for developing an impact approach for the REF’ using the case study approach. HEFCE developed an initial methodology that was then tested through a pilot exercise. The case study approach, recommended by the RQF, was combined with ‘significance’ and ‘reach’ as criteria for assessment. These criteria were also supported by a model developed at Brunel for the ‘measurement’ of impact that used similar measures, defined as depth and spread. In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. Evaluation of impact in terms of reach and significance allows all disciplines of research and types of impact to be assessed side by side ( Scoble et al. 2010 ).
The range and diversity of frameworks developed reflect the variation in the purpose of evaluation, including the stakeholders for whom the assessment takes place, along with the type of impact and evidence anticipated. The most appropriate type of evaluation will vary according to the stakeholder we wish to inform. Studies ( Buxton, Hanney and Jones 2004 ) into the economic gains from biomedical and health sciences research determined that different methodologies provide different ways of considering economic benefits. A discussion of the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006) .
Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues, for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact ( Scoble et al. working paper ).
Although based on the RQF, the REF did not adopt all of its suggestions, for example, the option of allowing research groups to opt out of impact assessment should the nature or stage of their research render it unsuitable ( Donovan 2008 ). In 2009–10, the REF team conducted a pilot study involving 29 institutions, which submitted case studies to one of five units of assessment (clinical medicine; physics; earth systems and environmental sciences; social work and social policy; and English language and literature) ( REF2014 2010 ). These case studies were reviewed by expert panels and, as with the RQF, the panels found that it was possible to assess impact and develop ‘impact profiles’ using the case study approach ( REF2014 2010 ).
From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s. Differences between these two assessments include the removal of indicators of esteem and the addition of assessment of socio-economic research impact. The REF will therefore assess three aspects of research:
Outputs
Impact
Environment
Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment; and second, through impact case studies that describe the impact taking place following excellent research within a unit of assessment ( REF2014 2011a ). HEFCE initially indicated that impact should merit a 25% weighting within the REF ( REF2014 2011b ); however, this has been reduced to 20% for the 2014 REF. This reduction came perhaps as a result of feedback and lobbying, for example from the Russell Group and the Million+ group of universities, who called for impact to count for 15% ( Russell Group 2009 ; Jump 2011 ), and following guidance from the expert panels undertaking the pilot exercise, who suggested that impact assessment would be in a developmental phase during the 2014 REF and that a lower weighting would therefore be appropriate, with the expectation that this would be increased in subsequent assessments ( REF2014 2010 ).
The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In the UK, evidence and research impacts will be assessed for the REF within research disciplines. Although the range of impacts derived from research in different disciplines is likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example from business development to cultural change or saving lives. An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared rather than impacts from specific disciplines.
Providing advice and guidance within specific disciplines is undoubtedly helpful. It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that impact and evidence are expected to vary according to discipline ( REF2014 2012 ). Why should this be the case? Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature by comparison with, for example, the arts. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group’s impact handbook (LSE n.d.), a review of the social and economic impacts arising from the arts produced by Reeves (2002) , and a review by Kuruvilla et al. (2006) of the impact arising from health research. Perhaps it is time for a generic guide based on types of impact rather than research discipline?
What are the challenges associated with understanding and evaluating research impact?
In endeavouring to assess or evaluate impact, a number of difficulties emerge, and these may be specific to certain types of impact. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines.
5.1 Time lag
The time lag between research and impact varies enormously. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. In the development of the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and impact was difficult. In the UK, the Russell Group universities responded to the REF consultation by recommending that no time limit be placed on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact ( Russell Group 2009 ). To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with impact occurring during an assessment window from 1 January 2008 to 31 July 2013. However, there has been recognition that this window may be insufficient in some instances, and architecture has been granted an additional 5-year period ( REF2014 2012 ); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. A recommendation from the REF pilot was that panels should be able to extend the time frame where appropriate; this, however, poses a difficult decision when submitting a case study to the REF, namely what the view of the panel will be and whether, if an extension is deemed inappropriate, the case study will be rendered ‘unclassified’.
5.2 The developmental nature of impact
Impact is not static: it will develop and change over time, and this development may be an increase or a decrease in the current degree of impact. Impact can be temporary or long-lasting. The point at which assessment takes place will therefore influence the degree and significance of that impact. For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, and then regulatory approval is granted before the drug is used to deliver potential health benefits. Clearly, the potential new drug may fail at any one of these phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits, and the time at which an impact assessment takes place will influence the degree of impact observed. If impact is short-lived and has come and gone within an assessment period, how will it be viewed and considered? Again, the objective and perspective of the individuals and organizations assessing impact will be key to understanding how temporary and dissipated impact will be valued in comparison with longer-term impact.
5.3 Attribution
Impact is derived not only from targeted research but also from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore attributing the contribution made by a specific individual, piece of research, funding, strategy, or organization to an impact is not straightforward. Husbands-Fealing suggests that, to assist the identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, showing how later phases result from earlier ones. Such a framework should be not linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is often the culmination of work spanning multiple research communities ( Duryea et al. 2007 ). Concerns over how to attribute impacts have been raised many times ( The Allen Consulting Group 2005 ; Duryea et al. 2007 ; Grant et al. 2009 ), and differentiating between the various major and minor contributions that lead to impact is a significant challenge.
Figure 1 , replicated from Hughes and Martin (2012) , illustrates how the ease with which impact can be attributed decreases with time, whereas the impact, or effect of complementary assets, increases. This highlights the problem that it may take a considerable amount of time for the full impact of a piece of research to develop, but that, because of this time and the increasing complexity of the networks involved in translating the research and interim impacts, the impact becomes more difficult to attribute and link back to a contributing piece of research.
Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).
This presents particular difficulties in research disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point attribution becomes a huge challenge. If this research is to be assessed alongside more applied research, it is important that we are able to at least determine the contribution of basic research. It has long been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking: that ‘one is able to see further by standing on the shoulders of giants’.
5.4 Knowledge creep
It is acknowledged that one of the outcomes of developing new knowledge through research can be ‘knowledge creep’ where new data or information becomes accepted and gets absorbed over time. This is particularly recognized in the development of new government policy where findings can influence policy debate and policy change, without recognition of the contributing research ( Davies et al. 2005 ; Wooding et al. 2007 ). This is recognized as being particularly problematic within the social sciences where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a ‘distinctive contribution’ ( REF2014 2011a ). The difficulty then is how to determine what the contribution has been in the absence of adequate evidence and how we ensure that research that results in impacts that cannot be evidenced is valued and supported.
5.5 Gathering evidence
Gathering evidence of the links between research and impact is a challenge, and not only where that evidence is lacking. The introduction of impact assessments with the requirement to collate evidence retrospectively poses difficulties because evidence, measurements, and baselines have, in many cases, not been collected and may no longer be available. Looking forward, we will be able to reduce this problem, but identifying, capturing, and storing evidence in such a way that it can be used in the decades to come is a difficulty that we will still need to tackle.
Collating the evidence and indicators of impact is a significant task being undertaken within universities and institutions globally. Decker et al. (2007) surveyed more than 6,000 researchers at top US research institutions during 2005 and found that, on average, more than 40% of their time was spent on administrative tasks. It is desirable that the administrative burden placed on researchers is limited, and therefore, to assist the tracking and collating of impact data, systems are being developed through numerous projects internationally, including STAR Metrics in the USA, the European Research Council (ERC) Research Information System, and Lattes in Brazil ( Lane 2010 ; Mugabushaka and Papazoglou 2012 ).
Ideally, systems within universities internationally would be able to share data allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. CERIF (Common European Research Information Format) was developed for this purpose, first released in 1991; a number of projects and systems across Europe such as the ERC Research Information System ( Mugabushaka and Papazoglou 2012 ) are being developed as CERIF-compatible.
In the UK, there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and of the evidence for it, that can be used universally is seen as very valuable. However, the Achilles heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that are rewarded.
Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE project was to develop a set of impact indicators that would enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF and by Research Councils UK, from unpublished draft case studies undertaken at King’s College London, and from relevant publications (MICE Project n.d.). A taxonomy of impact categories was then produced onto which impact could be mapped. What emerged on testing the MICE taxonomy ( Cooke and Nadim 2011 ), by mapping impacts from case studies, was that detailed categorization of impact was too prescriptive. Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy having more than 100 indicators, these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was ‘a unique collection’. Where quantitative data were available, for example audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmith report ( Cooke and Nadim 2011 ) recommended making indicators ‘value free’, enabling the value or quality to be established in an impact descriptor that could then be assessed by expert panels. The report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation, re-use and influence, collaboration and boundary work, and innovation and invention.
While defining the terminology used to describe impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising via non-standard routes can still be accommodated. It is worth considering the degree to which indicators are defined, favouring broader definitions with greater flexibility.
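As an illustration only, the sketch below shows one way a research information system might hold such flexible, ‘value-free’ indicators: broad categories follow the general groupings recommended in the Goldsmith report, while the value or quality judgement sits in a free-text descriptor. All class and field names here are hypothetical, not part of any published standard such as CERIF.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Broad, value-free indicator categories; finer-grained labels stay free text
# rather than being forced into a fixed, prescriptive taxonomy.
CATEGORIES = {
    "dissemination_and_circulation",
    "reuse_and_influence",
    "collaboration_and_boundary_work",
    "innovation_and_invention",
}

@dataclass
class ImpactIndicator:
    category: str                                       # one of CATEGORIES
    descriptor: str                                      # narrative carrying the value/quality judgement
    evidence: List[str] = field(default_factory=list)    # documents, links, testimonies
    baseline: Optional[str] = None                       # context or baseline, where available

indicator = ImpactIndicator(
    category="reuse_and_influence",
    descriptor="Findings cited in a regional public health policy document",
    evidence=["policy_document_2012.pdf"],
    baseline="No prior citation of the underlying research in policy",
)
assert indicator.category in CATEGORIES
```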
It is possible to incorporate both metrics and narratives within systems, for example within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to record impacts. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities) for whom the purpose of analysis may vary ( Davies et al. 2005 ). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. n.d.). Systems need to be able to capture the links along, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research and the various contributions to impact made along the pathway.
Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances, controls are not feasible as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.
It is now possible to use data-mining tools to extract specific data from narratives or unstructured data sources ( Mugabushaka and Papazoglou 2012 ). This is being done for the collation of academic impact and outputs, for example by the Research Portfolio Online Reporting Tools, which use PubMed and text mining to cluster research projects, and by STAR Metrics in the USA, which uses administrative records and research outputs; a similar approach is being implemented by the ERC using data in the public domain ( Mugabushaka and Papazoglou 2012 ). These techniques have the potential to transform data capture and impact assessment ( Jones and Grant 2013 ), although Mugabushaka and Papazoglou (2012) acknowledge that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture the full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. Tools that adequately capture the interactions taking place between researchers, institutions, and stakeholders would also be very valuable. If knowledge exchange events could be captured, for example electronically as they occur, or automatically if flagged from an electronic calendar or diary, then far more of these events could be recorded with relative ease. Capturing knowledge exchange events would greatly assist the linking of research with impact.
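As a rough illustration of the kind of text mining referred to above, the sketch below clusters a handful of hypothetical project abstracts using TF-IDF features and k-means. It is only broadly analogous to the grouping of projects performed by the reporting tools cited and is not the method those systems actually use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical project abstracts; real systems would draw on administrative
# records, grant databases, and publication repositories.
abstracts = [
    "Randomised trial of a new cardiovascular drug therapy",
    "Genomic analysis of tumour suppressor pathways in cancer",
    "Community intervention to increase physical activity in schools",
    "Cohort study of diet, exercise, and obesity in adolescents",
]

# Convert the free text to TF-IDF vectors, then group similar projects.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in zip(labels, abstracts):
    print(label, text)
```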
The transition to routine capture of impact data not only requires the development of tools and systems to help with implementation but also a cultural change to develop practices, currently undertaken by a few to be incorporated as standard behaviour among researchers and universities.
What indicators, evidence, and impacts need to be captured within developing systems?
There is a great deal of interest in collating terms for impact and indicators of impact. The Consortia for Advancing Standards in Research Administration Information, for example, has put together a data dictionary with the aim of setting standards for the terminology used to describe impact and indicators, which can be incorporated into systems internationally, and it seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address the types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.
7.1 Metrics
Metrics have commonly been used as a measure of impact, for example, in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.
The emergence of SROI perhaps indicates the desire of some organizations to be able to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used to demonstrate worth. It has been used within the charitable sector ( Berg and Månsson 2011 ) and also features as evidence in the REF guidance for panel D ( REF2014 2012 ). More details on SROI can be found in ‘A guide to Social Return on Investment’ produced by The SROI Network (2012) .
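At its core, SROI expresses the present value of the monetized social, environmental, and economic outcomes as a ratio of the investment made. The sketch below shows that arithmetic with entirely hypothetical proxy values and an assumed discount rate; it is a simplification and not drawn from The SROI Network guide itself.

```python
def sroi_ratio(annual_outcome_values, investment, discount_rate=0.035):
    """Present value of monetized outcomes divided by the investment made.

    annual_outcome_values: projected value of outcomes for year 1 onwards.
    discount_rate: assumed rate used to discount future values (hypothetical).
    """
    present_value = sum(
        value / (1 + discount_rate) ** year
        for year, value in enumerate(annual_outcome_values, start=1)
    )
    return present_value / investment

# Hypothetical example: £50,000 invested; outcomes valued at £20,000 a year for 3 years.
print(round(sroi_ratio([20000, 20000, 20000], investment=50000), 2))  # ~1.12
```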
Although metrics can provide evidence of quantitative changes or impacts from our research, they are unable to adequately provide evidence of the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that
The full impact will not be realized, as we focus on easily quantifiable indicators
We will focus attention towards generating results that enable boxes to be ticked rather than delivering real value for money and innovative research.
They risk being monetized or converted into a lowest common denominator in an attempt to compare the cost of a new theatre against that of a hospital.
7.2 Narratives
Narratives can be used to describe impact; they enable a story to be told, place the impact in context, and make good use of qualitative information. Narratives are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives alone to assess impact is that they often lack the evidence required to judge whether the research and the impact are appropriately linked. Where narratives are used in conjunction with metrics, a more complete picture of impact can be developed, again from a particular perspective but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.
Table 1. The advantages and disadvantages of the case study approach
By allowing impact to be placed in context, we answer the ‘so what?’ question that can result from quantitative data analyses, but is there a risk that the full picture may not be presented to demonstrate impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?
7.3 Surveys and testimonies
One way in which changes of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or by undertaking surveys. These might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group may have dispersed.
The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture them would be very valuable.
7.4 Citations (outside of academia) and documentation
Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents or reference to a piece of research in the media. A collation of several indicators of impact may be enough to demonstrate that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas and may be considered alongside other evidence as contributing to, or an indicator of, impact.
The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where and to whom research findings are going.
The understanding of the term impact varies considerably and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.
While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study presents evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive both to assimilate and to review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may itself influence the assessment of impact. Over the past year, a number of new posts have been created within universities dedicated to writing impact case studies, and a number of companies now offer this as a contract service. A key concern here is that universities which can afford to employ consultants or impact ‘administrators’ may generate the best case studies.
The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture the interactions between researchers, the institution, and external stakeholders and to link these with research findings and outputs or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required to capture the information are taken into account. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools that enable researchers to capture much of this themselves would be valuable. However, it must be remembered that, in the case of the UK REF, only impact based on research that took place within the institution submitting the case study is considered. It is therefore in an institution’s interest to have a process by which all the necessary information is captured, enabling a story to be developed even in the absence of a researcher who may have left the institution’s employment. Figure 2 shows the types of information that systems will need to capture and link.
Research findings including outputs (e.g., presentations and publications)
Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc)
Feedback from stakeholders and communication summaries (e.g., testimonials and altmetrics)
Research developments (based on stakeholder input and discussions)
Outcomes (e.g., commercial and cultural, citations)
Impacts (changes, e.g., behavioural and economic)
Figure 2. Overview of the types of information that systems need to capture and link.
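As a minimal sketch of what such linked records might look like (all class and field names are our own invention, not a published schema), the pathway can be held as a small graph connecting outputs, interactions, outcomes, and impacts, so that the route to impact can be traced in either direction.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Record:
    """A node in the research-to-impact pathway."""
    record_id: str
    kind: str                                        # e.g. "output", "interaction", "outcome", "impact"
    description: str
    links: List[str] = field(default_factory=list)   # ids of the records this one builds on

# Hypothetical pathway: publication -> stakeholder workshop -> adoption -> impact
records: Dict[str, Record] = {
    "r1": Record("r1", "output", "Journal article on air quality modelling"),
    "r2": Record("r2", "interaction", "Workshop with city transport planners", links=["r1"]),
    "r3": Record("r3", "outcome", "Model adopted in the local transport plan", links=["r2"]),
    "r4": Record("r4", "impact", "Reduction in reported roadside emissions", links=["r3"]),
}

def trace_back(record_id: str) -> List[str]:
    """Follow links backwards from an impact to the underlying research."""
    chain, stack = [], [record_id]
    while stack:
        rec = records[stack.pop()]
        chain.append(f"{rec.kind}: {rec.description}")
        stack.extend(rec.links)
    return chain

print(trace_back("r4"))
```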
Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts do not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or so impractical to demonstrate, that it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.
This work was supported by Jisc [DIINN10].
A narrative review of research impact assessment models and methods
Andrew J Milat, Adrian E Bauman and Sally Redman
Health Research Policy and Systems, Volume 13, Article number 18 (2015)
Open access. Published: 18 March 2015
Background
Research funding agencies continue to grapple with assessing research impact. Theoretical frameworks are useful tools for describing and understanding research impact. The purpose of this narrative literature review was to synthesize evidence that describes processes and conceptual models for assessing policy and practice impacts of public health research.
Methods
The review involved keyword searches of electronic databases, including MEDLINE, CINAHL, PsycINFO, EBM Reviews, and Google Scholar, in July/August 2013. Review search terms included ‘research impact’, ‘policy and practice’, ‘intervention research’, ‘translational research’, ‘health promotion’, and ‘public health’. The review included theoretical and opinion pieces, case studies, descriptive studies, frameworks, and systematic reviews describing processes and conceptual models for assessing research impact. The review was conducted in two phases: initially, abstracts were retrieved and assessed against the review criteria, followed by the retrieval and assessment of full papers against the review criteria.
Results
Thirty-one primary studies and one systematic review met the review criteria, with 88% of studies published since 2006. Studies comprised assessments of the impacts of a wide range of health-related research, including basic and biomedical research, clinical trials, and health services research, as well as public health research. Six studies had an explicit focus on assessing impacts of health promotion or public health research, and one had a specific focus on intervention research impact assessment. A total of 16 different impact assessment models were identified, with the ‘payback model’ the most frequently used conceptual framework. Typically, impacts were assessed across multiple dimensions using mixed methodologies, including publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The vast majority of studies relied on principal investigator interviews and/or peer review to assess impacts, rather than interviewing policymakers and end-users of research.
Conclusions
Research impact assessment is a new field of scientific endeavour and there are a growing number of conceptual frameworks applied to assess the impacts of research.
There is increasing recognition that health research investment should lead to improvements in policy [ 1 - 3 ], practice, resource allocation, and, ultimately, the health of the community [ 4 , 5 ]. However, research impacts are complex, non-linear, and unpredictable in nature and there is a propensity to ‘count what can be easily measured’, rather than measuring what ‘counts’ in terms of significant, enduring changes [ 6 ].
Traditional academic-oriented indices of research productivity, such as number of papers, impact factors of journals, citations, research funding, and esteem measures, are well established and widely used by research granting bodies and academic institutions [ 7 ], but they do not always relate well to the ultimate goals of applied health research [ 6 , 8 , 9 ]. Governments are signaling that research metrics of research quality and productivity are insufficient to determine research value because they say little about the real world benefits of research [ 10 - 12 ]. At the same time, research funders continue to grapple with the fundamental problem of assessing broader impacts of research. This task is made more challenging because there are currently no agreed systematic approaches to measuring broader research impacts, particularly impacts on health policy and practice [ 13 , 14 ].
Recent years have seen the development of a number of frameworks that can assist in better describing and understanding the impact of research. Conceptual frameworks can help organize data collection, analysis, and reporting to promote clarity and consistency in the impact assessments made. In the context of this review, research impact is defined as: “… any type of output of research activities which can be considered a ‘positive return’ for the scientific community, health systems, patients, and the society in general ” [ 13 ], p. 2.
In light of these gaps in the literature, the purpose of this narrative literature review was to synthesize evidence that describes processes and conceptual models for assessing research impacts, with a focus on policy and practice impacts of public health research.
Literature review search strategy
The review involved keyword searches of electronic databases including MEDLINE (general medicine), CINAHL (nursing and allied health), PsycINFO (psychology and related behavioural and social sciences), EBM Reviews, Cochrane Database of Systematic Reviews 2005 to May 2013, and Google Scholar. Review search terms included ‘research impact’ OR ‘policy and practice’ AND ‘intervention research’ AND ‘translational research’ AND ‘health promotion’ AND ‘public health’.
The review included theoretical and opinion pieces, case studies, descriptive studies, frameworks, and systematic reviews describing processes and conceptual models for assessing research impact.
The review was conducted in two phases in July/August 2013. In phase 1, abstracts were retrieved and assessed against the review criteria. For abstracts that met the review criteria in phase 1, full papers were retrieved and were assessed for inclusion in the final review. Studies included in the review met the following criteria: i) published in English from January 1990 to June 2013; ii) described processes, theories, or frameworks associated with the assessment of research impact; and iii) were theoretical and opinion pieces, case studies, descriptive studies, frameworks, or systematic reviews.
Due to the dearth of public health and health promotion-specific research impact assessment literature, papers with a focus on clinical or health services research impact assessment were also included. The reference lists of the final papers were checked for further relevant articles; where such articles met the review criteria, they were included in the review. The search process is shown in Figure 1.
Figure 1. Literature search process and numbers of papers identified, excluded, and included in the review of research impact assessment.
Findings of the literature review
An initial review of abstracts in electronic databases against the inclusion criteria yielded 431 abstracts, and searches of reference lists and the grey literature identified a further 9 documents. Of the 434 abstracts and documents reviewed, 39 met the inclusion criteria and full papers were retrieved. Upon review of the full publications against the review criteria, a further 7 papers were excluded, leaving 32 publications in the review [ 8 , 9 , 13 , 15 - 44 ]. A summary of the characteristics of the included studies, covering reference details, study type, domains of impact, methods and indicators, frameworks applied or proposed, and key lessons learned, is provided in Additional file 1: Table S1.
Study characteristics
The review identified 31 primary studies and 1 systematic review that met the review criteria. Six of the studies were reports found in the grey literature. Interestingly, 88% of studies that met the review criteria were published since 2006. The studies in the review included assessments of the impacts of a wide range of health-related research, including basic and biomedical research, clinical trials, health service research, as well as public health research. Six studies [ 22 , 23 , 34 , 36 , 40 , 43 ] had an explicit focus on assessing impacts of health promotion or public health research and 1 had a specific focus on intervention research impact assessment [ 36 ].
The majority of studies were conducted in Australia, the United Kingdom, and North America, noting that the review was limited to studies published in English. The unit of assessment varied greatly, from researchers and research teams [ 22 ] to whole institutions [ 15 ], research disciplines (e.g., prevention research [ 23 ], cancer research [ 41 ], tobacco control research [ 43 ]), and types of grants, for example from public funding bodies [ 17 , 24 ]. The most frequently applied research methods across studies, in rank order, were publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The nature of frameworks and methods used to measure research impacts will now be examined in greater detail.
Frameworks and methods for measuring research impacts
Indices of traditional research productivity such as number of papers, impact factors of journals, and citations figured prominently in studies in the literature review [ 18 , 23 , 41 ].
Across the majority of studies in this review, research impact was assessed using multiple dimensions and methodological approaches. A total of 16 different impact assessment models were identified, with the ‘payback model’ being the most frequently used conceptual framework [ 15 , 24 , 29 , 31 , 44 ]. Other frequently used models included health economics frameworks [ 19 , 21 , 37 ], variants of Research Program Logic Models [ 9 , 35 , 42 ], and the Research Impact Framework [ 8 , 30 ]. A number of recent frameworks, including the Health Services Research Impact Framework [ 20 ] and the Banzi Health Research Impact Framework [ 13 , 34 , 36 ], are hybrids of previous conceptual approaches that categorize impacts and benefits along multiple dimensions and attempt to integrate them. Commonly applied frameworks identified in the review, including the Payback model, the Research Impact Framework, health economics models, and the newer hybrid Health Research Impact Framework, will now be examined in greater detail.
The payback model was developed by Buxton and Hanney [ 45 ] and takes into account resources, research processes, primary outputs, dissemination, secondary outputs and applications, and benefits or final outcomes provided by the research. Categories of outcome in the ‘payback’ framework include i) knowledge production (journal articles, books/book chapters, conference proceedings, reports); ii) use of research in the research system (acquisition of formal qualifications by members of the research team, career advancement, and use of project findings for methodology in subsequent research); iii) use of research project findings in health system policy/decision making (findings used in policy/decision making at any level of the health service, such as the geographic or organisational level); iv) application of the research findings through changed behaviour (changes in behaviour observed or expected through the application of findings to research-informed policies at a geographical, organisational, and population level); v) factors influencing the utilization of research (impact of research dissemination on policy/decision making/behavioural change); and vi) health/health service/economic benefits (improved service delivery, cost savings, improved health, or increased equity).
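As a minimal sketch of how these six categories might be encoded when coding semi-structured interview data, the example below defines a simple record type for a single coded impact claim. This is not the authors' instrument or the official Payback tooling; category names follow the text, and the example record is invented.

```python
# Minimal sketch (not the authors' instrument) of coding interview statements
# against the six Payback categories described in the text.
from dataclasses import dataclass, field
from enum import Enum


class PaybackCategory(Enum):
    KNOWLEDGE_PRODUCTION = "Knowledge production"
    RESEARCH_SYSTEM = "Use of research in the research system"
    INFORMING_POLICY = "Use of findings in health system policy/decision making"
    CHANGED_BEHAVIOUR = "Application of findings through changed behaviour"
    UTILIZATION_FACTORS = "Factors influencing the utilization of research"
    HEALTH_ECONOMIC_BENEFITS = "Health/health service/economic benefits"


@dataclass
class PaybackItem:
    category: PaybackCategory
    description: str
    evidence: list[str] = field(default_factory=list)  # sources verifying the claim


# Hypothetical coding of one interview statement:
item = PaybackItem(
    category=PaybackCategory.INFORMING_POLICY,
    description="Trial findings cited in a state-level physical activity plan.",
    evidence=["policy document review", "end-user interview"],
)
print(item.category.value, "-", item.description)
```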
The model is usually applied as a semi-structured interview guide for researchers to identify the impact of their research and is often accompanied by bibliometric analysis and verification processes. The payback categories have been found to be applicable to assessing impact of research [ 15 , 24 , 29 ], especially the more proximal impacts on knowledge production, research targeting, capacity building and absorption, and informing practice, policy, and product development. The model has been found to be less effective in eliciting information about the longer term categories of impact on health and health sector benefits and economics [ 29 ].
The Research Impact Framework was developed in the UK by Kuruvilla et al. [ 8 , 30 ]. It draws upon both the research impact literature and UK research assessment criteria for publicly funded research, and was validated through empirical analysis of research projects at the London School of Hygiene & Tropical Medicine. The framework is built around four categories of impact, namely i) research related, ii) policy, iii) service, and iv) societal. Within each of these areas, further descriptive categories are identified. For example, the nature of research impact on policy can be described using the Weiss categorisation of ‘instrumental use’, where research findings drive policy-making; ‘mobilisation of support’, where research provides support for policy proposals; ‘conceptual use’, where research influences the concepts and language of policy deliberations; and ‘redefining/wider influence’, where research leads to rethinking and changing established practices and beliefs [ 30 ]. The framework is applied as a semi-structured interview guide for researchers to identify the impact of their research. Users of the framework have reported that it enables the systematic identification of a range of specific and verifiable impacts and allows consideration of the unintended effects of research [ 30 ].
The framework proposed by Banzi et al. [ 13 ] is an adaptation of the Canadian Academy of Health Sciences impact model [ 25 ] in light of a systematic review and includes five broad categories of research impact, namely i) advancing knowledge, ii) capacity building, iii) informing decision-making, iv) health and other sector benefits, and v) broad socio-economic benefits. The Banzi framework proposes a set of indicators for each domain. To illustrate, indicators for informing decision-making include citation in guidelines, policy documents, and plans; references used as background for successful funding proposals; consulting, support activity, and contributions to advisory committees; patents and industrial collaboration; and packages of material and communications to key target audiences about findings. This multidimensional framework takes into account several aspects of research impact and use, as well as comprehensive analytical approaches including bibliometric analysis, surveys, audit, document review, case studies, and panel assessment. Panel assessments generally involve asking experts to assess the merits of research against impact criteria.
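The sketch below arranges the five Banzi et al. domains as a simple indicator checklist and tallies, per domain, how many indicators a project has evidence for. The domains and the decision-making indicators follow the text; the indicators listed for the other domains are illustrative placeholders, and scoring by counting evidenced indicators is our own simplification rather than a method prescribed by the framework.

```python
# Illustrative checklist for the five Banzi et al. domains; indicators outside
# the decision-making domain are placeholders, not taken from the framework.
BANZI_DOMAINS = {
    "Advancing knowledge": ["peer-reviewed publications", "citations"],
    "Capacity building": ["higher degrees completed", "career advancement"],
    "Informing decision-making": [
        "citation in guidelines, policy documents, and plans",
        "contribution to advisory committees",
        "patents and industrial collaboration",
    ],
    "Health and other sector benefits": ["improved service delivery"],
    "Broad socio-economic benefits": ["productivity gains"],
}


def evidenced_indicator_counts(evidence: dict[str, set[str]]) -> dict[str, int]:
    """Count, per domain, how many listed indicators a project has evidence for."""
    return {
        domain: sum(1 for ind in indicators if ind in evidence.get(domain, set()))
        for domain, indicators in BANZI_DOMAINS.items()
    }


# Hypothetical project evidence gathered via document review and interviews:
project_evidence = {
    "Advancing knowledge": {"peer-reviewed publications"},
    "Informing decision-making": {"citation in guidelines, policy documents, and plans"},
}
print(evidenced_indicator_counts(project_evidence))
```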
Economic models used to assess impacts of research varied from cost-benefit analysis to return on investment and employed a variety of methods for determining the economic benefits of research. The National Institutes of Health study in 1993 was among the first to attempt to systematically monetize the benefits of medical research. It provided estimates of savings for health care systems (direct costs) and savings for the community as a whole (indirect costs), and quantified benefits in terms of quality-adjusted life years. The Deloitte Access Economics study [ 21 ] built on the foundations of the 1993 analysis to estimate the returns on investment in research in Australia for the main disease areas, employing health system expenditure modelling and monetising total quality-adjusted life years gained. According to Buxton et al. [ 19 ], measuring only health care savings is generally seen as too narrow a focus, and their analysis considered the benefits, or indirect cost savings, of avoiding lost production and the further activity stimulated by research.
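A stylized worked example of the return-on-investment logic used in these health economic analyses is given below: monetize the quality-adjusted life years gained, add direct health system savings, and compare the total with research spending. All figures are invented for illustration and are not taken from the cited studies.

```python
# Stylized benefit-cost calculation; all figures are invented for illustration.
qalys_gained = 50_000            # QALYs attributed to the research area
value_per_qaly = 50_000          # assumed willingness-to-pay per QALY (AUD)
health_system_savings = 4.0e8    # direct cost savings (AUD)
research_expenditure = 1.2e9     # total research spending (AUD)

total_benefit = qalys_gained * value_per_qaly + health_system_savings
benefit_cost_ratio = total_benefit / research_expenditure

print(f"Total monetized benefit: ${total_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f} per dollar invested")
```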
The aforementioned models all attempted to quantify a mix of more proximal research and policy and practice impacts, as well as more distal societal and economic benefits of research. It is also interesting to note that across the studies in this review, only four [ 16 , 29 , 34 , 36 ] interviewed non-academic end-users of research in impact assessment processes, with the vast majority of studies relying on principal investigator interviews and/or peer review processes to assess impacts.
Comprehensive monitoring and measurement of research impact is a complex undertaking requiring the involvement of many actors within the research pipeline [ 13 ]. Interestingly, 90% of studies that met the review criteria were published since 2006, indicating that this is a new field of research. Given the dearth of literature on public health research impact assessment, this review included assessments of the impacts of a wide range of health-related research, including basic and biomedical research, clinical trials, and health service research as well as public health research.
The review of both the published and grey literature also revealed that there are a number of conceptual frameworks currently being applied that describe processes of assessing research impact. These frameworks differ in their terminology and approaches. The lack of a common understanding of terminology and metrics makes the task of quantifying research efforts, outputs, and, ultimately, performance in this area more difficult.
Most of the models identified in the review used multidimensional conceptualization and categorization of research impact. These multidimensional models, such as the Payback model, the Research Impact Framework, and the Banzi Health Research Impact Framework, shared common features, including assessment of traditional research outputs, such as publications and research funding, as well as a broader range of potential benefits, including capacity building, policy and product development, and service development, along with broader societal and economic impacts. Assessments that considered more than one category were valued for their ability to capture multifaceted impact processes [ 13 , 36 , 44 ]. Interestingly, these frameworks recognised that research often has impacts not only in the country within which the research is conducted, but also internationally. However, for practical reasons, most studies limited assessment and verification of impacts to a single country [ 19 , 34 , 36 ].
Several methods were used to practically assess research impact, including desk analysis, bibliometrics, panel assessments, interviews, and case studies. A number of studies highlighted the utility of case study methods noting that a considerable range of research paybacks and perspectives would not have been identified without employing a structured case study approach [ 13 , 36 , 44 ]. However, it was noted that case studies can be at risk of ‘conceptualization bias’ and ‘reporting bias’ especially when they are designed or carried out retrospectively [ 13 ]. The costs of conducting case studies can also be a barrier when assessing large volumes of research [ 13 , 36 ].
Despite recent efforts, little is known about the nature and mechanisms of the influence that health research has on health policy or practice. This review suggests that, to date, most primary studies of health research impacts have been small-scale case studies or reviews of medical and health services research funding [ 27 , 31 , 35 , 39 , 41 ], with only two studies offering comprehensive assessments of the policy and practice impacts of public health research, both of which focused on prevention research in Australia.
The first of these studies examined the impact of population health surveillance studies on obesity prevention policy and practice [ 34 ], while the second [ 36 ] examined the policy and practice impacts of intervention research funded through the NSW Health Promotion Demonstration Research Grants Scheme 2000–2006. Both studies utilised comprehensive mixed methods to assess impacts, including semi-structured interviews with both investigators and end-users, bibliometric analysis, document review, verification processes, and case studies. These studies concluded that research projects achieve the greatest policy and practice impacts when they address proximal needs of the policy context by engaging end-users from the inception of research projects, utilize existing policy networks and structures, and use a range of strategies to disseminate findings that go beyond traditional peer-reviewed publications.
This review suggests that the research sector often still uses bibliometric indices to assess research impacts, rather than measuring more enduring and arguably more important policy and practice outcomes [ 6 ]. However, governments are increasingly signaling that metrics of research quality alone are insufficient to determine research value because they say little about the real-world benefits of research [ 10 - 12 ]. The Australian Excellence in Innovation trial [ 26 ] and the UK’s Research Excellence Framework trials [ 28 , 46 ] were commissioned by governments to determine the public benefit from research spending [ 10 , 16 , 47 ].
These attempts raise an important question of how to construct an impact assessment process that can assess multidimensional impacts while remaining feasible to implement at a system level. For example, can the 28 indicators across the 4 domains of the Research Impact Framework realistically be measured in practice? The same could be said of the Banzi framework [ 13 ], which has 26 indicators, and the Research Excellence Framework pilot indicators reported by Ovseiko et al. [ 38 ], which total 20 impact indicators. If such methods are to be widely used in practice by research funders and academic institutions to assess research impacts, the right balance between comprehensiveness and feasibility must be struck.
Though a number of studies suggest it is difficult to determine longer-term societal and economic benefits of research as part of multi-dimensional research impact assessment processes [ 13 , 36 , 44 ], the health economic impact models presented in this review and the broader literature demonstrate that it is feasible to undertake these analyses, particularly if the right methods are used [ 19 , 21 , 37 , 48 ].
The review revealed that, where broader policy and practice impacts of research have been assessed in the literature, the vast majority of studies have relied on principal investigator interviews and/or peer review to assess impacts, instead of interviewing policymakers and other important end-users of research. This would seem to be a methodological weakness of previous research, as solely relying on principal investigator assessments, particularly of impacts of their own research, has an inherent bias, leaving the research impact assessment process open to ‘gilding the lily’. In light of this, future impact assessment processes should routinely engage end-users of research in interviews and assessment processes, but also include independent documentary verification, thus addressing methodological limitations of previous research.
One of the greatest practical issues in measuring research impact, including the impact of public health research, is the long lag time before impacts manifest. It has been observed that, on average, it takes over 6 years for research evidence to reach reviews, papers, and textbooks, and a further 9 years for this evidence to be implemented into practice [ 49 ]. In light of this, it is important to allow sufficient time for impacts to manifest, while not waiting so long that these impacts cannot be verified by stakeholders involved in the production and use of the research. Studies in this review addressed this issue by only assessing projects that had been completed for at least 24 months [ 36 ].
As identified in previous research [ 13 ], a major challenge is attribution of impacts and understanding what would have happened without the individual research activity, or what some describe as the ‘counterfactual’. Creating a control situation for this type of research is difficult but, where possible, identification of baseline measures and contextual factors is important in understanding what counterfactual situations may have arisen. Confidence in attribution of effects can be improved by undertaking independent verification of processes and engaging end-users in assessments rather than relying solely on investigators’ accounts of impacts [ 36 ].
The research described in this review has some limitations that merit closer examination. Given the paucity of research in this area, the review criteria had to be broadened beyond public health research to include all health research. It was also challenging to make direct comparisons across studies, mostly due to the heterogeneity of studies and the lack of a standard terminology, hence the broad definition of ‘research impact’ finally applied in the review criteria. Although the majority of studies were found in the traditional biomedical databases (i.e., MEDLINE, etc.), 18% were found in the grey literature, highlighting the importance of using multiple data sources in future review processes. Another methodological limitation, also identified in previous reviews [ 13 ], is that we did not estimate the level of publication bias and selective publication in this emerging field. Finally, as our analysis included studies published up to June 2013, we may not have captured more recent approaches to impact assessment.
Research impact assessment is a new field of scientific endeavour in which impacts are typically assessed using mixed methodologies, including publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The literature is characterised by an over-reliance on bibliometric methods to assess research impact. Future impact assessment processes could be strengthened by routinely engaging the end-users of research in interviews and assessment processes. If multidimensional research impact assessment methods are to be widely used in practice by research funders and academic institutions, the right balance between comprehensiveness and feasibility must be determined.
References
1. Anderson W, Papadakis E. Research to improve health practice and policy. Med J Aust. 2009;191(11/12):646–7.
2. Cooksey D. A review of UK health research funding. London: HMSO; 2006.
3. Health and Medical Research Strategic Review Committee. The virtuous cycle: working together for health and medical research. Canberra: Commonwealth of Australia; 1998.
4. National Health and Medical Research Council Public Health Advisory Committee. Report of the Review of Public Health Research Funding in Australia. Canberra: NHMRC; 2008.
5. Campbell DM. Increasing the use of evidence in health policy: practice and views of policy makers and researchers. Aust New Zealand Health Policy. 2009;6:21.
6. Wells R, Whitworth JA. Assessing outcomes of health and medical research: do we measure what counts or count what we can measure? Aust New Zealand Health Policy. 2007;4:14.
7. Australian Government Australian Research Council. Excellence in Research in Australia 2012. Canberra: Australian Research Council; 2012.
8. Kuruvilla S, Mays N, Walt G. Describing the impact of health services and policy research. J Health Serv Res Policy. 2007;12 Suppl 1:23–31.
9. Weiss AP. Measuring the impact of medical research: moving from outputs to outcomes. Am J Psychiatr. 2007;164(2):206–14.
10. Bornmann L. Measuring the societal impact of research. Eur Mol Biol Organ. 2012;13(8):673–6.
11. Holbrook JB. Re-assessing the science–society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997–2011). In: Frodeman R, Holbrook JB, Mitcham C, Xiaonan H, editors. Peer Review, Research Integrity, and the Governance of Science–Practice, Theory, and Current Discussions. Dalian: People’s Publishing House and Dalian University of Technology; 2012. p. 328–62.
12. Holbrook JB, Frodeman R. Science’s social effects. Issues in Science and Technology. 2007. http://issues.org/23-3/p_frodeman-3/
13. Banzi R, Moja L, Pistotti V, Facchini A, Liberati A. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst. 2011;9:26.
14. Boaz A, Fitzpatrick S, Shaw B. Assessing the impact of research on policy: A review of the literature for a project on bridging research and policy through outcome evaluation. London: Policy Studies Institute London; 2008.
15. Aymerich M, Carrion C, Gallo P, Garcia M, López-Bermejo A, Quesada M, et al. Measuring the payback of research activities: a feasible ex-post evaluation methodology in epidemiology and public health. Soc Sci Med. 2012;75(3):505–10.
16. Barber R, Boote JD, Parry GD, Cooper CL, Yeeles P, Cook S. Can the impact of public involvement on research be evaluated? A mixed methods study. Health Expect. 2012;15(3):229–41.
17. Barker K. The UK Research Assessment Exercise: the evolution of a national research evaluation system. Res Eval. 2007;16(1):3–12.
18. Boyack KW, Jordan P. Metrics associated with NIH funding: a high-level view. J Am Med Inform Assoc. 2011;18(4):423–31.
19. Buxton M, Hanney S, Morris S, Sundmacher L, Mestre-Ferrandiz J, Garau M, et al. Medical research: what’s it worth? Estimating the economic benefits from medical research in the UK. Report for MRC, Wellcome Trust and the Academy of Medical Sciences. 2008. http://www.wellcome.ac.uk/stellent/groups/corporatesite/@sitestudioobjects/documents/web_document/wtx052110.pdf
20. Buykx P, Humphreys J, Wakerman J, Perkins D, Lyle D, McGrail M, et al. ‘Making evidence count’: A framework to monitor the impact of health services research. Aust J Rural Health. 2012;20(2):51–8.
21. Deloitte Access Economics. Extrapolated returns on investment in NHMRC medical research. Canberra: Australian Society for Medical Research; 2012.
22. Derrick GE, Haynes A, Chapman S, Hall WD. The association between four citation metrics and peer rankings of research influence of Australian researchers in six fields of public health. PLoS One. 2011;6(4):e18521.
23. Franks AL, Simoes EJ, Singh R, Gray BS. Assessing prevention research impact: a bibliometric analysis. Am J Prev Med. 2006;30(3):211–6.
24. Graham KE, Chorzempa HL, Valentine PA, Magnan J. Evaluating health research impact: development and implementation of the Alberta Innovates–Health Solutions impact framework. Res Eval. 2012;21(5):354–67.
25. Canadian Institutes of Health Research. Developing a CIHR framework to measure the impact of health research. Ottawa: Canadian Institutes of Health Research; 2005.
26. Group of Eight. Excellence in innovation: research impacting our nation’s future – assessing the benefits. Adelaide: Australian Technology Network of Universities; 2012.
27. Hanney S. An assessment of the impact of the NHS Health Technology Assessment Programme. Southampton: National Coordinating Centre for Health Technology Assessment, University of Southampton; 2007.
28. Higher Education Funding Council for England. Panel criteria and working methods. London: Higher Education Funding Council for England; 2012.
29. Kalucy EC, Jackson-Bowers E, McIntyre E, Reed R. The feasibility of determining the impact of primary health care research projects using the Payback Framework. Health Res Policy Syst. 2009;7:11.
30. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Serv Res. 2006;6(1):134.
31. Kwan P, Johnston J, Fung AYK, Chong DSY, Collins RA, Lo SV. A systematic evaluation of payback of publicly funded health and health services research in Hong Kong. BMC Health Serv Res. 2007;7(1):121.
32. Landry R, Amara N, Lamari M. Climbing the ladder of research utilization: Evidence from social science research. Sci Commun. 2001;22:396–422.
33. Lavis J, Ross S, McLeod C, Gildiner A. Measuring the impact of health research. J Health Serv Res Policy. 2003;8(3):165–70.
34. Laws R, King L, Hardy LL, Milat AJ, Rissel C, Newson R, et al. Utilization of a population health survey in policy and practice: a case study. Health Res Policy Syst. 2013;11:4.
35. Liebow E, Phelps J, Van Houten B, Rose S, Orians C, Cohen J, et al. Toward the assessment of scientific and public health impacts of the National Institute of Environmental Health Sciences Extramural Asthma Research Program using available data. Environ Health Perspect. 2009;117(7):1147.
36. Milat AJ, Laws R, King L, Newson R, Rychetnik L, Rissel C, et al. Policy and practice impacts of applied research: a case study analysis of the New South Wales Health Promotion Demonstration Research Grants Scheme 2000–2006. Health Res Policy Syst. 2013;11:5.
37. National Institutes of Health. Cost savings resulting from NIH research support. Bethesda, MD: United States Department of Health and Human Services National Institutes of Health; 1993.
38. Ovseiko PV, Oancea A, Buchan AM. Assessing research impact in academic clinical medicine: a study using Research Excellence Framework pilot impact indicators. BMC Health Serv Res. 2012;12:478.
39. Schapper CC, Dwyer T, Tregear GW, Aitken M, Clay MA. Research performance evaluation: the experience of an independent medical research institute. Aust Health Rev. 2012;36(2):218–23.
40. Spoth RL, Schainker LM, Hiller-Sturmhöefel S. Translating family-focused prevention science into public health impact: illustrations from partnership-based research. Alcohol Res Health. 2011;34(2):188.
41. Sullivan R, Lewison G, Purushotham AD. An analysis of research activity in major UK cancer centres. Eur J Cancer. 2011;47(4):536–44.
42. Taylor J, Bradbury-Jones C. International principles of social impact assessment: lessons for research? J Res Nurs. 2011;16(2):133–45.
43. Warner KE, Tam J. The impact of tobacco control research on policy: 20 years of progress. Tob Control. 2012;21(2):103–9.
44. Wooding S, Hanney S, Buxton M, Grant J. The returns from arthritis research. Volume 1: Approach analysis and recommendations. Netherlands: RAND Europe; 2004.
45. Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1(1):35–43.
46. Higher Education Funding Council for England. Decisions on assessing research impact. Bristol: Higher Education Funding Council for England; 2011.
47. Grant J, Brutscher P-B, Kirk SE, Butler L, Wooding S. Capturing research impacts: a review of international practice. Documented Briefing. RAND Corporation; 2010. http://www.rand.org/pubs/documented_briefings/DB578.html
48. Murphy KM, Topel RH. Measuring the gains from medical research: an economic approach. Chicago: University of Chicago Press; 2010.
49. Balas EA, Boren SA. Managing clinical knowledge for health care improvement. In: Bemmel J, McCray AT, editors. Yearbook of Medical Informatics 2000: Patient-Centered Systems. Stuttgart, Germany: Schattauer Verlagsgesellschaft mbH; 2000. p. 65–70.
Author information
Authors and affiliations
New South Wales Ministry of Health, 73 Miller St, North Sydney, NSW 2060, Australia
Andrew J Milat
School of Public Health, University of Sydney, Level 2, Medical Foundation Building K25, Sydney, NSW 2006, Australia
Andrew J Milat, Adrian E Bauman & Sally Redman
Sax Institute, Level 2, 10 Quay St, Haymarket, NSW 2000, Australia
Sally Redman
Corresponding author
Correspondence to Andrew J Milat .
Additional information
Competing interests.
The authors declare that they have no competing interests.
Authors’ contributions
AJM conceived the study, designed the methods, and conducted the literature searches. AJM drafted the manuscript and all authors contributed to data interpretation and have read and approved the final manuscript.
Additional file
Additional file 1: Table S1.
Characteristics of studies focusing on processes, theories, or frameworks assessing research impact.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/4.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article.
Milat, A.J., Bauman, A.E. & Redman, S. A narrative review of research impact assessment models and methods. Health Res Policy Sys 13 , 18 (2015). https://doi.org/10.1186/s12961-015-0003-1
Received: 07 November 2014
Accepted: 16 February 2015
Published: 18 March 2015
DOI: https://doi.org/10.1186/s12961-015-0003-1
Keywords: Policy and practice impact; Research impact; Research returns