Impact Statements

Writing Research Impact Statements

Michelle Cleary, Jan Sayers, and Roger Watson

Nurse Author & Editor, 2016, 26(1), 4

Research drives innovation by generating new ideas or ways of doing things and, in turn, contributes to society (Chapman, 2014). Increasingly, attention is being given to demonstrating and measuring the return on research investment and the benefits generated from research, for example by the Australian Research Council (ARC), especially in terms of environmental, economic, and social impact (ARC, 2015). In a perfect world, all research undertaken would have impact (Cleary, Siegfried, Jackson, & Hunt, 2013). However, the ARC definition of impact requires the researcher to make a demonstrable contribution “to the economy, society, culture, national security, public policy or services, health, the environment, or quality of life, beyond contributions to academia” (ARC, 2015).

Measuring research impact is known to be challenging, and impact itself may take many years to become evident (Cleary et al., 2013). Equally, making judgements about research impact is fraught with complexity, although frameworks exist that may guide determinations (Morgan, 2014). Jaffe’s (2015) impact evaluation framework is designed to inform decision-makers about the scope of potential research outcomes and has ready applicability for nursing research. The five components of the framework are:

  1. capability (improvements in workforce capability);
  2. environmental (enhancements in the natural environment);
  3. financial (creation of job opportunities or improved services);
  4. public policy (impacts relating to legislation, public policy, or regulations); and
  5. social, cultural or community (benefits enhancing cultural values, health and safety, international reputation and contribution).

Whilst it will not be possible for all research to be evaluated against every element of this framework, the tool may nonetheless assist researchers (and authors) in identifying potentially important evidence of the impact of their work, as well as facets that might otherwise have been overlooked. The framework and associated impact measures are also useful for funding providers to inform decisions about the financial resources to be dedicated to research (Jaffe, 2015).

There is an increasing expectation that researchers will plan for (aspirational) impact when undertaking research. This extends beyond traditional research outputs (publications and competitive funding) to the need to demonstrate impact such as knowledge transfer (e.g., collaborations), application of research to practice, community benefit, and the impact of the research on policies, laws, and regulations (e.g., see https://becker.wustl.edu/impact-assessment) (King, 2011). This is increasingly crucial in any research assessment, including research grant applications (Cleary et al., 2013).

Evidence of Impact

Increasingly, non-academic outputs are also being considered in relation to impact. These include local, national, and international conferences; grey literature (e.g., working papers); professional accounts (Twitter, Facebook, blogs); publications in professional magazines and newsletters; public media press releases (expert opinion, debates); and representation/community/government engagement (e.g., forums, presentations, advisory roles). Involvement with external agencies, such as board and committee membership, is also considered important in relation to impact assessment.

Impact Templates

Impact templates are provided in some countries; in the United Kingdom (UK), for example, the 2014 Research Excellence Framework (REF) required the formal development of impact case studies for presentation during the research assessment exercise. The REF has a searchable tool that makes impact case studies publicly available, enabling the generation of interest beyond academia. These provide some excellent examples that researchers can use to inform the development of their own case studies.

When considering writing impact statements for schools or projects, it is suggested that authors trawl for examples of the impact their research has had and also start ‘creating’ impact, for example by:

  • ensuring that projects likely to have impact create webpages with hit counters, information in different formats for relevant groups, and an auditable record of distribution (areas, people);
  • giving lectures and forums to increase public understanding of your science;
  • recording where your research has been referred to in State and Federal legislation (in addition to a plan for getting it there);
  • producing YouTube, podcast, and blog outputs (where hits are automatically recorded); and
  • maintaining Twitter sites where followers are counted.

Whilst the UK 2014 REF provided templates for impact statements, the templates operated at two levels: one for institutions and their general approach to impact, including support and strategy; and one for each of the actual case studies returned. The latter were designed to gather specific information related to the impact being described: the purported impact that was being achieved; the underlying research (with publications) that led to the impact; the people involved; and evidence of the actual impact. It should be noted that impact specifically excludes impact on the research community; in other words, metrics such as citations, subsequent research funding or related discoveries, and educational impact such as course development and educational textbooks are excluded. Otherwise, the Higher Education Funding Council for England, the body responsible for managing the REF across the UK via a REF management team, encouraged as wide a definition as possible of impact and of what could be used as evidence, and the most successful universities, which were also the most imaginative, were certainly able to demonstrate this. The ARC summarises the range of impact nicely as follows, under ‘Uptake and adoption’:

“The application of research outputs by users, resulting in outcomes. This may involve complex processes over time, whereby research outputs (e.g. knowledge, technologies, intellectual property) are adapted, built upon and operationally applied. Evidence of engagement, uptake and adoption, may include licenses, incorporation into policies or standards, use of tools, etc.” (ARC, 2015)

The ARC also reminds authors of impact statements that valuation is important: “Assigning a monetary value on outcomes, for example to enable a comparison to be made” (ARC, 2015).

Therefore, distilling some of the intelligence available from the 2014 UK REF and referring to the ARC information (the UK was the first to implement assessment of research impact and, it appears, Australia is the first to follow, with very similar principles), a personal or institutional template for gathering evidence of impact would usefully include the following (a simple structured sketch of such a template is given after the list):

  • What is the purported impact? (a succinct statement to guide the process of demonstrating the impact)
  • What is the research that led to the impact? (this may be a range of projects and specific findings; NB: the findings must be directly related to the impact)
  • When did the research take place and when was the impact noticed? (there will be published rules in this regard)
  • Which publications report the research findings? (at a minimum, these should be peer reviewed)
  • What evidence do you have of impact? (this can be wide-ranging, taking any excluded categories of impact into account, but must be a direct result of the impact; the impact can be negative, e.g., something adverse stopped as a result of the research)
  • How can you demonstrate the evidence of impact? (reference in policy documents; websites and social media ‘hits’; numbers of people ‘impacted’; income earned)

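Purely as an illustration, such a template can be kept as a simple structured record so that evidence is captured progressively rather than assembled retrospectively. The sketch below is a hypothetical example in Python; the record name, field names, and example values are assumptions for illustration only and do not correspond to any official REF or ARC template.

  # Hypothetical sketch only: a structured record mirroring the checklist above.
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class ImpactCaseRecord:
      purported_impact: str           # succinct statement of the claimed impact
      underlying_research: List[str]  # projects/findings directly related to the impact
      research_period: str            # when the research took place
      impact_period: str              # when the impact was noticed
      publications: List[str] = field(default_factory=list)          # peer-reviewed outputs reporting the findings
      evidence: List[str] = field(default_factory=list)              # e.g., references in policy, website/social media hits, income earned
      corroborating_sources: List[str] = field(default_factory=list) # contacts or documents that can verify the claims

  # Invented example for illustration only.
  record = ImpactCaseRecord(
      purported_impact="Revision of a hospital falls-prevention policy informed by study findings",
      underlying_research=["Cohort study of falls risk factors (2012-2014)"],
      research_period="2012-2014",
      impact_period="2015",
  )
  record.publications.append("Peer-reviewed article reporting the cohort findings")
  record.evidence.append("Reference to the study in the revised falls-prevention guideline")
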
In summary, researchers need to plan and author measurable impact outcomes when writing research proposals, regularly track progress, and write a supporting narrative. Allocating time to progressively complete these tasks during the program of research enables the nurse author not only to reflect on their achievements but, importantly, to readily and objectively demonstrate research impact. This planned approach is a better alternative to cobbling together information haphazardly when the need arises and potentially omitting important evidence. Finally, sources to corroborate claims must be provided to support the case.

References

  1. Australian Research Council. (2015, November 27). Research impact principles and framework. Retrieved January 13, 2016, from http://www.arc.gov.au/research-impact-principles-and-framework
  2. Chapman, M. (2014, February 24). 50 solutions that count: Innovation isn’t enough. Campus Review, 9-11.
  3. Cleary, M., Siegfried, N., Jackson, D., & Hunt, G. E. (2013). Making a difference with research: Measuring the impact of mental health research. International Journal of Mental Health Nursing, 22(2), 103-105.           
  4. Jaffe, A. B. (2015). A framework for evaluating the beneficial impacts of publicly funded research (Motu Note 15). Retrieved November 16, 2015, from http://www.motu.org.nz/our-work/productivity-and-innovation/science-and-innovation-policy/a-framework-for-evaluating-the-beneficial-impacts-of-publicly-funded-research/
  5. King, J. (2011). Measuring the impact of research. Information Outlook, 15(2), 17-19.       
  6. Morgan, B. (2014). Research impact: Income for outcome. Nature, 511(7510), S72-S75.  

About the Authors

Michelle Cleary, PhD, RN, is Professor, Faculty of Health, University of Tasmania, Sydney, Australia. Email: michelle.cleary@utas.edu.au

Jan Sayers, PhD, RN, is an Independent Research Advisor, Sydney, Australia.

Roger Watson, PhD, RN, FRCN, FAAN, is Professor, University of Hull, UK. Email: R.Watson@hull.ac.uk


Copyright 2015: The Authors. May not be reproduced without permission.
Journal Compilation Copyright 2015: John Wiley and Sons Ltd