Why Are So Few Nursing Science Papers Retracted?

Richard Gray

Nurse Author & Editor, 2018, 28(3), 2

Retraction is an important part of the scientific method. Research that is flawed—because of error or fabrication of data—is removed from the scientific record. In clinical disciplines, practice based on unsound observations may result in patients being given (or conversely denied) effective care. Nursing is out of kilter with much of the rest of science; fewer papers are retracted and for different reasons. Why is this, and what do we need to do differently as a discipline?

Established in 2010, Retraction Watch (www.retractionwatch.com) has the dual aim of tracking and reporting retractions in science. The number of papers that are retracted seems to be increasing exponentially. In 2016 Retraction Watch reported a 37% increase in retracted articles, up from 500 in 2014 to 684 in 2015 (Belluz, 2016). Whether this trend demonstrates an increase in scientific misconduct or improved vigilance in spotting bad science is unclear. Interestingly, misconduct is, perhaps, not the most common reason for retraction. Wager and Williams (2011) reported that 40% of papers were retracted because of honest error or non-replicable findings and only 28% because of research misconduct. They also note that it is the original author (63%) who does most of the retracting. That said, there are novel reasons for retraction that may alter this picture. Over the past few years there has been a paroxysm of retractions because of fake peer reviews (see, for example, McCook, 2018), and it has been argued that this may become the leading reason for retraction over the coming few years. It is probably not helpful for me to describe how you might go about manipulating the peer review system, but suffice it to say it is not terribly difficult (with the right motivation). In part, this is because editors find it increasingly challenging to find peer reviewers.

Also relevant to this conversation is the observation that there is a strong correlation between the frequency of retraction and journal impact factor (put another way, publish in a high impact journal and you’re more likely to get retracted) (Fang & Casadevall, 2011). This observation may suggest that papers in leading journals come under more intense scrutiny from the scientific community and consequently more errors are spotted.

The consequences of having a paper retracted can be profound. Researchers caught fabricating data have, quite rightly, lost their jobs and professional registrations, and have had their reputations wrecked. It is both sobering and illuminating to read the case examples published on the Retraction Watch blog, and there are examples where nurses have featured (the case of Moon-Fai Chan is informative: https://retractionwatch.com/category/by-journal/j-clinical-nursing/). The stigma associated with having a paper retracted can serve as a powerful disincentive. The unintended consequence may be that scientists who have made a genuine error fear the consequence of admitting it.

Retraction in Nursing Science

Retraction in nursing science differs in a number of important ways from other disciplines. Rates of retraction were reported in a systematic review of the Journal Citation Reports (JCR) nursing science journals (Al-Ghareeb et al., 2018). In total the authors identified just 29 retracted papers. The review did identify a small—statistically significant—increase in the number of papers retracted over time. There was also a significant correlation between the number of papers retracted and journal impact factor, but the correlation was negative (to put it another way, if you publish in a high-impact nursing journal you are less likely to be retracted, which is contrary to the findings of Fang and Casadevall cited above). Two-thirds of included papers were retracted because they were duplicate publications, and one-quarter were randomized controlled trials. “Nursing science” was defined in the review as work published in JCR nursing journals. Of course, there are many journals not on this list, and many nurses don’t publish their “best work” in nursing journals. As one of the first reviews of retraction in nursing, there are important limitations that need to be considered. That said, how might we explain the apparent differences between nursing and science more generally?

Men are seemingly more likely to have a paper retracted than women. This might, of course, be explained by women’s under-representation in science, particularly in senior roles. In a female-dominated profession such as nursing, this may account—at least in part—for the lower rates of retraction in the discipline. But is it conceivable that nurses never fabricate data? Probably not. Why then have no papers been retracted because of data fabrication? In my experience nurses are not sufficiently critical of published work; there is not the intensity of academic debate over studies published in nursing journals that you might find in medicine, for example. It is this intense post-publication scrutiny that weeds out error and fraud. At the Journal of Psychiatric and Mental Health Nursing, a journal that I edit, we get few letters to the editor challenging or debating papers that we have published (in 2017 we had one letter submitted). I think this does a disservice to the profession.

Few articles in nursing science have been retracted because the author reported that they had made an error, again in stark contrast with science more generally. Is this because authors do not check papers they have published? Or because, if they spot an error, they do not feel ethically obliged to report it? It cannot be overstated that in a clinical discipline research impacts patient care. If there is an error, it is imperative that it is removed from the scientific record.

In nursing science there is a preponderance of qualitative methodologies. An anecdotal observation (no one has ever counted) is that qualitative papers rarely seem to be retracted. Presumably qualitative researchers do fabricate (or tweak) data? It must happen that, when looking for a quote to illuminate a theme, an author is tempted to reinterpret or simply invent one. After all, this is an area of enquiry where interpretation is at the core of the work. Qualitative researchers could publish their data sets in a repository (as is the trend with clinical trials data), allowing other researchers to review the core data. I can’t find an example of this happening. When talking to qualitative researchers, the rationale they give is that it is too difficult to anonymize data. Other ways of spotting fraud in qualitative research include the use of linguistic software to check for similarities in the quotes attributed to participants. It remains an important, but open, question as to whether research misconduct is actually less prevalent among qualitative researchers. Further work is required.
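As a rough illustration of the kind of linguistic check described above, a short script could compare participant quotes pairwise and flag any that are suspiciously similar. This is only a sketch: the quotes, function name, and similarity threshold below are invented for the example, and a real screening tool would need far more sophisticated handling of language.

```python
from difflib import SequenceMatcher
from itertools import combinations

def flag_similar_quotes(quotes, threshold=0.8):
    """Return index pairs of quotes whose similarity ratio exceeds the threshold.

    A crude screen: genuinely independent participants rarely produce
    near-identical phrasing, so high-ratio pairs merit a closer look.
    """
    flagged = []
    for (i, a), (j, b) in combinations(enumerate(quotes), 2):
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio >= threshold:
            flagged.append((i, j, round(ratio, 2)))
    return flagged

# Hypothetical participant quotes, invented for illustration only
quotes = [
    "I felt the ward staff really listened to my concerns.",
    "I felt the ward staff really listened to all my concerns.",
    "Nobody explained the side effects of the medication to me.",
]
print(flag_similar_quotes(quotes))  # the first two quotes are flagged
```

A check like this would not prove misconduct, of course; it would only highlight passages that deserve a second look by a human reader.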

I, like most editors, find it challenging to recruit peer reviewers who will produce a considered, detailed, and timely review. There is some evidence that the quality of peer review is in decline, if you can measure quality by the number of words: in 2016 the average length of a review was 457 words; a year later this had dropped to 342 words (“It’s not the size that matters,” 2018). Many of the reviews I receive have been even briefer. My point is that I think reviewers are becoming less invested in taking the time to do a detailed, thorough peer review. Checking to see if data might have been fabricated takes time and effort. It seems to me that there is a need to improve the quality of peer review, but how to do this is a vexing issue that editors grapple with on an almost daily basis.

Retraction is an important part of the scientific method. In nursing, this system of self-correction is not working as it should and we need to fix it. These are my five suggestions about how we might do that:

  1. Nurses—both clinical and academic—need to better understand their responsibility in correcting the scientific record if they spot error or misconduct.
  2. We need to provide training to peer reviewers (particularly around research integrity); the Publons Academy is an excellent initiative that supports this (Watson, 2018).
  3. Reviewers need to be rewarded for their contribution. I have long thought that contribution to peer review should be a key performance indicator for all academic nurses.
  4. Authors should make data publicly available so that observations can be checked and verified.
  5. Journal editors and publishers need to have a relentless focus on ensuring the integrity of the work they publish.

We need to see a year-on-year increase in the number of nursing science papers that are retracted. Nursing is a clinical discipline; ultimately our research impacts practice. If we don’t have a focus on weeding out bad science we put our patients at risk.


  1. Al-Ghareeb, A., Hillel, S., McKenna, L., Cleary, L., Visentin, D., Jones, M., … Gray, R. (2018). Retraction of publications in nursing and midwifery research: a systematic review. International Journal of Nursing Studies, 81, 8-13.
  2. Belluz, J. (2016, March 24). Scientific journals are retracting more papers than ever before. This is probably good for science. Retrieved 14 July 2018, from https://www.vox.com/2016/3/24/11299102/scientific-retractions-are-on-the-rise
  3. Fang, F. C., & Casadevall, A. (2011). Retracted Science and the Retraction Index. Infection and Immunity, IAI.05661-11. https://doi.org/10.1128/IAI.05661-11 
  4. It’s not the size that matters. (2018, February 26). Retrieved 18 July 2018, from https://publons.com/blog/its-not-the-size-that-matters/
  5. McCook, A. A. (2018, July 12). A publisher just retracted ten papers whose peer review was “engineered” — despite knowing about the problem of fake reviews for years. Retrieved 15 July 2018, from https://retractionwatch.com/2018/07/12/publisher-has-known-of-problem-of-fake-reviews-for-years-so-how-did-10-papers-slip-its-notice/
  6. Wager, E., & Williams, P. (2011). Why and how do journals retract articles? An analysis of Medline retractions 1988–2008. Journal of Medical Ethics, 37(9), 567–570. https://doi.org/10.1136/jme.2010.040964
  7. Watson, R. (2018). Publons: The solution to several long-standing problems. Nurse Author & Editor, 28(1), 6. http://naepub.com/authorship/2018-28-1-6/

Editor’s note

We don’t usually leave comments open on articles published on Nurse Author & Editor, but this article, to me, begs for a discussion. Dr. Gray raises lots of issues and while he presents five recommendations, there is much in this article that remains “up in the air.” Therefore, commentary from readers is encouraged. I look forward to the discussion! –LHN

About the Author

Richard Gray, RN, PhD is Professor of Clinical Nursing Practice, School of Nursing and Midwifery, La Trobe University, Melbourne, VIC 3086, Australia. He is also the Editor of the Journal of Psychiatric and Mental Health Nursing. You can contact him via email at: r.gray@latrobe.edu.au


Copyright 2018: The Author. May not be reproduced without permission.
Journal Compilation Copyright 2018: John Wiley and Sons Ltd.



One comment

  1. Dr. Gray raised a very important issue in nursing research and science. As a doctoral nursing student, I find this article highly relevant for the professional development of nurse researchers and for encouraging researchers to participate in improving the quality of nursing research. Retraction of nursing literature that includes plagiarized, fabricated, or falsified data is important to protect patients and their families from harm and to improve the quality of nursing care. However, as indicated in this article, the retraction rates of nursing papers are low compared to other fields such as medicine, pharmacology, and the basic sciences. Several reasons for low retraction rates in nursing have been outlined.

    I am a regular reader of Retraction Watch (https://retractionwatch.com/) and PubPeer (https://pubpeer.com/static/about). Dr. Gray mentioned only the first of these. PubPeer is another site that serves as a forum for the discussion of scholarly literature. In fact, the errors identified in several papers listed on Retraction Watch were first identified through discussion on PubPeer. Therefore, nurse researchers could use this forum to discuss the published literature.

    Dr. Gray has proposed some excellent suggestions to improve the quality of nursing literature. I would like to comment on two of the suggestions.

    First, nursing academics and clinicians have a responsibility to critically review the published literature and help editors and scholars to correct it. Although this is an excellent suggestion, the challenge is that nurses and clinicians often do not receive adequate training to critically review the published literature. I consider this a challenge because, during my experience as a graduate student, I have observed that nurses and graduate students lack the skills to critically appraise a published article. Even in their research courses, the critical appraisal criteria used by students include appraisal of the word counts of the title, abstract, and different sections of a research paper, rather than appraisal of the methodology, findings, content, and arguments. Therefore, it is important for mentors and educators to equip nurses, researchers, and graduate students with the necessary critical appraisal skills. The second concern is that there are no guidelines published in nursing journals to explain the process of whistleblowing, that is, the reporting of research misconduct. Perhaps some clinicians and researchers do not report such cases because of fear of the unknown consequences of whistleblowing. Recently, I read several papers from a nurse researcher that included incorrectly paraphrased excerpts from the cited materials. Although the papers were properly cited, the paraphrasing was extremely poor. Reporting was not possible because the definition of plagiarism on COPE’s website, “When somebody presents the work of others (data, words or theories) as if they were his/her own and without proper acknowledgment” (https://publicationethics.org/category/keywords/plagiarism), does not cover such cases. What measures should be taken in such cases?

    Second, the suggestion that researchers should share data through recognized open data repositories is an important advancement. However, some researchers may be prevented from sharing data by the guidelines set by different institutional ethics review boards. Perhaps there is also a need to encourage review boards to allow data sharing so that published research can be scrutinized.
