Why Do All Systematic Reviews Have Fifteen Studies?

Richard Gray

Nurse Author & Editor, 2020, 30(4), 1

Why do all systematic reviews published in nursing science journals have, give or take, 15 studies? Don't believe me? Pick a journal, find the most recent issue, open the first systematic review you come to, and see how many studies are included. It is perplexing that so many of the reviews I handle have such similar numbers of included studies. Even when search strategies generate substantially different numbers of papers, researchers seem always to whittle these down to around 12 to 15. Intuitively, this does not make much sense. Surely you would expect different review questions to generate different numbers of included studies. There should be a broad distribution: some reviews with a handful of papers, some sophisticated mega-reviews with many studies, and some in between.

As a jobbing researcher, with time on my hands, I thought I would test my observation. So I picked a year, 2017, and started to flick through the tables of contents of all 116 nursing journals listed in the Journal Citation Reports, noting every paper that self-identified as a "systematic review" in the title or abstract. I thought this task would take about an hour, a bit of fun in the evening, but I was wrong: it took hours. In total, I found 293 reviews, and from each I noted the number of included studies. The reviews included a grand total of 5,993 studies; the smallest had one study, the largest 232. On average, then, systematic reviews had 20 (SD = 22) included papers. So I was wrong; well, sort of. The mean is not the only average in statistics; on some occasions, particularly when one is trying to manipulate data to prove a point, it may be better to consider the median or the mode. It turns out that both the median and the mode were 15. It was noteworthy how many reviews included between 5 and 19 studies: in fact, two-thirds of the reviews (190/293, 65%) fell within this range. This poses the question: is there some sort of conspiracy between authors and editors that generates this apparent homogeneity in the number of studies in reviews?
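
For readers who want to run this kind of tally themselves, the sketch below shows how the three averages pull apart on a skewed distribution. It is written in Python; the per-review counts in the list are invented for illustration (the 293 real counts are not reproduced in this editorial), so take the method from it, not the numbers.

```python
# Minimal sketch of the tally described above. The counts below are
# invented for illustration; the real 293 per-review counts are not
# reproduced here.
from statistics import mean, median, mode

# Hypothetical numbers of included studies, one entry per review
included = [1, 6, 9, 12, 15, 15, 15, 18, 19, 24, 41, 232]

print(f"mean   = {mean(included):.1f}")  # dragged upward by the 232-study outlier
print(f"median = {median(included)}")    # middle value, robust to outliers
print(f"mode   = {mode(included)}")      # most common count

# Share of reviews in the 5-19 band discussed above
in_band = sum(1 for n in included if 5 <= n <= 19)
print(f"{in_band}/{len(included)} reviews include between 5 and 19 studies")
```

Even on these made-up counts, the pattern from my 2017 sample reappears: a single mega-review inflates the mean, while the median and mode sit stubbornly at 15.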

The Empty Review Quandary

Inevitably, some reviews will be empty; that is to say, the search will find no studies that meet the inclusion criteria and address the research question. Curiously, of the 293 reviews I read, not one was empty. The phenomenon of the empty review is intriguing. Colleagues may be familiar with Cochrane reviews that contain few or no trials. Yaffe et al.8 published an interesting paper that examined the prevalence of empty reviews in the Cochrane Library, reporting that 9% of active reviews were empty. So where are the empty reviews in nursing? I did recently submit an empty review for publication. The paper bounced back, surprisingly rapidly, from the editor with the feedback, "…I do not think it will interest the readers…" Mildly irritated, I did what doctoral students are advised not to do: I wrote back to the editor, reminding them that, according to the author guidelines, the journal claims to publish "methodologically rigorous," not "interesting," research. My point: editors don't like empty (or even almost empty) reviews. I have tested this theory with a number of editorial colleagues, and I think it holds true. They don't see the point in them. Of course, editors always have an eye on their impact factor, and if I were being cynical, I might wonder whether editors are concerned that empty reviews will not be well cited. But empty reviews play a critical role in research by identifying gaps in knowledge that can be addressed in future studies, as well as offering other benefits, as I will illustrate.1,2,8

How Can a Review Be Empty?

Since my review of 293 nursing articles turned up zero empty reviews, readers may be wondering what exactly an empty review looks like and how it is written, given that the chances are good they have never read one. Korth et al.6 provide a detailed example that I think is helpful. Although it is not a nursing-focused review, their question, "What is the impact of urban agriculture on food security in low- and middle-income countries?", should still make sense to a nursing audience. The inclusion criteria for studies in their review were: 1) an impact evaluation measuring the effectiveness of urban agriculture on food insecurity; 2) a comparison group; and 3) at least two data points. Their initial search returned 8,142 hits; screening of abstracts resulted in 198 full-text articles for review. Ultimately, none of the reviewed studies met the inclusion criteria. The reasons for exclusion from the review were: 1) not in a developing country (n = 4); 2) not about urban agriculture (n = 12); 3) not about food security (n = 3); 4) not an impact evaluation (n = 173); and 5) duplicates (n = 6).6
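
The arithmetic of that screening flow is worth making explicit, because it is the audit trail that lets a reader verify the review really is empty. A short sketch, using only the numbers reported above:

```python
# Sanity check of the Korth et al. screening numbers reported above.
hits = 8142        # records returned by the initial search
full_text = 198    # articles retained after abstract screening

exclusions = {
    "not in a developing country": 4,
    "not about urban agriculture": 12,
    "not about food security": 3,
    "not an impact evaluation": 173,
    "duplicates": 6,
}

# The five exclusion reasons should account for every full-text article,
# leaving zero included studies: an empty review.
assert sum(exclusions.values()) == full_text
print(f"included studies: {full_text - sum(exclusions.values())}")  # -> 0
```

Because every one of the 198 full-text articles is accounted for by a stated exclusion reason, the empty result is transparent and checkable rather than mysterious.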

Even though they did not find any studies that met their inclusion criteria, Da Silva et al.1,2 drew five valuable lessons from their review experience, which I would like to note:

  • Significant gaps in the research were highlighted;
  • Multi-stage reviews are important and supported;
  • The literature they reviewed was “earmarked” in that they identified a “large number of cross-sectional studies…that would be essential in developing more realistic theories of change for future work”;
  • They became more actively engaged with content experts and established relationships that have been essential to their work; and
  • All team members, both experienced and new, gained new skills through the review process.

Reading this series of articles, one can sense their disappointment that the review did not turn out as expected; however, their approach of making "lemonade from their lemon of a review" (my words, not theirs) turned it into a valuable learning experience, made more so by sharing it with others through their publications.

Searching for Goldilocks

Most systematic reviews in nursing are not supported by external grants. Generally, they are self-funded, or part of doctoral projects, and undertaken on a shoestring with limited resources. Why review 60 or 100 papers when you can get away with 15? Do colleagues, then, adjust, refine, or amend search strategies and inclusion criteria to get the number of papers that seems just right? Or is it a calibration established by the continuous back and forth of author submission and editor rejection? Note that the former is a form of research misconduct and should never be considered an appropriate part of the review process.

The overt or covert manipulation of study methods to get the desired result has a parallel in the replicability crisis in psychology. In 2011, Brian Nosek and colleagues set up the Reproducibility Project,7 which aimed to replicate 100 experiments published in leading psychology journals in 2008. Published in Science, the results were troubling: only 36 of the 100 replications produced significant results, compared with 97 of the 100 original experiments.7 Studies may have failed to replicate because of researcher bias or the subtle manipulation of research methods and design. The same kind of tinkering with review methodology likely introduces considerable bias and distortion. It would be informative, though not practical, to replicate a random selection of systematic reviews to test the consistency of their observations. The transparency that Korth et al.6 offered in their review methodology highlights how easy it would have been to slightly modify their inclusion criteria and end up with 15 (that number again!) studies; fortunately, they did not do so.

Breaking the Conspiracy

There is a straightforward way to prevent the manipulation of study methods: pre-registration.3 Pre-registration is a simple idea: before starting their investigation, researchers publish their methodology, thereby locking in the methods for the study. Search strategies, inclusion criteria, data-extraction items, outcomes, and analysis plans cannot be "tinkered with" during the review process. When the results are published, the reader can reconcile the plan with what was reported, and any discrepancies can be challenged. Authors can pre-register their plans with PROSPERO or the Open Science Framework, and increasingly authors are publishing detailed review protocols in journals such as Systematic Reviews. It is disappointing that so few nursing journals publish protocols: the Journal of Advanced Nursing publishes them, with caveats, as do Collegian, Nursing Reports, and Nursing Open, but that is about it. The problem with pre-registration is that, while it ties researchers to a methodology, it does nothing to address editorial bias. If a review is submitted that faithfully complies with the published protocol, but the editor thinks the study will not interest readers, they are free to reject it.

Editorial decisions, it might be argued, should focus on the methodological rigour of the research. The only practical way to achieve this is to blind editors (and reviewers) to the results of a paper. Registered reports accomplish this by dividing the publication process into two parts. Part one is a paper providing a justification for the study and setting out the methods; the decision to accept is based solely on reviewers' judgment of the significance and rigour of the work. If part one is accepted, the editor issues an in-principle agreement to publish the results of the study, as long as the researchers comply with the methods set out in part one.4,5 In theory, editors cannot reject a review because they do not like the results. According to the Center for Open Science, only three nursing journals currently publish registered reports: the Journal of Clinical Nursing, the Journal of Psychiatric and Mental Health Nursing (JPM), and JMIR Nursing. Depressingly, the JPM has recently decided to drop this type of submission.

Embracing New Ways of Publishing

My irritation with the homogeneity of systematic reviews in nursing may be nothing more sinister than attentional bias toward the mode of a distribution. But I am not so sure. Editors send subtle but clear messages when they correspond with authors, and these messages are reinforced by what editors ultimately decide to publish. Authors are attentive; they read and interpret signals and respond accordingly. Systematic reviews matter: they make a statement about the state of knowledge on a particular topic and inform clinical practice. Threats to research integrity come from multiple sources and need to be considered and debated. Mechanisms such as registered reports, which help reduce conscious and unconscious bias, make an important contribution to improving the reporting of science. It would be good to see more nursing journals embracing novel approaches to publishing.

Conclusion

Systematic reviews are a type of research, and as such their procedures should be clearly specified before the literature search begins. Inclusion and exclusion criteria should be delineated in advance. If, as the review proceeds, the results are returning too many, or too few, studies, that is not a reason to change the methodology to make the numbers work in your favour. In nursing, we need to become comfortable with empty reviews, and I encourage editors to consider them for publication in their journals. Likewise, pre-registration of study protocols for all types of investigation, including systematic reviews, is important and needs broader uptake among the community of nurse researchers.

References

  1. Da Silva, N. R., Stewart, R., & van Rooyen, C. (2016). Gaining from nothing: Five benefits of empty systematic reviews. https://carinavr.files.wordpress.com/2016/03/cee-2016_poster.pdf
  2. Da Silva, N. R., Zaranyika, H., Langer, L., Randall, N., Muchiri, E., & Stewart, R. (2017). Making the most of what we already know: A three-stage approach to systematic reviewing. Evaluation Review, 41(2), 155–172. https://doi.org/10.1177/0193841X16666363
  3. Gray, R. J. (2016). Moths to a flame: How we can improve the quality of clinical trial reporting in nursing journals. Nurse Author & Editor, 26(4), 5. https://naepub.com/wp-content/uploads/2016/11/NAE-2016-26-4-5-Gray.pdf
  4. Gray, R. (2018). Promoting openness and transparency in mental health nursing science. Journal of Psychiatric and Mental Health Nursing, 25(1), 1–2. https://doi.org/10.1111/jpm.12432
  5. Gray, R. (2019). Questionable research practices in nursing science: .05 shades of grey. Nurse Author & Editor, 29(2), 5. https://naepub.com/reporting-research/2019-29-2-5/
  6. Korth, M., Stewart, R., Langer, L., Madinga, N., Rebelo Da Silva, N., Zaranyika, H., van Rooyen, C., & de Wet, T. (2014). What are the impacts of urban agriculture programs on food security in low and middle-income countries: A systematic review. Environmental Evidence, 3(1), 21. https://doi.org/10.1186/2047-2382-3-21
  7. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). https://doi.org/10.1126/science.aac4716
  8. Yaffe, J., Montgomery, P., Hopewell, S., & Shepard, L. D. (2012). Empty reviews: A description and consideration of Cochrane Systematic Reviews with no included studies. PLoS ONE, 7(5), e36626. https://doi.org/10.1371/journal.pone.0036626

About the Author

Richard Gray, RN, PhD is Professor of Clinical Nursing Practice, School of Nursing and Midwifery, La Trobe University, Melbourne, VIC 3086, Australia. You can contact him via email at: r.gray@latrobe.edu.au.


Copyright 2020: The Author. May not be reproduced without permission.
Journal Compilation Copyright 2020: John Wiley and Sons Ltd.