For systematic reviews of interventions, replication is defined as the reproduction of findings of previous systematic reviews looking at the same effectiveness question either by: purposefully repeating the same methods to verify one or more empirical findings; or purposefully extending or narrowing the systematic review to a broader or more focused question (eg, across broader or more focused populations, intervention types, settings, outcomes, or study designs).

Although systematic reviews are often used as the basis for informing policy and practice decisions, little evidence has been published so far on whether replication of systematic reviews is worthwhile.

Replication of existing systematic reviews cannot be done for all topics; any unnecessary or poorly conducted replication contributes to research waste.

The decision to replicate a systematic review should be based on the priority of the research question; the likelihood that a replication will resolve uncertainties, controversies, or the need for additional evidence; the magnitude of the benefit or harm of implementing findings of a replication; and the opportunity cost of the replication.

Systematic review authors, commissioners, funders, and other users (including clinicians, patients, and representatives from policy making organisations) can use the guidance and checklist proposed here to assess the need for a replication.
Peter Tugwell, Vivian Welch, Sathya Karunananthan, Lara Maxwell, Elie A. Akl, Marc T. Avey, Zulfiqar A. Bhutta, Melissa Brouwers, Jocalyn Clark, Sophie Cook, Luis Gabriel Cuervo, Janet Curran, Elizabeth Tanjong Ghogomu, Ian D. Graham, Jeremy Grimshaw, Brian Hutton, John P. A. Ioannidis, Zoe Jordan, Janet Jull, Elizabeth Kristjansson, Étienne V. Langlois, Julian Little, Anne Lyddiatt, Janet Martin, Ana Marušić, Lawrence Mbuagbaw, David Moher, Rachael L. Morton, Mona Nasser, Matthew J. Page, Jordi Pardo Pardo, Jennifer Petkovic, Mark Petticrew, Terri Pigott, Kevin Pottie, Gabriel Rada, Tamara Rader, Alison Riddle, Hannah R. Rothstein, Holger J. Schünemann, Larissa Shamseer, Beverley Shea, Rosiane Simeon, Konstantinos C. Siontis, Maureen Smith, Karla Soares‐Weiser, Kednapa Thavorn, David Tovey, Brigitte Vachon, Jeffrey C. Valentine, Rebecca Villemaire, Peter Walker, Laura Weeks, George A. Wells, David B. Wilson, Howard White (2020). When to replicate systematic reviews of interventions: consensus checklist. BMJ, 370. DOI: https://doi.org/10.1136/bmj.m2864.
Type: Article
Year: 2020
Authors: 56
Datasets: 0
Total Files: 0
Language: English
DOI: https://doi.org/10.1136/bmj.m2864