Abstract
Background: Pilot/feasibility studies and studies with small sample sizes may be associated with inflated effects. This study explores the vibration of effect sizes (VoE) in meta-analyses when different inclusion criteria based on sample size or pilot/feasibility status are considered.
Methods: Searches were conducted to identify systematic reviews that performed meta-analyses of behavioral interventions on topics related to the prevention or treatment of childhood obesity, published from January 2016 to October 2019. The computed summary effect sizes (ES) were extracted from each meta-analysis. Individual studies included in the meta-analyses were classified into one of four categories: self-identified pilot/feasibility studies, or, for studies not identified as pilot/feasibility, by sample size (N ≤ 100, N > 100, and N > 370, the upper 75th percentile of sample size). VoE was defined as the absolute difference (ABS) between the summary ES re-estimated under each study classification and the originally reported summary ES. Concordance (kappa) of the statistical significance of summary ES across the four categories of studies was assessed. Fixed and random effects models and meta-regressions were estimated. Three case studies are presented to illustrate the impact of including pilot/feasibility and N ≤ 100 studies on the estimated summary ES.
Results: A total of 1602 effect sizes, representing 145 reported summary ES, were extracted from 48 meta-analyses containing 603 unique studies (average 22 studies per meta-analysis, range 2–108) and 227,217 participants. Pilot/feasibility and N ≤ 100 studies comprised 22% (range 0–58%) and 21% (range 0–83%) of the studies included in the meta-analyses. Meta-regression indicated that the ABS between the re-estimated and originally reported summary ES ranged from 0.20 to 0.46, depending on whether the studies comprising the original summary ES were mostly small (e.g., N ≤ 100) or mostly large (N > 370). Concordance was low when removing both pilot/feasibility and N ≤ 100 studies (kappa = 0.53) and when restricting analyses to only the largest studies (N > 370, kappa = 0.35), with 20% and 26% of the originally reported statistically significant ES rendered non-significant, respectively. Reanalysis of the three case study meta-analyses yielded re-estimated ES that were either non-significant or roughly half the originally reported ES.
Conclusions: When meta-analyses of behavioral interventions include a substantial proportion of both pilot/feasibility and N ≤ 100 studies, summary ES can be markedly affected and should be interpreted with caution.
Michael W. Beets, R. Glenn Weaver, John P. A. Ioannidis, Christopher D. Pfledderer, Alexis Jones, Lauren von Klinggraeff, Bridget Armstrong (2023). Influence of pilot and small trials in meta-analyses of behavioral interventions: a meta-epidemiological study. Systematic Reviews, 12(1). DOI: https://doi.org/10.1186/s13643-023-02184-7.
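The sketch below illustrates the VoE calculation described in the Methods: re-estimate a summary ES after restricting the meta-analysis to a subset of studies (here, N > 100), then take the absolute difference (ABS) from the original summary ES. It is a minimal illustration with hypothetical effect sizes, variances, and sample sizes, and it assumes a DerSimonian-Laird random-effects estimator; the paper's actual analyses were run on 48 published meta-analyses with both fixed and random effects models.

```python
# Minimal VoE sketch. All study data below are hypothetical; the estimator
# (DerSimonian-Laird random effects) is an assumption for illustration only.

import math

def dersimonian_laird(effects, variances):
    """Random-effects summary effect via the DerSimonian-Laird estimator."""
    w_fixed = [1.0 / v for v in variances]
    sum_w = sum(w_fixed)
    fixed_mean = sum(w * y for w, y in zip(w_fixed, effects)) / sum_w
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(w * (y - fixed_mean) ** 2 for w, y in zip(w_fixed, effects))
    df = len(effects) - 1
    c = sum_w - sum(w ** 2 for w in w_fixed) / sum_w
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # Random-effects weights, summary ES, and 95% CI
    w_rand = [1.0 / (v + tau2) for v in variances]
    sum_wr = sum(w_rand)
    mean = sum(w * y for w, y in zip(w_rand, effects)) / sum_wr
    se = math.sqrt(1.0 / sum_wr)
    return mean, mean - 1.96 * se, mean + 1.96 * se

# Hypothetical meta-analysis: four small (N <= 100) studies with large effects
# and three larger (N > 100) studies with modest effects.
studies = [
    # (effect size, variance, sample size)
    (0.60, 0.09, 60), (0.55, 0.10, 80), (0.70, 0.12, 45), (0.50, 0.08, 95),
    (0.15, 0.020, 250), (0.10, 0.015, 400), (0.20, 0.020, 310),
]

original = dersimonian_laird([s[0] for s in studies], [s[1] for s in studies])
large_only = [s for s in studies if s[2] > 100]
restricted = dersimonian_laird([s[0] for s in large_only],
                               [s[1] for s in large_only])

abs_diff = abs(restricted[0] - original[0])  # VoE as defined in the Methods
print(f"Original summary ES  : {original[0]:.2f} "
      f"(95% CI {original[1]:.2f}, {original[2]:.2f})")
print(f"Restricted to N > 100: {restricted[0]:.2f} "
      f"(95% CI {restricted[1]:.2f}, {restricted[2]:.2f})")
print(f"ABS (vibration of effects): {abs_diff:.2f}")
```

In this toy example the summary ES shrinks once the small studies are excluded, which is the pattern the paper quantifies across 145 reported summary ES; the concordance analysis additionally checks whether originally significant summary ES remain significant after restriction.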
Type: Article
Year: 2023
Authors: 7
Datasets: 0
Total Files: 0
Language: en
DOI: https://doi.org/10.1186/s13643-023-02184-7