Importance: Test accuracy studies often use small datasets to simultaneously select an optimal cutoff score that maximizes test accuracy and generate accuracy estimates.

Objective: To evaluate the degree to which using data-driven methods to simultaneously select an optimal Patient Health Questionnaire-9 (PHQ-9) cutoff score and estimate accuracy yields (1) optimal cutoff scores that differ from the population-level optimal cutoff score and (2) biased accuracy estimates.

Design, Setting, and Participants: This study used cross-sectional data from an existing individual participant data meta-analysis (IPDMA) database on PHQ-9 screening accuracy to represent a hypothetical population. Studies in the IPDMA database compared participant PHQ-9 scores with a major depression classification. From the IPDMA population, 1000 studies of 100, 200, 500, and 1000 participants each were resampled.

Main Outcomes and Measures: For the full IPDMA population and each simulated study, an optimal cutoff score was selected by maximizing the Youden index. Accuracy estimates for optimal cutoff scores in simulated studies were compared with accuracy in the full population.

Results: The IPDMA database included 100 primary studies with 44,503 participants (4,541 [10%] cases of major depression). The population-level optimal cutoff score was 8 or higher. Optimal cutoff scores in simulated studies ranged from 2 or higher to 21 or higher in samples of 100 participants and 5 or higher to 11 or higher in samples of 1000 participants. The percentage of simulated studies that identified the true optimal cutoff score of 8 or higher was 17% for samples of 100 participants and 33% for samples of 1000 participants. Compared with estimates for a cutoff score of 8 or higher in the population, sensitivity was overestimated by 6.4 (95% CI, 5.7-7.1) percentage points in samples of 100 participants, 4.9 (95% CI, 4.3-5.5) percentage points in samples of 200 participants, 2.2 (95% CI, 1.8-2.6) percentage points in samples of 500 participants, and 1.8 (95% CI, 1.5-2.1) percentage points in samples of 1000 participants. Specificity was within 1 percentage point across sample sizes.

Conclusions and Relevance: This study of cross-sectional data found that optimal cutoff scores and accuracy estimates differed substantially from population values when data-driven methods were used to simultaneously identify an optimal cutoff score and estimate accuracy. Users of diagnostic accuracy evidence should evaluate studies of accuracy with caution and ensure that cutoff score recommendations are based on adequately powered research or well-conducted meta-analyses.
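The procedure described in Main Outcomes and Measures (selecting the cutoff that maximizes the Youden index, then estimating accuracy on the same sample) can be sketched as follows. This is a minimal illustration on a synthetic stand-in population, not the IPDMA data: the binomial score distributions, the 10% prevalence, and all other parameters are assumptions chosen for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def youden_optimal_cutoff(scores, labels, cutoffs=range(28)):
    """Return the PHQ-9 cutoff (score >= cutoff counts as positive)
    that maximizes Youden's J = sensitivity + specificity - 1."""
    best_cutoff, best_j = 0, -np.inf
    for c in cutoffs:
        positive = scores >= c
        sens = positive[labels == 1].mean()
        spec = (~positive)[labels == 0].mean()
        if sens + spec - 1 > best_j:
            best_cutoff, best_j = c, sens + spec - 1
    return best_cutoff

def sensitivity(scores, labels, cutoff):
    return (scores[labels == 1] >= cutoff).mean()

# Hypothetical stand-in population (NOT the IPDMA data): ~10% prevalence,
# PHQ-9 totals 0-27, with cases scoring higher on average.
N = 44_503
labels = (rng.random(N) < 0.10).astype(int)
scores = np.where(labels == 1,
                  rng.binomial(27, 0.45, N),   # cases
                  rng.binomial(27, 0.15, N))   # non-cases

pop_cutoff = youden_optimal_cutoff(scores, labels)
pop_sens = sensitivity(scores, labels, pop_cutoff)

# Resample small "studies", re-select the cutoff within each study, and
# record the in-sample (apparent) sensitivity at that data-driven cutoff.
study_sens = []
for _ in range(1000):
    idx = rng.choice(N, size=100, replace=True)
    if labels[idx].sum() == 0:          # skip the rare draw with no cases
        continue
    c = youden_optimal_cutoff(scores[idx], labels[idx])
    study_sens.append(sensitivity(scores[idx], labels[idx], c))

# A positive value indicates optimism bias from selecting the cutoff and
# estimating accuracy on the same small sample.
bias = np.mean(study_sens) - pop_sens
```

Because each simulated study both selects its cutoff and estimates sensitivity from the same 100 observations, its apparent sensitivity tends to exceed the population sensitivity at the population-optimal cutoff. That coupling of selection and estimation is the source of the overestimation the study quantifies.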
Brooke Levis, Parash Mani Bhandari, Dipika Neupane, Suiqiong Fan, Ying Sun, Chen He, Yin Wu, Ankur Krishnan, Zelalem Negeri, Mahrukh Imran, Danielle B. Rice, Kira E. Riehm, Marleine Azar, Alexander W. Levis, Jill Boruff, Pim Cuijpers, Simon Gilbody, John P. A. Ioannidis, Lorie A. Kloda, Scott B. Patten, Roy C. Ziegelstein, Daphna Harel, Yemisi Takwoingi, Sarah Markham, Sultan H. Alamri, Dagmar Amtmann, Bruce Arroll, Liat Ayalon, Hamid Reza Baradaran, Anna Beraldi, Charles N. Bernstein, Arvin Bhana, Charles H. Bombardier, Ryna Imma Buji, Peter Butterworth, Gregory Carter, Marcos Hortes Nisihara Chagas, Juliana C.N. Chan, Lai Fong Chan, Dixon Chibanda, Kerrie Clover, Aaron Conway, Yeates Conwell, Federico M. Daray, Janneke M. de Man‐van Ginkel, Jesse R. Fann, Felix Fischer, Sally Field, Jane Fisher, Daniel Fung, Bizu Gelaye, Leila Gholizadeh, Felicity Goodyear‐Smith, Eric Green, Catherine G. Greeno, Brian J. Hall, Liisa Hantsoo, Martin Härter, Leanne Hides, Stevan E. Hobfoll, Simone Honikman, Thomas Hyphantis, Masatoshi Inagaki, María Iglesias-González, Hong Jin Jeon, Nathalie Jetté, Mohammad E. Khamseh, Kim M. Kiely, Brandon A. Kohrt, Yunxin Kwan, Ma. Asunción Lara, Holly Frances Levin-Aspenson, Shen‐Ing Liu, Manote Lotrakul, Sônia Regina Loureiro, Bernd Löwe, Nagendra P. Luitel, Crick Lund, Ruth Ann Marrie, Laura Marsh, Brian P. Marx, Anthony McGuire, Sherina Mohd Sidik, Tiago N. Munhoz, Kumiko Muramatsu, Juliet Nakku, Laura Navarrete, Flávia L. Osório, B.W. Pence, Philippe Persoons, Inge Petersen, Angelo Picardi, Stephanie L. Pugh, Terence J. Quinn, Elmārs Rancāns, Sujit D. Rathod, Katrin Reuter, Alasdair G. Rooney, Iná S. Santos, Miranda T. Schram (2024). Data-Driven Cutoff Selection for the Patient Health Questionnaire-9 Depression Screening Tool. JAMA Network Open, 7(11). DOI: https://doi.org/10.1001/jamanetworkopen.2024.29630.
Type: Article
Year: 2024
Authors: 100
Datasets: 0
Total Files: 0
Language: en
DOI: https://doi.org/10.1001/jamanetworkopen.2024.29630