Article

Sharing Individual Participant Data from Clinical Trials: An Opinion Survey Regarding the Establishment of a Central Repository

Journal

PLOS ONE
Volume 9, Issue 5

Publisher

Public Library of Science
DOI: 10.1371/journal.pone.0097886


Funding

  1. MRC North West Hub for Trials Methodology Research [G0800792]
  2. Midlands Hub for Trials Methodology Research at the University of Birmingham [G0800808]
  3. All Ireland Hub for Trials Methodology Research at the Queen's University of Belfast [G0901530]
  4. Cancer Research UK [C5529]
  5. MRC [G0901530, MR/K025635/1, G0800808, G0800792] Funding Source: UKRI
  6. Cancer Research UK [16895] Funding Source: researchfish
  7. Medical Research Council [G0901530, MR/K025635/1, G0800808, G0800792] Funding Source: researchfish


Background: Calls have been made for increased access to individual participant data (IPD) from clinical trials, to ensure that complete evidence is available. However, despite the obvious benefits, progress towards this is frustratingly slow. In the meantime, many systematic reviews have already collected IPD from clinical trials. We propose that a central repository for these IPD should be established, to ensure that the datasets are safeguarded and made available for use by others, building on the strengths and advantages of the collaborative groups brought together in developing them.

Objective: To evaluate the level of support, and identify the major issues, for establishing a central repository of IPD.

Design: Online survey with email reminders.

Participants: 71 reviewers affiliated with the Cochrane Collaboration's IPD Meta-analysis Methods Group were invited to participate.

Results: 30 (42%) invitees responded: 28 (93%) had been involved in an IPD review and 24 (80%) had been involved in a randomised trial. 25 (83%) agreed that a central repository was a good idea and 25 (83%) agreed that they would provide their IPD for central storage. Several benefits of a central repository were noted: safeguarding and standardisation of data, increased efficiency of IPD meta-analyses, knowledge advancement, and facilitation of future clinical and methodological research. The main concerns were gaining permission from trial data owners, uncertainty about the purpose of the repository, potential resource implications, and increased workload for IPD reviewers. Restricted access requiring approval, data security, anonymisation of data, and oversight committees were highlighted as governance issues for the repository.

Conclusion: There is support in this community of IPD reviewers, many of whom are also involved in clinical trials, for storing IPD in a central repository. Results from this survey are informing further work on developing a repository of IPD, which is currently underway by our group.



Recommended

Article Medicine, General & Internal

Using individual participant data to improve network meta-analysis projects

Richard D. Riley, Sofia Dias, Sarah Donegan, Jayne F. Tierney, Lesley A. Stewart, Orestis Efthimiou, David M. Phillippo

Summary: Network meta-analysis combines evidence from randomized trials to compare the efficacy of multiple treatments. Individual participant data (IPD) have potential advantages in network meta-analysis, providing more precise, reliable, and informative results, allowing treatment comparisons for individual patients and targeted populations based on their specific characteristics.

BMJ EVIDENCE-BASED MEDICINE (2023)

Review Health Care Sciences & Services

Minimal reporting improvement after peer review in reports of COVID-19 prediction models: systematic review

Mohammed T. Hudda, Lucinda Archer, Maarten van Smeden, Karel G. M. Moons, Gary S. Collins, Ewout W. Steyerberg, Charlotte Wahlich, Johannes B. Reitsma, Richard D. Riley, Ben Van Calster, Laure Wynants

Summary: The study aims to assess the improvement in reporting completeness of COVID-19 prediction models after peer review. The findings suggest that the reporting quality of preprints is poor and did not improve significantly after peer review, indicating that peer review had minimal effect on the completeness of reporting during the COVID-19 pandemic.

JOURNAL OF CLINICAL EPIDEMIOLOGY (2023)

Review Health Care Sciences & Services

Systematic review identifies the design and methodological conduct of studies on machine learning-based prediction models

Constanza L. Andaur Navarro, Johanna A. A. Damen, Maarten van Smeden, Toshihiko Takada, Steven W. J. Nijman, Paula Dhiman, Jie Ma, Gary S. Collins, Ram Bajpai, Richard D. Riley, Karel G. M. Moons, Lotty Hooft

Summary: This study aimed to summarize the research design, modeling strategies, and performance measures of clinical prediction models developed using machine learning techniques. A total of 152 studies were included, and it was found that most studies only reported the development of the models, without reporting sample size calculation, handling of missing values, and internal validation. Therefore, further improvement is needed in the methodological conduct and reporting standards of studies on machine learning-based prediction models.

JOURNAL OF CLINICAL EPIDEMIOLOGY (2023)

Article Health Care Sciences & Services

Minimum sample size for developing a multivariable prediction model using multinomial logistic regression

Alexander Pate, Richard D. Riley, Gary S. Collins, Maarten van Smeden, Ben Van Calster, Joie Ensor, Glen P. Martin

Summary: Multinomial logistic regression models are used to predict the risk of a categorical outcome with more than two categories. Researchers need to ensure that the number of participants is appropriate relative to the number of events and predictor variables for each category. This study proposes three criteria to determine the minimum required sample size, aiming to minimise overfitting, limit the difference between the apparent and adjusted Nagelkerke R-squared, and ensure accurate estimation of the overall risk. The criteria were evaluated through a simulation study and applied to a worked example, with code provided for implementation in R and Stata.

STATISTICAL METHODS IN MEDICAL RESEARCH (2023)

Article Medicine, General & Internal

Can prognostic factors for indirect muscle injuries in elite football (soccer) players be identified using data from preseason screening? An exploratory analysis using routinely collected periodic health examination records

Tom Hughes, Richard Riley, Michael J. Callaghan, Jamie C. Sergeant

Summary: This study explored whether variables derived from periodic health examinations (PHE) are prognostic factors for indirect muscle injuries (IMIs) in elite football players. The results showed that, apart from age, most variables had limited prognostic value for injury risk prediction. The only variable that added prognostic value was a hamstring IMI occurring more than 12 months but less than 3 years prior to PHE.

BMJ OPEN (2023)

Article Medicine, General & Internal

Using Risk of Bias 2 to assess results from randomised controlled trials: guidance from Cochrane

Ella Flemyng, Theresa Helen Moore, Isabelle Boutron, Julian P. T. Higgins, Asbjorn Hrobjartsson, Camilla Hansen Nejstgaard, Kerry Dwan

Summary: A systematic review evaluates and combines all the empirical evidence from studies that meet specific criteria to answer a research question, assessing the risk of bias in the included studies to enhance confidence in the conclusions. Cochrane Reviews have used a risk of bias tool since 2008, and a new version, RoB 2, was introduced in 2019 to improve usability and reflect current understanding of bias. This paper discusses lessons learned from the phased implementation of RoB 2 and provides tips for systematic reviewers.

BMJ EVIDENCE-BASED MEDICINE (2023)

Review Health Care Sciences & Services

Systematic review finds spin practices and poor reporting standards in studies on machine learning-based prediction models

Constanza L. Andaur Navarro, Johanna A. A. Damen, Toshihiko Takada, Steven W. J. Nijman, Paula Dhiman, Jie Ma, Gary S. Collins, Ram Bajpai, Richard D. Riley, Karel G. M. Moons, Lotty Hooft

Summary: This study evaluated the presence and frequency of spin practices and poor reporting standards in studies that developed and/or validated clinical prediction models using supervised machine learning techniques. A total of 152 studies were included, and the results revealed the existence of spin practices and poor reporting standards in these studies, emphasizing the need for a tailored framework to enhance the reporting quality of prediction model studies.

JOURNAL OF CLINICAL EPIDEMIOLOGY (2023)

Review Health Care Sciences & Services

Overinterpretation of findings in machine learning prediction model studies in oncology: a systematic review

Paula Dhiman, Jie Ma, Constanza L. Andaur Navarro, Benjamin Speich, Garrett Bullock, Johanna A. A. Damen, Lotty Hooft, Shona Kirtley, Richard D. Riley, Ben Van Calster, Karel G. M. Moons, Gary S. Collins

Summary: This article conducted a systematic review on oncology-related studies that developed and validated prognostic models using machine learning. The findings revealed the presence of spin, i.e., overinterpretation of findings, in these studies. The inconsistent reporting and use of overly strong or leading words in the publications indicate the need for caution when reading and using prognostic models in oncology.

JOURNAL OF CLINICAL EPIDEMIOLOGY (2023)

Article Sport Sciences

The Trade Secret Taboo: Open Science Methods are Required to Improve Prediction Models in Sports Medicine and Performance

Garrett S. Bullock, Patrick Ward, Franco M. Impellizzeri, Stefan Kluzek, Tom Hughes, Paula Dhiman, Richard D. Riley, Gary S. Collins

Summary: Regression or machine learning models used in sports medicine often suffer from poor methodology, incomplete reporting, and inadequate performance evaluation, leading to unreliable predictions and limited clinical usefulness. Thorough evaluation and open science practices are crucial for improving the validity and utility of these models, but they are currently lacking in the field.

SPORTS MEDICINE (2023)

Article Mathematical & Computational Biology

Propensity-based standardization to enhance the validation and interpretation of prediction model discrimination for a target population

Valentijn M. T. de Jong, Jeroen Hoogland, Karel G. M. Moons, Richard D. Riley, Tri-Long Nguyen, Thomas P. A. Debray

Summary: External validation of prediction models requires careful interpretation, as discrimination depends on both sample characteristics and generalizability of predictor coefficients. To resolve differences in discriminative ability across validation samples, we propose propensity-weighted measures of discrimination. Our methods account for case-mix differences and allow for fair comparisons of discriminative ability in the target population of interest, providing valuable insights for model updating strategies.

STATISTICS IN MEDICINE (2023)

Article Mathematical & Computational Biology

Developing prediction models to estimate the risk of two survival outcomes both occurring: A comparison of techniques

Alexander Pate, Matthew Sperrin, Richard D. Riley, Jamie C. Sergeant, Tjeerd Van Staa, Niels Peek, Mamas A. Mamas, Gregory Y. H. Lip, Martin O'Flaherty, Iain Buchan, Glen P. Martin

Summary: This study focuses on predicting the time until two survival outcomes have occurred and compares different analytical methods for multi-morbidity prognosis. The performance of these methods is evaluated through simulated data and a clinical example.

STATISTICS IN MEDICINE (2023)

Article Mathematical & Computational Biology

Stability of clinical prediction models developed using statistical or machine learning methods

Richard D. Riley, Gary S. Collins

Summary: Clinical prediction models estimate an individual's risk of a particular health outcome. Many models are developed using small datasets, leading to instability in the model and its predictions. Researchers should examine instability at the model development stage; the authors propose instability plots and measures to assess model reliability and to inform critical appraisal, fairness, and validation requirements.

BIOMETRICAL JOURNAL (2023)

Article Mathematical & Computational Biology

Regularized parametric survival modeling to improve risk prediction models

J. Hoogland, T. P. A. Debray, M. J. Crowther, R. D. Riley, J. Inthout, J. B. Reitsma, A. H. Zwinderman

Summary: This study proposes a method that combines flexible parametric survival modeling and regularization to improve risk prediction models for time-to-event data. By introducing different penalty terms, the models can be regularized to enhance prediction accuracy and model performance.

BIOMETRICAL JOURNAL (2023)

Article Mathematical & Computational Biology

Using temporal recalibration to improve the calibration of risk prediction models in competing risk settings when there are trends in survival over time

Sarah Booth, Sarwar I. Mozumder, Lucinda Archer, Joie Ensor, Richard D. Riley, Paul C. Lambert, Mark J. Rutherford

Summary: This article introduces a method called temporal recalibration to improve the calibration of prognostic models for new patients by accounting for trends in survival over time. The method involves estimating predictor effects using the full dataset and re-estimating the baseline using a subset of the most recent data. The authors demonstrate the application of temporal recalibration in the context of colon cancer survival and discuss considerations for applying this method.

STATISTICS IN MEDICINE (2023)

Review Medicine, General & Internal

ROB-ME: a tool for assessing risk of bias due to missing evidence in systematic reviews with meta-analysis

Matthew J. Page, Jonathan A. C. Sterne, Isabelle Boutron, Asbjorn Hrobjartsson, Jamie J. Kirkham, Tianjing Li, Andreas Lundh, Evan Mayo-Wilson, Joanne E. McKenzie, Lesley A. Stewart, Alex J. Sutton, Lisa Bero, Adam G. Dunn, Kerry Dwan, Roy G. Elbers, Raju Kanukula, Joerg J. Meerpohl, Erick H. Turner, Julian P. T. Higgins

Summary: This paper describes a structured approach, the ROB-ME tool, for assessing bias risk in meta-analysis, which can help identify high-risk meta-analyses and interpret results appropriately.

BMJ-BRITISH MEDICAL JOURNAL (2023)
