Article

Retrospective comparative effectiveness research: Will changing the analytical methods change the results?

Journal

INTERNATIONAL JOURNAL OF CANCER
Volume 150, Issue 12, Pages 1933-1940

Publisher

WILEY
DOI: 10.1002/ijc.33946

Keywords

cohort studies; comparative effectiveness research; propensity score; retrospective; selection bias

Funding

  1. Penn State Cancer Institute
  2. Penn State College of Medicine
  3. National Institutes of Health [LRP 1L30 CA231572-01]
  4. American Cancer Society-Tri State CEOs Against Cancer Clinician Scientist Development Grant [CSDG-20-013-01-CCE]

Abstract

This study highlights the significant impact of biostatistical analytic choices on the outcomes of retrospective comparative effectiveness research studies, potentially leading to inconsistent conclusions.
In medicine, retrospective cohort studies are used to compare treatments with one another. We hypothesize that the outcomes of retrospective comparative effectiveness research studies can be heavily influenced by biostatistical analytic choices, thereby leading to inconsistent conclusions. We selected a clinical scenario currently under investigation: survival in metastatic prostate, breast or lung cancer after systemic vs systemic + definitive local therapy. We ran >300 000 regression models (each representing a publishable study), each applying a different combination of analytic choices intended to account for bias: propensity score matching, left truncation adjustment, landmark analysis and covariate combinations. The cohorts comprised 72 549 lung, 14 904 prostate and 13 857 breast cancer patients. In the most basic analysis, which omitted propensity score matching, left truncation adjustment and landmark analysis, all of the HRs were <1 (generally 0.60-0.95, favoring the addition of local therapy), with all P-values <.05. Combinations of propensity score matching, left truncation adjustment, landmark analysis and covariate adjustment generally produced P-values >.05 and/or HRs >1 (favoring systemic therapy alone). As more statistical methods were applied to reduce selection bias, the reported HR ranges approached 1.0. By varying analytic choices in comparative effectiveness research, we generated contrary outcomes. Our results suggest that one retrospective observational study may find that a treatment improves patient outcomes while another, similar study may find that it does not, simply because of the analytic choices made.
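To make the "specification grid" idea concrete, below is a minimal illustrative sketch, not the authors' code or data. It assumes Python with lifelines and scikit-learn, builds a small synthetic cohort with hypothetical column names (time_months, event, local_therapy, age, stage, dx_to_treat_months), and fits one Cox proportional hazards model per combination of analytic choices (propensity score matching, a landmark cutoff, left-truncation/delayed-entry adjustment and covariate sets). Because treatment assignment in the toy data depends on prognosis while treatment itself has no effect on the simulated survival times, the grid illustrates how the estimated hazard ratio for the same clinical question can shift with the analytic choices; it is a sketch of the general technique, not a reproduction of the study.

```python
from itertools import combinations, product

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

# ---- synthetic toy cohort (stands in for a registry extract) -----------------
rng = np.random.default_rng(0)
n = 2000
cohort_df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "stage": rng.integers(1, 5, n).astype(float),
    "dx_to_treat_months": rng.uniform(0.5, 6.0, n),  # delayed-entry (left-truncation) time
})
# Healthier patients are more likely to receive added local therapy (selection
# bias), but treatment has no effect on the simulated survival times.
lin = -0.05 * (cohort_df["age"] - 65) - 0.4 * (cohort_df["stage"] - 2.5)
cohort_df["local_therapy"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))
t_event = rng.exponential(1 / (0.03 * np.exp(-lin)))
censor = rng.uniform(12, 60, n)
cohort_df["time_months"] = cohort_df["dx_to_treat_months"] + np.minimum(t_event, censor)
cohort_df["event"] = (t_event <= censor).astype(int)

def ps_match(df, treat_col, covars):
    """Greedy 1:1 nearest-neighbour match on the propensity score."""
    model = LogisticRegression(max_iter=1000).fit(df[covars], df[treat_col])
    df = df.assign(_ps=model.predict_proba(df[covars])[:, 1])
    treated, controls = df[df[treat_col] == 1], df[df[treat_col] == 0].copy()
    keep = []
    for idx, ps in treated["_ps"].items():
        if controls.empty:
            break
        j = (controls["_ps"] - ps).abs().idxmin()
        keep.extend([idx, j])
        controls = controls.drop(index=j)
    return df.loc[keep].drop(columns="_ps")

def fit_one(df, use_psm, landmark_months, use_left_trunc, covars):
    """Fit one Cox model under one combination of analytic choices."""
    d = df.copy()
    if landmark_months:        # landmark analysis: keep only patients still at risk
        d = d[d["time_months"] >= landmark_months]
    if use_psm and covars:     # matching requires at least one covariate
        d = ps_match(d, "local_therapy", covars)
    cols = ["time_months", "event", "local_therapy"] + covars
    cph = CoxPHFitter()
    if use_left_trunc:         # delayed entry mitigates immortal-time bias
        cph.fit(d[cols + ["dx_to_treat_months"]], duration_col="time_months",
                event_col="event", entry_col="dx_to_treat_months")
    else:
        cph.fit(d[cols], duration_col="time_months", event_col="event")
    return cph.hazard_ratios_["local_therapy"], cph.summary.loc["local_therapy", "p"]

# ---- the grid: each combination of choices is one "publishable" analysis -----
covar_sets = [list(c) for k in range(3) for c in combinations(["age", "stage"], k)]
rows = []
for psm, lm, lt, cov in product([False, True], [0, 6], [False, True], covar_sets):
    hr, p = fit_one(cohort_df, psm, lm, lt, cov)
    rows.append({"psm": psm, "landmark": lm, "left_trunc": lt,
                 "covariates": tuple(cov), "HR": round(hr, 3), "p": round(p, 4)})
print(pd.DataFrame(rows).sort_values("HR").to_string(index=False))
```

In this toy setup, the naive specifications tend to report HRs below 1 purely from confounding by prognosis, while specifications that match on covariates and adjust for delayed entry pull the estimates toward 1, which is the qualitative pattern the study describes.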
