Importance
Many medical journals, including JAMA, restrict the use of causal language to the reporting of randomized clinical trials. Although well-conducted randomized clinical trials remain the preferred approach for answering causal questions, methods for observational studies have advanced such that causal interpretations of the results of well-conducted observational studies may be possible when strong assumptions hold. Furthermore, observational studies may be the only practical source of information for answering some questions about the causal effects of medical or policy interventions, can support the study of interventions in populations and settings that reflect practice, and can help identify interventions for further experimental investigation. Identifying opportunities for the appropriate use of causal language when describing observational studies is important for communication in medical journals.
Observations
A structured approach to whether and how causal language may be used when describing observational studies would enhance the communication of research goals, support the assessment of assumptions and design and analytic choices, and allow for more clear and accurate interpretation of results. Building on the extensive literature on causal inference across diverse disciplines, we suggest a framework for observational studies that aim to provide evidence about the causal effects of interventions based on 6 core questions: what is the causal question; what quantity would, if known, answer the causal question; what is the study design; what causal assumptions are being made; how can the observed data be used to answer the causal question in principle and in practice; and is a causal interpretation of the analyses tenable?
Conclusions and Relevance
Adoption of the proposed framework to identify when causal interpretation is appropriate in observational studies promises to facilitate better communication between authors, reviewers, editors, and readers. Practical implementation will require cooperation between editors, authors, and reviewers to operationalize the framework and evaluate its effect on the reporting of empirical research.
Many medical journals, including JAMA, restrict the use of causal language to describing studies in which the intervention is randomly assigned. Indeed, randomized clinical trials are widely viewed as the preferred way of answering questions about the causal effects of interventions. Yet it is not feasible to answer all such questions with trials due to limitations including cost, follow-up duration, or ethical considerations. When such limitations preclude the conduct of trials, carefully designed analyses of observational (nonexperimental) data offer an alternative source of evidence on the effects of interventions (eg, treatment strategies, policies, or changes in behavior). Furthermore, observational studies can serve as a data-driven approach for identifying interventions that merit further experimental investigation and for examining the effects of interventions in populations and settings that reflect practice.
The potential of observational studies to contribute evidence about the causal effects of interventions is actively being examined across medicine, epidemiology, biostatistics, economics, and other social sciences. In this Special Communication, we examine a framework that might be used by medical journals as they move away from the current approach prohibiting the use of any causal language for observational studies and toward a more comprehensive approach for causal inference that reflects a synthesis of extensive prior work spanning multiple, diverse disciplines. We undertake this examination now for 3 main reasons. First, decision-makers are increasingly seeking timely answers to complex research questions about the effects of interventions that are challenging or impossible to address with randomized trials. For example, questions about long-term or rare effects of treatment, heterogeneity of treatment effects, or the effects of health care policies can be difficult to answer by relying exclusively on trials. Second, there has been wide dissemination of frameworks for posing causal questions and elaborating the assumptions needed to answer them.1-41 These frameworks have supported the refinement of existing methods and the development of new methods that promise to deliver results that have a causal interpretation, provided strong assumptions are met.42-138 Third, observational data from multiple sources (eg, registries, health care claims, electronic health records) are increasingly available for research purposes. Analyses from different sources can facilitate the evaluation of robustness by using data with different measurement characteristics from populations that may have different underlying causal structures.
In what follows, we first lay out the challenges inherent in drawing causal inferences about the effects of interventions from observational studies. We then discuss limitations of the current approach to determining the appropriateness of causal language for observational studies. Finally, we propose an alternative framework for causal inference in medical and health policy research and examine its implications for authors, reviewers, editors, and readers of clinical journals.
The Challenge in Drawing Causal Inferences From Observational Studies
Increasing use of observational studies to address questions about the causal effects of interventions poses a challenge to journals that primarily serve clinical audiences. These observational studies depend more heavily on causal and statistical modeling assumptions compared with large, well-conducted randomized trials. Therefore, all other study aspects being equal, drawing causal inferences from observational studies is inherently more speculative. But, as noted earlier, all other study aspects are often not equal. Randomized trials cannot address all causal questions of importance in medicine and health policy and may have limited generalizability; thus, investigators may need to use observational studies as a source of evidence to address causal questions. The challenge, then, is to balance the importance of addressing the causal questions for which observational studies are needed with caution regarding the reliance on strong assumptions to support causal conclusions.
When researchers are confronted with this challenge, one response is to retreat from causal goals and pursue purely descriptive or predictive goals for observational studies. This approach often amounts to applying a randomization-centered criterion for determining whether causal language is allowed, resulting in exclusively associational language for any investigation using observational data. With this approach, a single study design element essentially dictates the language that can be used to describe goals, methods, and interpretations. For example, current Instructions for Authors in JAMA and the JAMA Network journals state that “[c]ausal language (including use of terms such as effect and efficacy) should be used only for randomized clinical trials. For all other study designs…, methods and results should be described in terms of association or correlation and should avoid cause-and-effect wording.” This recommendation is also included in the AMA Manual of Style.139 Nevertheless, rare ad hoc exceptions have been made by JAMA and JAMA Network journals in allowing causal interpretations for observational analyses in which necessary assumptions were articulated and deemed plausible.140,141 Furthermore, articles in the JAMA Guide to Statistics and Methods series have discussed various causal inference methods.142-151
Limitations of the Randomization-Centered Criterion for Determining the Appropriateness of Causal Language and Interpretation
The use of a binary, randomization-centered criterion for allowing the use of causal language or interpretation is not problematic when applied to large, well-conducted randomized trials with near-perfect adherence to the study protocol and limited missing outcomes, wherein a causal interpretation is warranted. However, for many other studies, the approach based on this criterion is inadequate and does not accommodate precise descriptions of goals, research questions, methods, assumptions, and interpretations, and can result in lack of clarity during interactions among authors, editors, reviewers, and readers. The prohibition impedes the presentation and critique of study methods and risks misinterpretation of results both by allowing inappropriately drawn implicit causal inferences and by obscuring appropriate causal conclusions.
Prohibiting causal language when describing observational studies does not allow authors to communicate their research goals clearly and fully.152-154 Causal goals require causal assumptions (eg, the assumption of no uncontrolled confounding). These assumptions are almost never possible to verify with the data alone, and their plausibility can best be assessed within an explicit causal framework. Without causal language, the description and critique of research methods becomes challenging because the connection between ends (causal goals) and means (research methods) is obscured.154 Furthermore, when causal goals, assumptions, and methods cannot be explicitly discussed, assessing the choice of study design and analytic approaches and interpreting results become difficult, if not impossible. In fact, avoidance of causal language precludes effective criticism grounded in causal considerations. For example, if a manuscript purports to present only descriptive or predictive associations between some exposure (or treatment) and outcomes, there is little room for discussing confounding in the sense of comparability between intervention groups.153,154 Yet such discussion is often necessary to uncover the reported study’s limitations if a causal interpretation is under consideration. In other words, restricting causal discourse is undesirable because authors and readers often hope that the estimated associations have a tenable causal interpretation and are interested to know when and why such interpretation may not be valid.
In addition, using a single study design element (randomization) as the sole criterion of whether causal conclusions can be drawn risks giving the impression of complacency about potential weaknesses that can affect both randomized trials and observational studies. Editors, reviewers, and readers would not draw causal conclusions based on simple between-treatment-group comparisons from a randomized trial with poor data collection practices, differential outcome ascertainment, or a high dropout rate, but these issues are not given the same weight as (lack of) randomization when the current approach to the use of causal language is applied. Arguably, an approach based on the randomization-centered criterion without directly confronting the difficulties listed earlier would be possible only if randomized trials with no major flaws were the only experimental studies under consideration, in which case cautions about causal interpretation could be reserved only for observational studies. Randomization strengthens the plausibility of a causal interpretation of study results, but randomization alone is not sufficient. Conversely, the absence of randomization does not on its own render a causal interpretation completely untenable. For observational studies, the blanket prohibition of causal language skirts the difficult but necessary work of judging whether a causal interpretation of any specific observational analysis is tenable. This judgment cannot rest on simply noting the absence of randomization155; it requires context-informed examination of all relevant aspects of design, conduct, and analysis.
An Alternative Framework for Causal Inference for Medical and Health Policy Research
The extensive literature on causal inference across diverse disciplines26,30,156-158 suggests an alternative framework for observational studies that aim to answer questions about the causal effects of interventions. This framework avoids the limitations discussed earlier and can help editors and readers determine whether a particular observational study provides valid and reliable evidence about the effects of interventions in a target population. Such a framework can be summarized in terms of several core questions that need to be considered to understand and interpret observational studies:
What is the causal question? If the goal of the research is to provide evidence about the effects of medical or health policy interventions, the research question is best explicitly framed in causal terms, comparing 2 or more well-defined alternatives with respect to clearly defined outcomes of interest, for a specific target population during a period of follow-up.159,160
What quantity would, if known, answer the causal question? After stating the causal question, one can specify the quantity that could, if known, serve as the answer to the question; this quantity is the causal estimand (eg, the causal effect of interest).161,162 The precise specification of the causal estimand requires describing the population of interest, the interventions or strategies to be compared, details of outcome definitions and the timing of outcome ascertainment, and the choice of effect measure (eg, risk difference, relative risk). The causal estimand can be formally specified using mathematical causal models (eg, closely related counterfactual, potential outcome, or structural models3,5,26,163-169). In many cases, specification can be aided by describing the (hypothetical) target trial that could address the research question.144,170-172
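To make the notion of a causal estimand concrete, one common formalization (standard potential-outcomes notation, not notation taken from this article) writes the effect measures mentioned above as contrasts of counterfactual outcome distributions in the target population:

```latex
% Y^{a} denotes the (counterfactual) outcome that would be observed if,
% possibly contrary to fact, an individual followed strategy a.
% Causal risk difference in the target population:
\mathrm{RD} = \Pr\left[ Y^{a=1} = 1 \right] - \Pr\left[ Y^{a=0} = 1 \right]
% Corresponding causal relative risk:
\mathrm{RR} = \Pr\left[ Y^{a=1} = 1 \right] \big/ \Pr\left[ Y^{a=0} = 1 \right]
```

Each choice of target population, pair of strategies, outcome definition, follow-up period, and effect measure yields a different estimand.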
What is the study design? The approach for collecting new data or using existing data—including choosing among data sources, sampling individuals and their follow-up experience, and collecting treatment, covariate, and outcome information over time—determines whether the data can be used to answer the causal question. For example, in cohort studies comparing different treatment strategies, the choice of the start of follow-up (time zero) and the alignment of that time with the time at which eligibility is determined can affect the validity of observational analyses.173 More broadly, the key goal of study design is to make the causal assumptions more plausible and to facilitate learning about the causal estimand.
What causal assumptions are being made? Drawing causal inferences from observational studies requires causal assumptions that allow investigators to learn about the causal estimand by using data. For example, many observational studies require an assumption that, given the variables that have been measured and accounted for (via study design or analysis), there remains no uncontrolled confounding.174 Other approaches, such as instrumental variable analyses, difference-in-differences analyses, or regression discontinuity analyses, require different sets of assumptions. Typically, causal assumptions are untestable in the sense that they cannot be fully evaluated with the data alone; instead, they have to be examined on the basis of background knowledge (eg, clinical knowledge of the treatment selection process).175,176
How can the observed data be used to answer the causal question in principle and in practice? Using the study design and causal assumptions, investigators can determine how analyses of observed data could, at least in principle (eg, if, hypothetically, all causal assumptions held and sampling variability were absent), provide information about the causal estimand. The formal examination of whether the observed data can in principle be used to learn about the causal estimand is referred to as identification analysis. In some cases, the assumptions suffice only to place bounds around the causal estimand.45,177-179 Most studies aiming to estimate causal estimands using observational data rely on well-understood identification strategies (ie, the results from prior identification analyses)180,181 and apply statistical methods to data for estimation and statistical inference. We offer a more detailed description of the relationship between causal estimands, identification analysis, and the use of data and statistical methods in the eText; eFigure 1, eFigure 2, and eFigure 3; and Example 1 and Example 2 in the Supplement.
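As a simplified sketch of how an identification strategy and a statistical method fit together, the following example applies standardization (the g-formula) to hypothetical simulated data. The cohort, covariate names, and numerical values are invented for illustration; this is not an analysis from the article or its Supplement.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical simulated cohort (invented for illustration). L is a binary
# confounder that affects both treatment A and outcome Y; by construction,
# the true causal risk difference is 0.10.
L = rng.binomial(1, 0.4, n)
A = rng.binomial(1, np.where(L == 1, 0.7, 0.3))
Y = rng.binomial(1, 0.1 + 0.1 * A + 0.3 * L)

# Naive (unadjusted) associational contrast -- confounded by L
# (roughly 0.22 here, versus the true causal effect of 0.10).
naive_rd = Y[A == 1].mean() - Y[A == 0].mean()

# Standardization (g-formula): average the L-stratum-specific risks under
# each treatment level over the marginal distribution of L. This recovers
# the causal risk difference only if exchangeability given L, consistency,
# and positivity hold -- assumptions the data alone cannot verify.
def standardized_risk(a):
    return sum(Y[(A == a) & (L == l)].mean() * (L == l).mean()
               for l in (0, 1))

adjusted_rd = standardized_risk(1) - standardized_risk(0)
print(f"naive RD: {naive_rd:.3f}  standardized RD: {adjusted_rd:.3f}")
```

The point of the sketch is the division of labor: the identification result licenses interpreting the standardized contrast causally (in principle), while the statistical method merely computes it from finite data.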
The statistical methods for observational studies should have good statistical performance (eg, acceptably low bias, high precision) and support the valid quantification of uncertainty (eg, producing valid CIs). The challenges of drawing statistical inferences using data and models are, if anything, accentuated in nonexperimental research.182 Furthermore, issues related to missing data and measurement error often arise in observational studies and require additional assumptions (typically untestable using the data alone) about the structure of missingness or measurement error, additional data (eg, validation studies), and specialized methods to address these issues and properly quantify uncertainty.
Is a causal interpretation of the analyses tenable? Evaluating the appropriateness of endowing the results of an observational analysis with a causal interpretation typically requires untestable assumptions. Determining whether such interpretation is tenable, therefore, involves subjective judgments informed by background knowledge and an understanding of the research context, drawing on multiple sources of evidence. These judgments can be informed by triangulation of results across different analyses (eg, using different assumptions or other data sources)183; attempts to falsify the causal assumptions with the data, when possible (eg, negative control analyses80,184); and quantitative bias/sensitivity analyses and other methods to examine assumption violations.19,185-191
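As one concrete example of a quantitative bias analysis (the E-value of VanderWeele and Ding; one of many approaches and not one singled out by the framework), a small helper can report how strong an unmeasured confounder would need to be to fully explain away an observed association:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio (VanderWeele & Ding): the minimum
    strength of association, on the risk-ratio scale, that an unmeasured
    confounder would need to have with both treatment and outcome to fully
    explain away the observed treatment-outcome association."""
    if rr < 1:
        rr = 1 / rr  # treat protective associations symmetrically
    return rr + math.sqrt(rr * (rr - 1))

# An observed risk ratio of 2.0 yields an E-value of about 3.41: only a
# fairly strong unmeasured confounder could fully account for it.
print(round(e_value(2.0), 2))
```

Such analyses do not make the untestable assumptions testable; they quantify how sensitive the causal interpretation is to their violation.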
What This Framework Aims to Accomplish
This framework maintains the distinction between causation and association while addressing the limitations of approaches that rely on randomization as the sole criterion: it differentiates between causal ends and the statistical means to achieve them; supports the alignment between causal questions and the analyses used to answer them; increases transparency to facilitate scientific conversations; acknowledges that subjective judgments, informed by background clinical or policy knowledge, are unavoidable in observational studies; and aims to instill intellectual humility. Disagreements regarding the appropriate interpretation of observational studies among different stakeholders are always possible. This framework clarifies such disagreements by making the relevant considerations explicit and facilitates reasoning and debate.
Far from being a list of separate items, the framework highlights that multiple interrelated components are needed to report, evaluate, and interpret observational studies. For example, investigators will select study designs that are tailored to answer the causal question of interest and that support the plausibility of the causal assumptions needed to answer it. Similarly, study design and data analysis aspects can be arranged to facilitate the conduct of quantitative bias/sensitivity and falsification analyses, providing for the rigorous evaluation of assumptions. Background knowledge and understanding of the medical or policy context of the investigation are needed in all steps of the framework, from framing the research question to evaluating the plausibility of assumptions and evaluating whether a causal interpretation is tenable.
Interpreted practically, the framework allows the use of causal language to specify research questions and study goals (eg, in a manuscript’s Introduction section); to describe study methods, assumptions under which the methods produce results that have a causal interpretation, and approaches for examining assumptions (eg, in the Methods section); and to reason about the plausibility of assumptions and the degree to which a causal interpretation is tenable in view of background knowledge while acknowledging the potential limitations of such an interpretation (eg, in the Discussion section). Two elements are central to this proposal for presenting observational studies: first, being explicit about the “if-then” (conditional) structure needed for their interpretation (eg, if certain assumptions hold, then a causal interpretation of the findings is tenable); and second, acknowledging that careful context-informed judgments are necessary to evaluate whether assumptions are plausible and a causal interpretation is tenable.
Last, although not the focus of this communication, the framework can also be applied to randomized trials and may be particularly helpful for pragmatic trials with baseline randomization that otherwise share many characteristics of observational studies (eg, trials with nonstandardized follow-up protocols and limited systematic efforts to enhance adherence to the assigned treatment).192,193
What the Framework Does Not Do
The framework does not imply that all, or even most, observational studies merit a causal interpretation. For some observational studies that start with causal goals, causal inference may prove impossible; in these cases, estimates retain only associational interpretations. In addition, many important descriptive and predictive research questions can be answered by observational studies that do not require causal notions.
Furthermore, when addressing causal questions, our proposal does not single out any of the currently popular frameworks, empirical research strategies, or statistical methods for causal inference from observational studies (eg, structural approaches27,167; identification strategies180,181; the target trial framework144,170; the causal roadmap and targeted learning32,156; any specific statistical, epidemiologic, or econometric method), nor does it single out any philosophy of statistical inference (eg, frequentist, bayesian). There is room for creativity in approaching practical causal questions, and investigators should have the freedom to select the approaches that best suit their research questions, provided they follow the norms for reporting described earlier. Without delving into the details of a specific research question, perhaps the most that can be recommended is to use the simplest methods that are adequate for the study’s causal goals.194,195
The framework does not address the broader issue of how to determine whether some general causal claim is warranted (eg, whether some exposure is a “cause” of some outcome). Instead, it focuses on whether observational studies can contribute independent credible evidence about causal effects of interventions in a particular target population, time, and place. Reports of such studies are the core publication type in most medical and health policy journals; more important, they are a key input to the process of evidence synthesis that can support general causal claims. This process combines information from multiple sources, including—in addition to trials and observational studies comparing interventions—basic science investigations, case reports, noncomparative studies, meta-analyses, and simulation modeling studies, as well as background knowledge.
Last, the framework does not cover other important issues that apply broadly to empirical investigations regardless of study design, such as prespecifying and preregistering analyses, following the principles of reproducible science, and sharing research materials.
Implications for Authors, Reviewers, Editors, and Readers
Adoption and further elaboration of the framework outlined earlier by medical journals offer the promise of facilitating communication between authors, reviewers, editors, and readers, but come with challenges in operationalization and implementation.
For authors, the framework provides more freedom to express causal goals and assumptions of observational studies, but also entails the responsibility to explicitly discuss and evaluate assumptions and openly acknowledge limitations (eg, violations of assumptions) and may require additional work (eg, to report technical details; to conduct triangulation, falsification, and bias analyses).
For reviewers, the framework should aid in the assessment of manuscripts that report observational studies. It requires familiarity with causal inference methods, as well as background knowledge to judge the appropriateness of the methods in the context of applied work.
Adoption of the framework should facilitate communication between authors, reviewers, and editors by encouraging the transparent reporting and critique of methods and results of observational studies of medical interventions. Implementation at scale will require retaining expert reviewers and increasing the cooperation between editors, authors, and reviewers to operationalize the framework for use with different analyses and specific clinical applications and to evaluate whether it improves the reporting of empirical research. Furthermore, the complex judgments that the framework entails require vigilance to mitigate cognitive biases and distortions that may influence the presentation and interpretation of observational studies, particularly those using technically complex methods.196
For readers, the framework should facilitate the clear communication of causal questions and methods. As usual, detailed technical descriptions may be appropriately placed in supplemental appendices to allow for the inclusion of the necessary detail and to maintain the readability and accessibility of the published study. Although our proposal suggests that complex concepts and more elaborate methodological descriptions may be needed to fully report and evaluate observational studies, adoption of the framework promises to improve the value of applied research that can support medical and policy decisions.
We look forward to readers’ reactions to the framework. In future communications, we plan to explore its application in the context of concrete examples of specific types of observational analyses typically encountered in medical journals such as JAMA and the JAMA Network journals.
Accepted for Publication: April 15, 2024.
Published Online: May 9, 2024. doi:10.1001/jama.2024.7741
Corresponding Author: Issa J. Dahabreh, MD, ScD, CAUSALab, Department of Epidemiology, Harvard T.H. Chan School of Public Health, 677 Huntington Ave, Room 816c, Boston, MA 02115 ([email protected]).
Conflict of Interest Disclosures: Dr Dahabreh reported receiving grants from Sanofi as principal investigator of a research agreement between Harvard and Sanofi for causal inference methods for transportability analyses; and reported receiving consulting fees from Moderna for trial and observational analyses outside the submitted work. No other disclosures were reported.
Additional Contributions: We thank Caroline Sietmann, MALIS, JAMA and JAMA Network, for assistance with soliciting and organizing comments on earlier versions of this article. We thank the following individuals for comments on earlier versions: Heather Gwynn Allore, MS, PhD, Yale University and JAMA Internal Medicine; Joshua D. Angrist, PhD, Massachusetts Institute of Technology; Michael Berkwits, MD; Jesse Berlin, ScD, Rutgers University and JAMA Network Open; Isabelle Boutron, MD, PhD, Université de Paris, CRESS, Inserm; Stephen R. Cole, PhD, University of North Carolina at Chapel Hill; John Concato, MD, MS, MPH, US Food and Drug Administration and Yale School of Medicine; Gregory Curfman, MD, JAMA and JAMA Network; Annette Flanagin, RN, MA, JAMA and JAMA Network; Maria Glymour, ScD, MS, Boston University; Deborah Grady, MD, MPH, University of California, San Francisco, and JAMA Internal Medicine; Sander Greenland, DrPH, University of California, Los Angeles; Gordon Guyatt, MD, MSc, McMaster University; Sebastien Haneuse, PhD, Harvard University and JAMA Network Open; Frank E. Harrell Jr, PhD, Vanderbilt University; Robert A. Harrington, MD, Weill Cornell Medicine and JAMA Cardiology; Laura Hatfield, PhD, Harvard University and JAMA; Miguel A. Hernán, MD, PhD, Harvard University; Guido Imbens, PhD, Stanford University; Nina Joyce, PhD, Brown University; Amy H. Kaji, MD, PhD, University of California, Los Angeles, and JAMA Surgery; Jay S. Kaufman, PhD, McGill University; Dhruv Kazi, MD, MSc, MS, Beth Israel Deaconess Medical Center; Kenneth S. Kendler, MD, Virginia Commonwealth University; Daniel B. Kramer, MD, MPH, Beth Israel Deaconess Medical Center; Timothy Lash, DSc, MPH, Emory University; Roger J. Lewis, MD, PhD, University of California, Los Angeles, and JAMA; Charles F. Manski, PhD, Northwestern University; M. Hassan Murad, MD, Mayo Clinic; Christopher Muth, MD, JAMA; Sharon-Lise Normand, PhD, Harvard University; Neil Pearce, PhD, London School of Hygiene and Tropical Medicine; Maya Petersen, MD, PhD, University of California, Berkeley; Romain Pirracchio, MD, MPH, PhD, University of California, San Francisco, and JAMA; Stuart J. Pocock, PhD, London School of Hygiene and Tropical Medicine; James M. Robins, MD, Harvard University; Sherri Rose, PhD, Stanford University; Paul R. Rosenbaum, PhD, University of Pennsylvania; Kenneth J. Rothman, DrPH, Boston University; Jeffrey Saver, MD, University of California, Los Angeles, and JAMA; Stephen Schenkel, MD, MPP, University of Maryland Medical Center and JAMA; David Schriger, MD, MPH, University of California, Los Angeles, and JAMA; Ian Shrier, MD, PhD, McGill University; Dylan S. Small, PhD, University of Pennsylvania; George Davey Smith, DSc, University of Bristol; Zirui Song, MD, PhD, Harvard University and JAMA Health Forum; Jonathan A. C. Sterne, PhD, University of Bristol; Elizabeth A. Stuart, PhD, Johns Hopkins University and JAMA Health Forum; Sonja A. Swanson, ScD, University of Pittsburgh and JAMA Psychiatry; Eric Tchetgen Tchetgen, PhD, University of Pennsylvania; Linda Valeri, PhD, Columbia University and JAMA Psychiatry; Tyler J. VanderWeele, PhD, Harvard University and JAMA Psychiatry; Rishi Wadhera, MD, MPP, MPhil, Beth Israel Deaconess Medical Center; Robert Yeh, MD, MS, MBA, Beth Israel Deaconess Medical Center; and Alan M. Zaslavsky, PhD, Harvard Medical School.
References (selected)
2. Susser M. Causal Thinking in the Health Sciences: Concepts and Strategies of Epidemiology. Oxford University Press; 1973.
8. Miettinen OS. Theoretical Epidemiology: Principles of Occurrence Research in Medicine. Delmar Publishers; 1985.
18. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin; 2001.
26. Pearl J. Causal inference in statistics: an overview. Stat Surv. 2009;3:96-146.
28. Angrist JD, Pischke JS. Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press; 2009. doi:10.1515/9781400829828
30. Angrist JD, Pischke JS. The credibility revolution in empirical economics: how better research design is taking the con out of econometrics. J Econ Perspect. 2010;24(2):3-30. doi:10.1257/jep.24.2.3
32. Van der Laan MJ, Rose S. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer; 2011. doi:10.1007/978-1-4419-9782-1
33. Berzuini C, Dawid P, Bernardinelli L, eds. Causality: Statistical Perspectives and Applications. John Wiley & Sons; 2012. doi:10.1002/9781119945710
34. Imbens GW, Rubin DB. Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press; 2015. doi:10.1017/CBO9781139025751
36. Young JG, Stensrud MJ, Tchetgen Tchetgen EJ, Hernán MA. A causal framework for classical statistical estimands in failure-time settings with competing events. Stat Med. 2020;39(8):1199-1236. doi:10.1002/sim.8471
42. Cochran WG, Rubin DB. Controlling bias in observational studies: a review. Sankhya Ser A. 1973;35(4):417-446.
45. Manski CF. Nonparametric bounds on treatment effects. Am Econ Rev. 1990;80(2):319-323.
46. Angrist J, Imbens G. Identification and Estimation of Local Average Treatment Effects. National Bureau of Economic Research; 1995. doi:10.3386/t0118
59. Gelman A, Meng XL, eds. Applied Bayesian Modeling and Causal Inference From Incomplete-Data Perspectives. John Wiley & Sons; 2004. doi:10.1002/0470090456
61. Ho DE, Imai K, King G, Stuart EA. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Polit Anal. 2007;15(3):199-236. doi:10.1093/pan/mpl013
63. Robins JM, Hernán MA. Estimation of the causal effects of time-varying exposures. In: Longitudinal Data Analysis. Chapman & Hall/CRC; 2008:547-593. doi:10.1201/9781420011579.ch23
70. Abadie A, Diamond A, Hainmueller J. Synthetic control methods for comparative case studies: estimating the effect of California's tobacco control program. J Am Stat Assoc. 2010;105(490):493-505. doi:10.1198/jasa.2009.ap08746
74. Valeri L, VanderWeele TJ. Mediation analysis allowing for exposure-mediator interactions and causal interpretation: theoretical assumptions and implementation with SAS and SPSS macros. Psychol Methods. 2013;18(2):137-150. doi:10.1037/a0031034
75. Chakraborty B, Moodie EEM. Statistical Methods for Dynamic Treatment Regimes: Reinforcement Learning, Causal Inference, and Personalized Medicine. Springer; 2013. doi:10.1007/978-1-4614-7428-9
77. VanderWeele T. Explanation in Causal Inference: Methods for Mediation and Interaction. Oxford University Press; 2015.
80. Sofer T, Richardson DB, Colicino E, Schwartz J, Tchetgen Tchetgen EJ. On negative outcome control of unobserved confounding as a generalization of difference-in-differences. Stat Sci. 2016;31(3):348-361. doi:10.1214/16-STS558
81. Cain
LE, Robins
JM, Lanoy
E, Logan
R, Costagliola
D, Hernán
MA. When to start treatment? a systematic approach to the comparison of dynamic regimes using observational data.
Int J Biostat. 2010;6(2):18. doi:
10.2202/1557-4679.1212PubMedGoogle ScholarCrossref 88.Cattaneo
MD, Idrobo
N, Titiunik
R.
A Practical Introduction to Regression Discontinuity Designs: Foundations. Cambridge University Press; 2019. doi:
10.1017/9781108684606
89.Tsiatis
AA.
Dynamic Treatment Regimes: Statistical Methods for Precision Medicine. CRC Press; 2019. doi:
10.1201/9780429192692
90.Fröhlich
M, Frölich
M, Sperlich
S.
Impact Evaluation. Cambridge University Press; 2019. doi:
10.1017/9781107337008
96.Huber
M. Causal Analysis: Impact Evaluation and Causal Machine Learning With Applications in R. MIT Press; 2023.
97.Cattaneo
MD, Idrobo
N, Titiunik
R.
A Practical Introduction to Regression Discontinuity Designs: Extensions. Cambridge University Press; 2024. doi:
10.1017/9781009441896
102.Heckman
JJ, Ichimura
H, Todd
PE. Matching as an econometric evaluation estimator: evidence from evaluating a job training programme.
Rev Econ Stud. 1997;64(4):605-654. doi:
10.2307/2971733Google ScholarCrossref 118.Hainmueller
J. Entropy balancing for causal effects: a multivariate reweighting method to produce balanced samples in observational studies.
Polit Anal. 2012;20(1):25-46. doi:
10.1093/pan/mpr025Google ScholarCrossref 127.Mattei
A, Mealli
F. Regression discontinuity designs as local randomized experiments.
Obs Stud. 2016;2:156-173.
Google Scholar 133.Hahn
PR, Murray
JS, Carvalho
CM. Bayesian regression tree models for causal inference: regularization, confounding, and heterogeneous effects (with discussion).
Bayesian Anal. 2020;15(3):965-1056. doi:
10.1214/19-BA1195Google ScholarCrossref 138.Li
F, Ding
P, Mealli
F. Bayesian causal inference: a critical review.
Philos Trans A Math Phys Eng Sci. 2023;381(2247):20220153.
PubMedGoogle Scholar 139.Christiansen
S, Iverson
C, Flanagin
A,
et al. AMA Manual of Style: A Guide for Authors and Editors. 11th ed. Oxford University Press; 2020.
152.Rothman
K. Modern Epidemiology. Little Brown & Co; 1986:77.
159.Guyatt
G, Meade
MO, Agoritsas
T, Richardson
WS, Jaeschke
R. What is the question? In: Guyatt
G, Rennie
D, Meade
MO, Cook
DJ, eds. Users’ Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 3rd ed. McGraw-Hill; 2008:17-28.
161.US Department of Health and Human Services; Food and Drug Administration; Center for Drug Evaluation and Research (CDER); Center for Biologics Evaluation and Research (CBER). E9(R1) statistical principles for clinical trials: addendum: estimands and sensitivity analysis in clinical trials: guidance for industry. Accessed April 24, 2024.
https://www.fda.gov/media/148473/download 168.Spirtes
P, Glymour
CN, Scheines
R. Causation, Prediction, and Search. MIT Press; 2000.
177.Manski
CF. Partial Identification of Probability Distributions. Springer; 2003.
179.Manski
CF. Patient Care Under Uncertainty. Princeton University Press; 2019.
181.Angrist
JD, Krueger
AB. Empirical strategies in labor economics. In: Ashenfelter
OC, Card
D, eds. Handbook of Labor Economics. Vol 3. Elsevier; 1999:1277-1366.
183.Lawlor
DA, Tilling
K, Davey Smith
G. Triangulation in aetiological epidemiology.
Int J Epidemiol. 2016;45(6):1866-1886.
PubMedGoogle Scholar 186.Robins
JM, Rotnitzky
A, Scharfstein
DO. Sensitivity analysis for selection bias and unmeasured confounding in missing data and causal inference models. In: Halloran
ME, Berry
D, eds.
Statistical Models in Epidemiology, the Environment, and Clinical Trials. Springer; 2000:1-94. doi:
10.1007/978-1-4612-1284-3_1