
Synthesize Knowledge (EPCs)

AHRQ created Evidence-based Practice Centers (EPCs) in 1997 to synthesize existing scientific literature about important health care topics and promote evidence-based practice and decision-making. The expertise of the EPCs is now also used for Comparative Effectiveness Reviews (CERs) or Research Reviews on medications, devices, and other relevant interventions. These reviews use a research methodology that systematically and critically appraises existing research to synthesize knowledge on a particular topic. An important aspect of the Comparative Effectiveness Reviews is the identification of research gaps, as well as recommendations for studies and approaches to fill those gaps.


Evidence-based Practice Centers

  • Blue Cross and Blue Shield Association, Technology Evaluation Center (TEC), Chicago, IL - Naomi Aronson, Ph.D.
  • Duke University, Durham, NC - John W. Williams, Jr., M.D.
  • ECRI Institute, Plymouth Meeting, PA - Karen Schoelles, M.D., S.M.
  • Johns Hopkins University, Baltimore, MD - Eric B. Bass, M.D., M.P.H.
  • McMaster University, Hamilton, Ontario, Canada - Parminder Raina, Ph.D.
  • Oregon Health & Science University, Portland, OR - Mark Helfand, M.D., M.S., M.P.H.
  • RTI International--University of North Carolina at Chapel Hill, Chapel Hill, NC - Meera Viswanathan, Ph.D.
  • Southern California Evidence-based Practice Center--RAND, Santa Monica, CA - Paul Shekelle, M.D., Ph.D.
  • Stanford University, Stanford, and University of California, San Francisco, CA - Douglas K. Owens, M.D., M.S.
  • Tufts University--New England Medical Center, Boston, MA - Joseph Lau, M.D.
  • University of Alberta, Edmonton, Alberta, Canada - Terry P. Klassen, M.D., M.Sc., FRCPC & Brian H. Rowe, M.D., M.Sc., CCFP(EM), FCCP
  • University of Connecticut, Storrs, CT - C. Michael White, Pharm.D.
  • Minnesota Evidence-based Practice Center, Minneapolis, MN - Robert L. Kane, M.D. and Timothy J. Wilt, M.D., M.P.H.
  • University of Ottawa, Ottawa, Canada - David Moher, Ph.D.
  • Vanderbilt University Medical Center, Nashville, TN - Katherine Hartmann, M.D., Ph.D.


Background on Comparative Effectiveness Reviews

Comparative Effectiveness Reviews (CERs) are a key component of the Effective Health Care (EHC) Program. They provide building blocks to support evidence-based practice and decision making. They seek to answer important questions about treatments or diagnostic tests, to help clinicians and patients choose the best treatments and tests, and to help health care policymakers make informed decisions about health care services and quality improvement.

CERs are a type of systematic review, which synthesizes the available scientific evidence on a specific topic. CERs expand the scope of a typical systematic review, which focuses on the effectiveness of a single intervention, by comparing the relative benefits and harms among a range of available treatments or interventions for a given condition. In doing so, CERs more closely parallel the decisions facing clinicians, patients and policymakers, who must choose among a variety of alternatives in making diagnostic, treatment, and health care delivery decisions.

Comparative Effectiveness Reviews follow the explicit principles of systematic reviews. The first essential step is to carefully formulate the problem, selecting questions that are important to patients and other health care decision makers and examining how well the scientific literature answers them. Studies that measure health outcomes (events or conditions that the patient can feel, such as disability, quality of life, or death) are given more weight than studies of intermediate outcomes, such as a change in a laboratory measure. Studies that measure benefits and harms over extended periods of time are usually more relevant than studies that examine outcomes over short periods.

Second, CERs explicitly define what types of research studies provide useful evidence and apply empirically tested search strategies to find all relevant studies. For some questions, such as the efficacy of a drug, reviews may focus on the results of randomized controlled trials. For other questions, or to compare results of trials with those from everyday practice, observational studies may play a key role. The hallmark of the systematic review process is the careful assessment of the quality of the collected evidence, with greater weight given to studies following methods that have been shown to reduce the likelihood of biased results. Although well-done randomized trials generally provide the highest quality evidence, well-done observational studies may provide better evidence when trials are too small, too short, or have important methodological flaws.
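
The synthesis step itself is often quantitative: when included studies report comparable effect estimates, reviewers may pool them, giving more weight to more precise studies. The short Python sketch below illustrates fixed-effect inverse-variance pooling, a standard meta-analytic technique; it is offered only as an illustration of the weighting idea, and the study estimates in it are hypothetical, not drawn from any actual review.

    # Minimal sketch of fixed-effect inverse-variance pooling, a standard
    # meta-analytic technique. Effect estimates and standard errors below
    # are hypothetical, not taken from any actual review.
    import math

    # (effect estimate, standard error) for each included study
    studies = [(0.30, 0.10), (0.25, 0.15), (0.40, 0.20)]

    # Each study is weighted by the inverse of its variance, so larger,
    # more precise studies contribute more to the pooled estimate.
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    # 95% confidence interval for the pooled effect
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"Pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")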

A third critical step is to consider whether studies performed in carefully controlled research settings (efficacy studies) are applicable to the patients, clinicians, and settings for which the review is intended. Several factors may limit the generalizability of results from efficacy studies. Patients are often carefully selected, excluding those who are sicker or older and those who have trouble adhering to treatment, and racial and ethnic minorities may be underrepresented. Efficacy studies also often use regimens and follow-up protocols that maximize benefits and limit harms but that may be impractical in usual practice. Effectiveness studies, by contrast, are conducted in practice-based settings, use less stringent eligibility criteria, and assess longer-term health outcomes; they are intended to provide results that are more applicable to “average” patients but remain much less common than efficacy studies. A comparative effectiveness review examines the efficacy data thoroughly, so that decision makers can assess the scope, quality, and relevance of the available data, and points out areas of clinical uncertainty. Clinicians can judge the relevance of the study results to their practice and should note where gaps remain in the available scientific information; identified gaps can also provide important insight to organizations that fund research.

Finally, CERs aim to present benefits and harms for different treatments and tests in a consistent way so that decision makers can fairly assess the important tradeoffs involved in different treatment or diagnostic strategies. Expressing benefits in absolute terms (for example, a treatment prevents one event for every 100 treated patients) is more meaningful than presenting results in relative terms (for example, a treatment reduces events by 50%). These reviews also highlight where evidence indicates that benefits, harms, and tradeoffs differ for distinct patient groups (for example, high- vs. low-risk patients). Reviews do not attempt to set a standard for how results of research studies should be applied to patients or settings that were not represented in the studies. With or without a comparative effectiveness review, those are decisions that must be informed by clinical judgment.
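
The arithmetic behind that distinction is simple enough to show directly. The Python sketch below uses a hypothetical 2% control-group event rate, chosen so that a 50% relative risk reduction corresponds to the “one event prevented per 100 treated patients” example above; the rates are assumptions for illustration, not data from any review.

    # Hypothetical illustration of relative vs. absolute effect measures.
    # The event rates below are assumptions chosen to match the example
    # in the text, not results from any actual study.
    control_rate = 0.02   # 2% of untreated patients experience the event
    treated_rate = 0.01   # 1% of treated patients experience the event

    rrr = (control_rate - treated_rate) / control_rate  # relative risk reduction
    arr = control_rate - treated_rate                   # absolute risk reduction
    nnt = 1 / arr                                       # number needed to treat

    print(f"Relative risk reduction: {rrr:.0%}")   # 50%
    print(f"Absolute risk reduction: {arr:.1%}")   # 1.0%
    print(f"Number needed to treat:  {nnt:.0f}")   # one event prevented per 100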

In the context of developing recommendations for practice, comparative effectiveness reviews are useful because they define the strengths and limits of the evidence, clarifying which interventions are supported by strong evidence from clinical studies and which issues remain less certain. Comparative effectiveness reviews do not contain recommendations and do not tell readers what to do: judgment, reasoning, and the values of the relevant parties (patients, clinicians, decision makers, and society) must also play a role in decision making. Users of a comparative effectiveness review must also keep in mind that “not proven” does not mean “proven ineffective”: if the evidence supporting a specific intervention is weak (that is, the strength of evidence is judged to be low or insufficient), it does not follow that the intervention is ineffective. The quality of the evidence on effectiveness is a key component, but not the only component, in making decisions about clinical policies. Additional factors include acceptability to physicians and patients, the potential for unrecognized harms, the consequences of deferring decisions until better evidence becomes available, applicability of the evidence to practice, and considerations of equity and justice.

CERs are written for an audience of clinical decision makers. The text should be simple, clear, and as free as possible of the jargon of systematic reviews. Although CERs may be used in a variety of settings, the primary users are likely to be clinicians appointed by organizations or public agencies to make recommendations for the use of treatments, diagnostic tests, or other interventions. Payers and insurers may use them to make clinical and group policy decisions on benefits and coverage, and professional groups may base their clinical practice guidelines on them. Experts in informed consumer decision making can use the reports to develop decision aids and other tools that consumers can use to choose among alternative diagnostic and therapeutic strategies.


Reports

  • Key Questions for Research Reviews - A set of questions meant to guide the research review process.
  • Draft Report - A pre-publication version of the Full Research Review that is open for public review and comment for a short period of time.
  • Final Research Review - The completed review of research.
