|
Measuring Client and Program Needs and Outcomes in a Changing Service Environment
Integrating Research and Clinical Assessment
July, 1998
Michael L. Dennis, Ph.D.
Chestnut Health Systems
Sections
- Author's Note
- Executive Summary
- Introduction and Goals
- Changing Environment and Pressure for Integrating Clinical Assessment
- The Problem with Unintegrated Clinical Assessment
- Kinds of Clinical Assessment Needed
- Dimensions of Variation in Integrated Clinical Assessment Systems
- Availability and Appropriateness of Existing Measures
- Limitations of Current Research and Practices
- Recommendations and Potential Next Steps
- References
- Appendix A
Author's Note
This paper was written as an initial overview of the topic and the issues involved. To maintain a manageable length, it is not a comprehensive review, nor does it include every topic or example originally considered.
The author wishes to acknowledge comments received on this manuscript from Mark Godley, Susan Godley, and Bill White, as well as assistance in preparing this manuscript from Joan Unsicker. This issue paper was produced for the National Institute on Drug Abuse’s (NIDA’s) Resource Center for Health Services Research under NIDA contract No. N01DA-5-6050. The opinions expressed herein are solely those of the author and do not represent official positions of NIDA or the U.S. Government.
Executive Summary
Substance abuse treatment researchers, clinicians, and clients are often interested in many of the same questions. Unfortunately, they often go about collecting information to answer them in ways that are highly redundant but different enough to make it difficult to meet each other's needs. Interest in developing more integrated forms of clinical assessment also has been stimulated by pressure on the field to demonstrate accountability, treatment effectiveness, matching, and a continuum of care. The specific goals of this issue paper are to (a) review the changing environment and pressure for integrating clinical assessment; (b) describe the problems with unintegrated clinical assessment; (c) identify the kinds of assessment needed; (d) examine dimensions of variation in integrated clinical assessment systems; (e) review the availability and appropriateness of existing measures; (f) discuss the limitations of current research and practices; and (g) suggest specific recommendations and potential next steps to advance this area. The paper draws on both the literature and the author's extensive experience in trying to integrate clinical and research assessment as part of the Drug Outcome Monitoring System in Illinois, Target Cities evaluations in Chicago and New Orleans, and the Madison County Drug Court evaluation, as well as several randomized field experiments evaluating the effectiveness of interventions in practice. The overarching goal of this paper, however, is to identify the issues that must be addressed to develop a productive paradigm of integrated assessment for clinical practice, quality improvement/administration, and health services research.
Introduction and Goals
Substance abuse treatment researchers, clinicians, and clients often are interested in many of the same questions and go about collecting information to answer these questions in ways that are highly redundant but often different enough to make it difficult to meet each other's needs. Interest in developing more integrated forms of clinical assessment currently is being stimulated by several major changes (e.g., managed care, changes in diagnostic/placement criteria, shift to outcome monitoring/contracting) that are increasing pressure on the field to use assessment to help demonstrate accountability, treatment effectiveness, and matching on a continuum of care.
The specific goals of this issue paper are to:
- Review the changing environment and pressure for integrating clinical assessment;
- Describe the problem with unintegrated clinical assessment;
- Identify the kinds of assessment needed;
- Examine dimensions of variation in integrated clinical assessment systems;
- Review the availability and appropriateness of existing measures;
- Discuss the limitations of current research and practices;
- Suggest specific recommendations and potential next steps to advance this area.
The paper draws on both the literature and the author's extensive experience in trying to integrate clinical and research assessment as part of the Drug Outcome Monitoring System in Illinois, Target Cities evaluations in Chicago and New Orleans, and the Madison County Drug Court evaluation, as well as several randomized field experiments evaluating the effectiveness of interventions in practice. The overarching goal of this paper, however, is to identify the issues that must be addressed to develop a productive paradigm of integrated assessment for clinical practice, quality improvement/administration, and health services research.
Changing Environment and Pressure for Integrating Clinical Assessment
The idea of integrating clinical assessment and research is not new. In fact, the best ways to classify disorders and match clients to treatment have been discussed for over a hundred years (Babor et al., 1992; White, 1996, 1998). What is different is a greater degree of volatility in the current environment that is leading to changes in the assessment system and demands that it be more integrated, efficient, and accurate. Some of the more dramatic changes in the past few years include the following:
- New Diagnostic and Statistical Manual (DSM) IV (American Psychiatric Association [APA], 1994): The criteria for substance use disorders (abuse and dependence) have been simplified and standardized across substances; substance-induced disorders have been shifted to be subsets of the main conditions (e.g., anxiety, sleep disorders, mood disorders) and are now treated immediately instead of waiting for the client to become abstinent.
- Shift to matching and a continuum of care: Several states, including Illinois, and some private groups are mandating the use of standardized patient placement criteria, autonomous or semi-autonomous intake staff, and continuous matching along a continuum of care throughout treatment (e.g., American Society of Addiction Medicine [ASAM], 1994, 1996; Department of Alcoholism and Substance Abuse [DASA], 1995, 1996; Texas Commission on Alcohol and Drug Abuse, 1996).
- Shift to managed care: Medicaid and other publicly funded substance abuse treatment programs are directly or indirectly affected by general welfare reform, health care reforms, and mandates to shift publicly insured clients to managed care systems (Geraty, 1995; Hall, 1996; Hall & Flynn, 1997).
- Shift to behavioral managed health care: In the private and public sectors, there is a corresponding shift to collapse alcohol, drug abuse, and mental health problems into a single state/local provider agency (e.g., Illinois' new Department of Human Services) and "carve out" a managed health care or capitation plan (Cesare-Murphy, McMahill, & Schyve, 1997; Commission on Accreditation of Rehabilitation Facilities [CARF], 1996).
- Shift to capitated levels of funding: Many states are experimenting with capitating the amount they will pay to a provider from a specific fund by client, type of care, and/or diagnosis (e.g., Illinois is experimenting with capitating the annual reimbursement from its main funding mechanism by level of care; DASA, 1995).
- Introduction and then loss of Supplemental Security Income (SSI) and Social Security Disability Insurance (SSDI): In the past 7 years, clients in recovery became more widely eligible for SSI/SSDI, then had their eligibility limited if they had a "payee" to monitor their use of these funds. In 1996, they lost eligibility if addiction was considered to be a "contributing factor" in the determination of their disability status (P.L. 104-121).
- Shift to outcome monitoring: Outcome monitoring is the common ground being encouraged by both accrediting agencies and Congress to resolve conflicts between managed care and providers on lengths of stay and services to be provided, as well as between managed care and clinical researchers, who are losing traditional sources of funding through overall cutbacks at the very time the field is demanding more from them (e.g., Agnew, 1996; Joint Commission on the Accreditation of Healthcare Organizations [JCAHO], 1995).
Compounding matters further, virtually every state and federal initiative to reform welfare includes one or more provisions targeted at substance users, particularly mothers or pregnant women. Many include employment provisions that may or may not recognize that chronic substance users often have other disabilities and/or barriers (e.g., AIDS, hepatitis, lack of a high school degree, criminal justice record, multiple children, interrupted work histories, no housing) that limit their ability to accomplish this without substantial assistance (e.g., Senay, Dorus, & Joseph, 1981; Dennis, Karuntzos, McDougal, French, & Hubbard, 1993; Dennis, Fairbank, et al., 1995).
For treatment providers, these changes have introduced a steady stream of requests for increasingly sophisticated assessment information for decision making and accountability. Also, many small providers who lack the necessary resources to comply have been absorbed by regional providers; in turn, these regional providers have consolidated into larger state or interstate providers or treatment systems. Another reason for this consolidation is the likelihood that dollars will be issued in large "system-based" contracts. Where this happens, the management of such treatment systems also has significant internal demands for standardized assessment to allow for clinical oversight and risk management.
The Problem with Unintegrated Clinical Assessment
The general purpose of clinical assessment is to collect information to define problems, understand how they are related to each other, and make decisions on how to proceed. For administrators, it is important that this information be documented in a standardized way that can be used for cost monitoring, accreditation, regulatory review, and quality improvement efforts. Using multiple standardized assessments can meet these requirements regardless of the degree of integration. The main advantage of integrating them is reduced time and, consequently, reduced cost and client burden.
In a system with unintegrated assessment, a client might be screened initially by a case manager, school counselor, social worker, or probation/parole worker to decide if he or she should be referred. This might include reviewing the record, administering and interpreting a screener, and submitting referral recommendations in writing. The client might then go through a centralized or standardized intake program for a treatment network. This typically would involve reviewing prior materials, (re)collecting basic demographics, and checking for substance use disorder diagnoses and placement criteria such as the degree of intoxication, withdrawal, medical problems, psychological problems, treatment acceptance/resistance, relapse potential, environment, legal issues, and vocational issues. It would result in a preliminary diagnosis, level of care placement recommendations, perhaps some tentative recommendations about additional services or assessment needed (e.g., a psychiatric follow-up), and generation of transmittal documents. When the client arrived at the actual treatment program, the staff probably would start yet another assessment. They would (re)cover the demographics and substance use disorder diagnoses, then try to get more detailed diagnoses for other common problems (e.g., depression, anxiety, attention-deficit disorder, conduct disorder), collect dimensional measures of problem severity in the same areas as above, and seek information on treatment needs for treatment planning. If the client were then recruited for a health services research study, he or she might go through another 2 to 6 hours of assessment covering all of the above (which often was not sufficiently standardized for research) as well as a variety of other topics. Rather than having each wave of assessment build on the earlier ones, staff typically would re-cover the same areas.
This process is redundant and time consuming. Moreover, information often is not documented sufficiently for later clinical or research use by other staff. It is an approach that costs staff and client time and is very unfriendly to the consumer (who is asked the same or very similar questions repeatedly). As in other areas of treatment, providers increasingly are being asked to do much more clinical assessment with considerably fewer resources. In addition, consumers at all levels (e.g., individuals, employers, the public) are demanding more accountability (i.e., value for a unit of service or outcome) and user-friendliness from both their service providers and managed care organizations (Hall & Flynn, 1997; Manderscheid, 1996).
One final important caveat is warranted. Each wave of clinical assessment might take 10 to 90 minutes, perhaps 1-3 hours overall. Comprehensive research assessments often take 2 to 6 hours, do not cover many of the areas in the clinical assessment (e.g., diagnosis, placement criteria, reporting requirements), and offer no panacea to this problem. What is needed is truly a different set of instrumentation than has existed to date.
Kinds of Clinical Assessment Needed
Although considerable consensus exists that the current system is too disaggregated, the shape of the next generation of more integrated assessments is still evolving. In general, there is some agreement that an integrated assessment should be more standardized and accurate, because it will be used for some combination of screening, diagnosis, placement, treatment planning, and outcome monitoring. Some of the specific areas to be addressed by this new generation of integrated assessment include:
- Presentation of the problem, basic demographics, and insurance: Why the person is seeking treatment or being evaluated; basic information about gender, age, custody, race, marital status, and insurance coverage (which can affect both what assessment is required and/or what will be reimbursed).
- Substance use disorders: The level (e.g., abuse, dependence), severity (e.g., moderate, severe), which specific substances are involved, current frequency/intensity of use, current state of intoxication, state/risk of withdrawal, and imminent risk of relapse (APA, 1994).
- Other behavioral health conditions: Identification of general mental distress and specific diagnostic conditions related to depression, anxiety, somatic complaints, post/acute traumatic stress disorders, attention deficit, conduct disorder, personality disorders, pathological gambling (particularly when there is gang involvement), and severe mental illness (e.g., psychoses).
- Other health conditions: Identification of general health distress and specific conditions related to disabilities, pregnancy, infectious diseases (e.g., human immunodeficiency virus, hepatitis, tuberculosis, and sexually transmitted diseases), and other problems (e.g., allergies, dental problems, injuries, convulsions/neurological problems, heart/blood/circulatory problems, asthma/breathing problems, tumors/cancer, diabetes/thyroid problems, vitamin deficiencies/anemia, digestive problems, sexual/fertility problems, female/male specific problems, bone/muscle/foot problems, or skin problems); whether they will require any accommodation or new or continuing services; and whether they have one or more family history risk factors (e.g., adoption, alcohol use, drug use, heart/blood problems, diabetes, and emotional problems).
- Risk behaviors: Measurement of behaviors that put the client at risk of disease transmission, health problems, and/or injury (e.g., needle use, sexual practices, tobacco use, impulse control problems, and going without food) and/or those that indicate readiness for change or protective measures (e.g., needle cleaning, condom use, exercise, testing, and counseling).
- Environment and living situation: Including where and with whom the client is living, risk of homelessness, substance use in the home, time in a controlled environment (which affects the interpretation of almost everything else), children/parental/family interaction/functioning, risk from living/vocational/social environments (e.g., substance use, illegal activity, violence, employment, and treatment experience), degree of traumatic victimization, other sources of psychosocial stress, social support, and satisfaction with the above.
- Legal situation: Including illegal activity, arrests, current criminal and civil justice status, and past range of charges (particularly driving under the influence [DUI]).
- Vocational situation: Including educational grades, special education, grades completed, degrees, school attendance/problems, military experience/discharge, civilian work experience, current employment, work attendance/problems, vocational status, financial problems, sources of income, personal/family income, and number of financial dependents.
- Client preferences/motivation/barriers: What the client is expecting and/or wants to get from treatment, the client's motivation for treatment, and barriers or other problems that make client/clinical goals more difficult (e.g., other commitments, family/others hostile to recovery, and treatment that is too demanding).
- Treatment planning questions: Including client and counselor questions related to the need for specific services, problems to be addressed in treatment, issues that need to be monitored in treatment (e.g., current medications or past problems), sexual preference, religious preference, and cultural modifications that need to be made to make treatment "accessible" to the client.
- Outcome monitoring measures: Short, face-valid measures that can be used both at intake and later to chart individual progress, evaluate programs, measure change, and provide "norms" or benchmarks against which to evaluate clients or programs (including case mix adjustments).
- Service utilization: Including past history and current use of substance abuse treatment, mental health treatment, physical health treatment, child welfare systems, other public aid, arrests, probation, jail/prison, detention, or parole as part of both the biopsychosocial history and the economic outcomes of treatment that might demonstrate change.
- Quality checks: Including the degree of cognitive impairment, literacy, misunderstanding, denial, and misrepresentation that might affect the quality of the assessment.
- Reduced time/burden: Including the elimination of duplicative questions, shortening of scales, removal of unused questions, and allowing the client to skip long problem-specific measures when a screener or question suggests that the problem is highly unlikely (e.g., days of heroin use after the client has reported no lifetime use).
- Increased accuracy: Use of multiple symptoms in lay language to increase construct validity, checks on reliability through internal consistency (or test-retest over short periods of time), time bounding to increase sensitivity to change, and checks on predictive validity relative to treatment planning and evaluation.
- Scoring and interpretative guidelines: Spelling out how to interpret scores and their prognostic significance for treatment planning and outcomes, ideally including norms both overall and for major subpopulations.
Note that issues of scientific rigor and/or reporting requirements cut across these areas. Similarly, we are interested in these issues both at the individual level for treatment planning and at the group level for program planning.
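The "reduced time/burden" item above, gating long problem-specific modules behind a screener, can be sketched as simple branching logic. This is only an illustration: the module names, response keys, and the distress cutoff of 3 are all hypothetical, not drawn from any real instrument.

```python
# Hypothetical sketch of screener-driven skip logic: short screening
# items gate the longer problem-specific modules, so a client who
# clearly does not have a problem is never asked the detailed questions.

def modules_to_administer(responses):
    """Return the assessment modules a client actually completes."""
    completed = ["demographics", "substance_screener"]
    # Skip the detailed heroin module when the client reports no
    # lifetime use (the text's own example of a skippable measure).
    if responses.get("lifetime_heroin_use", False):
        completed.append("heroin_detail")
    # A brief distress screener gates the full psychiatric module
    # (the cutoff of 3 is invented for this sketch).
    if responses.get("distress_screener_score", 0) >= 3:
        completed.append("psychiatric_detail")
    return completed

# A client with no heroin history and low distress answers far fewer items.
light = modules_to_administer(
    {"lifetime_heroin_use": False, "distress_screener_score": 1})
heavy = modules_to_administer(
    {"lifetime_heroin_use": True, "distress_screener_score": 5})
```

The same pattern extends to the other screeners discussed in this section (e.g., triggering a full psychiatric assessment only when warranted), which is where most of the time savings come from.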
Dimensions of Variation in Integrated Clinical Assessment Systems
Not all systems cover all of the above issues, and in fact, most do not; there are many different approaches and levels of integrated clinical assessment. To communicate what one means when describing a particular approach, it is helpful to do so along a number of dimensions, each of which represents a continuum that has extremes. Any one approach may fall at different points along each dimension and/or have aspects that span the entire dimension. The dimensions include:
- Purpose and use of the results: Varying from the evaluation of a theoretical concept as part of a scientific study, to use of results for a system or accreditation body, to a focus on treating or evaluating individuals on a case-by-case basis.
- Population to be studied: Varying from the very homogeneous populations that are typical of controlled trials, to special populations with specific needs (e.g., pregnant/postpartum women), to the more heterogeneous populations typical of most programs.
- Interventions to be considered or evaluated: Varying from assessing whether a "given" program can serve the individual, to which of a group of programs might best serve the individual, to which of several services might best fit the client.
- Types of measures: Varying from intensive observational studies, to collateral assessments, to self-report, to physiological/laboratory assessments (with the latter having much less bias, but also less precision and information).
- Prognostic significance of measures: Varying from measures that try to differentiate several related problems, to those that try to define the presence of potentially overlapping problems, to those that try to screen for a subset of people for whom further assessment is warranted (i.e., potentially high false positives).
- Reliability and validity: Varying from those that have high reliability/internal consistency across a related set of face-valid indicators, to those that are designed to measure subtle correlations, to those that are simply face valid.
- Level of analyses: Varying from those designed for use in treating or monitoring individuals to those designed primarily for analysis at a group level for program planning.
- Immediacy of analysis: Varying from within minutes of the assessment (e.g., for admission) to those designed for later uses (e.g., monitoring progress, program evaluation, reporting requirements).
- Audience(s) for the results: Varying from the research community, to payers, to program staff, to the clients themselves.
The last of these actually may be one of the most critical dimensions in determining the type of clinical assessment system that is undertaken. At one extreme, the audience is primarily the research community, which reviews and essentially approves or disapproves of the methods and findings. There is a hope that practicing clinicians will incorporate lessons learned into their practice, but often there is no direct link between the two. The audience for the emerging outcome monitoring data comprises funders, accreditation bodies, administrators, service providers, and consumers. The most demanding audience for integrating research and clinical assessment, however, is clinicians. The problem has less to do with what they want than when they want it. For an assessment to be of use to them, it has to be immediately scored, interpreted, and related to guidelines about its implications for diagnosis, placement, and treatment planning.
Availability and Appropriateness of Existing Measures
Quality of Measures
The substance abuse treatment field needs short, efficient, reliable, and scientifically valid measures of the relevant constructs, available both in individual form and in more comprehensive batteries (i.e., combinations of individual measures). Measures can include a combination of self-report, observation, collateral report, and physiological/laboratory measures. For the sake of efficiency, they also would be set up with screeners that could lead, where appropriate, to more intensive/expensive/invasive assessments (e.g., full psychiatric assessments). Both clinical staff and researchers agree (Bollen & Lennox, 1991; Dennis, Huebner, & McLellan, 1996; JCAHO, 1997) that these measures ideally should be:
- Face valid (unless they are specifically being used to measure denial or misrepresentation);
- Statistically sound (items with test-retest reliability of .6+, scales with internal consistencies of .7+, or items/scales that correlate .7+ with the targeted outcome behaviors);
- Developed, validated, and/or normed on a similar population.
It also is useful to select and format measures with an eye toward (a) achieving brevity/parsimony, (b) balancing response burden against analytic precision, (c) minimizing training/processing errors under often less than optimal field conditions, and (d) having an appropriate comparison group or norms. In terms of the latter, provider associations spend much more time than researchers being concerned with the base to which a number should be compared. In accrediting outcome measures, for instance, the JCAHO (1995) asks one question about reliability and validity but over a dozen about who will be included and/or excluded in calculating statistics based on the measure and/or for comparisons. This is because the difference caused by including or excluding such groups can be many times larger than simple measurement error.
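The statistical criteria above can be checked directly from item-level pilot data. Below is a minimal sketch of the two statistics for which thresholds are cited: test-retest reliability as a correlation between two administrations of an item, and internal consistency as Cronbach's alpha. The toy data and variable names are invented for illustration.

```python
# Minimal sketch of the two reliability statistics cited in the text:
# test-retest reliability (correlation between two administrations of
# the same item) and internal consistency (Cronbach's alpha).
from statistics import mean, pvariance

def pearson(x, y):
    """Correlation between two administrations (test-retest reliability)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pvariance(x) ** 0.5 * pvariance(y) ** 0.5)

def cronbach_alpha(items):
    """Internal consistency of a scale.

    `items` is a list of per-item response lists, aligned across
    respondents (items[i][j] = respondent j's answer to item i).
    """
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

# Toy 3-item scale with 5 respondents, plus a retest of the first item.
items = [[1, 2, 3, 4, 5], [2, 2, 3, 5, 5], [1, 3, 3, 4, 4]]
retest = [1, 2, 4, 4, 5]
alpha = cronbach_alpha(items)  # target from the text: .7+ for scales
r = pearson(items[0], retest)  # target from the text: .6+ for items
```

With these toy data both statistics clear the cited thresholds; in practice they would be computed over an actual pilot sample and reported alongside the norming and comparison-base information discussed above.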
Individual Measures Available
Although literally hundreds of standardized measures are available, finding one that has been used with a particular population to evaluate a condition, a matching decision, and/or changes in a particular behavior can be a difficult undertaking. To make matters worse, many resource books include copies of various instruments but do not give their original references or summarize their psychometric or substantive properties. Information about individual measures, however, is becoming increasingly available through both compendiums (Addiction Research Foundation [ARF], 1994; Allen & Columbus, 1995; Bausell, 1991; Coughlin, 1997; Fischer & Corcoran, 1994a, 1994b; Friedman, Granick, Sowder, & Soucy, 1994; Jenkinson, 1994; McDowell & Newell, 1987; Rounsaville, Tims, Horton, & Sowder, 1993; Sederer & Dickey, 1996) and the Internet (www.arf.org; www.chesnut.org; ericae.net/; www.medsch.ucla.edu/som/npi/DARC/; www.ibr.tcu.edu/; www.jcaho.org/; www.nida.nih.gov).
There has been considerable recent development in the use of clinical assessment for "outcome monitoring." These approaches vary considerably in their purpose and types of clinical assessment (Affholter, 1994). Some repeatedly assess clients to chart progress (e.g., Kazdin, 1993, 1996; Ogles, Lambert, & Masters, 1996), whereas others focus on linking assessment to quality improvement (Salzer, Nixon, & Schut, 1997; Waxman, 1994), managed care applications (Brill, Lish, & Grissom, 1995; Geraty, 1995; Kongstuedt, 1996; Ross, 1997), or the use of report cards and performance indicators (Dickey, 1995; Kramer, Daniels, & Mahesh, 1996). Also, a growing number of texts address measurement and analytic issues relevant to the use of clinical assessment in practice; these texts are aimed at staff in treatment settings rather than at researchers (Docherty & Streeter, 1995; Hogmann, 1995; Sechrest, McNight, & McNight, 1996; Tonigan, 1995).
Measurement Batteries
Many assessment batteries, individual measures, and/or diagnostic measures already have been used for both research and clinical purposes. Unfortunately, many of the available measures are highly redundant, contain unnecessary scales, are expensive, offer no formal support (important for clinical use), and do not comprehensively address diagnostic or placement criteria (e.g., APA, 1994; ASAM, 1994, 1996) or accreditation requirements (CARF, 1996; DASA, 1996; JCAHO, 1995; Office of Applied Studies [OAS], 1994). The oldest and most widely used "multiple domain" assessment battery is the Addiction Severity Index (ASI) (McLellan et al., 1985). Although there are few problems with the content of the ASI, this instrument does not fully address current diagnostic or reporting requirements (which it predates by more than 10 years), service utilization, or issues related to several special populations such as adolescents, needle users, or pregnant women. Because it is an interview schedule (versus a more literal survey), the reliability of the ASI is also very sensitive to the investment in training and quality assurance. As a result of its being in the public domain and receiving the personal encouragement of McLellan, the ASI has served as the foundation for many subsequent assessment batteries that have tried to address these issues (e.g., Dennis, 1998; Dennis et al., 1993, 1996; Flynn et al., 1995; Kaminer, 1991; Kaminer et al., 1997; McLellan et al., 1992; Meyer et al., 1995). Still others are based on the Drug Abuse Reporting Program (DARP) and early AIDS research (e.g., Simpson, 1992), on NIDA's research on adolescents (e.g., Radhert, 1991; Winters & Henley, 1989), or on the precursors of ASAM's patient placement criteria (e.g., Mee-Lee et al., 1992). All of these subsequent measures have moved in the direction of writing out the full questions and response sets to allow clerical or even self-administration (although interpretation is clinically or scientifically driven).
The author’s Global Appraisal of Individual Needs (GAIN) (Dennis, 1998; Dennis, Rourke, Caddell, Karuntzos, Bossert, & Ingram, 1993; Dennis, Webber, et al., 1996) has evolved from research funded by NIDA, NIAAA, and CSAT explicitly to develop a standardized biopsychosocial model of assessment that integrates screening criteria for referral, diagnostic criteria based on DSM-IV (APA, 1994), placement criteria based on PPC-2 (ASAM, 1996), treatment planning based on JCAHO (1997), reporting requirements based on the minimum client data set (OAS, 1994), dimensional severity measures based on symptom counts, and individual items and outcome monitoring based on the Drug Outcome Monitoring Systems (DOMS) (Dennis et al., 1995). Although the GAIN is already in use in several systems, it is far behind the others in terms of having any kind of publicly or commercially funded support. Attached to this document is an appendix that provides a summary list of these substance abuse treatment assessment measurement batteries, the extent to which they cover each of these areas (screening, diagnosis, placement, treatment planning, reporting requirements, severity measures, outcome measures), the level of skill required for administration, and brief comments from the author on their strengths and weaknesses. It also includes a list of several other individual and diagnostic measures that also are used widely for both research and clinical practice.
Norms, Benchmarks, and Case Mix Adjustments
Both general methodological reviews (e.g., Lipsey, 1990) and those targeted more specifically at substance abuse health services research (Dennis, 1994; Dennis, Huebner, & McLellan, 1996; Dennis, Lennox, & Foss, 1997) regularly identify the need to examine reliability and validity. But in clinical practice, outcome monitoring, and demonstration evaluation, it is equally important to have norms, benchmarks, and/or case mix adjustments. This is because there often is no control or comparison group and/or because client composition is a major source of variation when comparing the performance of two treatment units or of a single unit over time. In prior research (Dennis, Ingram, Burks, & Rachal, 1994), for instance, we found that an accelerated admissions process for methadone led to the program's 6-month retention rate dropping from 91% to 79%. However, most of this change was the result of the grant increasing the percentage of clients on public assistance (from 69% to 88%), because these clients averaged 8% to 12% lower retention. Unfortunately, very limited normative data are available. Detailed norms by subpopulations (e.g., pregnant women, inpatients, youths) are still only partially available for common instruments like the ASI (McLellan et al., 1992) and/or must be purchased commercially, as with the RAATE (Mee-Lee et al., 1992). Moreover, the majority of what is available focuses on cross-sectional properties and does not address either sensitivity to change or norms for the expected rate of change.
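The retention example above can be recast as a simple direct case-mix adjustment: re-weight stratum-specific rates to a fixed "standard" client mix so that a shift in who was admitted is not mistaken for a change in program performance. The stratum-specific rates below are hypothetical, chosen only to roughly reproduce the reported 79% overall figure; only the two mix percentages come from the text.

```python
# Hypothetical sketch of direct case-mix adjustment: weight
# stratum-specific retention rates by a fixed "standard" client mix.
# The stratum rates below are made up for illustration.

def weighted_rate(stratum_rates, mix):
    """Overall retention implied by stratum rates under a given case mix."""
    return sum(stratum_rates[s] * w for s, w in mix.items())

# Hypothetical period 2 stratum-specific 6-month retention rates.
period2_rates = {"public_assistance": 0.77, "other": 0.94}
standard_mix = {"public_assistance": 0.69, "other": 0.31}  # period 1 mix
observed_mix = {"public_assistance": 0.88, "other": 0.12}  # period 2 mix

crude = weighted_rate(period2_rates, observed_mix)     # roughly the observed 79%
adjusted = weighted_rate(period2_rates, standard_mix)  # at the earlier mix
```

Under these assumptions the adjusted rate sits well above the crude one, i.e., a sizable share of the apparent decline disappears once the shift in client composition is held constant.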
Limitations of Current Research and Practices
Separate Paths
Substance abuse clinical practice and research have evolved substantially in the past 25 years on two separate but overlapping paths. The rapid expansion of methadone maintenance followed reports of research on its effectiveness, but then led to almost immediate concerns that the model was not being replicated or working as well in the community (Dole & Joseph, 1978; Dole & Nyswander, 1965). This and subsequent "revelations" have led to several efforts to create national reporting systems (NIDA, 1982, 1989; OAS, 1993), create national and statewide studies to evaluate the effectiveness of publicly funded treatment (Anglin, Speckart, Booth, & Ryan, 1989; Etheridge, Craddock, Dunteman, & Hubbard, 1995; Gerstein et al., 1994, 1997; Hubbard et al., 1989; Rounds-Bryant et al., 1996; Sells & Simpson, 1976), examine the effectiveness of specific services in practice (Blaine & Renault, 1976; Dennis et al., 1993, 1994; Fuller, Branchey, & Brightwell, 1986; Higgins et al., 1991; Howard, Moras, Brill, Martinovich, & Lutz, 1996; McLellan, Arndt, Woody, & Metzger, 1993; McLellan, Woody, Luborsky, & Goehl, 1988), and study the interaction of client characteristics with treatment effectiveness both in terms of initial matching and changes over time (Anglin, Hser, Grella, Longshore, & Pendergast, 1997; McLellan, Luborsky, Woody, O’Brien, & Druley, 1983; Miller et al., 1995; Project MATCH Research Group, 1993; Rounsaville, Weissman, Crits-Christoph, Wilber, & Kleber, 1982; Simpson & Savage, 1980).
The treatment system, in contrast, has been much more focused on issues of whether to follow a disease or "dependence" model (Edwards & Gross, 1976; Jellinek, 1960; cf. De Leon & Jainchill, 1986; De Leon, Melnick, Schoket, & Jainchill, 1993); whether to include related problems and role failure among the conditions requiring treatment (Drummond, 1990, 1992); and the development of standards for diagnosis (e.g., DSM-IV, APA, 1994), patient placement (e.g., PPC-2, ASAM, 1996), and provision of care (JCAHO, 1995). It is useful to note that the major national evaluations above did not incorporate direct measures of the current reporting requirements or areas of assessment that are in common use throughout the United States and many other countries. Conversely, the published clinical standards did not use any data from the national substance abuse treatment system evaluations (though they did use some other studies), nor did they identify instruments for implementing the standards. Over the past 25 years, however, there has been an increasing trend toward the development of scientifically rigorous measures that attempt to capture these clinical issues (e.g., DIS, DISC, GAIN, SCID, RAATE).
Limitations of Research for Practice
Tightly controlled "efficacy" studies typically have been conducted under special conditions that are difficult, if not impossible, to generalize to practice (Dennis, 1994; Howard et al., 1996). These paradigms typically use clinical trials to compare two manual-based interventions for a particular problem, eliminating from consideration anyone who has multiple problems, transfers, or reenters treatment (Dennis, Godley, et al., under review; Goldfried & Wolfe, 1996). Some of the common clinical and health service issues this paradigm fails to address include:
- High rates of comorbidity that exist in most publicly funded treatment programs;
- Clients who transfer between levels of care or who are readmitted multiple times;
- Variation in treatment utilization patterns due to individualization, "clinical guidelines," client preferences, funding limitations, and the client's "response to treatment";
- The need to study cost, cost-effectiveness, cost-offsets, and full benefit-costs in practice;
- The need to identify clinical subgroups to control for case mix differences;
- The need to study treatment-by-client interactions and the effectiveness of various matching rules.
Even many of the treatment "effectiveness" studies done to date with more heterogeneous populations, programs, staff, and measures have been difficult to interpret for practice because they lacked scientific rigor and conceptual foundations (Dennis et al., 1996). For instance, a methodological meta-analysis of 168 treatment studies (Lipsey, Crosse, Dunkle, Pollard, & Stobart, 1985) found that:
- Less than 30% even considered the sensitivity of their assessment or outcome measures, and only a handful took the basic steps to improve what they did have;
- Less than 30% mentioned, let alone measured or used data on, treatment protocol implementation or "dosage";
- Despite the complexity of their interventions, over 69% of the studies lacked any theory, "logic model," or hypotheses about why they might work (most focused on labels, components, process outcomes, or strategies for gaining participation) or interact with client characteristics/severity/comorbidity;
- Whereas 77% involved heterogeneous populations and multidimensional, multiple-exposure interventions, more than 84% analyzed the independent variable as a categorical dichotomy (e.g., using a chi-square or analysis of variance) using only the outcome data (vs. incorporating baseline characteristics and/or treatment received data).
Combined with statistical power of 33% to 45% found in other meta-analyses of treatment studies (Dennis et al., 1996; Dennis et al., 1997; Lipsey, 1990) and the failure to measure or use measures of co-occurring problems, it is hardly surprising that researchers have had difficulty discriminating the relative effectiveness of different types of treatment or how they interact with client characteristics/severity.
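To see why power in the 33% to 45% range is so damaging, a minimal normal-approximation sketch can be used (a two-tailed, two-sample z test at alpha = .05; an illustration, not the specific calculations used in the cited meta-analyses): a study with 100 clients per group has well under a 50% chance of detecting a small effect.

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_group_power(d, n_per_group):
    """Approximate power of a two-tailed, two-sample z test (alpha = .05)
    for a standardized mean difference d with n_per_group subjects per arm."""
    z_crit = 1.959964  # two-tailed critical value for alpha = .05
    ncp = d * math.sqrt(n_per_group / 2.0)  # expected value of the z statistic
    return (1.0 - normal_cdf(z_crit - ncp)) + normal_cdf(-z_crit - ncp)

# A "small" effect (d = 0.2) with 100 clients per group yields power near 30%,
# whereas a "medium" effect (d = 0.5) would be detected over 90% of the time.
print(round(two_group_power(0.2, 100), 2), round(two_group_power(0.5, 100), 2))
```

With power near 30%, a real but small treatment difference will be declared "nonsignificant" roughly two times out of three, which is consistent with the field's difficulty discriminating among treatments.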
Limits of Clinical Practice for Research
Conversely, although clinicians collect information in most of the same areas, they emphasize the speed of collection and interpretation rather than the reliability and validity of their "notes" as any kind of record that could be used for research. It is important to recognize that just as researchers have not been doing a perfect job by their own standards, neither have clinicians. In our recent evaluation of clinical practices in several systems (Dennis, Godley, et al., under review), we found several sources of waste in the current clinical assessment system, including:
- Redundant assessments, in terms of both multiple clinical interviews covering the same ground and multiple measures asking about overlapping areas;
- Collection of information that is not really used;
- Failure to get the full use from the assessment;
- The need to do or redo additional documentation to meet managed care requirements because of the inadequacy of the original "documentation";
- Lack of consistency in questions and/or documentation to allow reliable analysis of program-level needs or outcomes.
For example, it would not be uncommon for someone coming from a jail to treatment to be assessed in the jail, then to be sent to a central intake unit for further assessment, then to be sent to a specific facility for a third assessment, then to be assigned to a level of care, and then to have to start over again when he or she finally meets with a primary counselor. Not only are clients asked similar questions several times, but this process may occur within a 1- to 7-day period (particularly with criminal justice clients). Clients complain that no one seems to hear what they say, let alone know what they want. When reviewing notes of earlier assessments, staff rarely can figure out what was actually asked or what the answer was; sometimes, they cannot even read the notes. Each person collects data for his or her own requirements. Managed care organizations actually have exacerbated this situation by requiring even more "documentation" that is extraneous to clinical decision making, failing to provide clear or consistent guidelines on what they want, and making providers repeatedly justify decisions on paper as a mechanism of "discouraging" them from recommending or accessing further services.
Useful information is often in the file but not always used, because only a few clinical staff generally know how to read or interpret standardized tests or understand their implications for treatment. This is not limited to paraprofessional or "recovering" staff. Medical staff may underdiagnose psychiatric comorbidity and/or fail to prescribe related pharmacological or behavioral interventions relative to what would be expected from the literature. In the State of Illinois, for instance, the rate of reporting a co-occurring mental health problem averages only 6.3% (Gillespie, 1997). The published literature, however, suggests that over half of the people with substance use disorders have co-occurring mental problems (Kessler et al., 1994) and that over 65% of those presenting for treatment have co-occurring mental problems (Ross, Glaser, & Germanson, 1988).
Limitations of Outcome Monitoring
The fastest growing area of integrated clinical assessment is outcome monitoring, which has been touted as a potential panacea to the debate on the value of cutting costs by cutting services (vs. waste). Given that so much is riding on this effort, it is unfortunate that initial efforts have been scientifically lacking. In our examination (Dennis, Godley, et al., under review) of over four dozen academic and commercial approaches to outcome monitoring, we found many flaws, including:
- A focus on "soft" outcomes (e.g., satisfaction, processing time), only a few dimensions of functioning, or psychometrically weak scales (e.g., low reliability, low sensitivity to change);
- Grossly inadequate follow-up rates for public clients (e.g., 20-40%);
- No attention to statistical power and/or low power (e.g., under 30-50%);
- Inadequate, unlinked, and unevaluated case mix adjustments (which are essential for nonexperimental comparisons);
- Partial or no linkage with clinical guidelines for diagnosis, placement, treatment planning, or economic outcomes;
- A limited range of clients (particularly coverage of adolescents, criminal justice clients, pregnant women, and clients with multiple comorbid conditions) matched to limited programs or services;
- A limited range of types/levels of treatment, movement along a continuum of care, or high likelihood of readmissions;
- Largely external operations that were neither integrated into nor designed to help clinical staff do their work or do in-treatment outcome monitoring.
Although JCAHO’s Oryx requirements (Cesare-Murphy et al., 1997) to institute outcome monitoring go into effect in 1998, by the summer of 1997 there were only six providers approved to do outcome monitoring for behavioral health programs, and one of these was listed as a closed/internal system. The remaining five were offering primarily variations of the ASI, RAATE, and SCL-90.
Different Approaches to Confidentiality
A final major issue that needs to be resolved in integrating research and clinical assessment is the way in which confidentiality is approached. Researchers typically try to assure clients in their "informed consents" that their answers will be used only for research, often obtain certificates of confidentiality to prevent records from being used in court, and separate all client identifiers from their main analysis files (which contain only an otherwise meaningless research identifier). To varying degrees, they often are willing to share this more anonymous form of data. Clinicians, in contrast, need unique identifiers in their records to deal effectively with their clients, their clients’ collaterals, and/or other agencies involved in their clients’ lives. Although they make the same privacy assurances in their informed consents, clinicians address this need to share information by requesting that the client sign a series of "releases" allowing various individuals and agencies to share information. This process increasingly has become an essential practice as managed care and other payers are demanding more documentation and information as part of their approval process (i.e., requiring information on the explicit criteria met versus a simple diagnosis). It is noteworthy that whereas many researchers would "reject" such a release, under the terms of the Public Health Service certificates of confidentiality this actually is the client’s prerogative (not the researcher’s), and researchers are "required" to comply under the Federal Food, Drug, and Cosmetic Act (21 U.S.C. 301) and regulations thereunder (21 CFR). The author’s approach to addressing this issue has been to say that the information collected will be used only for "treatment or to evaluate our services" and that information will not be shared with others "unless you provide us a release."
Recommendations and Potential Next Steps
Integrating clinical and research assessment is clearly an area of rapid growth and also one in desperate need of more scientific leadership/involvement. At the same time, it is an area where "general" scientific approaches/resources may be inadequate and in need of further development. The final goal of this paper is to recommend potential next steps where NIDA, other agencies, researchers, and providers can start advancing the field.
- Identification and access of resources: Identify clinical assessment resources and how to access the instruments, and their norms and properties, through the use of the Internet and publications.
- Evaluation of resources: Use funding mechanisms like R21s (e.g., National Institute on Alcohol Abuse and Alcoholism [NIAAA]) to encourage more methodological studies of the available resources and/or to develop new ones that address gaps.
- Norms, benchmarks, and case mix adjustments: Encourage the development and dissemination of norms, benchmarks, and case mix adjustments that can be used to improve both statistical power and interpretability in both experimental and nonexperimental studies.
- Comparative studies: Sponsor comparative studies to examine the impact of issues related to integrating assessment (e.g., variation in assessment method, who does the assessment, when the assessment is done, scale length, staff-client interpretability).
- Outcome monitoring: Call for increased research on the emerging paradigm of outcome monitoring and how it is related to clinical assessment and economic consequences.
- Cross-training: Increase opportunities for cross-training of both clinicians and researchers on the issues related to integrating clinical and research assessments, for both new staff (e.g., T32s) and existing staff (e.g., K awards, postgraduate training).
- Organizational studies: Call for increased studies of the organizational, management, and political issues involved in doing combined assessment, and provide managed care entities with access to the information as well.
- Opportunities for collaboration: Solicit center grants or policy grants to foster collaboration between large treatment systems and researchers to address a range of issues related to integration (e.g., standardization across populations, levels of care, movement along a continuum of care).
References
Achenbach, T. M., & Edelbrock, C. S. (1983). Manual for the child behavior
checklist and revised child behavior profile. Burlington, VT: University of Vermont,
Department of Psychiatry.
Achenbach, T. M., & Edelbrock, C. S. (1987). Manual for youth self report and
profile. Burlington, VT: University of Vermont, Department of Psychiatry.
Addiction Research Foundation. (1994). Directory of client outcome measures for
addictions treatment programs. Toronto, Canada: Author.
Affholter, D. P. (1994). Outcome monitoring. In J. S. Wholey, H. P. Hatry, & K. E.
Newcomer (Eds.), Handbook of practical program evaluation (pp. 96-118). San
Francisco: Jossey-Bass.
Agency for Health Care Policy and Research. (1997, June 13; RFA HS-98-003). Health care
quality improvement and quality assurance research. NIH Guide, 26 (20) [electronic
document].
Agency for Health Care Policy and Research. (1997, August 1; P.T. 34; K.W. 0730050,
0730021, 0730023). Opportunity for cooperative research and development agreements and
other public-private partnerships. NIH Guide, 26 (25) [electronic document].
Agnew, B. (1996). Clinical research: Looking for help in unusual places. Journal of
NIH Research, 8 (7), 21-22.
Allen, J. P., & Columbus, M. (Eds.). (1995). Assessing alcohol problems.
Treatment Handbook Series 4. Bethesda, MD: National Institute on Alcohol Abuse and
Alcoholism.
Allen, J. P., & Kadden, R. M. (1995). Matching clients to alcohol treatments. In R.
K. Hester & W. R. Miller (Eds.), Handbook of alcoholism treatment approaches:
Effective alternatives (2nd ed., pp. 278-292). Boston: Allyn and Bacon.
American Psychiatric Association. (1994). Diagnostic and statistical manual of
mental disorders (4th ed.). Washington, DC: Author.
American Society of Addiction Medicine. (1994). Principles of addiction medicine.
Chevy Chase, MD: Author.
American Society of Addiction Medicine. (1996). Patient placement criteria for the
treatment of psychoactive substance disorders (2nd ed.). Chevy Chase, MD: Author.
Anglin, M. D., Hser, Y. I., Grella, C. E., Longshore, D., & Pendergast, M. L.
(1997). Drug treatment careers: Conceptual overview and application to DATOS. Los
Angeles: Drug Abuse Research Center, University of California.
Anglin, M. D., Speckart, G. R., Booth, M. W., & Ryan, T. M. (1989). Consequences
and costs of shutting off methadone. Addictive Behaviors, 14, 307-326.
Annis, H. M., & Graham, J. M. (1988). Situational Confidence Questionnaire
(SCQ-39) user's guide. Toronto, Canada: Addiction Research Foundation.
Babor, T. F., de la Fuente, J. R., Saunders, J., & Grant, M. (1992). AUDIT: The
Alcohol Use Disorders Identification Test. Geneva, Switzerland: World Health
Organization.
Babor, T. F., Dolinsky, Z. S., Meyer, R. E., Hesselbrock, M., Hofmann, M., &
Tennen, H. (1992). Types of alcoholics: Concurrent and predictive validity of some common
classification schemes. British Journal of Addictions, 87, 1415-1431.
Bausell, R. B. (1991). Advanced research methodology: A guide to resources.
Metuchen, NJ: Scarecrow Press.
Beck, A. T. (1990). Beck Anxiety Inventory (BAI). San Antonio, TX: The
Psychological Corporation (1-800-228-0752).
Beck, A. T. (1996). Beck Depression Inventory II (BDI-II). San Antonio, TX: The
Psychological Corporation (1-800-228-0752).
Blaine, J. D., & Renault, P. F. (Eds.). (1976). Rx: 3x/week LAAM alternative to
methadone (Research Monograph 8). Rockville, MD: National Institute on Drug Abuse.
Bollen, K., & Lennox, R. (1991). Conventional wisdom on measurement: A structural
equation perspective. Psychological Bulletin, 110, 305-314.
Brill, P. L., Lish, J. D., & Grissom, G. R. (1995, September/October). Timing is
everything: Pre-post versus concurrent measurement. Behavioral Healthcare Tomorrow,
pp. 76-77.
Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, R. P., & Kaemmer, B.
(1989). Manual for administration, scoring, and interpretation: MMPI-2.
Minneapolis, MN: University of Minnesota.
Butcher, J. N., Williams, C. L., Graham, J. R., Archer, R. P., Tellegen, A.,
Ben-Porath, Y. S., & Kaemmer, B. (1992). Manual for administration, scoring, and
interpretation: MMPI-A. Minneapolis, MN: University of Minnesota Press.
Cesare-Murphy, M., McMahill, C., & Schyve, P. (1997). Joint Commission evaluation
of behavioral health care organizations. Evaluation Review, 21 (3), 322-329.
Commission on Accreditation of Rehabilitation Facilities. (1996). 1996 Standards
manual and interpretive guidelines for behavioral health. Tucson, AZ: Author.
Coughlin, K. M. (Ed.). (1997). The 1997 behavioral outcomes and guidelines sourcebook.
New York: Faulkner & Gray, Inc.
De Leon, G., & Jainchill, N. (1986). Circumstance, motivation, readiness, and
suitability as correlates of treatment tenure. Journal of Psychoactive Drugs, 18 (3),
203-208.
De Leon, G., Melnick, G., Schoket, D., & Jainchill, N. (1993). Is the therapeutic
community culturally relevant? Findings on race/ethnic differences in retention in
treatment. Journal of Psychoactive Drugs, 25 (1), 77-86.
Dennis, M. (1998). Global Appraisal of Individual Needs (GAIN) Manual.
Bloomington, IL: Chestnut Health Systems (www.chestnut.org/CYT/GAIN).
Dennis, M. L. (1994). Ethical and practical randomized field experiments. In J. S.
Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program
evaluation (pp. 155-197). San Francisco: Jossey-Bass.
Dennis, M. L., Fairbank, J. A., Caddell, J. M., Bonito, A. J., Rourke, K. M., Woods, M.
G., Rachal, J. V., Bossert, K., & Burke, M. (1995). Individualized substance abuse
counseling (ISAC) manual. Bloomington, IL: Lighthouse Publications.
Dennis, M. L., Godley, S. H., Scott, C. K., Senay, E. C., Bailey, J., & Bokos, P.
J. (under review). Drug Outcome Monitoring Systems (DOMS): Developing a new paradigm for
health services research. Journal of Mental Health Administration.
Dennis, M. L., Huebner, R. B., & McLellan, A. T. (1996). Methodological issues
in treatment services research. Paper commissioned by the Panel on Effectiveness and
Outcomes, Subcommittee on Health Services Research National Advisory Council on Alcohol
Abuse and Alcoholism, of the National Institute on Alcohol Abuse and Alcoholism.
Rockville, MD: NIAAA.
Dennis, M. L., Ingram, P. W., Burks, M. E., & Rachal, J. V. (1994). Effectiveness
of streamlined admissions to methadone treatment: A simplified time-series analysis. Journal
of Psychoactive Drugs, 26, 207-216.
Dennis, M. L., Karuntzos, G. T., McDougal, G., French, M. T., & Hubbard, R. L.
(1993). Developing training and employment programs to meet the needs of methadone
treatment clients. Evaluation and Program Planning, 16 (2), 73-86.
Dennis, M. L., Lennox, R. D., & Foss, M. A. (1997). Practical power analysis for
substance abuse health services research. In K. J. Bryant, M. Windle, & S. G. West
(Eds.), The science of prevention: Methodological advances from alcohol and substance
abuse research (pp. 367-404). Washington, DC: American Psychological Association.
Dennis, M. L., Rourke, K. M., Caddell, J. M., Karuntzos, G., Ingram, P., & Bossert,
K. (1993). Global Appraisal of Individual Needs (GAIN): Version 10/93. Research
Triangle Park, NC: Research Triangle Institute.
Dennis, M. L., Webber, R., White, W., Senay, E., Adams, L., Bokos, P., Eisenberg, S.,
Fraser, J., Moran, M., Ravine, E., Rosenfeld, J., & Sodetz, A. (1996). Global
Appraisal of Individual Needs (GAIN): Version 12/96. Bloomington, IL: Lighthouse
Institute, Chestnut Health Systems.
Department of Alcoholism and Substance Abuse. (1995). DARTS users manual for
treatment counselors, prevention specialists, and computer entry personnel.
Springfield, IL: Author.
Department of Alcoholism and Substance Abuse. (1996). Substance abuse treatment
regulations. Chicago: Author.
Derogatis, L. R., Lipman, R. S., Rickels, K., Uhlenhuth, E. H., & Covi, L. (1974).
The Hopkins symptom checklist: A self-report symptom inventory. Behavioral Science, 19,
1-15.
Dickey, B. (1995). The development of report cards for mental health care. In L. I.
Sederer & B. Dickey (Eds.), Outcomes assessment in clinical practice (pp.
156-160). Baltimore: Williams & Wilkins.
Docherty, J. P., & Streeter, M. J. (1995). Measuring outcomes. In L. I. Sederer
& B. Dickey (Eds.), Outcomes assessment in clinical practice (pp. 8-18).
Baltimore: Williams & Wilkins.
Dole, V. P., & Joseph, H. (1978). Long-term outcome of patients treated with
methadone maintenance. Annals of the New York Academy of Sciences, 311, 181-189.
Dole, V. P., & Nyswander, M. E. (1965). A medical treatment for diacetyl-morphine
(heroin) addiction. Journal of the American Medical Association, 193, 80-84.
Drummond, D. C. (1990). The relationship between alcohol dependence and alcohol-related
problems in a clinical population. British Journal of Addiction, 85, 357-366.
Drummond, D. C. (1992). Problems and dependence: Chalk and cheese or bread and butter?
In M. Lader, G. Edwards, D. C. Drummond (Eds.), The nature of alcohol and drug-related
problems (pp. 61-82). New York: Oxford University Press.
Edwards, G., & Gross, M. M. (1976). Alcohol dependence: Provisional description
of a clinical syndrome. British Medical Journal, 1, 1058-1061.
Etheridge, R. M., Craddock, S. G., Dunteman, G. H., & Hubbard, R. L. (1995).
Treatment services in two national studies of community-based drug abuse treatment
programs. Journal of Substance Abuse, 7, 9-26.
Fischer, J., & Corcoran, K. (1994a). Measures for clinical practices: A source
book, Volume 1 on couples, families and children (2nd ed.). New York: The Free Press.
Fischer, J., & Corcoran, K. (1994b). Measures for clinical practices: A source
book, Volume 2 on adults (2nd ed.). New York: The Free Press.
Flynn, P. M., Hubbard, R. L., Luckey, J. W., Forsyth, B. H., Smith, T. K., Phillips, C.
D., Fountain, D. L., Hoffman, J. A., & Koman, J. J. (1995). Individual Assessment
Profile (IAP). Standardizing the assessment of substance abusers. Journal of Substance
Abuse Treatment, 12 (3), 213-221.
Freeman, M. A., & Trabin, T. (1994). Managed behavioral healthcare: History,
models, key issues, and future course. Rockville, MD: U.S. Center for Mental Health
Services.
Friedman, A. S., Granick, S., Sowder, B. J., & Soucy, G. P. (1994). Assessing
drug abuse among adolescents and adults: Standardized instruments (Clinical Report
Series, NIH Pub. No. 94-3757). Rockville, MD: National Institute on Drug Abuse.
Fuller, R. K., Branchey, L., & Brightwell, D. R. (1986). Disulfiram treatment of
alcoholism. Journal of the American Medical Association, 256, 1449-1455.
Geraty, R. D. (1995). The use of outcomes assessment in managed care: Past, present,
and future. In L. I. Sederer & B. Dickey (Eds.), Outcomes assessment in clinical
practice (pp. 129-138). Baltimore: Williams & Wilkins.
Gerstein, D. R., Ingles, J., Datta, R., Talley, K., Jordan, K., Schildhaus, S.,
Johnson, R., Rasinski, K., Taylor, J., Bacellar, H., Anderson, D., Phillips, D., Collins,
J., Condelli, W., Ciftan, E., & Rohde, F. (1997). National Treatment Improvement
Evaluation Study (NTIES) main findings. Rockville, MD: SAMHSA Center for Substance
Abuse Treatment.
Gerstein, D. R., Johnson, R. A., Harwood, H., Fountain, D., Suter, N., & Malloy, K.
(1994). Evaluating recovery services: The California drug and alcohol treatment
assessment, General report. Submitted to the State of California Department of Alcohol
and Drug Programs. Chicago: National Opinion Research Center.
Gillespie, S. (1997). Special runs from the DARTS FY 96 data base. Chicago:
Illinois Department of Alcohol and Drug Abuse.
Goldfried, M. R., & Wolfe, B. A. (1996). Psychotherapy practice and research:
Repairing a strained alliance. American Psychologist, 51, 1007-1016.
Grant, B. F., & Hasin, D. S. (1992). The alcohol use disorders and associated
disabilities interview schedule. Rockville: National Institute on Alcohol Abuse and
Alcoholism.
Hall, L. L. (1996). Report cards accelerate quality and accountability: Impact of
managed care on severe mental illness: The role of report cards, consumers, and
family members. Behavioral Healthcare Tomorrow, 5 (3), 57-61.
Hall, L. L., & Flynn, L. M. (1997). NAMI's managed care report card. Evaluation
Review, 21 (3), 352-356.
Hamilton, M. (1967). Development of a rating scale for primary depressive illness. British
Journal of Social and Clinical Psychology, 6, 278-296.
Hasin, D., Dietz-Trautman, K., Grant, B., & Endicott, J. (1994). PRISM. New
York: New York State Psychiatric Institute.
Hathaway, S. R., & McKinley, J. C. (1951). MMPI manual. New York:
Psychological Corporation.
Heather, N., Gold, R., & Rollnick, S. (1991). Readiness to change questionnaire:
User's manual (Technical Report 15). Kensington, Australia: National Drug and
Alcohol Research Centre, University of New South Wales.
Higgins, D. L., Galavotti, C., O'Reilly, K. R., Schuell, D. J., Moore, M., Rug, D. L.,
& Johnson, R. (1991). Evidence for the effects of HIV antibody counseling and testing
on risk behaviors. Journal of the American Medical Association, 266 (17),
2419-2429.
Hohmann, A. A. (1995). Measurement sensitivity in clinical mental health services
research: Recommendation for the future. In L. I. Sederer & B. Dickey (Eds.), Outcomes
assessment in clinical practice (pp. 161-168). Baltimore: Williams & Wilkins.
Howard, K. I., Brill, P. L., Lueger, R. J., O'Mahoney, M. T., & Grissom, G. R.
(1995). Compass OP: Psychometric properties. King of Prussia, PA: Compass
Information Services, Inc.
Howard, K. I., Moras, K., Brill, P. L., Martinovich, Z., & Lutz, W. (1996).
Evaluation of psychotherapy: Efficacy, effectiveness, and patient progress. American
Psychologist, 51, 1059-1064.
Hubbard, R. L., Marsden, M. E., Rachal, J. V., Harwood, H. J., Cavanaugh, E. R., &
Ginzburg, H. M. (1989). Drug abuse treatment: A national study of effectiveness.
Chapel Hill, NC: University of North Carolina Press.
Jellinek, E. M. (1960). The disease concept of alcoholism. New Haven, CT:
College and University Press.
Jenkinson, C. (1994). Measuring health and medical outcomes. Lynn, England: UCL
Press.
Joint Commission on the Accreditation of Healthcare Organizations. (1994). A guide
to establishing programs for assessing outcomes in clinical settings. Oakbrook
Terrace, IL: Author.
Joint Commission on the Accreditation of Healthcare Organizations. (1995). Accreditation
manual for mental health, chemical dependency, and mental retardation/ developmental
disabilities services. Vol. 1, Standards. Oakbrook Terrace, IL: Author.
Joint Commission on the Accreditation of Healthcare Organizations. (1997). Oryx: The
next evolution in accreditation. Oakbrook Terrace, IL: Author.
Kaminer, Y. (1991). The Teen Addiction Severity Index (T-ASI): Rationale and
reliability. International Journal of the Addictions, 26, 219-226.
Kaminer, Y., Blitz, C., Burleson, J. A., & Sussman, J. (1997). The teen treatment
services review (T-TSR). Journal of Substance Abuse Treatment, 14, 1-10.
Kazdin, A. E. (1993). Evaluation in clinical practice: Clinically sensitive and
systematic methods of treatment delivery. Behavior Therapy, 24, 11-45.
Kazdin, A. E. (Ed.). (1996). Evaluation in clinical practice (Special Series). Clinical
Psychology: Science and Practice, 3, 144-181.
Kessler, R. C., McGonagle, K. A., Zhao, S., Nelson, C., Hughes, M., Eshleman, S.,
Wittchen, H., & Kendler, K. (1994). Lifetime and 12-month prevalence of DSM-III-R
psychiatric disorders in the United States. Archives of General Psychiatry, 51,
8-19.
Kongstvedt, P. R. (1996). The managed health care handbook. Gaithersburg, MD:
Aspen.
Kramer, T., Daniels, A., & Mahesh, N. (1996). Performance indicators in
behavioral healthcare. Portola Valley, CA: Institute for Behavioral Healthcare.
Larsen, D. L., Attkisson, C. C., Hargreaves, W. A., & Nguyen, T. D. (1979).
Assessment of client/patient satisfaction: Development of a general scale. Evaluation
and Program Planning, 2, 197-207.
Lipman, R. S., Covi, L., & Shapiro, A. K. (1979). The Hopkins Symptom Checklist
(HSCL). Factors derived from the HSCL-90. Journal of Affective Disorders, 1, 9-24.
Lipsey, M. W. (1990). Design sensitivity: Statistical power for experimental
research. Newbury Park, CA: Sage.
Lipsey, M. W., Crosse, S., Dunkle, J., Pollard, J., & Stobart, G. (1985).
Evaluation: The state of the art and the sorry state of the science. New Directions for
Program Evaluation, 27, 7-28.
Manderscheid, R. (1996). Report cards accelerate quality and accountability: A report
card for accountability to consumers. Behavioral Healthcare Tomorrow, 5 (3), 57-61.
Mayfield, D., McLeod, G., & Hall, P. (1974). The CAGE questionnaire: Validation of
a new alcoholism screening instrument. American Journal of Psychiatry, 131, 1121-1123.
McDowell, I., & Newell, C. (1987). Measuring health: A guide to rating scales
and questionnaires. New York: Oxford Press.
McLellan, A. T., Alterman, A. I., Cacciola, J., Metzger, D., & O'Brien, C. P.
(1992). A new measure of substance abuse treatment: Initial studies of the treatment
services review. Journal of Nervous and Mental Disease, 180 (2), 101-110.
McLellan, A. T., Arndt, I. O., Woody, G. E., & Metzger, D. (1993). Psychosocial
services in substance abuse treatment: A dose-ranging study of psychosocial services. Journal
of the American Medical Association, 269 (15), 1953-1959.
McLellan, A. T., Kushner, H., Peters, F., Smith, I., Corse, S. J., & Alterman, A.
I. (1992). The Addiction Severity Index ten years later. Journal of Substance Abuse
Treatment, 9, 199-213.
McLellan, A. T., Luborsky, L., Cacciola, J., Griffith, J., McGahan, P., & O'Brien,
C. P. (1985). Guide to the Addiction Severity Index: Background, administration, and
field testing results (DHHS Publication No. ADM 88-1419). Rockville, MD: National
Institute on Drug Abuse.
McLellan, A. T., Luborsky, L., Woody, G. E., O'Brien, C. P., & Druley, K. A.
(1983). Predicting response to alcohol and drug abuse treatments: Role of psychiatric
severity. Archives of General Psychiatry, 40, 620-628.
McLellan, A. T., Woody, G. E., Luborsky, L., & Goehl, L. (1988). Is the counselor
an "active ingredient" in substance abuse rehabilitation? An examination of
treatment success among four counselors. Journal of Nervous and Mental Disease, 176,
423-430.
McLellan, A. T., Woody, G. E., Luborsky, L., O'Brien, C. P., & Druley, K. A.
(1983). Increased effectiveness of substance abuse treatment: A prospective study of
patient-treatment "matching." Journal of Nervous and Mental Disease, 171 (10),
597-605.
Medical Outcomes Trust (1997a). Behavioral and Symptom Identification Scale (BASIS
32). Boston, MA: Author. <http://www.outcomes-trust.org/catalog/b32.htm>
Medical Outcomes Trust (1997b). MOS-HIV Health Survey. Boston, MA: Author.
<http://www.outcomes-trust.org/catalog/moshiv.htm>
Mee-Lee, D. (1988). An instrument for treatment progress and matching: The recovery
attitude and treatment evaluator (RAATE). Journal of Substance Abuse Treatment, 5,
183-186.
Mee-Lee, D., Hoffmann, N. G., & Smith, M. B. (1992). Recovery Attitude and
Treatment Evaluator (RAATE) manual (2nd ed.). St. Paul, MN: New Standards.
Myers, M. G., & Brown, S. A. (1996). The adolescent relapse coping questionnaire:
Psychometric validation. Journal of Studies on Alcohol, 57 (1), 40-46.
Meyers, K., McLellan, A. T., Jaeger, J. L. & Pettinati, H. M. (1995). The
development of the Comprehensive Addiction Severity Index for Adolescents (CASI-A). An
interview for assessing multiple problems of adolescents. Journal of Substance Abuse
Treatment, 12 (3), 181-193.
Miller, G. A. (1985). The Substance Abuse Subtle Screening Inventory (SASSI):
Manual. Bloomington, IN: Spencer Evening World.
Miller, W. R. (1989). Matching individuals with interventions. In R. K. Hester & W.
R. Miller (Eds.), Handbook of alcoholism treatment approaches: Effective alternatives (pp.
261-272). New York: Pergamon Press.
Miller, W. R. (1991). Form 90: Structured assessment interview for drinking and
related behavior, Project Match Manual. Rockville, MD: National Institute on Alcohol
Abuse and Alcoholism.
Miller, W. R., Brown, J. M., Simpson, T. L., Handmaker, N. S., Bien, T. H., Luckie, L.
F., Montgomery, H. A., Hester, R. K., & Tonigan, J. S. (1995). What works? A
methodological analysis of the alcohol treatment outcome literature. In R. K. Hester &
W. R. Miller (Eds.), Handbook of alcoholism treatment approaches: Effective
alternatives (2nd ed., pp.12-44). Boston: Allyn and Bacon.
Miller, W. R., Tonigan, J. S., & Longabaugh, R. (1995). The Drinker
Inventory of Consequences (DrInC): An instrument for assessing adverse consequences of
alcohol abuse. Test manual (NIAAA Project MATCH Monograph Series, Vol. 4; NIH Pub.
No. 95-3911). Washington, DC: U.S. Government Printing Office.
Millon, T. (1977). Millon Clinical Multiaxial Inventory (MCMI) manual. Minneapolis,
MN: National Computer Systems.
Moos, R. (1974). Family Environment Scale, Form R. Palo Alto, CA: Consulting
Psychologists Press.
National Institute on Drug Abuse. (1982). Data from the client oriented data
acquisition process (CODAP): Trend report, Jan. 1978-Sept. 1981 [DHHS Pub. No. (ADM)
82-1214]. Rockville, MD: Author.
NDATUS. (1992). Highlights from the 1991 National Drug and Alcoholism Treatment Unit
Survey (NDATUS). Rockville, MD: Office of Applied Studies, SAMHSA.
Office of Applied Studies. (1993). Client data system. Rockville, MD: Author.
Office of Applied Studies. (1994). National Household Survey on Drug Abuse: Main
findings for 1993. Rockville, MD: Author.
Ogles, B. M., Lambert, M. J., & Masters, K. S. (1996). Assessing outcome in
clinical practice. Boston: Allyn & Bacon.
Project MATCH Research Group. (1993). Project MATCH: Rationale and methods for a
multisite clinical trial matching alcoholism patients to treatment. Alcoholism:
Clinical and Experimental Research, 17 (6), 1130-1145.
Rahdert, E. (Ed.). (1991). Adolescent assessment/referral system manual.
Washington, DC: National Institute on Drug Abuse.
Robins, L. N., Cottler, L. B., Bucholz, K., & Compton, W. M. (1996). Diagnostic
Interview Schedule, Version IV. St. Louis, MO: Washington University.
Ross, E. C. (1997). Managed behavioral health care premises, accountable systems of
care, and AMBHA's PERMS. Evaluation Review, 21 (3), 318-321.
Ross, H. E., Glaser, F. B., & Germanson, T. (1988). The prevalence of psychiatric
disorders in patients with alcohol and other drug problems. Archives of General
Psychiatry, 45, 1023-1031.
Rounds-Bryant, J. L., Kristiansen, P. L., Fairbank, J. A., Caddell, J. M., Hubbard, R.
L., Fletcher, B., & Tims, F. (1996). Characteristics of adolescents entering drug
abuse treatment: Preliminary findings. A poster presented at the 104th Annual
Convention of the American Psychological Association, Toronto, Canada.
Rounsaville, B. J., Tims, F. M., Horton, A. M., & Sowder, B. J. (1993). Diagnostic
source book on drug abuse research and treatment. Rockville, MD: National Institute on
Drug Abuse.
Rounsaville, B. J., Weissman, M. M., Crits-Christoph, K., Wilber, C., & Kleber, H.
D. (1982). Diagnosis and symptoms of depression in opiate addicts. Archives of General
Psychiatry, 39, 151-156.
Salzer, M. S., Nixon, C. T., & Schut, L. J. A. (1997). Validating quality
indicators: Quality as relationship between structure, process, and outcome. Evaluation
Review, 21 (3), 292-309.
Shaffer, D., Fisher, P., & Lucas, C. (1997). Diagnostic Interview Schedule for
Children (DISC). New York: Columbia University.
Sechrest, L., McKnight, P., & McKnight, K. (1996). Calibration of measures for
psychotherapy outcome studies. American Psychologist, 51 (10), 1065-1071.
Sederer, L., & Dickey B. (1996). Outcomes assessment in clinical practice.
Baltimore: Williams & Wilkins.
Sells, S. B., & Simpson, D. D. (Eds.). (1976). The effectiveness of drug abuse
treatment (Vol. 4). Cambridge, MA: Ballinger.
Selzer, M. L. (1971). The Michigan Alcoholism Screening Test: The quest for a new
diagnostic instrument. American Journal of Psychiatry, 127, 1653-1658.
Senay, E. C., Dorus, W., & Joseph, M. (1981). Evaluating service needs in drug
abuse clients. International Journal of the Addictions, 16, 709.
Simpson, D. D. (1992). TCU/DATAR forms manual: Drug Abuse Treatment for AIDS-Risk
Reduction (DATAR). Fort Worth: Texas Christian University, Institute of Behavioral
Research.
Simpson, D. D., & Savage, L. J. (1980). Drug abuse treatment readmissions and
outcomes: Three year follow-up of DARP patients. Archives of General Psychiatry, 37,
896-901.
Skinner, H. A. (1982). The Drug Abuse Screening Test (DAST). Addictive Behaviors, 7,
363-371.
Spitzer, R. L., Williams, J. B. W., Gibbon, M., & First, M. (1992). The structured
clinical interview for DSM-III-R, 1: History, rationale, and description. Archives of
General Psychiatry, 49, 624-629.
Sullivan, J. T., Sykora, K., Schneiderman, J., Naranjo, C. A., & Sellers, E. M.
(1989). Assessment of alcohol withdrawal: The revised Clinical Institute Withdrawal
Assessment for Alcohol scale (CIWA-AR). British Journal of Addiction, 84,
1353-1357.
Texas Commission on Alcohol and Drug Abuse. (1996). Agency strategic plan for the
1997-2001 period. Austin, TX: Author.
Tonigan, J. S. (1995). Outcome evaluation: Issues in alcohol treatment outcome assessment.
In J. P. Allen & M. Columbus (Eds.), Assessing alcohol problems. Treatment Handbook
Series 4 (pp. 143-154). Bethesda, MD: National Institute on Alcohol Abuse and
Alcoholism.
Ware, J. E., & Sherbourne, C. D. (1992). The MOS 36-item Short-Form Health Survey
(SF-36): I. Conceptual framework and item selection. Medical Care, 30, 473-483.
Waxman, H. M. (1994). An inexpensive hospital-based program for outcome evaluation. Hospital
and Community Psychiatry, 45, 160-162.
White, W. L. (1996). Pathways from the culture of addiction to the culture of
recovery (2nd ed.). Center City, MN: Hazelden.
White, W. L. (1998). Slaying the dragon: The history of addiction treatment and
recovery in America. Bloomington, IL: Lighthouse Institute, Chestnut Health Systems
(www.chestnut.org).
Winters, K. C., & Henly, G. A. (1989). Personal Experience Inventory (PEI) test
and manuals. Los Angeles, CA: Western Psychological Services.
World Health Organization. (1996a). Schedules for Clinical Assessment in
Neuropsychiatry, Version 2.1. Geneva: World Health Organization, Mental Health
Division.
World Health Organization. (1996b). Composite International Diagnostic Interview,
Version 2.0. Geneva: World Health Organization, Mental Health Division.
Wu, A. W., Revicki, D. A., Jacobson, D., & Malitz, F. E. (1997). Evidence for
reliability, validity and usefulness of the Medical Outcomes Study HIV Health Survey (MOS
HIV). Quality of Life Research, 6, 481-493.
Appendix A
*Administration Skill Level:
- L (Low): preprinted questions and responses that can be self- or clerically administered
- M (Medium): clinical schedule of issues to cover that may require training on critical concepts and limited probing
- H (High): requires detailed clinical judgment/ratings, extensive probing, or extensive quality assurance in order to achieve reliability
Area ratings: X = yes; x = some.

Multiple Domain Measures

| Instrument | Screening | Diagnosis | Placement | Reporting | Tx Planning | Severity Measure | Outcome | Admin. Skill Level* | Author's Comment |
|---|---|---|---|---|---|---|---|---|---|
| Addiction Severity Index (ASI) (McLellan et al., 1985) | x | | x | x | X | X | X | M | Advantages: one of the oldest and most widely used measures; in the public domain; supported by several commercial vendors; moderate but well-known psychometrics. Disadvantages: does not cover most modern diagnostic criteria, placement, or treatment guidelines (which it predates by 10+ years); only partially standardized and requires substantial training/quality assurance to get good psychometrics. |
| Comprehensive Addiction Severity Index for Adolescents (CASI-A) (Meyers et al., 1995) | | X | x | X | X | X | X | L | The CASI-A addresses many of the gaps in the original ASI and has some software/user support. However, it is still early in its development and has only limited psychometric or norm data. It is generally not used as a screener because of its length (90+ minutes). |
| Drug Abuse Treatment for AIDS-Risk Reduction (DATAR) forms (Simpson, 1992) | | | | x | X | X | X | L | A descendant of DARP, this form has been used for almost a decade in a series of clinical research projects at Texas Christian University and has appeared in several public domain evaluation handbooks produced by NIDA and others. |
| Form 90 (Miller, 1991) | | | | | x | x | X | M | This was one of the main instruments used in Project MATCH and was used to generate many of the clinical reports for its interventions. |
| Individual Assessment Profile (IAP) (Flynn et al., 1995) | | | X | | X | x | X | L | This instrument blended questions from the ASI with other research measures from the Drug Abuse Treatment Outcome Study (DATOS) and was used for centralized intake as part of the DC Initiative and several other studies. |
| Global Appraisal of Individual Needs (GAIN) (Dennis, 1998) | x | X | X | X | X | X | X | L | The GAIN was designed to serve as a standardized biopsychosocial assessment with integrated components for screening, diagnosis, placement, treatment planning, outcome monitoring, and research. Variations of this instrument have been used for standardized intake and research projects funded by NIDA, NIAAA, CSAT, and several states. Selected items can be used as a 20-30 minute screener version. Preliminary psychometric and normative data are now available, but no commercial support is available. |
| Recovery Attitude and Treatment Evaluator (RAATE) (Mee-Lee et al., 1992) | X | x | X | | | X | | L | This was one of the first tools developed for screening and patient placement and has scales that can help inform diagnosis. Unfortunately, it does not conform to current diagnostic or placement criteria (which it predates), and its usefulness for outcome monitoring is limited because its questions are not time bound and do not cover service utilization. |
| Personal Experience Inventory (PEI) (Winters & Henly, 1989) | X | | | | x | X | x | L | This instrument (and its sister, the POSIT) is designed for screening adolescents for substance use, mental distress, and a variety of other as yet undefined problems. This version is commercially supported. |
| Problem Oriented Screening Instrument for Teenagers (POSIT) (Rahdert, 1991) | X | | | | x | X | x | L | This instrument (and its sister, the PEI) is designed for screening adolescents for substance use, mental distress, and a variety of other as yet undefined problems. This version is in the public domain and has some federally financed software and support available. |
| Teen-Addiction Severity Index (T-ASI) (Kaminer, 1991) | x | | | x | X | | X | L | This is a modified version of the ASI that includes more questions related to adolescents. It does not, however, have all of the items for constructing the original ASI indices or its own severity measures. |
| Treatment Services Review (TSR and Teen-TSR) (Kaminer et al., 1997; McLellan, Alterman, et al., 1992) | | | | | X | | X | L | A complementary measure to the ASI, the TSR is designed to track the client's subsequent behavior and the services received each week. It is in the public domain. The Teen-TSR has been adapted for adolescents. |

Single Domain Measures

| Instrument | Screening | Diagnosis | Placement | Reporting | Tx Planning | Severity Measure | Outcome | Admin. Skill Level* | Author's Comment |
|---|---|---|---|---|---|---|---|---|---|
| Adolescent Relapse Coping Questionnaire (ARCQ) (Myers & Brown, 1996) | | | | | | | X | L | A measure of expected coping behaviors in response to a situation involving high pressure to use substances. |
| Alcohol Use Disorders Identification Test (AUDIT) (Babor, de la Fuente, et al., 1992) | X | x | | | | X | | L | Developed by the World Health Organization, this is designed as a screener to identify problem drinkers, including those who might not yet meet diagnostic criteria but are of interest for public health interventions. |
| Beck Depression & Anxiety Inventories (Beck, 1990, 1996) | X | x | | | | X | X | M | These are among the most widely used measures of depression and anxiety and are commercially supported. |
| Behavioral and Symptom Identification Scale (BASIS 32) (Medical Outcomes Trust, 1997a) | X | | x | | | X | x | L | This short battery, which is geared toward mental health populations, measures psychosis, daily living/role functioning skills, relation to self/others, impulsive/addictive behavior, and depression. |
| CAGE (Mayfield et al., 1974) | X | | | | | | | H | Although very popular with clinicians and public aid programs, this 4-question public domain screener is often unreliable unless it is accompanied by extensive training and quality assurance. |
| Child Behavior Checklist (CBCL) and Youth Self Report (YSR) (Achenbach & Edelbrock, 1983, 1987) | X | | x | | X | X | | L | The parent (CBCL) and youth (YSR) versions of this instrument provide dimensional measures of internal (e.g., depression, anxiety) and external (e.g., attention deficit, hyperactivity, conduct disorder) problems as well as measures of social competency. National norms are reported in NIDA's National Household Survey on Drug Abuse, and both are commercially supported. |
| Client Satisfaction Questionnaire (CSQ) (Larsen et al., 1979) | | | | | | | X | L | A general measure of client satisfaction that has been used widely. |
| Clinical Institute Withdrawal Assessment (CIWA) (Sullivan et al., 1989) | X | X | X | | | X | | H | A clinical rating scale combining physiological symptoms, observations, and self-reported symptoms. Unfortunately, it is focused only on alcohol and may not generalize to other drugs. |
| Drinker Inventory of Consequences (DrInC) (Miller, Tonigan, & Longabaugh, 1995) | X | x | | | X | | | L | Although limited to drinking, this scale is particularly useful for identifying the specific problems caused by drinking that can be addressed in treatment. |
| Drug Abuse Screening Test (DAST) (Skinner, 1982) | X | | | | | X | X | L | A short screener parallel to the Michigan Alcohol Screening Test (MAST) that is often used to screen for substance abuse and/or to measure change. Unfortunately, it does not map directly onto DSM-IV. |
| Family Environment Scale (FES) (Moos, 1974) | | | | | x | X | X | L | This and several subsequent shorter versions are among the most common measures of family functioning and are particularly important for evaluating family therapy or family services. |
| Hamilton Depression Scale (HAM-D) (Hamilton, 1967) | X | x | | | | X | X | M | The HAM-D is one of the original dimensional measures of depression. |
| Michigan Alcohol Screening Test (MAST) (Selzer, 1971) | X | | | | | x | | L | The original 25-item public domain version and several subsequent shorter versions have been used widely for over 20 years, but have only limited correlation with both the frequency of use and measures of dependence. |
| Millon Clinical Multiaxial Inventory (MCMI) (Millon, 1977) | X | x | | | | | x | L | Primarily a measure of personality "traits" (not keyed to DSM-IV), this measure also can be used to track change in "state" and is well supported commercially. |
| Minnesota Multiphasic Personality Inventory (MMPI & MMPI-2) (Butcher et al., 1989; Hathaway & McKinley, 1951) | x | x | | | X | x | | L | A battery of personality measures, including the MacAndrews scale related to substance use and measures related to impulsiveness, stress, and several co-occurring problems. |
| Minnesota Multiphasic Personality Inventory Adolescent (MMPI-A) (Butcher et al., 1992) | x | x | | | X | x | | L | This is a shorter version of the MMPI-2 geared more toward adolescents. |
| Readiness to Change Questionnaire (RTCQ) (Heather et al., 1991) | X | | X | | X | | | L | The RTCQ is a scale for measuring treatment readiness based on stages-of-change theory. This version is copyrighted but can be used at no cost. |
| Short Form 36 (SF-36) (Ware & Sherbourne, 1992) and Medical Outcome Study HIV questionnaire (MOS-HIV) (Wu et al., 1997) | | | x | | | X | X | L | The SF-36 and several shorter versions often are used for measuring a return to a person's original quality of life after a major operation or illness. The instruments have very limited sensitivity, however, for chronic health behavior conditions. The MOS-HIV is a variation designed for people with HIV. |
| Situation Confidence Questionnaire (SCQ) (Annis & Graham, 1988) | | | | | X | x | x | L | The questionnaire provides a profile of the client's perceived ability to resist using drugs in a variety of situations, based on Bandura's concept of self-efficacy. |
| Substance Abuse Subtle Screening Inventory (SASSI) (Miller, 1985) | X | x | | | | | | L | This is a screener that has been used primarily in schools, criminal justice facilities, and obstetricians' offices. It has scales that facilitate the diagnosis of substance use and is commercially supported. |
| Symptom Checklist-90 (SCL-90, SCL-90-R) (Derogatis et al., 1974) | X | x | x | | | X | X | L | One of the oldest psychiatric dimensional measures, the SCL-90 and its commercial cousins (SCL-90-R, Brief Symptom Inventory) are among the most widely used and commercially supported instruments. Although the scales can inform diagnosis, they do not match current criteria (which they predate). |

Diagnostic Only Measures

| Instrument | Screening | Diagnosis | Placement | Reporting | Tx Planning | Severity Measure | Outcome | Admin. Skill Level* | Author's Comment |
|---|---|---|---|---|---|---|---|---|---|
| Alcohol Use Disorders and Associated Disabilities Interview Schedule (AUDADIS) (Grant & Hasin, 1992) | | X | | | | X | | M | This instrument formed the basis of what later became NIAAA's longitudinal survey instrument. |
| Composite International Diagnostic Interview (CIDI) (WHO, 1996b) | | X | | | | | | M | The CIDI is primarily a diagnostic/epidemiological interview. |
| Diagnostic Interview Schedule (DIS) (Robins et al., 1996) | | X | | | | | | M | Primarily a diagnostic/epidemiological interview. Some manuals, software, training, and other support are available. |
| Diagnostic Interview Schedule for Children (DISC) (Shaffer, Fisher, & Lucas, 1997) | | X | | | | | | M | Primarily a diagnostic/epidemiological interview. Some manuals, software, training, and other support are available. |
| Psychiatric Research Interview for Substance and Mental Disorders (PRISM) (Hasin et al., 1994) | | X | | | | x | | M | The PRISM was explicitly designed for use by physicians in clinical practice. |
| Schedules for Clinical Assessment in Neuropsychiatry (SCAN) (WHO, 1996a) | | X | | | | | | M | Primarily a diagnostic/epidemiological interview. Some manuals, software, training, and other support are available. |
| Structured Clinical Interview for DSM-III-R (SCID) (Spitzer et al., 1992) | | X | | | | | | M | Primarily a diagnostic/epidemiological interview. Some manuals, software, training, and other support are available. |