SAMHSA's National Mental Health Information Center

This Web site is a component of the SAMHSA Health Information Network


Evidence-Based Practices: Shaping Mental Health Services Toward Recovery

Assertive Community Treatment

Monitoring Client Outcomes

What are client outcomes?
Client outcomes are those aspects of clients’ lives that we seek to improve or to manage successfully through the delivery of mental health services. Medications help clients manage their symptoms. Supported employment programs help clients find work in the community. Dual disorders groups help clients reduce their dependence on alcohol and illicit drugs. Relapse prevention programs help clients stay out of the hospital. Some outcomes are the direct result of an intervention, such as getting a job through participation in a vocational program, whereas others are indirect, such as improvements in quality of life due to having a job. Some outcomes are concrete and observable, such as the number of days worked in a month, whereas others are subjective and private, such as satisfaction with vocational services.

Every mental health service intervention, whether considered treatment or rehabilitation, has both immediate and long-term client goals. In addition, clients have goals for themselves, which they hope to attain through the receipt of mental health services. These goals translate into outcomes, and the outcomes translate into specific measures. For example, the goal of a supported employment program is community integration through employment. The outcome for clients is obtaining and holding regular jobs in the community. The outcome measure for a supported employment program may be the number of weeks that a client has worked at competitive jobs during the past quarter.

Why monitor client outcomes?
Client outcomes are the bottom line of mental health services, much as profit is the bottom line of a business. No successful businessperson would assume that the business was profitable just because the enterprise was producing a lot of widgets (e.g., cars, clothes) or because employees were working hard. This does not mean that the owner can ignore productivity, only that one should not assume that productivity necessarily leads to profit. In mental health, productivity measures, such as the number of counseling sessions or the number of clients served, tell us very little, if anything, about the effects of services on clients and their welfare.

This fact has led to a broad-based call for outcome monitoring. At the policy and systems level, the Government Performance and Results Act of 1993 requires that all federal agencies measure the results of their programs and restructure their management practice to improve these results. In a parallel fashion, there is a significant movement in human service management toward client outcome-based methods (Rapp & Poertner, 1992). Studies have shown that an outcome orientation of managers leads to increased service effectiveness in mental health (Gowdy & Rapp, 1989). This has led Patti (1985) to argue that effectiveness, meaning client outcomes, should be the “philosophical linchpin” of human services organizations.

Recovery and client outcomes
Recovery means more than controlling symptoms. It's about getting on with life beyond the mental health system. As Pat Deegan (1988) wrote:

The need is to meet the challenge of the disability and to reestablish a new and valued sense of integrity and purpose within and beyond the limits of the disability; the aspiration is to live, work, and love in a community in which one makes a significant contribution (p. 15).

While the goals of each individual are unique in detail, people with severe mental illness generally desire the same core outcomes that we all want:

  1. To live independently in a place called home
  2. To gain an education, whether for career enhancement or personal growth
  3. To have a job that enhances our income, provides a means to make a contribution, and enables us to receive recognition
  4. To have meaningful relationships
  5. To avoid the spirit-breaking experiences of hospitalization, incarceration, or substance abuse

If this is true, then mental health services should be focused on the most powerful methods available to help consumers achieve these outcomes. The evidence-based practice that is described in this resource kit was chosen for its ability to achieve one or more of these outcomes.

A powerful resource for program leaders

If funds are the lifeblood of an organization, then information is its intelligence. Collecting and using client outcome data can improve organizational performance. Consider the following vignette.

Participants in a partial hospitalization program sponsored by a community mental health center were consistently showing very little vocational interest or activity. Program staff began gathering data monthly on clients' vocational status and reporting this to their program consultant. He returned these data to program staff using a simple bar graph every three months. The result of gathering and using information on clients' vocational activity was evident almost immediately. Three months after instituting this monitoring system, the percentage of the program's clients showing no interest or activity in vocational areas declined from an original 64 percent to 34 percent. Three months later this percentage decreased an additional 6 percent, so that 72 percent of program participants were now involved in some form of vocational activity.

This example shows that when information is made available, people respond to it. Peters and Waterman (1982) in their study of successful companies observed:

We are struck by the importance of available information as the basis for peer comparison. Surprisingly, this is the basic control mechanism in the excellent companies. It is not the military model at all. It is not a chain of command wherein nothing happens until the boss tells somebody to do something. General objectives and values are set forward and information is shared so widely that people know quickly whether or not the job is getting done, and who's doing it well or poorly (p. 266).

They observed that the data were never used to “browbeat people with numbers” (p. 267). The information alone seemed to motivate people.

What is clear from these examples is this: The collection and feedback of information influences behavior. Current research suggests several principles to improve organizational effectiveness:

  • The role of information in an organization is to initiate action and influence organizational behavior.
  • The act of collecting information (measurement) generates human energy around the activity being measured.
  • To ensure that information directs human energy toward enhanced performance, data collection and feedback must be used:
    • to foster and reinforce desired behaviors;
    • to identify barriers to performance and ways to overcome them; and
    • to set goals for future performance.
  • Feedback directs behavior toward performance when it provides “cues” to workers to identify clear methods for correction and when it helps workers learn from their performance.
  • Feedback motivates behavior toward performance when it is used to create expectations for external and internal rewards, is linked to realistic standards for performance, and is directed toward the future versus used punitively to evaluate past performance.

Managers who are committed to enhancing client outcomes have a powerful tool. By proactively and systematically collecting and using client outcome information, managers can enhance the goal-directed performance of program staff, as well as increase their motivation, professional learning, and sense of reward. Minimally, supervisors and managers should distribute (or post) the outcome data reports and discuss them with staff. Team meetings are usually the best time. Numbers reflective of above average or exceptional performance should trigger recognition, compliments, or other rewards. Data reflecting below average performance should provoke a search for underlying reasons and the generation of strategies that offer the promise of improving the outcome. By doing this on a regular basis the manager has begun to create a “learning organization” characterized by consistently improving client outcomes.

Outcomes and evidence-based practices

The foundation of evidence-based practices is client outcomes. The decision to implement an evidence-based practice is based on its ability to help clients achieve the highest rates of positive outcomes. Therefore, one key component of the implementation of an evidence-based practice is the careful monitoring and use of client outcome data. The problem for many mental health providers is that current data systems do not capture relevant client outcomes or are unable to produce meaningful and timely reports. Providers must find ways to develop evidence-based practices information systems that are easy to implement and to maintain.

The following material is designed to guide programs that are implementing an evidence-based practice in developing a practical and useful information system. Some programs may go their own way and develop a system anew. Other programs may adapt existing information systems to suit their needs for monitoring client outcomes. These guidelines will help programs to make such beginnings and adaptations. In addition, programs may wish to expand the evidence-based practices information systems that we describe, to build on the success they have had using a basic system or to customize a system to their needs and context. We encourage such expansion once a basic system has been implemented successfully, and we make recommendations for such enhancements at the end of this section.

We begin with advice on getting started, and then we describe a simple, yet comprehensive, system for monitoring evidence-based practice outcomes. We follow this with ideas on using tables and graphs of outcome data to improve practice and on expanding basic systems.

Guidelines for an evidence-based practices information system

Many practitioners feel overwhelmed by the demands of their jobs and cannot imagine adding the burden of collecting client outcomes. Reporting systems already exist in many mental health settings, but they are time-consuming, and they do not provide useful feedback to improve practice. Thus, resistance is likely when implementing a new system to monitor client outcomes. To overcome this resistance we recommend starting with a very simple system and making the system practical and immediately useful.

Start simply
At the outset, the system must be simple to implement, use, and maintain. Complexity has doomed numerous well-intended attempts to collect and use client outcome data. One way to keep it simple is to limit the amount and sources of information that it contains. Begin with a few key client outcomes and build the system around them. Collect data from practitioners, without the initial need for data collection from clients and families. Start with simple reports that tabulate results for the past quarter and show time trends, and then let experience with the system determine what additional reports are needed.

Fit the needs of practitioners
The system must not create undue burden for practitioners, and it must provide information to them that is useful in their jobs. If possible, the system should collect already known information about clients, and it should require little time to record the data. The system should fit into the workflow of the organization, whether that means, for example, making ratings on paper or directly into a computer. It should collect information on participation in evidence-based services and on client outcomes. Program leaders and practitioners can then keep track of what services clients are using and how they are doing on key outcomes. It should produce easy-to-read and timely reports that contribute to planning and lead to action, for individual clients, for treatment teams, and for the program as a whole.

These two guidelines may lead to a system that consists of a single outcome measure that is collected regularly and used by the program leader and practitioners to monitor their progress toward stated goals for an evidence-based practice. For example, a supported employment program may decide to monitor the rate of competitive employment among those clients who have indicated a desire to work. Practitioners may be asked to indicate whether each client has worked in a competitive job during the past quarter. These data can then be tallied for the entire program to indicate the employment rate during the past quarter, which can be compared to prior quarters and can be used to develop performance goals based on client choices for the upcoming quarter.

The system suggested by these two guidelines can be implemented in a variety of ways, from paper and pencil to multi-user computer systems. Begin with whatever means you have available and expand the system from there. In the beginning, data may be collected with a simple report form, and hand-tallied summaries can be reported to practitioners. A computer with a spreadsheet program (e.g., Excel) makes data tabulation and graphing easier than if it is done by hand. A computerized system for data entry and report generation presents a clear advantage, and it may be the goal, but do not wait for it. Feedback does not have to come from a sophisticated computer system to be useful. It is more important that it is meaningful and frequent.
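The quarterly tally described above (did each client work competitively this quarter, and what share of the program does that represent?) takes only a few lines of code once the report forms are gathered. This is an illustrative sketch, not part of the toolkit; the record layout and field names are hypothetical:

```python
# Hypothetical practitioner reports for one quarter: one record per client.
# The field names are illustrative, not part of any SAMHSA system.
reports = [
    {"client_id": 1, "worked_competitively": True},
    {"client_id": 2, "worked_competitively": False},
    {"client_id": 3, "worked_competitively": True},
    {"client_id": 4, "worked_competitively": False},
]

def employment_rate(reports):
    """Percent of clients who held a competitive job during the quarter."""
    worked = sum(1 for r in reports if r["worked_competitively"])
    return 100.0 * worked / len(reports)

print(f"Competitive employment rate: {employment_rate(reports):.0f}%")
```

The same per-quarter figure, saved each quarter, becomes the comparison series the text describes.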

As a client outcome monitoring system develops, program leaders and practitioners will weave it into the fabric of their day-to-day routines. Its reports provide tangible evidence of the use and value of services, and they will become a basis for decision-making and supervision. At some point, the practitioners may wonder how they did their job without an information system, as they come to view it as an essential ingredient of well-implemented evidence-based practices.

Once a basic system has been implemented for a single evidence-based practice, we encourage programs to consider expanding to a comprehensive system for monitoring multiple evidence-based practices. We provide two additional guidelines for developing such a system.

Include all evidence-based practices in one system
The system should monitor the participation of clients in all evidence-based practices. This can be as simple as recording whether clients are eligible for each practice, and in which practices they have participated during the past quarter. For those practices that are implemented, participation rates can be monitored over time, as a means of monitoring the penetration of the practices in the population of eligible clients. For those practices that are not yet implemented, the system will create an incentive to do so.

Likewise, the system should monitor a core set of outcomes that apply across evidence-based practices and that are valued by clients and families, as well as by providers and policymakers. For example, keeping people with mental illness in stable community housing, rather than in institutions or homeless settings, is an agreed-upon outcome for several evidence-based practices. Consequently, keeping track of quarterly rates of hospitalization, incarceration, and homelessness will enable evaluation of the effectiveness of a range of services.

Make the data reliable and valid
For an information system to be useful, the data must be reliable and valid. That is, the data must be collected in a standardized way (reliability), and the data must measure what it is supposed to measure (validity). Thus, the outcomes must be few in number and concrete, in order for practitioners to stay focused on key outcomes, to understand them in a similar way, and to make their ratings in a consistent and error-free fashion. To enhance reliability and validity, we recommend simple ratings (e.g., Did the client hold a competitive job in this quarter?), rather than more detailed ones (e.g., How many hours during this quarter did the client work competitively?). In addition, reliability will be enhanced if the events to be reported are easy to remember, and thus we recommend collecting data at regular and short intervals, such as quarterly at the outset, and we recommend collecting data for salient events. We recommend the following outcomes:

  • psychiatric or substance abuse hospitalization
  • incarceration
  • homelessness
  • independent living
  • competitive employment
  • educational involvement
  • stage of substance abuse treatment

These few outcomes reflect the primary goals of the evidence-based practices. Assertive community treatment, family psychoeducation, and illness management and recovery share the goal of helping clients to live independently in the community. Thus, their goal is to reduce hospitalization, incarceration, and homelessness, and to increase independent living. Supported employment and integrated dual disorders treatment have more direct outcomes, and thus it is important to assess work/school involvement and progress toward substance abuse recovery, respectively. A Quarterly Report Form is presented at the end of this section as an example of a simple, paper-based way to collect participation and outcome data on a regular basis.

A stand-alone computerized client outcome monitoring system has been developed for the Evidence-Based Practices Project. It follows the above guidelines closely and is available to those programs that wish to start with such a system.

Using tables and graphs in reports

The single factor that will most likely determine the success of an information system is its ability to provide useful and timely feedback to practitioners. It is all well and good to worry about what to enter into a system, but ultimately its worth is in converting data into information. For example, the data may show that twenty consumers worked in a competitive job during the past quarter, but it is more informative to know that this represents only 10 percent of the consumers in the supported employment program and only three of these were new jobs. For information to influence practice, it must be understandable and meaningful, and it must be delivered in a timely way. In addition, the monitoring system must tailor the information to suit the needs of various users and to answer the queries of each of them.

The outcome monitoring system should format data for a single client into a summary report that tracks participation in practices and outcomes over time. This report could be entered in the client's chart, and it could be the basis for a discussion with the client of treatment and rehabilitation progress and options. Further value of a monitoring system comes in producing tables and graphs that summarize the participation and outcomes of groups of clients. Below are some examples of tables and graphs that are useful when implementing and sustaining an evidence-based practice.

Quarterly summary tables
Whether for an entire program, for a specific team, or for a single practitioner's caseload, rates of participation in practices and client outcomes should be displayed for the past quarter. Such a table can address the following kinds of questions.

  • How many of my clients participated in our supported employment program last quarter?
  • How many of my clients worked competitively during the last quarter?
  • What proportion of clients in our program for persons with severe mental illness were hospitalized last quarter?
  • How did the hospitalization rate for those on assertive community treatment teams compare to the rate for clients in standard case management?
  • How many clients with a substance use disorder have yet to participate in our integrated dual diagnosis treatment program?

Simple percentages or proportions, based on quarterly tallies, provide important feedback for both program management and clinical service provision.
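The questions above all reduce to the same operation: count the "yes" responses for a yes/no field and divide by the number of clients in the grouping. As a hedged sketch (the records and field names are hypothetical, not a prescribed format), a quarterly summary table can be produced like this:

```python
# Hypothetical quarterly records, one per client; field names are illustrative.
records = [
    {"participated_se": True,  "worked": True,  "hospitalized": False},
    {"participated_se": True,  "worked": False, "hospitalized": True},
    {"participated_se": False, "worked": False, "hospitalized": False},
    {"participated_se": True,  "worked": True,  "hospitalized": False},
]

def quarterly_rate(records, field):
    """Proportion of clients for whom a yes/no field was 'yes' this quarter."""
    return sum(r[field] for r in records) / len(records)

# One row of a quarterly summary table for this caseload.
for field in ("participated_se", "worked", "hospitalized"):
    print(f"{field}: {quarterly_rate(records, field):.0%}")
```

Running the same tally separately for each team or caseload yields the program-versus-team comparisons the questions call for.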

Movement tables
Movement tables summarize changes from the previous quarter. They are created by cross-tabulating the same variable from two successive quarters. For example, participation in the family psychoeducation program can be cross-tabulated as shown below.

                        Participation during Q2
                           no        yes
Participation     no       50         20
during Q1        yes       10         40

This table indicates that, out of 120 clients overall, 50 clients did not participate in the program during either quarter (no/no), 40 participated during both quarters (yes/yes), 20 began participation during Quarter 2 (no/yes), and 10 stopped participation after Quarter 1 (yes/no). Thus, there was a net gain of 10 clients in the family psychoeducation program from Quarter 1 to Quarter 2. The same kind of table can show changes in outcomes between quarters as well. This would answer a question such as, “Were more clients working in competitive jobs during the most recent quarter, as compared to the previous quarter?” Movement tables can be prepared for various groupings of clients. For example, the net gain in competitive employment could be compared across caseloads from multiple case managers or across multiple vocational specialists.
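A movement table is just a cross-tabulation of paired quarterly flags. As an illustrative sketch using data constructed to reproduce the table above (the variable names are hypothetical):

```python
from collections import Counter

# Hypothetical two-quarter participation history for 120 clients, chosen to
# match the movement table above: (participated_in_Q1, participated_in_Q2).
history = ([(False, False)] * 50 + [(False, True)] * 20
           + [(True, False)] * 10 + [(True, True)] * 40)

movement = Counter(history)               # the 2 x 2 cross-tabulation
started = movement[(False, True)]         # began participation in Quarter 2
stopped = movement[(True, False)]         # stopped after Quarter 1
print(f"Started: {started}, stopped: {stopped}, net gain: {started - stopped}")
```

Swapping in an outcome flag (e.g., worked competitively) instead of a participation flag gives the between-quarter outcome comparison described above.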

Longitudinal plots
A longitudinal plot is an efficient and informative way to display participation or outcome data for more than two successive periods. The idea is to plot a participation or outcome variable over time, to view performance in the long term. A longitudinal plot can be for an individual, a caseload, a specific evidence-based practice, or an entire program. A single plot can also contain longitudinal data for multiple clients, caseloads, or programs, for comparison. Below is an example comparing one case manager's caseload to all other clients in a supported employment program over a two-year period.

[Figure: Employment over a two-year period. Quarterly competitive employment rates for JP's caseload versus all other clients in the program, Quarters 1-8.]

This plot reveals that JP’s clients were slower to find employment in the first year (Quarters 1-4), when compared to other clients in the program, but they made continued progress throughout year two (Quarters 5-8), whereas the rate of employment for the other clients has leveled off. Longitudinal plots are powerful feedback tools, as they permit a longer-range perspective on participation and outcome, whether for a single client or a group of clients. They enable a meaningful evaluation of the success of a program, and they provide a basis for setting goals for future performance.
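The underlying data for a longitudinal plot are simply one value per period per group. The sketch below uses invented quarterly rates shaped like the pattern described above (JP's caseload starts slower but keeps climbing in year two, while the others level off); a spreadsheet line chart of the same two series gives the graphical version:

```python
# Hypothetical quarterly competitive-employment rates (percent), Quarters 1-8,
# invented to mirror the plot described in the text.
jp_caseload   = [5, 10, 15, 20, 30, 40, 50, 60]
other_clients = [15, 25, 35, 40, 42, 43, 43, 44]

# Plain-text longitudinal display, one line per quarter.
for q, (jp, other) in enumerate(zip(jp_caseload, other_clients), start=1):
    print(f"Q{q}: JP's caseload {jp:2d}%   all others {other:2d}%")
```

Year-two change makes the contrast concrete: JP's caseload gains 40 points from Quarter 4 to Quarter 8, while the other clients gain only 4.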

Recommendations for additions to the basic evidence-based practices information system

Mental health service programs that are sophisticated in using information systems or that have been successful in implementing a start-up system may want to collect and use more information than we recommend for a basic system. For example, programs may want more detailed participation data, such as the number of group sessions attended or the number of contacts with a case manager. They may want to include additional client outcomes or to collect them in a more detailed way.

Programs may also want to collect feedback directly from consumers and family members. Recipients of services are important informants for programs seeking to improve outcomes. Programs may want to know if clients are satisfied with their services and the outcomes they have achieved. They may seek input from consumers about how to improve the services, practically and clinically. Programs may want to know if the services are helping consumers and families to achieve their goals. These are worthy ambitions, and such data have become part of many monitoring and quality improvement systems.

We did not recommend collecting data from consumers and family members as part of a basic system for monitoring client outcomes for a number of reasons. First, we recommend starting with a set of outcomes that practitioners can report quickly and accurately. The task of collecting data from clients and families could impede progress and distract focus. Second, there are no well-validated questionnaires to assess many of the constructs that are frequently included in consumer and family surveys. Outcomes such as satisfaction, quality of life, and recovery are multifaceted and difficult to measure objectively. Third, it is hard to obtain a representative sample of respondents. Mailed surveys are often not returned. Interviews may be done with those individuals who are easy to reach and cooperative. Questions may be asked only of those who show up for routine appointments. Unless the data are collected from a representative sample, it is difficult to interpret the findings, because it is not clear to whom they generalize. Fourth, there may be better ways to get feedback from consumers than by trying to collect quantitative data from them. A program may be better off holding focus groups for consumers or families to discuss a specific evidence-based practice with the practitioners or with quality improvement personnel. Likewise, a program may learn more about consumer perceptions of services and their feelings about recovery from qualitative interviews with a small group of consumers. Fifth, quality improvement personnel may be better able and qualified to collect, analyze, and interpret data from consumers and families. A treatment team may collect informal feedback from consumers through their day-to-day contacts, but it may be better left to others to collect systematic data. In many agencies, formal reporting systems already include client-based assessments, and it may be possible to build on these efforts rather than to duplicate them.

Yet, programs may want to collect data from the recipients of their services. If a basic outcome monitoring system has been implemented, then expanding data collection to include consumers and family members may be appropriate and feasible. Programs are encouraged to explore their options, although it is important to remain mindful of the issues discussed above. We include the Kansas Consumer Satisfaction Survey, and a Quality of Life Self-Assessment developed in New York, as examples for programs to consider.

When thinking about expanding data collection beyond the basic set of outcomes, it is important to realize that more is not necessarily better. Unless the data can be reported reliably and validly, the value of adding more data to the monitoring system is illusory. The old adage, “garbage in, garbage out,” must be kept in mind when the temptation is present to expand a working system. Feedback that is based on unreliable, invalid, or unrepresentative data may be no better for a system than no feedback at all. Nevertheless, the thoughtful and gradual expansion of a working system for collecting and using client outcome data can increase the value of the feedback. The litmus test is not what and how much data a program collects, but rather whether the program uses the data to inform and improve the practice.

References

Deegan, P. E. (1988). Recovery: The lived experience of rehabilitation. Psychosocial Rehabilitation Journal, 11(4), 11-19.

Gowdy, E., & Rapp, C. A. (1989). Managerial behavior: The common denominators of effective community based programs. Psychosocial Rehabilitation Journal, 13, 31-51.

Patti, R. (1985, Fall). In search of purpose for social welfare administration. Administration in Social Work, 9(3), 1-14.

Peters, T.J., & Waterman, R.H. (1982). In search of excellence. New York: Harper & Row.

Rapp, C. A., & Poertner, J. (1992). Social administration: A client-centered approach. New York: Longman.
