This Web site is a component of the SAMHSA Health Information Network.
Evidence-Based Practices: Shaping Mental Health Services Toward Recovery

Assertive Community Treatment

Monitoring Client Outcomes
What are client outcomes?

Why monitor client outcomes?

The importance of client outcomes has led to a broad-based call for outcome monitoring. At the policy and systems level, the Government Performance and Results Act of 1993 requires that all federal agencies measure the results of their programs and restructure their management practices to improve those results. In a parallel fashion, there is a significant movement in human service management toward client outcome-based methods (Rapp & Poertner, 1992). Studies have shown that an outcome orientation among managers leads to increased service effectiveness in mental health (Gowdy & Rapp, 1989). This has led Patti (1985) to argue that effectiveness, meaning client outcomes, should be the “philosophical linchpin” of human services organizations.

Recovery and client outcomes
While the goals of each individual are unique in detail, people with severe mental illness generally desire the same core outcomes that we all want:
If this is true, then mental health services should be focused on the most powerful methods available to help consumers achieve these outcomes. The evidence-based practice that is described in this resource kit was chosen for its ability to achieve one or more of these outcomes.

A powerful resource for program leaders

If funds are the lifeblood of an organization, then information is its intelligence. Collecting and using client outcome data can improve organizational performance. Consider the following vignette.
This example shows that when information is made available, people respond to it. Peters and Waterman (1982) in their study of successful companies observed:
They observed that the data were never used to “browbeat people with numbers” (p. 267). The information alone seemed to motivate people. What is clear from these examples is this: The collection and feedback of information influences behavior. Current research suggests several principles to improve organizational effectiveness:
Managers who are committed to enhancing client outcomes have a powerful tool. By proactively and systematically collecting and using client outcome information, managers can enhance the goal-directed performance of program staff, as well as increase their motivation, professional learning, and sense of reward. Minimally, supervisors and managers should distribute (or post) the outcome data reports and discuss them with staff. Team meetings are usually the best time. Numbers reflective of above-average or exceptional performance should trigger recognition, compliments, or other rewards. Data reflecting below-average performance should provoke a search for underlying reasons and the generation of strategies that offer the promise of improving the outcome. By doing this on a regular basis, the manager has begun to create a “learning organization” characterized by consistently improving client outcomes.

Outcomes and evidence-based practices

The foundation of evidence-based practices is client outcomes. The decision to implement an evidence-based practice is based on its ability to help clients achieve the highest rates of positive outcomes. Therefore, one key component of the implementation of an evidence-based practice is the careful monitoring and use of client outcome data.

The problem for many mental health providers is that current data systems do not capture relevant client outcomes or are unable to produce meaningful and timely reports. Providers must find ways to develop evidence-based practices information systems that are easy to implement and to maintain. The following material is designed to guide programs that are implementing an evidence-based practice in developing a practical and useful information system. Some programs may go their own way and develop a system anew. Other programs may adapt existing information systems to suit their needs for monitoring client outcomes. These guidelines will help programs to make such beginnings and adaptations.
In addition, programs may wish to expand the evidence-based practices information systems that we describe, to build on the success they have had using a basic system or to customize a system to their needs and context. We encourage such expansion once a basic system has been implemented successfully, and we make recommendations for such enhancements at the end of this section. We begin with advice on getting started, and then we describe a simple, yet comprehensive, system for monitoring evidence-based practice outcomes. We follow this with ideas on using tables and graphs of outcome data to improve practice and on expanding basic systems.

Guidelines for an evidence-based practices information system

Many practitioners feel overwhelmed by the demands of their jobs and cannot imagine adding the burden of collecting client outcomes. Reporting systems already exist in many mental health settings, but they are time-consuming, and they do not provide useful feedback to improve practice. Thus, resistance is likely when implementing a new system to monitor client outcomes. To overcome this resistance, we recommend starting with a very simple system and making the system practical and immediately useful.

Start simply

Fit the needs of practitioners

These two guidelines may lead to a system that consists of a single outcome measure that is collected regularly and used by the program leader and practitioners to monitor their progress toward stated goals for an evidence-based practice. For example, a supported employment program may decide to monitor the rate of competitive employment among those clients who have indicated a desire to work. Practitioners may be asked to indicate whether each client has worked in a competitive job during the past quarter.
These data can then be tallied for the entire program to indicate the employment rate during the past quarter, which can be compared to prior quarters and used to develop performance goals, based on client choices, for the upcoming quarter. The system suggested by these two guidelines can be implemented in a variety of ways, from paper and pencil to multi-user computer systems. Begin with whatever means you have available and expand the system from there. In the beginning, data may be collected with a simple report form, and hand-tallied summaries can be reported to practitioners. A computer with a spreadsheet program (e.g., Excel) makes data tabulation and graphing easier than doing them by hand. A computerized system for data entry and report generation presents a clear advantage, and it may be the goal, but do not wait for it. Feedback does not have to come from a sophisticated computer system to be useful. It is more important that it is meaningful and frequent.

As a client outcome monitoring system develops, program leaders and practitioners will weave it into the fabric of their day-to-day routines. Its reports provide tangible evidence of the use and value of services, and they will become a basis for decision-making and supervision. At some point, practitioners may wonder how they did their job without an information system, as they come to view it as an essential ingredient of well-implemented evidence-based practices. Once a basic system has been implemented for a single evidence-based practice, we encourage programs to consider expanding to a comprehensive system for monitoring multiple evidence-based practices. We provide two additional guidelines for developing such a system.

Include all evidence-based practices in one system

Likewise, the system should monitor a core set of outcomes that apply across evidence-based practices and that are valued by clients and families, as well as by providers and policymakers.
For example, keeping people with mental illness in stable community housing, rather than in institutions or homeless settings, is an agreed-upon outcome for several evidence-based practices. Consequently, keeping track of quarterly rates of hospitalization, incarceration, and homelessness will enable evaluation of the effectiveness of a range of services.

Make the data reliable and valid
These few outcomes reflect the primary goals of the evidence-based practices. Assertive community treatment, family psychoeducation, and illness management and recovery share the goal of helping clients to live independently in the community. Thus, their goal is to reduce hospitalization, incarceration, and homelessness, and to increase independent living. Supported employment and integrated dual disorders treatment have more direct outcomes, and thus it is important to assess work/school involvement and progress toward substance abuse recovery, respectively. A Quarterly Report Form is presented at the end of this section as an example of a simple, paper-based way to collect participation and outcome data on a regular basis. A stand-alone computerized client outcome monitoring system has been developed for the Evidence-Based Practices Project. It follows the above guidelines closely and is available to those programs that wish to start with such a system.

Using tables and graphs in reports

The single factor that will most likely determine the success of an information system is its ability to provide useful and timely feedback to practitioners. It is all well and good to worry about what to enter into a system, but ultimately its worth lies in converting data into information. For example, the data may show that twenty consumers worked in a competitive job during the past quarter, but it is more informative to know that this represents only 10 percent of the consumers in the supported employment program and that only three of these were new jobs. For information to influence practice, it must be understandable and meaningful, and it must be delivered in a timely way. In addition, the monitoring system must tailor the information to suit the needs of various users and to answer the queries of each of them. The outcome monitoring system should format data for a single client into a summary report that tracks participation in practices and outcomes over time.
This report could be entered in the client's chart, and it could be the basis for a discussion with the client of treatment and rehabilitation progress and options. Further value of a monitoring system comes in producing tables and graphs that summarize the participation and outcomes of groups of clients. Below are some examples of tables and graphs that are useful when implementing and sustaining an evidence-based practice.

Quarterly summary tables
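As one concrete illustration, the supported employment figures mentioned earlier (twenty consumers working, 10 percent of the program, three new jobs) could be tallied with a few lines of code. The records, field names, and the program total of 200 (inferred from the 10 percent figure) are hypothetical, not part of any recommended system:

```python
# Illustrative quarterly summary tally for a supported employment program.
# Each record is one consumer's quarterly report: did the consumer hold a
# competitive job this quarter, and was that job new this quarter?
# (Hypothetical data chosen to match the example in the text.)
records = (
    [{"worked": True, "new_job": True}] * 3      # new competitive jobs
    + [{"worked": True, "new_job": False}] * 17  # continuing jobs
    + [{"worked": False, "new_job": False}] * 180
)

total = len(records)
worked = sum(r["worked"] for r in records)
new_jobs = sum(r["new_job"] for r in records)
rate = 100.0 * worked / total

print(f"Consumers in program:  {total}")
print(f"Working competitively: {worked} ({rate:.0f}%)")
print(f"New jobs this quarter: {new_jobs}")
```

Whether the tally is done in a spreadsheet, on paper, or in a script like this, the point is the same: raw counts become information only when expressed as rates that can be compared across quarters.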
Simple percentages or proportions, based on quarterly tallies, provide important feedback for both program management and clinical service provision.

Movement tables

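As an illustrative sketch, a movement table can be tallied from per-client participation flags for two consecutive quarters. The data below are hypothetical, chosen to match the family psychoeducation example described next:

```python
from collections import Counter

# Hypothetical per-client participation flags for two consecutive quarters
# in a family psychoeducation program: (participated in Q1, participated in Q2).
participation = (
    [(False, False)] * 50   # no/no: participated in neither quarter
    + [(True, True)] * 40   # yes/yes: participated in both quarters
    + [(False, True)] * 20  # no/yes: began participation in Quarter 2
    + [(True, False)] * 10  # yes/no: stopped after Quarter 1
)

cells = Counter(participation)
q1_total = cells[(True, True)] + cells[(True, False)]
q2_total = cells[(True, True)] + cells[(False, True)]

print("           Q2: no   Q2: yes")
print(f"Q1: no    {cells[(False, False)]:6d}  {cells[(False, True)]:8d}")
print(f"Q1: yes   {cells[(True, False)]:6d}  {cells[(True, True)]:8d}")
print(f"Net gain from Quarter 1 to Quarter 2: {q2_total - q1_total}")
```

The same cross-tabulation works for any yes/no outcome (e.g., competitive employment) tracked across two quarters.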
This table indicates that, out of 120 clients overall, 50 clients did not participate in the program during either quarter (no/no), 40 participated during both quarters (yes/yes), 20 began participation during Quarter 2 (no/yes), and 10 stopped participation after Quarter 1 (yes/no). Thus, there was a net gain of 10 clients in the family psychoeducation program from Quarter 1 to Quarter 2. The same kind of table can show changes in outcomes between quarters as well. This would answer a question such as, “Were more clients working in competitive jobs during the most recent quarter, as compared to the previous quarter?” Movement tables can be prepared for various groupings of clients. For example, the net gain in competitive employment could be compared across caseloads from multiple case managers or across multiple vocational specialists.

Longitudinal plots

This plot reveals that JP’s clients were slower to find employment in the first year (Quarters 1-4), when compared to other clients in the program, but they made continued progress throughout year two (Quarters 5-8), whereas the rate of employment for the other clients leveled off. Longitudinal plots are powerful feedback tools, as they permit a longer-range perspective on participation and outcome, whether for a single client or a group of clients. They enable a meaningful evaluation of the success of a program, and they provide a basis for setting goals for future performance.

Recommendations for additions to the basic evidence-based practices information system

Mental health service programs that are sophisticated in using information systems or that have been successful in implementing a start-up system may want to collect and use more information than we recommend for a basic system. For example, programs may want more detailed participation data, such as the number of group sessions attended or the number of contacts with a case manager.
They may want to include additional client outcomes or to collect them in a more detailed way. Programs may also want to collect feedback directly from consumers and family members. Recipients of services are important informants for programs seeking to improve outcomes. Programs may want to know if clients are satisfied with their services and the outcomes they have achieved. They may seek input from consumers about how to improve the services, practically and clinically. Programs may want to know if the services are helping consumers and families to achieve their goals. These are worthy ambitions, and such data have become part of many monitoring and quality improvement systems.

We did not recommend collecting data from consumers and family members as part of a basic system for monitoring client outcomes for a number of reasons. First, we recommend starting with a set of outcomes that practitioners can report quickly and accurately. The task of collecting data from clients and families could impede progress and distract focus. Second, there are no well-validated questionnaires to assess many of the constructs that are frequently included in consumer and family surveys. Outcomes such as satisfaction, quality of life, and recovery are multifaceted and difficult to measure objectively. Third, it is hard to obtain a representative sample of respondents. Mailed surveys are often not returned. Interviews may be done with those individuals who are easy to reach and cooperative. Questions may be asked only of those who show up for routine appointments. Unless the data are collected from a representative sample, it is difficult to interpret the findings, because it is not clear to whom they generalize. Fourth, there may be better ways to get feedback from consumers than by trying to collect quantitative data from them.
A program may be better off holding focus groups for consumers or families to discuss a specific evidence-based practice with the practitioners or with quality improvement personnel. Likewise, a program may learn more about consumer perceptions of services and their feelings about recovery from qualitative interviews with a small group of consumers. Fifth, quality improvement personnel may be better able and qualified to collect, analyze, and interpret data from consumers and families. A treatment team may collect informal feedback from consumers through their day-to-day contacts, but it may be better left to others to collect systematic data. In many agencies, formal reporting systems already include client-based assessments, and it may be possible to build on these efforts rather than to duplicate them.

Yet programs may want to collect data from the recipients of their services. If a basic outcome monitoring system has been implemented, then expanding data collection to include consumers and family members may be appropriate and feasible. Programs are encouraged to explore their options, although it is important to remain mindful of the issues discussed above. We include the Kansas Consumer Satisfaction Survey, and a Quality of Life Self-Assessment developed in New York, as examples for programs to consider. When thinking about expanding data collection beyond the basic set of outcomes, it is important to realize that more is not necessarily better. Unless the data can be reported reliably and validly, the value of adding more data to the monitoring system is illusory. The old adage, “garbage in, garbage out,” must be kept in mind when the temptation arises to expand a working system. Feedback that is based on unreliable, invalid, or unrepresentative data may be no better for a system than no feedback at all. Nevertheless, the thoughtful and gradual expansion of a working system for collecting and using client outcome data can increase the value of the feedback.
The litmus test is not what and how much data a program collects, but rather whether the program uses the data to inform and improve the practice.

References

Deegan, P. E. (1988). Recovery: The lived experience of rehabilitation. Psychosocial Rehabilitation Journal, 11(4), 11-19.

Gowdy, E., & Rapp, C. A. (1989). Managerial behavior: The common denominators of effective community-based programs. Psychosocial Rehabilitation Journal, 13, 31-51.

Patti, R. (1985). In search of purpose for social welfare administration. Administration in Social Work, 9(3), 1-14.

Peters, T. J., & Waterman, R. H. (1982). In search of excellence. New York: Harper & Row.

Rapp, C. A., & Poertner, J. (1992). Social administration: A client-centered approach. New York: Longman.