
Section 6: Operating a Care Management Program

After a State selects its care management program design, target population, and program interventions, it should plan a program implementation strategy. By carefully planning program rollout, designing monitoring strategies, and using measurement for program improvement, States can maximize resources and build support for the program.

Incorporating information from the 13 State Medicaid care management programs in the initial AHRQ Learning Network and additional literature, this section of the Guide, Operating a Care Management Program, provides information to State Medicaid staff about:

  • Implementation strategies.
  • Program monitoring.
  • Data systems.
  • Continuous quality improvement.

Implementation Strategies

States can use a variety of implementation strategies for their care management programs, including operating a pilot program or implementing the program in phases by disease, region, or population before expanding statewide. Both the pilot and phased approaches offer the State the opportunity to address problems and unexpected challenges before larger implementation, to gauge program effectiveness, and to build program support.

Under any implementation strategy, States should draw on other States' experiences, other program successes and failures within the State, and, if appropriate, the vendors' established infrastructure and experience in managing care for the Medicaid population.

Pilot Program

States implement a pilot to assess the program intervention on a smaller scale. States can evaluate results and implement program refinements before expanding the program.

Iowa launched an asthma management program with a target of 250 members. Iowa's initial strategy was to perform outreach to only the highest asthma utilizers for participation in the program. After difficulty in reaching this population, Iowa broadened the outreach population and was successful in enrolling more than 250 members in the program. By implementing and reviewing its pilot program, Iowa was able to refine its enrollment strategy before expanding the program to people with diabetes.
Kansas selected Sedgwick County as its pilot site to implement a care management program focused on asthma, diabetes, congestive heart failure, and other high-risk or high-need members. Sedgwick County was chosen because it had a large concentration of patients, an established PCCM program, and significant legislative support. Kansas' vendor, Central Plains, also had a strong presence in the county. To demonstrate the success of the care management pilot, Kansas has identified a reference, or comparison, group. Wyandotte County is similar to Sedgwick County in population size, density, and socioeconomic composition and will allow the State to compare health outcomes and other metrics between participants and non-participants.

Lessons Learned: Pilot Programs

  • Allow adequate time. The pilot program will need time to demonstrate early results.
  • Be realistic and plan expansion carefully. Use caution when planning an expansion based on results from a pilot program, because positive outcomes might fail to occur when a program is implemented on a larger scale.
  • Engage stakeholders. Secure support from agency leadership and stakeholders on selected evaluation methodology (e.g., control group evaluation).
Virginia's Department of Medical Assistance Services (DMAS) was directed to launch a statewide disease management program. Anthem, one of Virginia's Medicaid managed care organizations, approached the State with a proposal to provide a pilot disease management program at no cost to the State. DMAS agreed to the pilot, and Anthem's subsidiary, Health Management Corporation, implemented Healthy Returns, which ran from June 2004 through June 2005. Through this pilot, the State was able to determine which specific program components were effective. Virginia has modeled its larger statewide disease management program on the Healthy Returns program.

Program Implementation in Phases

Another means of testing an intervention on a segment of the population is to implement the program in phases by disease, region, or population. Unlike with a pilot program, States implementing in phases may work under an abbreviated implementation timeline and may have only a short time to evaluate and refine the program.

Implementing a Care Management Program in Phases

  • By Disease. States can phase in a care management program by starting with one disease and then adding additional diseases.
  • By Region. States can implement a care management program by starting in one region or area with a high disease burden or where a model can be tested for replication.
  • By Population. States can focus resources on populations that are most likely to show effects of the intervention.
North Carolina's Office of Rural Health and Community Care developed Community Care of North Carolina (CCNC), which comprises many separate networks. In launching CCNC, the State leveraged strong, established relationships between the State and local communities. Initially targeting easier, less costly populations, such as women and children, lent the program credibility and bought time to build a strong infrastructure for more challenging populations. Subsequently, as more networks joined the program, more experienced networks shared lessons learned with new networks, and the State's greater standardization of best practices encouraged efficiency. Because of the program's demonstrated success, county commissions encourage providers in the counties to participate in the CCNC program.

Ongoing Program Monitoring

An important component of operating a care management program, for both State-run and vendor programs, is program monitoring. Distinct from program evaluation, program monitoring tracks the program's progress, identifies areas for program improvement, recognizes program strengths, and ensures that vendors are complying with the contract. Please go to Section 7: Measuring Value in a Care Management Program for more information on program evaluation.

Regular Reports

Receiving regular reports is a useful way for care management program staff to remain apprised of vendor or in-house program activities. States receive weekly, monthly, quarterly, or annual reports on almost all facets of the care management program. Examples of reports that States require include the following:

  • General Member Reports contain information on the number of members enrolled by age, ethnicity, county, and intensity or intervention level; number and percentage of members who decided to opt out by disease and risk status; most common education modules provided to members; and most common provider alert criteria.
  • Care Management Line Reports contain information on the call center, such as number of incoming and outgoing calls, nature of calls received, average time required to return member calls, and average hold time.
  • Provider Reports contain information on number of providers participating in the care management program, number of providers educated on evidence-based clinical practice guidelines, results of provider focus groups, and provider satisfaction surveys.
  • Complaint Reports contain information on provider and member complaints and resolutions.
  • Care Management Reports contain information on the number of health assessments and care plans completed for members, number of members being actively care managed and their status, number of referrals to behavioral health, and number of times members were assisted with transportation, scheduling appointments with providers, discharge planning, and pharmacy issues.
  • Utilization Reports contain information on select performance indicators related to utilization. Utilization measures, such as HbA1c tests and beta-blocker prescription claims, can be compared to frequency of services recommended by evidence-based guidelines.
  • Staffing Reports contain information on ratio of nurses to members, an updated telephone directory of staff, and analysis of staff turnover and fluctuations in staffing.
  • Annual Reports contain an overview of program successes and challenges encountered throughout the year, member and provider satisfaction, results of the vendor's internal quality assurance monitoring, and aggregate clinical and financial outcomes. States can develop, or require the vendor to develop, an annual report to share with program stakeholders to demonstrate program successes.

Lessons Learned: Regular Reports

  • Establish clear goals and desired frequency for each report. States use regular reports for general program monitoring and identifying future program enhancements.
  • Group reports by smaller categories. Since States receive many reports, providing member information by population, disease, and severity can make information easier to understand.
  • Streamline reports or develop summary-level reports. Concise, aggregate-level reporting makes interpretation easier.
  • Solicit feedback from report users. Program managers, providers, and care managers may have ideas to improve regular reports.

States are often inundated by the quantity and complexity of program reports. Interpreting reports received from vendors or internal staff can become a significant burden for program management staff, especially when staff, resources, or both are limited.

Pennsylvania implemented two strategies to simplify its monitoring approach. First, it streamlined program reports into a Disease Management Summary Report, a 15-page, high-level monthly report designed to answer the key questions of program management staff. The streamlined report provides information on program enrollment activity, monthly population changes, program interventions, care coordination referral support, triage service in the total population and in the disease management population, and community outreach activity. Second, Pennsylvania began coordinating monthly meetings between senior vendor staff and State staff to communicate about aspects of the disease management program and to clarify information contained in the reports.

Onsite Monitoring

States can employ onsite monitoring of a care management vendor to understand program operations and to make suggestions for program improvement.

Approximately 1 year after implementing its initial program, the Indiana Chronic Disease Management Program, Indiana program staff accompanied nurse care managers on in-person visits with members to better understand program operations. Although Indiana had no formal evaluation tool, Medicaid staff assessed a variety of actions performed by the nurse care managers, including:
  • Recording information gathered during the visit.
  • Implementing strategies encouraging members to actively engage in the disease management program.
  • Assessing members' readiness and ability to set self-management goals.
  • Communicating about the next followup visit.

Subsequently, program staff reviewed records to assess how quickly the nurse care manager documented the visit in the Chronic Disease Management System and assessed how the nurse care managers managed their caseloads. The data gathered during the in-person visits to nurse care managers helped program staff understand issues the nurse care managers face and identify areas for improvement.

Texas staff visited their vendor headquarters to meet with program staff and to learn about the call center. Their two major goals for the site visit were ensuring that activities specified in the contract were being accomplished and understanding the vendor's call center operations. To prepare for the site visit, Texas developed an onsite monitoring tool that lists items for evaluation. To follow up on specific questions from regular reporting on call center operations, Texas staff listened in on calls and offered recommendations to redesign the call center scripts. In addition, Texas staff reviewed call center staff's methods for recording information from calls. Texas expects to repeat a site visit to the vendor headquarters annually. In addition to an onsite review of the call center, Texas staff plan to conduct a more comprehensive review of operations by evaluating home visits by nurse care managers.

Data Systems

Data systems are critical for effective program monitoring and other ongoing program operations. Specific needs vary by State according to program design and program model. Please refer to Section 5: Selecting a Care Management Program Model for additional information on specific resources needed.

Lessons Learned: Data Systems

  • Ensure that data systems are compatible. States should make certain that data can be exchanged between the State and vendors or partners.
  • Run data early and often. States should run reports to identify data or systems issues.
  • Involve a data analyst. States should employ a technical member on the care management team to interpret data issues.

Data systems compatibility and readiness. When contracting with a vendor, States should ensure that their systems can interface with the vendor's data system. Inability to interface could delay program implementation or could affect evaluation efforts at a later stage. Moreover, States should consider whether data can be exchanged easily with the receiving entity. For example, States must consider how data will be transmitted securely, whether data can be modified, and who will have access to the data.

Prior to program implementation, ensuring the data system's readiness is essential. During implementation and throughout the program, States can send test files to the vendor to ensure that all data is transferred accurately. Texas employs a technical member on its team to ensure that database programming and reporting match program design.
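As an illustration of the kind of check a State might run on a test file before transmitting it, the following minimal sketch validates a hypothetical pipe-delimited member extract. The file name, field names, and date format are assumptions for the example, not part of any State's actual data exchange specification.

```python
import csv
from datetime import datetime

# Hypothetical required fields; real layouts come from the State-vendor
# data exchange specification.
REQUIRED_FIELDS = ["member_id", "dob", "county", "eligibility_start"]

def validate_test_file(path):
    """Flag missing columns, empty values, and unparseable dates in a
    test extract before it is transmitted to the vendor."""
    errors = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f, delimiter="|")
        missing = [c for c in REQUIRED_FIELDS if c not in (reader.fieldnames or [])]
        if missing:
            return ["missing columns: " + ", ".join(missing)]
        for line_no, row in enumerate(reader, start=2):  # line 1 is the header
            for field in REQUIRED_FIELDS:
                if not (row.get(field) or "").strip():
                    errors.append(f"line {line_no}: empty {field}")
            for date_field in ("dob", "eligibility_start"):
                try:
                    datetime.strptime(row.get(date_field) or "", "%Y-%m-%d")
                except ValueError:
                    errors.append(f"line {line_no}: bad date in {date_field}")
    return errors

if __name__ == "__main__":
    problems = validate_test_file("member_test_extract.txt")  # assumed file name
    print("test file passed basic checks" if not problems else "\n".join(problems))
```

Running a check like this on both sides of the exchange, before live files are sent, helps surface formatting mismatches while they are still inexpensive to fix.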

Member identification and stratification. Many States provide more intensive care management services to the most high-risk or high-cost members. To identify and categorize the most high-risk or high-cost members, States or their vendors can employ a risk stratification tool or a predictive model. To supplement the identification and stratification tool, States can also consider which patients are most "impactable" by using individual-level tools, such as health assessments, the Patient Activation Measure,w or other measurement and screening tools. Please go to Section 3: Selecting and Targeting Populations for a Care Management Program for more information.
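To make the stratification idea concrete, the sketch below assigns members to coarse risk tiers using simple claims-based rules. The field names, thresholds, and tier labels are illustrative assumptions only; a production program would rely on a validated risk stratification tool or predictive model rather than hand-picked cutoffs.

```python
from dataclasses import dataclass

@dataclass
class MemberUtilization:
    """Illustrative claims-derived inputs; real models draw on many more fields."""
    member_id: str
    er_visits_12mo: int
    inpatient_admits_12mo: int
    chronic_conditions: int
    total_paid_12mo: float

def stratify(m: MemberUtilization) -> str:
    """Assign a coarse risk tier using illustrative thresholds, not a
    validated predictive model."""
    if m.inpatient_admits_12mo >= 2 or m.total_paid_12mo >= 50_000:
        return "high"      # candidates for intensive, in-person care management
    if m.er_visits_12mo >= 3 or m.chronic_conditions >= 2:
        return "moderate"  # candidates for telephonic outreach
    return "low"           # education and self-management materials

# Invented members for the example.
members = [
    MemberUtilization("A1", er_visits_12mo=4, inpatient_admits_12mo=0,
                      chronic_conditions=1, total_paid_12mo=8_200.0),
    MemberUtilization("B2", er_visits_12mo=1, inpatient_admits_12mo=3,
                      chronic_conditions=2, total_paid_12mo=61_500.0),
]
for m in members:
    print(m.member_id, stratify(m))
```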

Measurement. States use measurement to assess how their program is performing, identify areas for improvement, and evaluate whether the program is successful. States should measure:

  • Structure (organizational, technological, and human resources infrastructure needed for delivering high-quality care).
  • Process (services that constitute recommended care).
  • Outcome (measures of disease-specific health and disability).

Please refer to Section 7: Measuring Value in a Care Management Program for more information on the selection of measures and feasibility of data collection.
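As a simple illustration of a process measure, the sketch below computes the share of members with diabetes who had at least one HbA1c test claim during the measurement year. The member identifiers and counts are invented for the example; real calculations would follow the detailed specification of the chosen measure set.

```python
# Illustrative process-measure calculation: the percentage of members with
# diabetes who received at least one HbA1c test during the measurement year.
diabetes_members = {"A1", "B2", "C3", "D4", "E5"}   # denominator: eligible members
hba1c_tested = {"A1", "C3", "E5", "Z9"}             # members with an HbA1c test claim

numerator = diabetes_members & hba1c_tested          # tested AND in the denominator
rate = len(numerator) / len(diabetes_members)
print(f"HbA1c testing rate: {rate:.0%}")             # prints "HbA1c testing rate: 60%"
```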

Continuous Quality Improvement

Many State care management programs strive to improve members' quality of care. To improve the health system and the quality of care delivered to members, States can implement continuous quality improvement, a process of constantly testing, understanding, and revising processes.x

States employ small tests of change as a model for continuous quality improvement. Small tests of change answer the questions:

  • What are we trying to accomplish?
  • How will we know a change is an improvement?
  • What change can we make that will result in improvement?

The Plan-Do-Study-Act (PDSA) cycle is used to make continuous changes that result in improvement.y To conduct a PDSA cycle, the State can develop a plan to test the change (Plan), carry out the test (Do), observe and learn from the consequences (Study), and determine what modifications should be made based on the test (Act). For example, a State might ask, "What is the most effective way to roll out our care management program to the eligible Medicaid population?" The State might predict that "Educating members while we have them on a telephone call will increase enrollment." The first PDSA cycle might unfold as follows:

  • Plan. Introduce the program to eligible Medicaid members during the first health assessment phone call from the call center.
  • Do. Identify the members via the program selection and stratification criteria, and notify them by phone. However, some cannot be reached.
  • Study. Learn that the initial call is already too long, and the intervention is less successful than predicted.
  • Act. Explore other ways to reach members.

PDSA Cycle

Step 1: Plan. Plan the test or observation, including a plan for collecting data.

  • State the test objective.
  • Predict what will happen and why.
  • Develop a plan to test the change (Who? What? When? Where? What data must be collected?)

Step 2: Do. Test on a small scale.

  • Carry out the test.
  • Document problems and unexpected observations.
  • Begin the data analysis.

Step 3: Study. Set aside time to analyze the data and study the results.

  • Compare the data to your predictions.
  • Summarize and reflect on lessons learned.

Step 4: Act. Refine the change, based on what was learned from the test.

  • Determine what modifications should be made.
  • Prepare a plan for the next test.

After the State completes this first cycle and has measured its effectiveness, it decides to introduce the program by a mailing to eligible members. The State thinks that a mailing in tandem with the calls would improve enrollment. The second PDSA cycle goes as follows:

  • Plan. Introduce the program to eligible Medicaid members through a mailing.
  • Do. Send the letter immediately after eligibility is determined, informing the beneficiary to expect a call for an initial assessment. Then follow up with the initial call.
  • Study. Learn that two methods of notification in quick succession about the program are more effective than one and that the intervention increased the number of members who knew about the program.
  • Act. Experience satisfaction with this outcome and expand its use.

Lessons Learned: PDSA Cycles

  • Be innovative to make the test feasible. States may need to modify existing conditions to make the test possible.
  • Test over a wide range of conditions. States should experiment with the test in multiple settings to ensure success.
  • Do not try to obtain stakeholders' buy-in or consensus. States should not spend valuable time obtaining support. Instead, States should focus on making the test successful.

States can use the PDSA cycle on a smaller scale as well. For example, a clinic that wants to create a self-management form for patients to document their goals might benefit greatly from a small test. The clinic might predict that "Use of self-management forms will increase if physicians find them easy to use." The first PDSA cycle might develop as follows:

  • Plan. Introduce the self-management form to a physician.
  • Do. Ask the physician to use the form on three to four patients.
  • Study. Learn that the questions on the form are unclear and that the form fails to evaluate patient commitment to goals.
  • Act. Revise the form and test again.

After revising the self-management form, the clinic asks another physician to use it on another group of patients. The second PDSA cycle goes as follows:

  • Plan. Introduce the form to the physician.
  • Do. Ask the physician to use the form on three or four patients.
  • Study. Learn that the questions on the form document patient goals and commitment clearly.
  • Act. Experience satisfaction with this outcome and expand use of the new form.

In this way, States can use the PDSA cycle to measure outcomes quickly and modify the program accordingly. States should use small tests of change to experiment with programmatic and operational changes.

Conclusion

By carefully planning program implementation, designing monitoring strategies, and using measurement for program improvement, State Medicaid staff can maximize resources and build support for their program. Based on their program design and included populations, States should choose interventions that target patients and providers.


Footnotes:
w. Hibbard JH, Stockard J, Mahoney ER, et al. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res 2004; 39(4):1005-26.
x. Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med 1989; 320(1):53-6.
y. Institute for Healthcare Improvement. Testing changes. Available at: http://www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/HowToImprove/testingchanges.htm. Accessed November 5, 2007.

