[THIS TRANSCRIPT IS UNEDITED]

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

SUBCOMMITTEE ON POPULATION-SPECIFIC ISSUES

January 13, 1998

Hubert Humphrey Building
200 Independence Avenue SW
Washington, D.C.

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway, Suite 160
Fairfax, Virginia 22030
(703) 352-0091

Topic: Medicaid Managed Care


List of Participants:

Lisa I. Iezzoni, M.D., Chair
Hortensia Amaro, Ph.D.
Richard K. Harding, M.D.
Vincent Mor, Ph.D.
George H. Van Amburg
M. Elizabeth Ward
Carolyn M. Rimes
Olivia Carter-Pokras, Ph.D.
Dale C. Hitchcock
Ronald W. Manderscheid, Ph.D.

TABLE OF CONTENTS

Population Standard for Age Adjustment Statistics:
Harry Rosenberg

Possible Effects of the New OMB Standard on Race and Ethnicity for Vital Statistics:
James Weed

Remarks: Lisa Iezzoni

National Perspective:
Rachel Block
Mary Jo O'Brien
Margaret Schmid

Provider Panel:
Donald Unlaw
Kathryn Coltin
Eileen Peterson
Michael Collins
David Baldridge


PROCEEDINGS (9:07 a.m.)

DR. IEZZONI: Good morning. I have just been informed that a number of our colleagues need to leave a little bit on the early side.

This first hour of the session is going to be chaired by George van Amburg. So George, can I turn the mike over to you?

DR. VAN AMBURG: Sure, thank you. One of the issues that has been floating around in the statistical field for the last four or five years has been the issue of age adjustment and the standard population used for age adjusting. There have been several workshops convened by the National Center for Health Statistics to look at this issue, to try to determine what should be done, if anything, with respect to the population standard that is currently being used by NCHS for age adjustment.

As background, I think that you need to realize that there are at least two federal agencies that are using on a regular basis two different standards for age adjustment. NCHS is using essentially a 1940 U.S. population, now called the U.S. standard million, but NCI uses the 1970 population.

This creates great havoc with everybody, because when an adjusted rate is published, one doesn't necessarily know what the population base is, and certainly the press doesn't when they look at two different rates. Then there is also the crude rate, so now we have three rates being published, and it is quite confusing for a number of people.

So NCHS did convene a couple of workshops. I think they have come to some recommendations, and Harry Rosenberg, who is chief of the mortality branch at the NCHS, is here to discuss them.

DR. ROSENBERG: Thank you, George. I am pleased and honored to have been invited to speak to the Subcommittee on Population Specific Issues on the population standard for age adjusted mortality statistics. This is a very important technical issue about which we are about to officially propose the first change since 1943.

First, I would like to give you a brief presentation on the technical issue at hand, using an illustration from a recent report of the National Center for Health Statistics. I have given everyone this handout. If you could take a look at it, I will walk you through the relevant points.

If you look on the back of the handout, first of all, the handout is our annual report on mortality statistics for 1995, published in June of 1997. If you look at the illustration on the reverse side, it presents two lines. Those lines depict the average level of mortality risk in the United States from 1930 through 1995. The top line is called a crude death rate and the bottom line is called an age adjusted death rate.

The top line, you will see, is relatively flat, suggesting that the average risk of death in the United States during the past 50 years or so has remained relatively constant. In fact, it declined by about 18 percent between 1940 and 1995. However, if you look at the lower line, the decline is fairly substantial. In fact, the dashed line from 1940 to 1995 declined by over 50 percent, which is to say, the true risk of death in the United States since 1940 has been cut by one half. So you see, these two lines give you a very different impression of what is happening to mortality in the United States.

Why is the top line so much more level than the lower line? The reason for that is that the population composition of the United States is changing. The population is getting older, and because the risk of death is higher at the older ages, it tends to push the crude death rate up artificially.

Now, we have a technique for taking away the bias that is due to the aging of the population. That technique is called age adjustment. One of the tools that we use in age adjustment is to take an arbitrary set of population weights, and we hold those constant over a long period of time. We don't change them.

The weights that were selected in 1943 for age adjustment were the population in the most recent census at that time, which was the census of 1940. The technique obviously is very important, because it helps us understand what the true risk of death is in the United States. If we didn't age adjust our death rates, for example, by race, between the white and the black population, the black population's mortality would be lower than that of the white population. But once we use this technique and apply it to the two races, the black rate is 40 percent higher than that for -- I think it is 60 percent higher than that of the white population. So the technique is very important, not only for showing the true change in the risk of death over time, but also in depicting comparisons between groups such as race groups, ethnic groups and geographic areas.
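
To illustrate the direct age-adjustment technique Dr. Rosenberg describes, here is a minimal sketch in Python. The age groups, age-specific rates, and standard weights are invented for illustration only; they are not NCHS figures.

    # Direct age adjustment: a weighted average of age-specific death rates,
    # using a fixed set of standard population weights that never changes.

    rates = {"<25": 80.0, "25-64": 400.0, "65+": 5000.0}   # deaths per 100,000

    # Standard population weights (proportions summing to 1); a 1940-style
    # standard puts more weight on younger ages than a 2000-style standard.
    std_weights = {"<25": 0.45, "25-64": 0.45, "65+": 0.10}

    def age_adjusted_rate(rates, weights):
        """Weighted average of age-specific rates with fixed standard weights."""
        return sum(rates[age] * weights[age] for age in rates)

    def crude_rate(deaths_by_age, pop_by_age):
        """Total deaths over total population, per 100,000."""
        return 100_000 * sum(deaths_by_age.values()) / sum(pop_by_age.values())

    print(age_adjusted_rate(rates, std_weights))   # 716.0 per 100,000

Because the weights are held constant over time, a change in the adjusted rate reflects a change in the underlying age-specific risks rather than the aging of the population, which is why the flat crude line and the falling adjusted line in the handout diverge.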

So I want to impress on you the importance of this statistical technique, which some might find a little bit esoteric and technical, but it is very, very important. It is as important to health statisticians as the consumer price index is for people who work in economics and labor statistics.

Are there any questions before I move along?

From time to time, the statistical community and the non-statistical community have agitated for change in the population standard of 1940. As a result of that agitation, in 1991 the National Center for Health Statistics convened a workshop representing academia, federal agencies, the states and others to examine the technical aspects of possibly changing the population standard used for age adjustment.

Part of the reason the workshop was held also was that when you tell people that you are using 1940 as a population standard, it sounds like you are a little out of touch with reality. So we certainly had a communication problem in using this to explain what the level of mortality is, not only to the public, but also to the Congress, when the director of NCHS had to make presentations to the Congress.

So we held our meeting in 1991, and we made a number of recommendations. The first recommendation we made was to stay with the 1940 standard, and not to change it. But we also, coming back to George van Amburg's point, recommended strongly that all the federal agencies, at least in the Public Health Service, get on board and use the same standard.

However, unfortunately we didn't do the type of proactive work after that workshop to ensure buy-in from the other federal agencies in the Public Health Service, and they continued pretty much to do their own thing. The National Cancer Institute used the 1970 standard, parts of CDC used the 1980 standard, and NCHS used the 1940 standard. So we didn't have our own house in order.

In the National Center for Health Statistics, we decided that we would reconvene or have a second workshop to revisit this issue, largely because we had to reach some agreement for the year 2010 Healthy People objectives. Many of the objectives of that exercise are cast in terms of the age adjusted death rate. The question of 1940 also had reared its ugly head in those discussions.

So in the Division of Vital Statistics, where we produced the national mortality statistics, we had started thinking about the workshop. I got a telephone call however from some people in the Heart Institute, and they asked me if we might accelerate the timetable, because the Wall Street Journal in November of 1996 published an article about trends in heart disease and charged that the use of age adjusted death rates exaggerates the decline in heart disease mortality, suggesting that perhaps we were boasting about improvements that might not be real.

The Heart Institute felt that they needed to respond in a sound, technical and scientific way. At the same time, just a few months later, the New England Journal of Medicine had an article about trends in cancer mortality, and once again it suggested that using the 1970 standard as they were doing was possibly giving an overly optimistic view of the progress that the Cancer Institute and others were making in the progress against cancer mortality.

So because of our own interest and that of the Cancer Institute, the Heart Institute, as well as the agencies of the Centers for Disease Control, we convened a second panel in 1991. The composition of the panel is in the second handout that I have given you, and you will see that the --

DR. IEZZONI: When was the second panel?

DR. ROSENBERG: Excuse me, I beg your pardon. It is June of 1997. I misspoke, thank you, June of 1997.

If you will look at the participants, you will see that they are a broadly based group representing the National Center for Health Statistics, the state vital statistics activities, the United Nations, relevant federal agencies such as the National Cancer Institute, the National Heart, Lung and Blood Institute, other parts of the Centers for Disease Control and George van Amburg, who at the time was representing the U.S. National Committee.

So we had again a broadly based group. The purpose of the second workshop was not to look at technical issues, which we focused on in the first workshop, but rather to look at policy issues and implementation issues addressing four particular questions. One, should the United States or the Public Health Service use a single standard or multiple standards? Two, if one or more than one, which standard should be used? Three, how often should the standards be updated? And four, how should the recommendations of the workshop be moved forward?

The results of that very excellent workshop were as follows. I would like to read these to you, and then I can submit these in writing for the minutes of this meeting.

The first recommendation was actually quite dramatic and exciting. It was that the population standard should be changed from 1940 to the projected population of the year 2000. The single standard should also be used by all agencies for official data presentation. For special analyses, alternative standards could be used.

So in doing this, there were some technical reasons aside from the communication problem that I mentioned and the credibility problem that I mentioned earlier. By using the more recent standard, the age adjusted death rates would look more like the crude death rate and we would bring into alignment the average risk of death measured by the crude death rate and the age adjusted death rate, which has some technical advantages.

The second thing in this first recommendation that is important again is that all the agencies at least in the department and the Public Health Service would use the same standard, and this would greatly alleviate some problems that the Public Health Service and the department have created for the states and the state health departments and for others in understanding mortality data.

The second recommendation was that the agencies of the Public Health Service and the department should implement the population standard beginning with the data year of 1999. What does data year 1999 mean? We are currently in the calendar year 1998, although you would guess that I didn't know that, but we are publishing mortality statistics for 1996 and '97. Those statistics we call data year 1996 and data year 1997, as opposed to the calendar year, which we are all living in.

The third recommendation -- and please stop me if you have any questions, if that is your usual protocol -- agencies should continue to use and publish their current standards until the official implementation year when the new common standard will be adopted. To avoid confusion, agencies implementing the new standard prior to 1999, if they wish, should simultaneously publish rates adjusted to both the old and the new. The purpose of doing that was so that other users could have continuity in the time series and not be confused. Thus, if NCHS chose for example to begin implementing the new standard now, we would nevertheless continue to publish using the old standard.

Four, the fourth recommendation, after the implementation date, agencies should use the new standard in all press releases and all other communication with the public. We felt that it was obvious that if we can get the same message out on cancer and heart disease and all the other conditions, and the same standard for Healthy People 2010, that we would be much more credible with the public and with the press.

The fifth recommendation was that NCHS will convene a work group to evaluate the age adjustment standard at least every 10 years. Here we are recognizing the importance of not having a static situation as we have had over the past 50 years, but that we have a situation in which we re-evaluate whether the year 2000 standard is appropriate or not.

Now, it may indeed be appropriate for the next 50 years, for all we know, but this is a recommendation that we in fact formally appraise and reevaluate it.

The sixth recommendation is that NCHS will be responsible for naming the new standard and will determine the number of significant digits in the standard. This led to quite a few amusing suggestions for what we would call this. One recommendation for example was the millennial standard, another was -- we have a statistician, Randy Curtin, who is very knowledgeable and involved in a lot of complex methodological issues, and he has been involved in this debate for the length of his career at NCHS, so someone suggested the Lester R. Curtin Memorial Standard Million Population. That was rejected, and we are going to be using the year 2000 standard as the official name.

The seventh recommendation was technical, and it suggested that we use essentially the same methodological technique that we have been using with respect to age groups.

The eighth recommendation was that NCHS will convene an implementation committee to move these recommendations from a proposal to reality. The implementation committee did meet in early December, and I'll have a few words to say about that.

The ninth and last recommendation is that NCHS will publicize the new standard in a variety of publications, including the morbidity and mortality weekly report, public health reports, appropriate professional newsletters and that we will try to also publish the recommendations and the rationale for them in scholarly journals.

As I said, we did have our first meeting of the implementation committee in early December. We discussed how we would move this proposal through the Public Health Service. We are currently working on several fronts to do this. One recommendation is that the proposal be brought to the department's data council, I believe in February will be the next meeting. We have been asked to present a one-page summary which we will present at that meeting.

We will also convene a forum in Atlanta at the Centers for Disease Control to apprise our colleagues in the agency, and also to bring the matter to the attention of the director of CDC, Dr. Satcher, who ultimately will be bringing the recommendation to Secretary Shalala.

We also were somewhat more specific in our implementation committee meeting about the types of instructional materials and tutorials that we will try to develop to publicize the new standard. It was suggested in our implementation committee that we bundle our educational material on age adjustment with educational material on ICD-10, the 10th revision of the international classification of diseases, which will be implemented in exactly the same data year as the new population standard.

Thank you very much. If you have any questions?

DR. VAN AMBURG: Do any of the members have any questions?

DR. AMARO: I had a quick one. I was wondering if you have any way of getting a sense of what the implications are for how the achievement or progress toward the year 2000 objectives is going to look. Is the performance going to change by changing this? Also, how is it going to impact on data by race and ethnicity?

DR. ROSENBERG: Those are very good questions. First of all, the Healthy People 2000 goals, the targets, will look very different from the way the targets look now. However, the path toward the targets that we have seen historically will not change an awful lot. In other words, if there was a downward trend in heart disease using the old standard, the new trend downward will look pretty much the same, but the target will be quite different, very different.

DR. AMARO: So the targets will be changed?

DR. ROSENBERG: The targets will be changed using this standard. But the image that you have and the difference between the targets and reality will be very similar. So if the current differential between reality and a target is let's say 20 percent, using the new standard it may turn out to be a difference of 25 percent. It won't be enormously different, but it will be a little different, and the numbers will look quite different.

It will require explanation on the part of the Healthy People 2010 project to inform all of the users of this document and this exercise that these changes were made and why they were made. So they will also have some work to do.

With regard to the differentials between the races, the new standard will have an impact on that. The 60 percent that I described, the differential between the races, will be reduced somewhat from what it is now. That will occur because the old standard gives more weight to the younger population than the new standard.

The difference in mortality between the black and white young population is quite large, but the difference between the older population, between the white and black population, is quite a bit smaller. In fact, the very oldest black persons have better mortality than their white counterparts.
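
Dr. Rosenberg's point about the race differential shrinking under a year 2000 standard can be shown with a small hedged example in Python; the rates and weights below are invented purely to show the direction of the effect, not to reproduce actual black-white mortality.

    # The black-white gap is large at younger ages and small (or reversed) at
    # the oldest ages, so a standard that gives more weight to older ages
    # pulls the ratio of the two adjusted rates down.

    rates_white = {"<45": 150.0, "45-74": 1200.0, "75+": 7000.0}
    rates_black = {"<45": 300.0, "45-74": 1800.0, "75+": 6800.0}

    w_1940_like = {"<45": 0.75, "45-74": 0.20, "75+": 0.05}   # younger standard
    w_2000_like = {"<45": 0.60, "45-74": 0.30, "75+": 0.10}   # older standard

    def adjust(rates, weights):
        return sum(rates[age] * weights[age] for age in rates)

    for name, w in [("1940-like", w_1940_like), ("2000-like", w_2000_like)]:
        ratio = adjust(rates_black, w) / adjust(rates_white, w)
        print(name, round(ratio, 2))   # ratio falls from about 1.32 to 1.22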

So we will indeed see some differences in race differentials, as well as Healthy People 2000 goals.

DR. AMARO: So that general pattern also holds for other ethnic groups that have a distribution toward the younger population?

DR. ROSENBERG: That's exactly right. You will see that throughout. It will make a big difference in the picture.

DR. AMARO: So it will reduce differences, is what you're saying?

DR. ROSENBERG: It will reduce differences under certain statistical circumstances, and those are that the population distribution of the minority group is younger than that of the white population. That holds for the black population, it holds for the Hispanic population, it probably does not hold for the Asian population groups. So the differential between the Asian and the white population will probably look pretty much the way it does now.

DR. AMARO: Thank you.

DR. MOR: I have a question about social security. I assume that social security would be included in this, is that correct, or not?

DR. ROSENBERG: Social security, which is our partner in other projects, such as the creation of the life tables of the United States, they will be informed of what we are doing. They are very sophisticated with regard to the use of our data, and use it quite a lot.

In presenting population and mortality projections with respect to the social security trust fund, they tend to look at specific age groups, rather than looking at summary measures, and they look at causes of death. So I would say that they will be interested in what we are doing, but they won't be materially affected by it in their work.

DR. MOR: And the same would be true of HCFA in terms of their projections, their part A, part B trust fund projections?

DR. ROSENBERG: Pretty much so, exactly, because what they do is, they actually take the age group 55 to 59, 60 to 64, and they are carrying those forward, and they are not looking at any single summary measure that averages over everything. The only reason we tend to use those summary measures, which can be very misleading, incidentally, is that it simplifies the health message to our data users and the press. It doesn't mislead entirely; it is sort of like saying the person died in the river that was three inches deep on the average.

So you have to be kind of careful when you do this. We use these, they are very effective, but for very complex demographic exercises, it is not the way one wants to go.

DR. MOR: But HCFA does -- there are substantial changes over time in the age distribution of the 65 and over population.

DR. ROSENBERG: Absolutely.

DR. MOR: And I have seen any number of statistics from both Social Security and HCFA which summarize the entire 55 plus population or the average Medicare beneficiary or social security recipient. Will they use this kind of approach in readjusting those, or not?

DR. ROSENBERG: That is a very good question. I think we will certainly hope they get the message. I think that what you are suggesting -- and I think it is an excellent suggestion, is that we include them in our review and comment procedures. We have not to this point.

DR. VAN AMBURG: Vince, the work group did look at splitting out the 85 and over age group in the age adjustment process, changing the age groups, and the effect of that was very, very small, if I recall correctly. This time, anyway. Any other questions? Harry, is there some way the committee here can help you in this process? This is going to the data council?

DR. ROSENBERG: Yes, this will be going to the data council probably next month. We haven't scheduled it as yet, but I think it would certainly help our efforts greatly if the committee felt really comfortable with what I have described, and would suggest that they support the recommendations of the work group in our process. That would be extremely helpful to us in our efforts to get buy-in into this proposal, which we think will greatly improve the way we do health statistics in this country.

DR. IEZZONI: Why don't we --

DR. VAN AMBURG: Consider that as an action item?

DR. IEZZONI: Yes.

DR. VAN AMBURG: Okay. Thank you, Harry.

DR. ROSENBERG: Thank you very much.

DR. VAN AMBURG: Our next presenter is Jim Weed, who is the deputy director of the Division of Vital Statistics at NCHS, and he is going to talk a little bit about the effect of the new OMB standards on race and ethnicity on vital statistics.

DR. WEED: Harry brought a solution; I'm afraid I'm bringing a problem. There probably aren't enough handouts for everybody. Don't be too intimidated by the pile of paper. There is an appendix which comprises probably half of it, which describes the vital statistics system. I won't get into that too much, just the first few pages of the handout.

I'm here to discuss the effect of the new race and ethnicity standards basically on vital statistics. It is seemingly a simple issue, but it turns out to be extremely complex, and the solution is complex, and I'll probably demonstrate that as I move through this.

I'm sure that you are all familiar with the Office of Management and Budget's Federal Register notices. I wouldn't try to reproduce them here, except that I think, to get our minds into it a little bit, I would like to just very briefly cover the five or six bullets that describe basically the standards that they have been proposing.

The first bullet is that when self identification is used, -- get that, self identification -- a method for reporting more than one race should be adopted. The method for respondents to report more than one race should take the form of multiple responses -- underline that, too -- multiple responses to a single question, and not a multi-racial category.

The third bullet is, when a list of races is provided to respondents, the list should not contain a multi-racial category. Then the fourth bullet is, based on research conducted so far, the two recommended forms for the instructions accompanying the multiple response question are mark one or more, or select one or more.

Finally, if the criteria for data quality and confidentiality are met, provisions should be made to report at a minimum the number of individuals identifying with more than one race. Data producers are encouraged to provide greater detail about the distribution of multiple responses.

Finally, the new standards will be used in the decennial census, and other data producers should conform as soon as possible, but no later than January 1, 2003.

Towards the end of that same notice, there is a paragraph on tabulation of data, and I want to get that out on the table also. It says, when aggregate data are presented, data producers shall provide the number of respondents who marked or selected only one category separately for each of the five racial groups. In addition to these numbers, data producers are strongly encouraged to provide the detailed distributions, including all possible combinations of multiple responses to the race question. If data on multiple responses are collapsed, at a minimum the total number of respondents reporting more than one race shall be made available.

So the thrust of that paragraph is that if a person reports only one race, they get counted in the single race group. But if they report more than one, they are not to be allocated or imputed or somehow put back into a single race group; they are kept separate. That is the tricky part of it.
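
A minimal Python sketch of that tabulation rule may make the mechanics concrete; the race labels and responses below are invented for illustration.

    # Single-race responses are counted in their own group; multiple-race
    # responses are kept separate (here, keyed by the full combination) and
    # never reallocated back to a single race.

    from collections import Counter

    responses = [
        ["White"],
        ["Black"],
        ["American Indian", "White"],
        ["Asian"],
        ["Black", "White"],
    ]

    single = Counter()
    multiple = Counter()

    for r in responses:
        if len(r) == 1:
            single[r[0]] += 1
        else:
            multiple[tuple(sorted(r))] += 1

    print(dict(single))                                   # one-race groups
    print(dict(multiple))                                 # each combination
    print("more than one race:", sum(multiple.values()))  # required minimum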

Now, for vital statistics, what is the import of all of this? Well, just very briefly, let's look at what the vital statistics system is in the United States. Basically, the legal authority for vital registration rests with the states and the territories. There is no federal law mandating registration of vital events, but the National Center for Health Statistics is required to produce national statistics from the data obtained on vital records in the states.

Now, the vital records and reports originate with private citizens. Basically, the data are provided by members of families affected by the events, by their physicians, funeral directors, medical examiners and so forth. The responsibilities of these individuals are defined in state laws.

Basically, for a birth certificate, very briefly speaking, the responsibility would rest with the hospital of birth or the attendant, a physician or midwife. Basically the attendant would collect the data from the mother. This is the usual way I think that it happens. The data are collected from the mother and very often are put on a work sheet, not the actual birth certificate, but on a work sheet that is filled out, and then a hospital records administrator will type the data onto the birth certificate. Or, as is the case now for about 85 percent of U.S. births, which are recorded on electronic birth certificates, the data are entered into a computer.

For a death certificate, basically it is the responsibility of the funeral director to get all of the demographic data -- all the data except for the medical certification. That means the funeral director is supposed to make sure that the death certificate has on it the race of the decedent. The funeral director would get that information generally by talking to a family member or informant, whoever that might be. It could be a family member or a relative of some kind or a friend.

On page four of my handout, there are excerpts from the birth and death certificates showing the way the 1989 revision of the certificates collects and reports race and ethnicity. I have cut the piece out and pasted it in here so that you can see just that aspect of each certificate. On the top part of the birth certificate, you can see that there are two questions, one for Hispanic origin and one for race. Mind you, these are the standard certificates now. The states have the right under their own laws to modify these as they see fit, but for the most part they keep these particular elements as we've got them here, although there is some variation for Hispanic origin in some states where ancestry is asked instead.

Hispanic origin is given for mother and father, and the respondent is asked to specify which if they can, whether it is Cuban, Mexican and so forth, and the race item just simply says race and then a dash, with American Indian, black, white, et cetera specified below.

On the death certificate, it is very similar. I have circled the 14th and 15th items. The 14th says decedent was of Hispanic origin and a check box with specify, and the race is also specified.

Now, there is nothing to prevent anybody from writing more than one race on a birth or death certificate. The mother could, if she wanted to, report that she has two races on the work sheet, and that could theoretically be typed into the box on the birth certificate. The same could hold true for the death certificate, if more than one race was reported for the decedent.

Currently, however, what we do is code only the first mentioned race. So if there is more than one race put on the birth certificate or death certificate, we take whatever is written or reported first on the work sheet and use that as the race that is to be reported to NCHS by the states. We do not get any more than one race reported to us on the birth or death certificates.
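
A sketch of that first-mentioned-race coding step, with hypothetical write-in strings; the parsing rule here is deliberately simplified and is not the actual NCHS coding procedure.

    def code_first_race(write_in: str) -> str:
        """Keep only the first race written on the certificate work sheet."""
        races = [r.strip() for r in write_in.replace(" and ", ",").split(",")]
        return races[0] if races and races[0] else "Unknown"

    print(code_first_race("Black, White"))        # -> "Black"
    print(code_first_race("White and Filipino"))  # -> "White"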

So you can see that we are maybe a little bit away from what the standards would ask for in a certain sense, although one could theoretically consider the possibility of changing the questions to specify one or more races, and then give examples. In other words, specifically ask for one or more races. But if you don't give a list to check from, it won't be quite the same as what the standard is asking for. The standard does say multiple responses are available.

So that is one of the questions that relates to how we ask race on a birth and a death certificate. I have listed them on the bottom of page two. The open-ended question versus the list problem. Will vital statistics be able to continue using an open-ended format and still be in compliance with the standard? And if the answer is yes, is it advisable to do that? Do we really want to just stick with an open-ended question? The problem is that problem of comparability with other data systems, other data collections.

A second question: if you do have a list, what are the races that are listed in it, and who decides what they are? Can a state change the list? Can it have different races listed than what another state has, or from what the standard might be in some way, shape or form?

Then there is a practical question of space on the death certificate. It is already pretty crammed in, and putting these open-ended questions or putting a list of races on there might be a little bit tricky.

We are of course moving toward an electronic death certificate, and the flexibility that that provides probably would allow us to accept more than one race to be reported. So space may not be that big a problem. However, since almost all states have an electronic birth certificate, and many hospitals are using software that provides that, all the software has to be modified. Not only that, but the work sheets that are given to the mothers to fill out have to be modified. Do we give the mothers a checklist? It would complicate the work sheet.

Right now, I think work sheets tend to have a line with race written on it, nothing else. And of course, it probably varies by hospital, and in some states the work sheet may even be specified by the state, by the state law. But I'm certain that variation does occur.

So there are a lot of practical questions involved in how you will collect these data at the state and local level.

A second point to be made here is that the Office of Management and Budget in issuing these revised standards made it clear that they were outlawing the term multi-racial. There are currently three states that require all state administrative forms to include multi-racial as a reportable race.

Now, it remains to be seen whether those states will change their laws so that they will be in conformance with the OMB standards. Certainly the OMB standards don't overrule a state law, but it would be interesting to see if the need for the term multi-racial dies away, now that the states have this new standard. I personally hope so.

So we have the problems that are listed on page three of the handout that revolve around the collection of data on more than one race. The death certificate can be a bit of a problem as well, since the funeral directors may or may not want to probe and find out if there is more than one race for the decedent. It is very likely that funeral directors fill out the race item by observation and don't ask. It may be relatively obvious in most cases to them. As long as the family doesn't object, they don't have to ask the question.

However, if they don't ask the question and if they don't provide a list of races to check, it is kind of hard to see how a death certificate is going to be very likely to pull in multiple race identification.

This is one of the problems, the problem of using administrative records to fill out data like this. Certainly, the death certificate is an example of a situation where the respondent cannot report for him or herself. So in a sense, the death certificate can be seen as an exception to the standard. As it is written, it literally says, when self reporting is the case. Well, obviously that is not the case in the death certificate, maybe one of the few.

I don't know, I think you might think of a proxy respondent in terms of an informant, a family member, somebody who certainly could report more than one race.

So there is a gigantic problem here in vital records, trying to collect multiple race data. On the one hand, getting all the hospitals in this country to collect on their work sheets, or somehow provide the mothers the opportunity to select more than one race, and secondly, getting all the funeral directors to inquire about all the races that a decedent is.

It is fortunate that there is just now starting a new panel to evaluate and revise the birth and death certificates. It just had its first meeting last week, and the meetings will go on for the next year or year and a quarter, and they will be looking at all the different things that need to be changed in the standard certificates. Certainly the race item is one that is going to, I'm sure, cause a lot of discussion.

Well, that is one problem, the problem of vital records trying to collect data using these standards. It is not as I said a federal system, it is a state system, so we have to work with the states to accomplish this.

Now, the problem doesn't end there. As I said a little bit ago, the Census is expected to use the new standards starting in the 2000 census. So on page five of the handout, I have incorporated at the bottom of the page a copy of what the dress rehearsal short form is going to look like on race and ethnicity.

You will see that they are following the instructions of the standard, which are to ask the Hispanic ancestry question or the ethnic origin question first, and then comes the race question. It says mark one or more races, and they provide a checklist as well as some write-in space for particular tribes or other Asian-Pacific Islander groupings. By the way, I trust everyone knows that there are now five official race groups instead of four. The Native Hawaiians and other Pacific Islanders were put into their own group.

Now, that questionnaire will be used for the year 2000. That is going to create I think a basic incompatibility of vital statistics data with census data. Typically in the year 2000 or at the decennial census, we will use the decennial census data, which are more detailed, to calculate things like life tables and state-specific death rates.

What we will do is, we will combine three years of mortality data around the census and calculate death rates and life tables for that decennial census year. We also do that by race.

The problem here is that the census is already going to collect multiple race data probably relatively efficiently. I don't know, but I'm guessing that maybe four to eight percent of the population might mark more than one race. But by 2000, it is very unlikely the vital records are going to do the same thing. I shouldn't say no, but it does seem like a rather formidable task, at least to come anywhere near the kind of reporting level of multiple races that you are going to see in the census.

The census may only pick up two percent, but it could easily be four, five or six percent. So we are going to have a little problem in getting the numerator and denominator to have the same base by race.
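
The size of that base problem can be sketched with invented numbers: deaths stay coded to single races while the census denominator loses part of the group to multiple-race reporting, so the single-race rate is pushed up artificially.

    deaths_coded_single = 500        # vital records: first race only
    census_pop = 100_000             # people who would have marked this race
    multi_race_share = 0.05          # share now reporting more than one race

    rate_same_base = 100_000 * deaths_coded_single / census_pop
    rate_mismatched = 100_000 * deaths_coded_single / (census_pop * (1 - multi_race_share))

    print(round(rate_same_base, 1), round(rate_mismatched, 1))   # 500.0 vs 526.3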

Now, the Census uses vital statistics to make inter-censal estimates. So they take the year 2000 census and they will use births and deaths and migration to make the population estimates for 2001 and 2002 and so forth up to 2009.

There is a little problem here. If they get these kinds of data on race, with say four or five or six percent of the population with more than one race reported, but vital records are not comparable, then the updating of the census to the year 2001 and so on is not going to be very good, because the input for updating the populations of more than one race is going to be minimal.

In other words, most of the vital records, since they report only one race, are going to update the single race groups, but there will be multiple race groups that simply can't be updated, because there will be very little data to do it. Migration data, military data, other things may also not have multiple race data. So by the year 2009, there is a good question as to what kind of race data will be available on an inter-censal basis. So we've got this interaction that goes on with the Census.
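
The inter-censal mechanics Dr. Weed is describing are essentially the cohort-component update, population plus births minus deaths plus net migration, carried out per race group. A sketch with invented figures shows why the multiple-race groups stall:

    pop_2000 = {"White": 200_000, "Black": 30_000, "Black;White": 1_500}

    # Vital records arrive coded to a single race, so the combination group
    # contributes essentially nothing to its own update.
    births        = {"White": 2_600, "Black": 500, "Black;White": 0}
    deaths        = {"White": 1_900, "Black": 280, "Black;White": 0}
    net_migration = {"White":   300, "Black":  50, "Black;White": 0}

    pop_2001 = {g: pop_2000[g] + births[g] - deaths[g] + net_migration[g]
                for g in pop_2000}
    print(pop_2001)   # the "Black;White" estimate never moves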

Now, the census data are used to calculate rates, and then the rates are used to make projections. So the census' projections also will depend on all of this, and that is going to get complicated.

I think I'm probably running out of time.

DR. VAN AMBURG: Are there any questions from the committee?

DR. IEZZONI: This is a big problem. It is interesting, hearing Harry's presentation at the same time, though, because I think that there is going to be a real back and forth about how you're going to come up with your year 2000 standard around race and ethnicity.

One of the things that we are going to need to hear about, perhaps not now, because we are in fact out of time, is how you are going to work that out together. I think that you can't avoid thinking through the race and ethnicity issues as you come up with your year 2000 standard for age adjustment.

DR. MOR: That is exactly the point I was going to make. I was hoping, Harry, that you were going to say, oh, and we'll have these weights and coefficients up on the Web, and anybody can download them. But good luck.

DR. WEED: It's going to be a lot of work. It goes on. For example, if the inter-censal estimates provided by the Census cannot be updated easily, this is going to affect the sampling strata for surveys like the CPS and the health interview survey and such. So it is one gigantic interactive circle.

DR. VAN AMBURG: I have one question, probably for both of you. If the standards for tabulating race and ethnicity are followed, the OMB standards, if you do follow those closely, and do not allocate those that identify multiple race to a single race, what is going to be the effect on things like infant mortality and some of the mortality figures particularly, if you do get the data from the states?

DR. WEED: Let me just make a little comment on the effect of -- well, take life expectancy: if you can't allocate multiple race people back into a single race category, and you have to keep them separate, what is probably going to happen is that the minority races are going to be smaller, because people who are Indian and white, black and white, or Asian and white are going to report both races and they will not be in a single race group. It is going to diminish the minority single race groups proportionately more than white.

When you diminish a population, you are going to raise the death rates and lower life expectancy. So life expectancy, I think I can predict, is going to go down for all the minority groups.

On infant mortality, if we use birth and death records and they are both collected the same way and both are not getting multiple races, I don't think it should have a big impact on the infant mortality rate, but the infant death rate would be affected.
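
That distinction can be made concrete with invented numbers: the infant mortality rate takes both its numerator and denominator from vital records coded the same way, while the infant death rate takes its denominator from the census, where multiple-race reporting shrinks the single-race group.

    infant_deaths = 70          # vital records, first race coded
    live_births = 10_000        # vital records, first race coded
    infant_pop_census = 9_500   # census-based count after some infants are
                                # reported with more than one race

    imr = 1_000 * infant_deaths / live_births                        # per 1,000 live births
    infant_death_rate = 1_000 * infant_deaths / infant_pop_census    # per 1,000 population

    print(round(imr, 1), round(infant_death_rate, 1))   # 7.0 vs 7.4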

DR. MOR: But I think you implied though, and I think it will occur, that you are more likely to get multiple races on the birth system than on the mortality system.

DR. WEED: Yes.

DR. MOR: So it is going to change the denominator.

DR. WEED: To the extent that that happens, you're right.

DR. AMARO: And maybe you will get more of what has happened in some cases with Hispanics, where you can be born of one race or ethnic group and die of a different one, because the person who is filling out the latter doesn't have as much information, so that problem is going to increase.

DR. MOR: That's right.

DR. AMARO: My concern, especially with the death data, is based on what happened in New York, where the records were so incomplete for Hispanics that Hispanics had to be left out of those estimates, if I remember right. Am I recalling that right, Marjorie? So I'm wondering if this is going to be a bigger problem now, where they don't fill it out because they don't know, and now you have more missing information, especially for the death.

DR. WEED: Yes, there are quite a large number of problems of that type that float around in here. We are only just beginning to sense what they are.

Really, I think that once a lot of groups find out what this is going to mean to rates and to population estimates, and it is going to move through the whole population, there is probably going to be quite a bit of excitement about it, to say the least.

We are, by the way, working on what we call bridging studies. We are trying to figure out ways that we can determine what caused the shifts, and what would have happened if we hadn't done this, so that we can have some kind of bridge for at least the next five to eight years. Then the system will have to move on on its own.
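
As a purely hypothetical illustration of what a bridging step might look like, the sketch below splits each multiple-race count evenly among its component races; this is only one simple possibility, not the method NCHS adopted.

    def bridge_counts(single, multiple):
        """Fold multiple-race counts back into single-race categories."""
        bridged = dict(single)
        for combo, n in multiple.items():
            for race in combo:
                bridged[race] = bridged.get(race, 0) + n / len(combo)
        return bridged

    single = {"White": 900, "Black": 80}
    multiple = {("Black", "White"): 20}
    print(bridge_counts(single, multiple))   # {'White': 910.0, 'Black': 90.0}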

DR. VAN AMBURG: Thank you very much. I would hope that you would keep the subcommittee informed of your progress in dealing with these things.

DR. IEZZONI: Yes, I think that would be excellent. I think we would like to have both of you back, Drs. Rosenberg and Weed, maybe in a few months once you have had a chance to think a little more about this.

DR. WEED: I am participating in a work group that was established by the Office of Management and Budget to look at various tabulation issues and bridging studies and what to do about them, how to work them out. So there will be things to report.

DR. AMARO: Who is the work group that is working on specific issues of race and ethnicity and tabulation?

DR. WEED: It is in the October 30 Federal Register notice. There is a statement about further research ongoing towards the end of it. That is basically what we are involved in. It is described there.

DR. VAN AMBURG: But I think she wanted to know who was on the group.

DR. WEED: Oh, who is on it?

DR. AMARO: Yes, or what the name is, so we can identify it and communicate with you.

DR. WEED: What is the name of it? I guess it might be called the tabulation work group.

DR. IEZZONI: Maybe we can try to find out. Olivia, maybe you can find out about that.

DR. CARTER-POKRAS: There are three work groups. Kim is on one, I'm on another one, Tony Di Angelo from the Indian Health Service is on another one, and the Office of Civil Rights is also getting together to work on some of these issues. So they have a policy group at the Indian Health Service that Tony di Angelo is on. I'm on the one that is developing the questions and the instructions, and you are also --

DR. WEED: I'm on two of them, actually. There is one on technical issues and one on policy issues.

DR. CARTER-POKRAS: Right, so you're on the same one with Tony di Angelo.

DR. IEZZONI: We'll get that information.

DR. CARTER-POKRAS: So we may want to bring everybody together, because everybody is working at this point separately.

DR. IEZZONI: We had talked about maybe at the June full committee meeting having a breakout session about this. I think that we should follow up on that. So we'll be back.

Great. I had hoped we would have 15 minutes for a break at this point, but I am concerned, because some of the subcommittee members do have to leave. So why don't we just stand up and stretch, long enough for people to come to the table who are on the panel, and for Drs. Rosenberg and Weed to move on to the rest of their day. We'll start at 10:15.

(Brief recess.)

DR. IEZZONI: Folks, why don't we reconvene for the session on Medicaid managed care? We are a slight bit strapped for time, and so I think what we will do is not take time to go around the room and introduce everybody. I think people have a sense of who the subcommittee members are. As each of the panelists gives their presentation, perhaps you could introduce yourselves and tell us where you are from.

Let me just say that this session right now is being broadcast on the Internet. So it is going to be important for people not just for the transcribing process, but also because this is being broadcast on the Internet, to make sure that you speak into your microphones.

So I would like to welcome people to the second day of our data gathering on Medicaid managed care. We are very excited to have the panel that we have today presenting to us. Yesterday was very interesting, and it looks like today is going to pick up where yesterday left off.

Rachel Block from HCFA, I wonder if we could lead with you?

DR. BLOCK: Thank you. I have distributed a copy of my written remarks, and I'm not going to cover all of them, but just so that you would have something to take away.

There are a couple of things that I would like to say to preface what are the major considerations and strategies that HCFA is currently either considering or deploying. Also, I have my own laundry list of things that I particularly would like the subcommittee to consider and possibly develop either some recommendations or some ideas for future work that others could continue.

The first thing that is apparent when one looks at Medicaid generally and Medicaid managed care programs specifically is reflected in the first two major bullets that I have included in my written remarks.

The first is that Medicaid is and I think will always be a highly dynamic program. If you take a snapshot at one point in time and you go back to look even just a couple of years later, what you will find are likely to be some very different things, both in terms of the national profile and also in terms of what states specifically are doing. I have tried to outline there what are a variety of dimensions that should be considered in evaluating those trends.

The second thing is, it has become trite to say this, but I think it bears emphasis when we are looking at things like standards and guidelines in the areas that you're interested in. That is that if you have seen one state Medicaid program, you've seen one state Medicaid program. The general framework within which they operate is broadly defined in federal law, but one of the remarkable, albeit sometimes troublesome aspects of Medicaid is that states have and continue to exercise significant flexibility to tailor the program according to their needs.

So what we see I think in managed care is a convergence around some broad strategies in terms of managed care purchasing, but a wide variation in terms of the details of how, where and when managed care is implemented.

I have provided some data, given that this is a data oriented crowd. Unfortunately, we do not have the final 1997 report completed, but I am expecting that it will be up on our Web site within about a week. But I have given you at least the unofficial estimate that we will come out with, which is that 15 million or approximately 50 percent of Medicaid beneficiaries were enrolled in managed care as of our point in time report, which was from the middle of 1997.

We see significant proliferation of HMO risk contracting. There are about 35 states that have HMO risk contracts for some or all of their program. But again, that is significant because it means that there are at least 15 that have none, and who are utilizing other kinds of managed care models.

We continue to see a general trend toward mandatory enrollment and clearly, the new provisions of the Balanced Budget Act have further reinforced capacities to go in that direction. But it still varies significantly by aid categories and the types of managed care models that are available by different aid categories.

Finally, one of the things that I think the subcommittee needs to consider carefully as it proceeds with its thinking and recommendations in this area is that there are very few, really only one or two states that have what I would call entirely comprehensive managed care models. There continue to be populations, and most commonly, services, categories of services that are outside of the managed care model, and therefore, any data collection, monitoring or other kinds of strategies that we want to think about in terms of evaluating managed care need to be very careful to figure out what really is in managed care and what isn't. Obviously, that raises significant issues from a population point of view in terms of the coordination of care and what influence that coordination of care might have in terms of good outcomes.

Given the high degree of variation in what we know are the ground rules, if you will, for standardizing information, I think that we need to be thinking about a model that on the one hand encourages standardization where that is appropriate or necessary for the kinds of measures that we are most critically interested in, but that also takes into account a wide degree of variation in terms of purchasing models, benefits packages and so forth.

In addition, I think as most people would observe, there are real opportunities created by the new technologies that continue to proliferate, and hopefully reduce in their cost as the technologies become more widespread in this very highly competitive marketplace. But needless to say, all of those technologies raise new questions or reinforce old questions and concerns that exist in the area of confidentiality and security.

We at HCFA are in the process of trying to develop some guidelines to follow in terms of how personally identifiable information is transmitted, for example, through Internet transmission or intranet transmission. Right now, we have no standards other than to tell people that it is our view that the Federal Privacy Act at least requires a very high degree of security for any such transmissions. I'm not sure that anybody has the standards figured out to deal with that.

Let me talk briefly about a couple of things that we currently do and what we are planning on doing, what I have labelled as the micro perspective here. You may know that the Balanced Budget Act among other things mandated states' participation in what we call the Medicaid statistical information system.

This is an electronic person level database that has been used as a substitute for the old HCFA 2082 reporting format and requirements. All states will be mandated to participate in this system as of January 1, 1999. I think that we are expecting that virtually all will be successful, if not right on January 1, then within one or two quarters thereafter.

This is very significant from our point of view, because it will really enhance the basic Medicaid data that we currently collect, and also get all states in on the same system for the first time.

Another Balanced Budget Act provision requires the states to also provide us encounter data consistent with that format. So we are currently working on what I have labelled here parallel specifications for encounter data submissions through the MSIS system.

On a more macro level or aggregate level, we currently collect and report on two basic areas of Medicaid managed care. The first is the encounter data, which I mentioned. The second is something that we call the national summary. This is a basic categorization of the characteristics of different states' Medicaid managed care programs. Thanks to some of that new technology that I mentioned, we now put that up on our website, and it is readily accessible. Unfortunately, we only collect this data at this point on an annual basis, and I am hopeful that in the next year or two, as we get the states into the MSIS system, we can start to have more regular updating of at least some of the basic enrollment data that we currently collect and report on annually.

We are also working on something that we are calling for now a key performance indicators report. This has come out of technical assistance work that we have been doing with states to try to facilitate their use of encounter data as part of their program design and monitoring. This will be selected indicators of program performance that cover four areas: enrollment, utilization quality, financing characteristics and satisfaction.

This is almost in final draft form at this point. We are ready to start testing it through our regional offices, and I will be able to give the subcommittee an update on that probably in a couple of months. Hopefully Margaret will talk briefly about some ideas that we are kicking around with Medicaid directors to establish a national HEDIS repository.

Finally, we are confronted with the wonderful opportunity to implement the new children's health insurance program. Clearly, many states will be building on or integrating this program with their existing Medicaid program. There are additional reporting requirement issues associated with that, because the law specifies certain areas that both the states and the Secretary will have to evaluate, in terms of the impact of these programs. So we are currently working with the states and others to come up with the reporting requirements for those programs.

I would like to identify hopefully a couple of key issues that I know are of interest to the subcommittee. One of them flows directly from the previous presentation, and that is on issues of race and ethnicity.

Clearly, there is a great interest in terms of health outcomes and quality assessment to identify any of the health status or outcome variables and how they might differ in terms of populations of different race and ethnic categories.

You have heard about the issues of classification within that basic framework. My big concern is that I have no reason to believe that the data that states currently collect through their Medicaid programs have any value whatsoever. So I think that any recommendations that you and others could work on in terms of what would be some practical strategies to validate some basic data collection at the front end of the eligibility process and/or through the health care delivery system I think would be very helpful. So this is largely a validation exercise as opposed to the question of how to classify different categories.

Here, my only concern would be that the simpler it is, the more likely it is that we will actually get something out of that process.

I have questions, and I'm not an expert in this area, but I suspect many on the subcommittee are, about how much we really know about the validation of the diagnostic information that is generally included in claims based or encounter based types of systems. And since that diagnostic information obviously has value and importance in terms of assessing managed care, then I think that that is an area that probably needs some examination as well.

Another area that -- and I just was in Philadelphia on Friday, visiting with a health plan and the city health department, where they are collaborating on what I think is a very interesting exercise, that basically amounts to, how can they really at a community level and a health plan level assess birth outcomes, obviously a very complicated area.

But the solution that they are going to test is what I would call an enhanced encounter data form, in which the basic encounter-type information that you would normally expect would be collected. All the health plans and the city health department clinics and so forth have basically agreed to collect this information, and it will be part of their reimbursement process with their providers. But in addition, they will collect standardized information on some key risk factors, because clearly, the encounter information alone does not tell you enough about what happened in terms of the birth outcome.

So what I am thinking of now is enhanced encounter specifications for global or population-based conditions. This is just one example; I would hope there are others out there who are innovating in the same area. I am not aware of any other projects like that, but I think that would be very important.
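
[Illustrative sketch: the following is an editorial example, not the Philadelphia project's actual form. It assumes hypothetical Python field names to show how an "enhanced" encounter record might pair the basic encounter fields with a small set of standardized risk-factor fields, as described in the testimony; the risk-factor supplement is the "enhancement" that the encounter stream alone does not carry.]

    # Hedged sketch: hypothetical field names for an enhanced prenatal-care
    # encounter record that adds standardized risk factors to the usual
    # encounter-type information.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class EnhancedEncounter:
        # Basic encounter-type information normally collected
        member_id: str
        provider_id: str
        plan_id: str
        service_date: date
        procedure_code: str       # e.g., a CPT/HCPCS code
        diagnosis_code: str       # e.g., an ICD code
        # Standardized risk-factor supplement (hypothetical examples)
        gestational_age_weeks: Optional[int] = None
        tobacco_use: Optional[bool] = None
        prior_preterm_birth: Optional[bool] = None

    # One record as a participating plan or clinic might submit it along
    # with its reimbursement paperwork.
    example = EnhancedEncounter(
        member_id="M001", provider_id="P123", plan_id="PLAN-A",
        service_date=date(1998, 1, 5), procedure_code="59400",
        diagnosis_code="V22.1", gestational_age_weeks=12,
        tobacco_use=False, prior_preterm_birth=False,
    )
    print(example)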

Bottom line, I think, is a couple of things. One, HCFA is keenly interested in doing whatever we can -- through the regulatory tools that we have, but maybe more importantly through the technical assistance and other kinds of leadership roles we can play with states -- to help improve their capabilities for assessing managed care, given the key role that data and information systems obviously play in that. But it is also fair to say that the quality of what we expect from states' data is almost entirely dependent on the quality of data reported at the health plan or provider level. So we have to think of that as a very interdependent relationship.

In addition, I think that since HEDIS has now become such a widespread and accepted tool, an interesting question for the subcommittee to consider is what is it besides HEDIS that would be useful to evaluate managed care.

One of the gaps in HEDIS structurally is that it doesn't necessarily provide the kind of timely, early warning data that people might want to have to assess those instances in which an individual managed care plan or a provider within a managed care system is performing at a level that is endangering the individual or the member's health in some significant way.

So if HEDIS doesn't provide that kind of early warning system, what are the areas that we and the states should concentrate on in looking at that? Certainly it is not that states aren't doing that now, but I don't think that there is any particular standardized approach to it.

In addition, what are the future areas -- clinical data, outcome data or what have you -- that should supplement HEDIS as part of a managed care assessment process? It seems to me one of the great values of HEDIS is the consensus process that has been used to come up with the new specifications. So we would need to consider how to replicate that process in some way.

Finally, there is the increasing attention that NCQA will be placing on the information system capabilities of the health plans to report HEDIS data. If there are additional data that we think are necessary for health plans to report or collect in order to round out, if you will, our assessment of managed care and its impact on the population, then that also needs to be factored into the assessment of the information system capabilities of the plans.

DR. IEZZONI: Thank you. As we did yesterday, I think I would like to have only questions of clarification at this point. Then when the three persons on this particular panel are done, we will have a larger discussion. So are there any specific areas of confusion, other than the obvious?

Our next speaker is Mary Jo O'Brien. Could you just introduce yourself briefly?

DR. O'BRIEN: I'm Mary Jo O'Brien. I'm a vice president with the Lewin Group. Prior to coming to the Lewin Group, I was the commissioner of health in the state of Minnesota, which I think is mostly why I'm here.

Today I want to build on Rachel's comments, but also alert you to a project that we have just completed with the Department of Health and Human Services. Some members of your group, Dale Hitchcock and Jim Scanlon, are intimately involved as the project officers on this project.

I have handouts and also copies of the report; I won't attempt to go through all the overheads.

As we talk about working with Medicaid agencies, working with other purchasers of health care services and working with health plans to improve the health of the population, we need to begin to look at the capacity of public health agencies at the federal, state and local level to perform changing and increasing responsibilities.

The attempt here was to look at what was being collected about our public health infrastructure at the state and local level, which is specifically what we focused on, look at the data systems that are currently available within federal and state and local agencies, and suggest a strategy for obtaining information about the public health infrastructure as we move forward.

As we went out and we worked on this project, it became clear to us that we are in a rapidly changing environment, not only the marketplace where health care services are purchased and provided, but also within the public health infrastructure as people are trying to decide on and work with policy makers on what is the appropriate role for public health. I think this is a very critical time for us to be paying attention to that public health infrastructure.

What I would like to do is just talk a little bit about the project, and move very quickly -- again, you have copies of the slides, and I won't attempt to read them to you. But I want to focus on the purpose of the project, which was not to make an evaluation of the public health infrastructure, but to come up with recommendations on a strategy for obtaining comprehensive information about the public health infrastructure.

The public health infrastructure really is defined at the federal, state and local levels, along with the relationships among all three of those levels of public health. It is also more broadly defined, especially as we look at ways of collecting information, to include the community at large, which is really part and parcel of the public health infrastructure. So I think it is important to think about that.

What we sought to do in this project is, we wanted to identify what data or information should be collected and how it should be collected. So we are really trying to get a handle on that.

I'd like to focus a little bit on slide three, which gives the definition of the public health infrastructure as we defined it for this project. We looked at it as the systems, competencies, relationships and resources that enable performance of 10 essential services, which we defined. I don't think that is necessarily the only appropriate definition or framework, but it could be used as one. What is most important is the attempt to begin to look at the systems, the competencies, the relationships and the resources that enable performance of core public health functions.

I'm not going to go over the 10 essential services, and just get to slide seven, why is public health information needed. I think this committee certainly has been talking about that as have others. Certainly we need to develop performance measures, we need to demonstrate the cost effectiveness of public health infrastructures and be an advocate for appropriate resources, and also to look at optimal approaches for delivering public health services, especially with what we see as increasing demands.

Slide eight looks at the methodology that we used. Significant for the discussion of this committee is the fact that we conducted state and local interviews; we identified 10 states -- you can see those states identified there -- but also looked at it from both a state and local perspective and from a relational perspective. As Rachel said, each state is very unique, and its relationship with its local units of government is also very unique. You need to spend as much time looking at the relationship as you do looking at each individual state and/or each individual community, and explore their experiences, both the uniqueness of their marketplace and their experiences with their local communities.

I'm going to skip you forward to slide 11. I think these guiding principles are important, of what we were trying to do with this data strategy and its relevance for this committee's work, especially as you are focusing on Medicaid managed care, and especially as you are focusing on Medicaid managed care as it begins to deal with more difficult to serve populations, and as we begin to bring more people into public systems where we are going to be doing public purchasing, and we are trying to begin to integrate efforts to reach more difficult to serve populations.

We believe that any kind of strategy where you seek to characterize information needs to be done over a period of time. This isn't an attempt to go and do a quick snapshot and then forget about it for another 10 years. This really needs to be an attempt to look at something that will be available over time. As I said earlier, it needs to begin to focus on the federal, state and local levels in that relationship.

Really, the ultimate goal is to explore the impact of the public health infrastructure on health outcomes and costs, which is what we are all interested in, not only for our public populations, but for the populations as a whole.

We attempt to minimize reporting burdens, looking at slide 12, build upon existing methods where available, and really step back and look at a macro level. We've got lots of anecdotal stories about what may be going on in a local community or a local state, but we really need to step back and look at the macro level, or have some way of doing that, so we can again see some of the things that are working, best practices, but some of the things that aren't working. Sometimes we are afraid to talk about the things that aren't working, and we as states and localities don't exactly go around and advertise when something doesn't work for us, so I think we need to pay attention to that.

The major components of the data strategy, looking to slide 13, really build on three things that complement each other. They could be implemented separately, but they are designed to interact and provide direction to one another.

This was built on a very long project, and you all have copies of the major reports, which I think can provide you with more information. But really, it means looking at national surveys, which would provide basic macro-level vital statistics on the infrastructure and begin to alert you to some major macro-level trends. Case studies would look at some sentinel communities, the successes and the failures of what is going on, and address the complex and qualitative nature of most infrastructure needs. And then, very importantly, it means beginning to establish some kind of research agenda, so that as we establish best practices and cost effectiveness, we begin to see what components work and what is not working.

Moving to slide 14, the national survey: this lays out what we see as important in some kind of a national survey. We believe that the national survey informs the sampling frame for the case study efforts, and also informs the research agenda; they can be complementary and build on information from one to the other.

I won't go into the content of a national survey, but most importantly, as we look at the Healthy People 2010 goals, we need to look upon this kind of effort as supporting and monitoring all infrastructure-related objectives, especially around Healthy People 2010.

The intent of the local survey is to mirror the state survey. So when we talk about the survey, we really need to talk about doing it at the state level and the local level at the same time, and not doing it differently.

One of the problems we have with what is going on right now is that the states may be doing a survey through their national organization, but it doesn't tie into what is going on at the local level, so you don't really get a good state picture of what that relationship is and what those supports are. I think that we really miss a lot by doing that.

I'm going to move you down to slide 19 and look at the case study rationale. We really think that this would be a core, in-depth examination of complex issues and relationships. When we were out visiting with the 10 states and with the localities, what we heard again and again is, we are unique. Every state believes it is unique, and in some respects it is; every community believes it is unique. But we want to know about best practices. We want to know what is working and what isn't. We want to be able to be proactive and not reactive as we see things coming forward that we need to pay attention to. We think that the case studies may provide an opportunity for us to do that.

They certainly provide a lot of flexibility. We are looking at something that would be highly qualitative data. We think it is important to provide a team of knowledgeable, objective interviewers. We think that that is a problem a lot of times with case studies, and that we really need to minimize reporting burdens.

Case studies, as slide 20 shows, are really intended to provide detailed, comprehensive, complex information on a wide variety of infrastructure topics: resources, relationships, systems -- that is where data and information needs come in -- and capacity and competencies.

I am coming from a state which has a pretty good information system. When I was commissioner of health in Minnesota, I spent a lot of time on setting up the Minnesota Health Data Institute and working on data. I can tell you, it is one thing to collect information; you've got to have an infrastructure that can begin to analyze the information and develop policies based on the information. We are doing a lot of collecting of information. We are not building an infrastructure, both at the public health level and at the state agencies, of individuals who can work together and really begin to develop the kinds of information analysis that will inform policy decisions.

Having worked -- and I know Rachel did too -- in a governor's office, it is very frustrating, when you are trying to get something in two days, and there is nobody to call, and there is nobody that even has the capabilities of beginning to give you the analysis. This is coming from a state that everybody looks to, a state like Minnesota. I see Elizabeth over there in Washington, and she is smiling; she knows exactly from where I talk. We have got to be serious about investing in not only the collection of the data itself, but in the folks who can really take that data and turn it into information for purchasers or health plans for the population at large.

End of sermon, sorry. The case studies structure, we would suggest 15 to 20 sentinel communities, and then to continue to not only follow those communities, maybe select different ones. I think there are a lot of ways of doing that.

But the important thing here is that when you look at the interviews that you do in these case studies, that you not just talk to the public health agencies themselves. We spend a lot of time, public health folks, talking to each other. You should talk to the community at large, including policy makers, executive decision makers, representatives of community leaders, minority communities, social service providers, business leaders, certainly representatives of private health care organizations and the media, and that we begin to broaden our definition of what that public health community is, and where we can have an impact.

I'm going to close with a little discussion on the research agenda itself. I think the research agenda establishes a crucial link between the infrastructure and outcomes. We need to know what we are investing in and what are some of the outcomes that we expect.

We think that both the case studies and the national survey can really inform and help move us toward some kind of a national research agenda around population-based health issues and around some of the core health functions and strategies of public health.

Recommendations for implementation. I think the important thing to take away from these recommendations is that there is going to have to be a lead agency assigned the responsibility for implementation of any data strategy. Right now, we've got lots of folks doing lots of things, but we don't have any coordinated approach, and we would suggest that there really needs to be some kind of a lead agency that is actively involved that is bringing folks together.

We have suggested funding for implementation. We really believe that there is a necessity to think about very quickly moving forward on this strategy. Maybe you can't do everything, but we think that it is important to begin to get a picture of what is going on in communities and states around this country as public health is in the midst of massive changes.

Thank you.

DR. IEZZONI: Thank you. Are there any quick questions, confusions? Why don't we move to Margaret Schmid? Just introduce yourself briefly, Margaret, if you could.

DR. SCHMID: Thanks very much. I'm Margaret Schmid. I'm a manager in NCQA's State-Federal Projects Unit, which Jessica Briefer-French heads up. She regrets that she couldn't be here today, but I'm going to present her testimony and be happy to participate in questions and answers on Jessica's behalf.

The National Committee for Quality Assurance is an independent not-for-profit organization that provides information on managed care quality. NCQA collects a standardized set of measures on managed care organizations relevant to all populations served.

NCQA developed and maintains a comprehensive set of measures of managed care performance called HEDIS. HEDIS measures are objective, externally validated population-based summary measures with very specific and rigorous specifications, such that managed care plans collect and report the same set of measures in the same way, allowing for an apples to apples comparison of plan performance.

NCQA maintains the HEDIS measurement set in consultation with managed care plans, providers, purchasers, quality measurement experts and consumers through the Committee on Performance Measurement. This committee meets to evaluate measures for inclusion in the measurement set and to identify areas in need of additional measure development or refinement.

The measurement set includes a number of measures originally developed for commercial purchasers, as well as measures developed specifically for use with the Medicaid population. HEDIS is broadly accepted in the managed care environment as the standard for measuring and reporting on managed care plan performance.

HEDIS measures are widely used as indicators of plan performance. Over 300 managed care plans serving commercial populations voluntarily submit HEDIS reports to Quality Compass 1997, NCQA's national database of managed care plan performance information on the commercial population.

In addition, the Health Care Financing Administration requires all Medicaid managed care plans to report HEDIS data annually, starting in 1997. As of the summer of 1996, 25 states were currently requiring or planning to require managed care plans serving Medicaid beneficiaries to report HEDIS plan performance measures.

There are 51 HEDIS measures in the current reporting set applicable to the Medicaid population, organized into six domains: effectiveness of care, access and availability of care, use of services, health plan stability, cost of care and health plan descriptive information.

The consumer assessment of health plans study, or CAHPS, is a consumer survey currently in the HEDIS testing set for Medicaid. It is anticipated that this survey will be incorporated into future versions of HEDIS for the Medicaid population.

Some examples of current measures include: childhood immunization status, prenatal care in the first trimester, cervical cancer screening, well-child visits in the first 15 months of life, adults' access to care, inpatient care (actually numerous measures of that), provider availability, availability of language interpretation services, indicators of financial stability, board certification rates and provider turnover.
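
[Illustrative sketch: an editorial example, not an actual HEDIS specification. With hypothetical member records and hypothetical denominator and numerator rules, it shows how a population-based rate measure is computed the same way by every plan once the specification is fixed, which is what makes the apples-to-apples comparison possible.]

    # Hedged sketch: a plan-level rate measure computed from member-level
    # records under a fixed, shared specification. The enrollment and visit
    # rules below are hypothetical, not the actual HEDIS rules for any measure.

    def rate_measure(members, in_denominator, in_numerator):
        """Return (eligible count, numerator count, rate) for one plan."""
        eligible = [m for m in members if in_denominator(m)]
        hits = [m for m in eligible if in_numerator(m)]
        rate = len(hits) / len(eligible) if eligible else None
        return len(eligible), len(hits), rate

    # Hypothetical member records for one plan
    members = [
        {"id": "A", "age_months": 16, "enrolled_months": 14, "well_child_visits": 6},
        {"id": "B", "age_months": 20, "enrolled_months": 6,  "well_child_visits": 2},
        {"id": "C", "age_months": 15, "enrolled_months": 15, "well_child_visits": 3},
    ]

    # Hypothetical specification: children 15-27 months old with at least 12
    # months of continuous enrollment form the denominator; six or more
    # well-child visits puts a child in the numerator.
    in_denom = lambda m: 15 <= m["age_months"] <= 27 and m["enrolled_months"] >= 12
    in_numer = lambda m: m["well_child_visits"] >= 6

    print(rate_measure(members, in_denom, in_numer))   # (2, 1, 0.5)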

As indicated, NCQA currently collects commercial data on a voluntary basis and makes it available to the public through Quality Compass, a data set containing individual plan results as well as regional and national norms. NCQA collects Medicaid HEDIS data on behalf of HCFA.

In 1997, NCQA received approximately 30 Medicaid submissions, some as required by states and others as voluntary submissions by managed care plans. NCQA is currently working with the American Public Welfare Association to obtain a grant from the Commonwealth Fund to develop a more comprehensive national database of HEDIS statistics on the Medicaid population.

It is anticipated that this database will be in place for the 1997 reporting year, and that data including national norms would be available to participating states by the end of 1998. In addition, NCQA has been working with HCFA, the APWA and selected Medicaid and health plan representatives to develop a method for applying HEDIS measures to Medicaid beneficiaries receiving care in fee for service and primary care case management programs.

I would just note that both Don and Rachel have been part of this effort. We anticipate this methodology will be released shortly, allowing states and policy makers to attempt comparisons of different systems of care available to Medicaid beneficiaries.

From NCQA's perspective, the primary barrier to collecting national data sets stems from each state having its own unique reporting requirements and processes in the Medicaid environment. Because Medicaid is administered at the state level, each state has its own approach to monitoring performance. In spite of this decentralized approach, the availability of a nationally standardized set of measures has been clearly influential in guiding states' selection of measures and measurement methods.

NCQA and APWA, with the cooperation and support of HCFA, are seeking to increase the number of states voluntarily using HEDIS to measure performance through the creation of the national database previously mentioned, which would provide useful and generally comparable statistics. Part of the effort will include finding ways to make that database self-supporting, as grant funding cannot be relied upon to support such an effort indefinitely.

In addition to the problem of obtaining national data about Medicaid managed care, there are practical barriers to obtaining HEDIS data even at the state level. Health plans vary in their organization, in the sophistication of their data systems, and in the accessibility of clinical information. Health plan relationships with providers differ with respect to payment arrangements and other factors that affect data collection. There are also geographic differences in whether health plan covered services are available through other vehicles, such as health department immunization clinics.

In addition, some Medicaid programs carve out services and manage them separately, frequently including behavioral health services. Each of these factors can affect the completeness and accuracy of data and the ability of the health plan to process the data.

In addition, the consolidation of the managed care industry has added enormous complexity to health plan information systems. In the Medicaid managed care environment, the participation of traditional Medicaid providers in managed care networks or as independent managed care plans, where they often have little managed care experience, can affect a plan's reporting capabilities.

Finally, little guidance has been available to Medicaid agencies about how to collect and use these data. With funding from the Robert Wood Johnson Foundation's Center for Health Care Strategies and the David and Lucile Packard Foundation, NCQA has been working over the last two years to assist states in implementing HEDIS for the Medicaid population. Some of that assistance has been provided through training and workshop presentations. In addition, we have published supporting materials on our website.

I would add, regarding Rachel's comment about validation, that the issues which we have described as difficulties in the Medicaid environment are equally true in the commercial and the Medicare environments, with the sole exception of the traditional Medicaid providers' participation. Other than that, all of the issues that face us in collecting genuinely comparable information about plan performance in the Medicaid environment also face us in the commercial and Medicare worlds.

In fact, as we have begun to audit HEDIS results -- starting in the commercial world, actually, with some of the very largest and best-known plans, with the most sophisticated data systems -- NCQA has been motivated to develop a compliance audit system, which we announced just a few months ago. We are now trying our best to encourage, and hopefully to make standard, that HEDIS results be audited before they are actually analyzed.

This again just adds to the complexity. We realize that full well. But if we are interested in really understanding plan performance, we at NCQA believe that not only must we have clear specifications which are followed, but that they actually need to have an independent audit somewhat comparable to a financial audit.

Going back to Jessica's comments, I would just finish up by saying that while the challenges to collecting data are significant, over the past three years significant progress has been made. Among the most significant challenges is the continually evolving nature of individual state Medicaid programs, making it difficult to track performance over time.

A second challenge is agreeing on the purpose and scope of the measures to be collected. Again, I hark back to something Rachel said: a single measure or set of measures cannot serve all purposes. For example, one type of measure may be needed to assess the performance of Medicaid managed care, while a different type of measure may be more appropriate for evaluating public health needs, tying right into Mary Jo's presentation.

It is common as we have learned working on the development of HEDIS, for multiple interests to be represented in any discussion of what should be measured. This is something which can only cloud the issue. Your work here can be invaluable in sorting out the purposes and the different approaches in helping identify what can actually move this endeavor ahead.

DR. IEZZONI: Thank you. I think that was very clear. I suspect there aren't questions of clarification on that. Vince?

DR. MOR: I need to ask a question.

DR. IEZZONI: Yes, because Vince is actually going to be leaving.

DR. MOR: But this is divided into two panels as well?

DR. IEZZONI: Yes.

DR. MOR: So we are ready for questions?

DR. IEZZONI: Yes, we are ready for questions.

DR. MOR: I have some specific questions in relation to plan contracts for Rachel. First, we heard some testimony yesterday about state variation in the reporting requirements that are in their contracts. I wondered if we could have the HCFA perspective on that, specifically walking us through the managed care data collection requirements for state Medicaid programs: what state requirements are there, and what is the condition of approval under the section 1115 Medicaid waiver demonstration projects? What is that process?

DR. BLOCK: I think that there are two broad categories of data that I would address. The first would be encounter data. It has been HCFA's practice to require as a condition of approval the 1115 statewide health reform demonstration project states to collect 100 percent encounter data. There is only one exception to that, which is the state of Massachusetts, and in that case what we basically agreed on was a strategy where the state would assure the plan's capability to collect and report that data, and that HCFA would find some alternate means to get that data from the plans.

The initial historical purpose of that requirement was to provide the independent research and evaluation contractors that HCFA uses, which is another component unique to the 1115s, to have access to the range and type of data that we thought was necessary for the evaluation process.

I think that since then, as more of these programs have become operational, the staff who are the project officers for those waivers have come to rely on routine reports from states, not necessarily getting all the raw data into HCFA, but to have that be the basis for a portion of what the states are reporting to us in compliance with the other terms and conditions associated with the waiver relating to quality and access, and some of the other kinds of things that are more program assessment related.

DR. MOR: How about the 1915 freedom of choice waivers?

DR. BLOCK: There has not been a specific requirement in terms of encounter data. We cobbled together an administrative requirement -- that is probably the best way to describe it -- going back to the waivers approved starting a couple of years ago, that basically said the plans would be required in those waiver programs to collect encounter data for the purpose of studies and reporting on four basic clinical or public health categories: prenatal care, asthma, and I forget the third, and one that would be of the state's choosing. That was based largely on the QARI model, which had been the set of standards that we have been using as guidelines for states to organize their quality assurance strategies; a component of that strategy, not unlike the NCQA and other accreditation processes, is focused studies.

So it was not so much the encounter data as an end in itself, rather, encounter data to support clinical focused studies in highly relevant areas.

Now with BBA, we are faced with some very challenging but I think interesting requirements that states will have to begin to incorporate into all of their managed care programs, and I'll leave the 1115s aside, because there are unique issues there. But basically, in terms of either the 1915(b) waivers or the new state plan option under which states can now implement managed care programs of certain types, there are more explicit requirements for the Secretary to specify the data to be collected by states in the context of their quality assessment and improvement strategy, which is the broad label that describes the new quality standards. It is described as a data set that is used for the Medicare+Choice program or alternate data sets to be approved by the Secretary, and basically you can read under that label -- today that would be HEDIS, tomorrow it would be whatever additional or supplemental requirements would apply to Medicare.

DR. MOR: And the standardized format? Has that been specified yet, or have you proceeded along with that in terms of --

DR. BLOCK: We are just at the stages of developing our regs to support the BBA implementation. Those provisions don't take effect until January 1999, so we do have at least a little bit of time to work on it.

But I think it would be safe to say that we would build from HEDIS. Then the only question would be, how do we deal with the issue of alternate measurement sets, and also, are there additional data sets that we would require in the context of that strategy. None of this, by the way, directly or explicitly suggests what it is that HCFA would do with the information that derives from those requirements. I think that ties in with Mary Jo's comments.

The other explicit requirement which I mentioned as part of my remarks is that states will begin to submit to us through this MSIS system actual encounter data. We do today, through the voluntary MSIS system, collect some encounter data from a handful of states -- six states, I think it is. It is not a complete set, but they at least have mapped or formatted their encounter data in a form that is consistent with that MSIS system.

Now we will be faced with how we do that on a larger scale. I would be happy to share with the subcommittee, as soon as we have got a draft template, the specifications that we will be working on with states. But that largely follows what I will call the pseudoclaims nature of MSIS, as opposed to directly addressing HEDIS or other kinds of more complex data gathering and collection strategies that are necessary to support many of the kinds of measures and assessment tools that I think people are interested in.
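
[Illustrative sketch: an editorial example with entirely hypothetical field names; it does not reproduce the actual MSIS record layout. It shows the general idea of the "pseudoclaims" mapping described above: a managed care encounter is reformatted into a claims-like record, with the payment field reflecting that the service was covered under capitation rather than paid per claim.]

    # Hedged sketch: reformat one encounter record into a claims-style
    # ("pseudoclaim") record. All field names are hypothetical.

    def encounter_to_pseudoclaim(encounter, capitated=True):
        """Map an encounter record into a claims-style dictionary."""
        return {
            "beneficiary_id": encounter["member_id"],
            "provider_id": encounter["provider_id"],
            "service_date": encounter["service_date"],
            "procedure_code": encounter["procedure_code"],
            "diagnosis_code": encounter["diagnosis_code"],
            # Under capitation no claim-specific payment is made, so the
            # amount-paid field is recorded as zero in this sketch.
            "amount_paid": 0.00 if capitated else encounter.get("paid", 0.00),
            "record_type": "encounter",
        }

    encounter = {
        "member_id": "M001", "provider_id": "P123",
        "service_date": "1997-10-02", "procedure_code": "99213",
        "diagnosis_code": "493.90",
    }
    print(encounter_to_pseudoclaim(encounter))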

DR. MOR: But do you plan to specify to states what the data requirements, the format and the data specifications for these quasi-encounter data will be?

DR. BLOCK: Yes. The last thing I would mention is something called QISMC, the quality improvement system for managed care. These are the new health plan standards that HCFA will be using for both Medicare and Medicaid, that will be the framework for what health plans will be expected to do with regard to quality.

There are no similar articulated standards at this point on the Medicare side. On the Medicaid side, I mentioned QARI, and this will essentially update and replace QARI, but also now be a mandate for the Medicare contracting plans.

The reason I mention it is that there are two things about QISMC that are relevant. One is that it will include in it an explicit requirement that health plans provide to HCFA or to states specified performance measures, and it is anticipated that there will be an exercise around setting baselines and benchmarks and so forth in the context of those quality systems that the plans will be required to conduct and maintain.

In addition, there will be specific areas of focused study for both Medicare and Medicaid, and the standards essentially say that the health plans would be expected to conduct these. It is a little bit of a complicated schedule, because the studies are expected to play out over two to three years, and it is a cumulative effect. But let's say, for example, two or three studies a year, just for purposes of today.

There is a public comment draft of the standards that is going to be out hopefully very, very soon. Again, the essence of the QISMC standards will be incorporated as part of that regs process that I mentioned to implement BBA.

DR. MOR: I am assuming, as part of that, since under BBA underservice is some kind of sanctionable event -- are you working on some kind of specification for how one would derive an underservice indicator? Is that in QISMC or whatever?

DR. BLOCK: I'm trying to think in terms of QISMC. There clearly are a number of ways in which QISMC, once implemented, would be designed to address that. But I wouldn't necessarily think of it as a single measure, but rather something which either from a consumer point of view or from the HCFA contract manager point of view, whether it is central or regional office, that you would have in effect a composite of information available to make some judgments about what was going on.

But again, I would stress that I'm not sure that either QISMC or HEDIS or any of the other things that we have described as the standardized data sets at this point effectively address the question of how you get that early warning signal when either a plan or a provider within a plan is providing less than adequate care, whether that is underservice or other ways that you would describe less than adequate care.

I think that we have tended to rely on complaints and grievances and appeals as a source of data to indicate those instances, and I don't think that anyone feels that that is either complete or adequate in that regard.

So I think that there are some tools that are available, some measures that would help to inform that. But they still tend to follow that kind of longitudinal model of cumulative data over a period of time for a population as opposed to the more regulatory model of how to assess an individual plan, an individual's health care at a given point in time.

DR. MOR: May I ask one more question before I go?

DR. IEZZONI: Yes.

DR. MOR: It's for Mary Jo. I'm from Rhode Island, so I was very curious about the specifics from Rhode Island. But you seemed to put a lot of the onus on the public health apparatus as being the responsible entity for both stimulating recruiting and doing the public health analysis. Clearly, what has happened over the last X number of years is that managed care companies have basically assumed population responsibilities, or at least ostensibly assumed population responsibilities. One could think of a reasonable approach that would require them to be involved in the public health apparatus within any particular state, including the analytic capability -- rather than just squirreling away the analytic resources in their own areas, make them more of a public resource.

Are you going to be looking at that, or is that at all addressed in your larger report, the interrelationship?

DR. O'BRIEN: Yes, that is an appropriate thing to look at, either by trying to get at it at the macro level in the survey or, more specifically, at the case study level.

First of all, I want to agree that, being from Minnesota we did involve the health plans in public health strategies, and not only just the health plans, really the communities beyond the health plans, the communities at large. So I agree with you.

That was not the scope of this effort here. It certainly would be something that you would want to look at at the case study level and then subsequently at the research level, to really see what is working -- whether that is the oversight function that many health departments are beginning to have around the quality of managed care, involving managed care organizations in that discussion, or health departments and communities working more closely with Medicaid agencies as purchasers, and not only Medicaid agencies as purchasers, but even their state employee programs. We really need to step back and look at the state as a purchaser.

Rachel, you have heard me talk about this ad nauseam, but it is funny how we as state governments and even as local communities don't understand the clout that we have in the marketplace as purchasers with the health plans. So I think that there are many ways that you need to address that, both as purchasers but also as public health professionals and as communities.

DR. BLOCK: I didn't describe the attachments to my presentation, but I attached three diagrams, one of which is my way of trying to conceptualize the relationship between Medicaid agencies, public health agencies, health plans, providers and federal agencies. I am just offering it as a potentially helpful illustration, particularly if you can read it, because it is rather small print.

DR. O'BRIEN: And I would like to build on Rachel's comments and diagrams. I think that one of the biggest challenges we have at the federal, state and local levels of government is learning how to work with each other and support each other. That sounds very simple, but I can tell you that it is a really tough issue at the state and local levels. I would never speak for the feds, but I suspect it is true at the federal level also.

DR. IEZZONI: We have to say goodbye to Vince, because he has to be in Rhode Island. So I'll see you at the next one, thank you. Hortensia, do you have a question?

DR. AMARO: Yes, I had a couple of questions for Rachel and one for Mary Jo. I am particularly interested in the behavioral health issues, and I have a few questions about that.

I was wondering to what extent behavioral health is included in all this reporting, especially in the situation of carveouts, and whether they are required to report similar information in a systematic way, whether there are any focus studies on mental health or substance abuse, and whether there is going to be any way of documenting disapproved services or documentations of mental health and substance abuse services that are requested but not covered or approved.

DR. BLOCK: Let me take the last one first. It is certainly not an explicit requirement, in the sense that you will find written down somewhere in HCFA policy that service denial is a measure. But I would say that for any mental health/substance abuse waiver that I have seen in the last couple of years, which is the only time I have been looking at them, there is some mechanism and explicit reporting requirement between the state and the delivery system, whatever form that takes, and they are all trying to capture data on service denials. Theoretically, that means that data on service denials should be available.

I think that the proliferation of these carveout programs as you know has been of relatively recent vintage, so we are just starting to get in on a more large-scale basis the waiver renewal kinds of data to support the waiver renewals. I would have to go back and check with the staff who work on those to see what if anything we could put together in terms of how states have reported that data, what kinds of issues have come up in terms of their ability to collect that data.

In terms of the overall question of whether and to what degree the overall strategy that I described, and specifically the reporting requirements that I described, apply to behavioral health or any other type of carveout arrangement: the basic threshold requirement for states to submit their data through the MSIS system applies irrespective of their program financing and delivery strategies. So any data relative to a mental health program, managed care or otherwise, would flow through that system.

I know that SAMHSA -- and they may have presented on this yesterday -- just recently has been working with the Medstat group to analyze some of the MSIS data that we have had and collected on an historical basis, and tried to tie that in with some diagnostic information. That is why I raised that question as part of my testimony, because I think that was an issue that they ran into when they were trying to draw some conclusions about the utilization data that they were able to get from that.

So it won't matter whether it is a carveout program or what the nature of that carveout is, in terms of MSIS and the encounter data information. We will be working in the next year to try to come up with -- and I hope that SAMHSA has already started to think about this, because we will call them first -- a specification for mental health encounters. We are also going to be working on specifications for long-term care encounters. That is probably the second most frequently asked question out of the mainstream of the basic stuff that we usually do.

In terms of how BBA will apply to the carveout programs, there are significant policy issues that we are trying to sort through on that and many other questions around BBA implementation. I don't know that we have a definitive answer to that, but again, as the regs process moves along over the next couple of months, we will be able to present what the basic framework is going to be and how that framework will apply to all the different managed care models, or differences in terms of the regulatory basis on which we have looked at not so much the state programs, but the individual delivery systems or health plans themselves. We have to figure out how that carveout model fits within the continuum, and we have some choices to make there that we haven't gotten to yet.

DR. AMARO: The other question had to do with the extent to which information on complaints and appeals reflects satisfaction or quality of care, because it depends on so many issues: whether subscribers are aware of the procedures, how comfortable they feel, partly the population mix. I was wondering if there has been any thought to doing some studies of this measure and its validity as an indicator of performance.

DR. BLOCK: It certainly is -- I know that complaints and grievances of some sort are one of the descriptive measures in HEDIS. It is a requirement that states document how the grievance procedure, in the context of their program, is communicated to and is to be performed by the health plans.

I think that one of the key issues that comes up when you try to look at the data first is that the health plans generally don't do a very good job of collecting and reporting it, so that may or may not mean that there are complaints and grievances out there. It is not until you get to the more formalized processes of appeals that I think you can rely a little bit more on the information.

But the second thing which frankly is one of the reasons the health plans may not do as good a job of collecting the data -- and I hope that Don and Kathy might comment on this from a health plan perspective -- is that you need to have a specific definition of what is a complaint. That definition needs to be clearly and consistently articulated and applied.

I think that there is still a lot more work to be done, even just in defining that. In the meantime, as Margaret mentioned in her testimony, I suspect that over the next couple of years most of the states will either field or adopt for their purposes the consumer assessment of health plans survey, and I think most people think that patient surveys are a better way of getting at the issues that you are raising, not to the exclusion of health plan reported data, but to help to supplement our understanding of the data that the health plans do collect and report.

DR. AMARO: And just one quick question for Mary Jo. One of the things that has come up in the last couple of days is the fact that especially the Medicaid managed care population is probably one that relies more than other populations on public health programs that are federally or otherwise funded through grants or city dollars.

That could have an impact on the outcomes that providers see for those populations, yet would not be attributable to the services that they are providing. So there is a need to look at the feasibility of client-level data for public health services to get a more complete picture -- ideally, if they could be linked through a unique identifier to the services provided to people, but also just trying to capture client-level data in the public health system. I was wondering whether the group considered that, and what your thoughts are on that.

DR. O'BRIEN: We did consider that, and we really had to narrow the focus. I couldn't even begin to go into everything we have for you: we have not only copies of the final report, but I am also leaving copies of appendix A, which is profiles of existing mechanisms to collect public health infrastructure data, so that you can see what the world is like right now. It also includes a data dictionary, where we outline what we think would be appropriate to include in a national survey and get more specific.

But I think that your point is particularly relevant as we look at the issue of unique patient identifiers. Having worked on that issue in Minnesota, it is a very, very tough issue to work on with legislatures. Until we get there, it is going to be difficult to do. There are some basic things that we need to get done at a state level and/or a national level. We are just having a difficult time with our data and with getting good client-based information.

DR. IEZZONI: Hortensia, I'm glad you raised that, because that was going to be my question as well. Rhoda Abrams had talked from HRSA yesterday, and spoke very eloquently about the safety net. They used the word churning for talking about how Medicaid patients go on and off, and how the safety net picks up a lot of the needs of people when they are off.

I think given what our committee wants to know is, what are the major gaps potentially for this population and looking at Medicaid managed care, that is a data gap right there, not being able to track patients as they go into these other delivery systems, the safety net systems. That may be impossible to fill, given the problems surrounding the unique identifier that everybody is well aware of. But it is something that we are going to want to highlight.

DR. O'BRIEN: And remember that this was about how we could get information about the public health infrastructure, including the safety net providers and/or providers of personal health care services. But I think that if you look through this, you will find some interesting information to inform even that debate as we begin to see -- we have to know what is going on now before we can begin to decide what we need to do.

DR. IEZZONI: Based on your knowledge then, how would you advise us, given that what we are focusing on now is the Medicaid managed care, and knowing that the safety net providers are a very critical resource? What would you say, based on what you have learned in your study to advise us in this area?

DR. O'BRIEN: I guess what is the most important thing right now for states and localities -- and we are actually working with a number of states around different strategies of safety net providers, specifically with Medicaid agencies as purchasers, trying to figure out how to stabilize, maintain, change the safety net network in their communities. It varies across communities, too.

But there is that whole issue of agencies with different roles having the capacity and the willingness to work together. It takes a little bit more than willingness; it also takes the capacity to work together. I think we need to monitor what is going on right now at a national level -- again, Rachel was talking about the early warning signs within the system. I think we need to look at that within the public health system, too, a more broadly defined public health system, and at the relationship between states and locals. Some states are clearly pushing more responsibilities down to the local level without sufficient resources. And that is especially true as we are looking at welfare reform and what that is going to do to the relationship between states and locals, and as we are looking at the new children's health initiatives.

There is just a lot going on at the state level and the local level, and I think we need to begin to now take a look at what that is and how it is impacting different responsibilities that people have at the state and local level. I wish I had a good answer to all of that, but I don't.

DR. IEZZONI: Margaret?

DR. SCHMID: If I may just make a couple of comments about things that we have observed in some of our projects: some states or localities -- but I think this is particularly the states -- are setting up statewide databases for some of these things. Immunization would be an obvious one, which does in fact allow health plans or providers to go and access information about whether their current enrollees or current patients have had the required immunizations.

Some initiatives like that, which would certainly fall into the public health category, can be ways of at least filling that gap for selective treatments or selective types of preventive care.

Another thing that we have noticed is, some states are experimenting with using the state birth certificate registry as sources of information on some of these same kinds of things. This shows up in our work with HEDIS, because health plans are responsible for having that information about their enrolled populations. Sometimes the persons haven't been enrolled for long enough, or we know there is a high potential that they have had the required treatment, but not through the plan. These kinds of institutions help everyone get that information.

Just a final comment that I wanted to make: the Bureau of Primary Health Care is financing a program, which we are a part of, to provide training to community health centers -- very important traditional providers to the Medicaid population -- in how to do HEDIS measures, as the states are beginning to require HEDIS collection in some of their programs and as these community health centers are very important participants in providing care to the population.

We are doing a training program to help them see, first of all, what is this, and how might they put this into practice in their environment. So I think that there are some initiatives underway which are beginning to help identify some of these areas of real difficulties and real data gaps and taking some steps to fill them.

DR. BLOCK: Can I add one thing to that? I mentioned the Philadelphia project, which is about to commence. They tried to do an immunization registry, and I think they were less than sanguine about how it was going to work out. Immunization is clearly such a highly visible public health priority, and yet appears in its complexity to defy just about any effort that anyone has to collect good data on it.

But setting that aside, that was actually why they decided to pick prenatal care as their first project. They had a smaller number of providers to deal with in terms of who they had to recruit to participate in the data collection effort. In addition to that, I think they felt at least in their particular instance that the city health department clinics were actually in a slightly better position to adopt this new encounter reporting form, just because of other reporting requirements that the city health department had.

But that project was designed because they did not feel that the state-collected vital statistics data was valuable or told them what they really wanted to know. So it is interesting for a couple of reasons: it was a reaction to what they felt were some real shortcomings in the vital stats data, but also in the capabilities.

One of the things that will be assessed through this project is what are the issues that the different types of providers in this mixed managed care model environment will face as they adapt to this new reporting form. I for one think it will be a very promising experiment. But they have a long way to go to get there.

DR. IEZZONI: Quickly, because we have to move on. Olivia?

DR. CARTER-POKRAS: A couple of years ago, Health Care Financing Administration staff came and presented to National Committee on Vital and Health Statistics staff and members about racial and ethnic disparities within similarly insured populations, such as the Medicare population. Some of those findings have also been found within the Medicaid populations.

But from what we understand, the states are not required to perform the linkage which is needed between the enrollment eligibility determination data and the encounter data. We heard from three states yesterday who were not doing that kind of analysis to see whether there were any racial ethnic disparities in access to quality health care.

I wanted to find out, since you mentioned specific areas of focused study, if this is perhaps one of the areas that you are including in that.
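
[Illustrative sketch: an editorial example with hypothetical tables and field names. It shows the linkage referred to above: eligibility records, which carry race and ethnicity, are joined to encounter records by beneficiary identifier so that a simple utilization indicator can be compared across groups.]

    # Hedged sketch: link eligibility records to encounter records by
    # beneficiary ID, then compare mean encounters per enrollee by group.
    # All data and field names are hypothetical.
    from collections import defaultdict

    eligibility = [
        {"beneficiary_id": "B1", "race_ethnicity": "Black, non-Hispanic"},
        {"beneficiary_id": "B2", "race_ethnicity": "Hispanic"},
        {"beneficiary_id": "B3", "race_ethnicity": "White, non-Hispanic"},
    ]

    encounters = [
        {"beneficiary_id": "B1", "service": "well-child visit"},
        {"beneficiary_id": "B1", "service": "immunization"},
        {"beneficiary_id": "B3", "service": "well-child visit"},
    ]

    # Count encounters per beneficiary, then average within each group.
    per_person = defaultdict(int)
    for enc in encounters:
        per_person[enc["beneficiary_id"]] += 1

    by_group = defaultdict(list)
    for person in eligibility:
        by_group[person["race_ethnicity"]].append(per_person[person["beneficiary_id"]])

    for group, counts in sorted(by_group.items()):
        print(group, "mean encounters per enrollee:", sum(counts) / len(counts))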

DR. BLOCK: Two things. One is that we have not required that the states or the health plans or the PCCM programs differentiate their studies based on race or ethnic categories. I think that it would be safe to say in terms of a managed care environment that we would expect the health plan to be assessing any significant population variables of which race and ethnicity might be one that was predictive of health outcomes or other significant quality issues, and that we would expect the health plan to address that in terms of how they approached the overall strategy and design of their quality improvement system. So that is one observation.

The second, as I mentioned is, there is certainly no explicit requirement in terms of the methods by which states go about linking their different data systems. In Medicaid HEDIS, we hotly debated the question of who should be responsible for collecting race and ethnicity information, and we somewhat ambivalently concluded that there was no good alternative but to expect the states to do it.

So one of the reasons that I mentioned it as part of my testimony is because I think there are still significant questions in terms of whether and how well we can ever expect the eligibility system to accurately capture the information in the first instance.

Alternatively, does it make more sense to think of it as something which, to the extent that the information is valuable, is valuable in the context of the health care delivery system, and not in the basic day-to-day operations of the state Medicaid program as you would normally think of it? That might lead us to think that, whether for quality studies or anything else, perhaps we should go back and re-debate whether it is in fact the state Medicaid agencies or the health plans that should be required to collect the data.

As you can probably imagine, since the subcommittee has talked about this issue before, there are many complex considerations to be taken into account there. But I think it goes back to the fundamental question for any data collection or reporting requirement, or for any requirement about the method by which data are collected: trace it back to the expected value of the information. That should at least help to inform the method, the degree of specificity needed in how the data are collected and reported, and who is responsible for collecting and reporting them.

DR. CARTER-POKRAS: So I just want to make sure I understood this correctly. There is no specific requirement in contracts for states to report information on whether issues of access or quality indicators look similar across different ethnic, racial or gender groups? There is no requirement for providers to report that way, is that right? Did I hear you correctly?

DR. BLOCK: The states might incorporate that into their contracts, but there is no HCFA requirement that the states incorporate that into their contracts.

Now, having said that, we would expect a state to address overall their strategy for assessing and assuring access and quality across all populations enrolled in managed care.

DR. CARTER-POKRAS: So that is a requirement, that general statement is a requirement?

DR. BLOCK: That is certainly a broad requirement, and as I mentioned, the Balanced Budget Act now includes more specific requirements for all programs in '99. That will include more explicit detail in terms of what the elements of that strategy are. But there is no specific federal requirement that states in their managed care contracts somehow differentiate based on a number of population variables, of which race and ethnicity is only one.

DR. CARTER-POKRAS: So it is left up to the states how they are going to deliver on that expectation?

DR. BLOCK: Correct, but we do expect the state to address how they will assess and evaluate access, quality and other factors across all of the populations enrolled.

DR. CARTER-POKRAS: And how often would you say that, as part of that, states provide the kind of breakdown I'm talking about, responding to the general requirement by providing data across gender, race and ethnicity to document that access and quality are similar across population groups?

DR. BLOCK: I don't think I can answer that in a systematic way, but the examples that readily come to mind have been more in the context of some of the specific focused clinical areas of study that the states have included as their priorities for quality assessment and improvement, things such as HIV screening and counseling for pregnant women.

I believe that states that have elected to look at that measure have often tried to differentiate their rates. Obviously it is gender-based, but they also look at any differences in terms of racial and ethnic patterns. From my understanding, it has largely been looked at that way because there is concern about whether and to what degree different kinds of counseling strategies and materials might be appropriate for different groups. So that may have an impact on the effectiveness of the delivery system in providing that service.

But in that one example, I think you can see some of the complexities, in terms of the number of different variables that the state or the health plans might want to look at.

But to summarize that answer, to the extent that I have seen states try to break out race and ethnicity as it relates to either health care utilization or outcomes, it has generally been I think in the context of focused studies.

My hunch would be that that was something that that specific state identified as a priority, in terms of their quality assessment and improvement strategy, and therefore articulated that priority explicitly to the plans, or it was something that the plans designed on their own as part of their internal quality assurance assessment and performance strategies.

DR. IEZZONI: Sara Rosenbaum in the back, do you want to come up to the microphone?

DR. ROSENBAUM: If I could just add a couple of points to what Rachel was saying, since it is such an important issue for you. Title VI of the 1964 Civil Rights Act actually vests primary jurisdiction for this in the Office of Civil Rights.

So you would probably want to take testimony from the Office of Civil Rights on its general data reporting requirements under all federally assisted health programs. This has been a matter of continuing controversy for many, many years now. HCFA shares authority, of course, because it is responsible for administering the Medicaid program.

In a recent study that we did for the Joint Center, what we found from going through the database from the contract study was that a fair number of states actually have relatively detailed anti-discrimination requirements, and some of them are quite notable. They go well beyond anything that the Office of Civil Rights has ever articulated as a discriminatory activity in managed care, the biggest one being what happens when one company sells two books of business, one to private buyers and one to public buyers, and the characteristics of the networks are different, the utilization measures are different, things are different. That is actually not that uncommon a practice. Some of it may be the result of unfair insurance discrimination, and some of it may be the result of differentials that don't have an insurance defense.

Most states, in enforcing those measures, are of course interested in knowing from us what the data capabilities are to back up any performance standard that is a contract requirement. The answer to your question is that no state, even the states that have very strong anti-discrimination provisions, articulates in its contracts the data it expects the contractors to provide that will let it measure this; nor, as you are hearing from HCFA, do they generally match their eligibility data, i.e., their beneficiary data, against their enrollment data. As you heard yesterday, they can't do that because in many cases the identifiers are not the same.

So what you have is a situation where there are essentially no federal anti-discrimination standards or data requirements. States in fact have moved into the vacuum with some standards, but the mechanics of the data reporting are simply not present in the contracts.

Yet, what I see from a review of the state contracts is that the states are extremely sensitive to this issue in many cases. They are looking for the right standards to apply and they actually probably would like some guidance on the data.

DR. BLOCK: One follow-up. I will check with the 1115 staff to see if this has been included as a condition, either a standard condition or a state-specific one, for any of the 1115 waivers, because in my remarks I was addressing the 1915(b) waivers.

DR. IEZZONI: Is this anything that NCQA or HEDIS looks at as well, something that the plans you evaluate look at?

DR. SCHMID: No. The assumption behind HEDIS is that a distinguishing characteristic of managed care is that the plan is responsible for delivering a set level of performance in those areas where the experts agree that an identifiable level of performance can be known and should be delivered to the population. That is really a population-based measure.

So the HEDIS measures are designed to look at the level of services that are delivered across the enrolled population. Take the effectiveness of care measures -- I don't know if people are familiar with how we have them identified -- things like childhood immunization, adolescent immunization, mammography or cervical cancer screening, where there is a clinical standard that has been articulated. HEDIS will tell you whether the plan has delivered to that standard or exceeded it.

Then there are other sets of measures where there is not a clinical standard that has been identified by those who are professionally competent to do so, but nonetheless HEDIS will tell you what is the rate of services that is being delivered.

You asked about mental health. There are some HEDIS measures that look at mental health: the percentage of the enrolled population with the mental health benefit that has received in-patient services, or what we call day-night services, or out-patient services. It is not clear that there is a standard for what percentage of any given enrolled population should receive those services.

But that is what HEDIS does. It will tell you what services the plans are delivering. Then the idea of HEDIS is to be able to have benchmarks, which are basically norms. They are not best practices, they are not goals, they are normative: for this set of plans, what is the average, what is the high end and what is the low end. Then the purchaser can determine what standard is good enough, what we are really looking for, how to work with the plans that are below it, and what quality improvement strategies to require of them over what kind of time line.
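[Illustrative sketch: the benchmark idea described above, reduced to arithmetic -- given one rate per plan for a single measure, the norm is simply the average, high end and low end of that distribution. The plan names, rates and threshold are made up for illustration.]

    # Hypothetical plan-level rates for one measure.
    rates = {"PlanA": 0.62, "PlanB": 0.71, "PlanC": 0.55, "PlanD": 0.80}

    values = sorted(rates.values())
    average = sum(values) / len(values)
    print(f"low end: {values[0]:.0%}, average: {average:.0%}, high end: {values[-1]:.0%}")

    # A purchaser could then flag plans below whatever level it deems good
    # enough and ask for a quality improvement strategy and time line.
    threshold = 0.60
    below = [plan for plan, rate in rates.items() if rate < threshold]
    print("below threshold:", below)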

We are very excited about the database that HCFA is very involved in, along with the APWA. That is a project that is almost certainly going to move forward, which will allow for the first time some establishment of national benchmarks for HEDIS results from the Medicaid program, which we don't yet have.

DR. COLTIN: Kathy, with regard to the race and ethnicity breakout of HEDIS measures, the only precedent that exists right now is that HCFA for Medicare HEDIS has in fact required health plans not only to report the measures, but to report the Medicare ID numbers of all members in the denominator and all members in the numerator, and to provide those in electronic files to HCFA.

HCFA's plan is to link those up with the enrollment files that they have. Of course, because enrollment in Medicare is at the federal level, they have those files, and they are then going to look at these measures by race and ethnicity, based on what they have in their enrollment files.

Now, it is likely for any given health plan, the sample sizes will be too small in many of these categories. But by aggregating across plans in regions or states, they will be able to get a picture as to whether there are any patterns that emerge in that area.

Now, thinking about doing something like that for Medicaid would require this to happen at the state level. But the mechanisms have been developed in the plans to be able to report.

I think the big issue will be whether the plans are using the same ID number for the beneficiary that the state is using.
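[Illustrative sketch: a minimal Python rendering of the kind of linkage just described -- matching the member IDs reported in a measure's denominator and numerator files against an enrollment file that carries race and ethnicity, then aggregating across plans because single-plan cells are usually too small. The file layouts, IDs and categories are assumptions, not actual HCFA formats, and the whole linkage depends on the plan and the enrollment file using the same member identifier, which is exactly the issue just raised.]

    from collections import defaultdict

    # Hypothetical enrollment file: member ID -> race/ethnicity category.
    enrollment = {
        "A100": "Black", "A101": "White", "A102": "Hispanic",
        "A103": "White", "A104": "Black", "A105": "Hispanic",
    }

    # Hypothetical plan submissions: per plan, the member IDs in the
    # denominator and numerator of one measure.
    plan_files = {
        "Plan1": {"denominator": ["A100", "A101", "A102"], "numerator": ["A101"]},
        "Plan2": {"denominator": ["A103", "A104", "A105"], "numerator": ["A103", "A105"]},
    }

    # Aggregate across plans by race/ethnicity category.
    denom = defaultdict(int)
    numer = defaultdict(int)
    for plan, ids in plan_files.items():
        for member in ids["denominator"]:
            denom[enrollment.get(member, "Unknown")] += 1
        for member in ids["numerator"]:
            numer[enrollment.get(member, "Unknown")] += 1

    for group in sorted(denom):
        rate = numer[group] / denom[group]
        print(f"{group}: {numer[group]}/{denom[group]} = {rate:.0%}")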

DR. IEZZONI: Well, thank you. You have all given us a lot of information. Thanks.

We have two providers who I saw smiling, shaking their heads, looking bemused during the prior three panelists' presentations. I just on a personal note want to draw everybody's attention to Don Unlaw's tie at some point, if they have an opportunity to look at it. It is very colorful. But, Don, you are from Arizona, and we are about to go on to learn more about what is happening out there. So Don first, and then our own Kathy Coltin will be wrapping up for us before lunch.

DR. UNLAW: Thank you very much for inviting me to come. I was laughing and nodding my head during a lot of the presentations that were going on. I am also struck by the difference you are going to get between my testimony and the kinds of testimony that you have heard so far this morning, which have been fairly macro, looking at policy and the creation of that policy and how that policy is implemented or not implemented within the states.

I am at the end, one might say the butt end, of the totality of all of those different kinds of things. Let me talk to you just a moment about myself and what I do in a health plan, to give you some kind of perspective about both our state and the health plan, and then take you into a very micro look at what kinds of things we collect and how we use it.

I'm a bit of an outcomes nut. I'm a Ph.D type. Did a lot of research while I taught school for 12 years at the graduate level. Did a lot of course work as well as teaching in the area of outcomes, and have had a longstanding interest in that area, both in behavioral health where I was a provider, and ran provider agencies for a number of years, and also in my teaching. Now in the last 10 years, working for a physicians IPA.

Because of that interest, I tended to gravitate to some interesting national committees, so I bring some national perspective. I was on the HEDIS Medicaid development committee along with Kathy and Rachel Block and a number of the other people that you heard this morning. I'm on the QISMC committee that you heard about. I'm also working with NCQA's state policy development organization on the fee-for-service versus managed care indicators comparison study that we are trying to put together across states. So in that process I've had a chance to meet a number of people who are in the policy development arena, and also, hopefully, to bring them my perspective on what that means as it relates to how the work actually gets done at the ground level.

I am with Arizona Physicians IPA, which is a bit of a misnomer in that it is really not an IPA; it is what is known in the business as a contract HMO. We have the largest Medicaid health plan in Arizona. We have about 135,000 members, and we are in both rural and urban counties. We are in every county in the state except for two small counties.

We are in a state that is a highly penetrated, competitive managed care state. As some of you may know, Arizona was the last state in the union to embrace Medicaid. Did so very reluctantly in 1982. It is the only state in the nation that came up totally as a Medicaid managed care state. It had no fee for service Medicaid ever, and there are war stories and horror stories that can be told about the early days of that process. But since I was not part of that, I'm not going to deal with it. We will talk more about the latter days and what it is like now.

We are a full service benefit state, in that we provide services to all of the entitled populations. That would include the AFDC/TANF population, SSI, long-term care, SOBRA, and a variety of state-eligible populations as well.

My job at the health plan is quality measurement. I'm the person who runs most of the studies that are conducted, and I run the HEDIS reporting that you have heard about at the plan level. I'm in charge of prevention and wellness programs and disease state management, and I do a lot of special projects. I often tell people I have a lot of fun in my job; I work out my values every day. I enjoy very much the kind of work that I do, particularly with the Medicaid population.

Just to give you an overview of my remarks: I asked what in the world the committee on vital statistics would want to know from a guy like me down in Arizona, running a health plan quality operation. I asked if staff would send me some questions, and they did, an excellent set of questions that I could base a graduate course on, and I was told I had 15 minutes. As a result, I took some real liberty. I'm going to answer those questions that are of specific interest to me, and of course I'll be available to answer other questions that you may have.

I'd like to talk briefly about a concern I have about new and expanding managed care Medicaid states. I'd like to start out by talking a little bit about that.

Second, I want to get into talking about data that we collect at a plan level and how we use it and how we transmit it to states, and some of the questions that came up earlier perhaps, at least for Arizona. You might have some answers in response to those.

Then lastly, I want to talk about some of the barriers and needs that I perceive that I have on a day to day level, collecting data and trying to make sense out of it from a variety of different perspectives.

Let me start out with the major concern I have about expanding states. I'm fond of saying I am old enough to remember when managed care was a Socialist conspiracy in the 1950s when it first came out. It was going to ruin medicine in America, and there were all kinds of critical comments about it. I am now living in the '90s, and it is the entrepreneurial ripoff and quality problem of medicine in the '90s. The total kind of change from the '50s to the '90s.

And of course, it is neither one. One of the things that it had in the '50s that sold it was that it would be in the best interest of managed care to keep people healthy. I'm sure that is a banner that has been waved since its earliest inception. By and large, that has not been proven true. One might ask the question, why has that not been true, since economically and for other reasons, you would think that it really would be in the best interests of managed care to keep people healthy, so why didn't it happen?

My own theory about that is that when managed care moved in on fee for service medicine, there was so much waste in the system, huge amounts of waste in the system, that the focus really came on utilization. As a result, there was a tremendous push and a lot of data generation at the plan level to control and deal with the utilization issues, rather than deal with preventive health and some of the things that people had promised in the early years.

I think that what happens in immature markets, which is what I would call an immature managed care Medicaid market, one that is just starting with managed care and Medicaid, is that there is going to be a tremendous temptation to push very strongly on the utilization side, with a lack of focus on the quality side of data development, indicators and monitoring.

I think it is critical, to whatever degree this committee has influence in that process, that it put a focus on this from day one: that those states that are expanding and beginning managed care Medicaid focus upon quality indicators and qualitative data at least as heavily as they focus on the utilization side.

What kind of data do we collect and how do we use it and what kinds of things do I do with it back home on the ranch? We collect a variety of different types of data. I'm going to take you kind of through a data chain, and as I take you through the data chain I'll talk to you briefly and anecdotally about its use in a variety of different ways.

First of all, we use prior authorization data. Prior authorization data consists of ICD-9 codes, anticipated procedures and CPT coding. It typically includes projected length of stay, and it links the provider and the member into a single file.

We use that data for a daily census. For instance, our hospitals are required to call us when they admit a patient, and we are at risk for admissions. So we create a daily census report and we use that for case management and utilization review in the hospitals. We use it for IBNRs, our anticipated costs which are coming but are not yet claimed, so it gives us an idea of how much money we have out there that we have got to spend and have no other record for.

It provides us with case management data. For instance, if we have case management programs going on for brittle diabetics who go into ketoacidosis or have particular problems that require admission, it is an alert that case management can take and work from.

It is also used for a variety of other utilization analysis and reporting, as well as financial reporting and analysis.

Prior authorization is the most real time, online data that we have regarding service utilization. And it is critical data in terms of those areas for which we are at risk financially, such as in-patient hospital admissions and specialty care.
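[Illustrative sketch: a minimal Python rendering of the prior authorization file described above -- ICD-9 and CPT codes, projected length of stay, provider and member linked in one record -- and two of its daily uses, a census of authorized admissions and a rough IBNR-style estimate of cost that has been authorized but not yet claimed. The record layout and dollar figures are assumptions for illustration.]

    from dataclasses import dataclass

    @dataclass
    class PriorAuth:
        member_id: str
        provider_id: str
        icd9: str             # admitting diagnosis
        cpt: str              # anticipated procedure
        projected_los: int    # projected length of stay, in days
        expected_cost: float  # plan's expected cost for the stay
        claimed: bool         # has a claim been received yet?

    auths = [
        PriorAuth("M1", "DrA", "250.13", "99223", 4, 6200.0, False),
        PriorAuth("M2", "DrB", "428.0", "99222", 3, 4800.0, True),
        PriorAuth("M3", "DrA", "493.92", "99221", 2, 3100.0, False),
    ]

    # Daily census: every authorized admission, grouped by admitting provider,
    # used for concurrent review and case management.
    census = {}
    for auth in auths:
        census.setdefault(auth.provider_id, []).append(auth.member_id)
    print("daily census:", census)

    # Rough IBNR: cost authorized but not yet claimed, money the plan knows is
    # coming even though there is no claim on file for it yet.
    ibnr = sum(auth.expected_cost for auth in auths if not auth.claimed)
    print("authorized-but-unclaimed cost:", ibnr)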

Our next largest group of data is what we would call encounter and claims data, and I'll talk about them together, because basically they are the same data. For my purposes, when I say encounter data, it simply means that it is a capitated service, which separates it out from claims data, where we actually pay the claim. However, the data expectations are essentially the same. A HCFA 1500, as we say, is a HCFA 1500, whether it comes in as an encounter or as a claim. There is, however, a significant difference in the quality of that data, and I will mention that on the way by.

What are the sources for that encounter and claims data? Well, hospitals, laboratories, pharmacies, ancillary providers such as home health care and nursing homes, primary care physicians, specialists, specialty clinics of different types and kinds all make up the different sources or inputs of all of this data, typically coming in on two primary forms, the HCFA 1500 and a UB-92, which is the hospital billing or claim form, or facility billing or claim form.

What is it used for? Health plan utilization patterns and trending. We would like to know, as a health plan, how we are doing. From a utilization standpoint we can take that data and look at it to determine what our in-patient rates are and what our top 20 diagnoses are, currently, in hospitals, in ambulatory settings, in nursing homes, in a variety of different kinds of arenas, and then develop strategies to deal with problems perceived as a result of that analysis. It is also used for case management and for physician utilization patterns, trending and comparison with peers, sometimes called profiling, where we take a look at physicians' use patterns and compare those patterns with their peers, within different counties as an example.
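[Illustrative sketch: the kind of trending run against encounter and claims data described above -- tallying the most frequent diagnoses and comparing each physician's claim volume with the mean for the same county, a very simple form of profiling. All codes, names and counts are made up.]

    from collections import Counter, defaultdict

    # Hypothetical encounter/claims extract: (member, physician, county, ICD-9).
    claims = [
        ("M1", "DrA", "CountyX", "250.00"), ("M2", "DrA", "CountyX", "493.90"),
        ("M3", "DrB", "CountyX", "250.00"), ("M4", "DrB", "CountyX", "250.00"),
        ("M5", "DrC", "CountyY", "401.9"),  ("M6", "DrC", "CountyY", "493.90"),
        ("M7", "DrC", "CountyY", "250.00"),
    ]

    # Most frequent diagnoses plan-wide (a real report would take the top 20).
    top = Counter(dx for _, _, _, dx in claims).most_common(3)
    print("top diagnoses:", top)

    # Simple profiling: claims per physician compared with the county mean.
    by_doc = Counter(doc for _, doc, _, _ in claims)
    county_of = {doc: county for _, doc, county, _ in claims}
    county_counts = defaultdict(list)
    for doc, n in by_doc.items():
        county_counts[county_of[doc]].append(n)
    for doc, n in by_doc.items():
        peers = county_counts[county_of[doc]]
        print(doc, "claims:", n, "county mean:", sum(peers) / len(peers))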

This data is an enormous source for inputs to our own quality outcomes performance indicator system. It is a system which is built around preventive health indicators and disease state management demand, management indicators, which automatically notifies and reminds members of desired clinical events. So it is a proactive utilization system, and that encounter system feeds that information system, in order that those notifications and reminders can be sent out and delivered.
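[Illustrative sketch: the proactive reminder logic described above, reduced to its simplest form -- scan the encounter feed for the most recent date a desired clinical event occurred and flag members who are overdue, so a reminder can go out to the member and the physician. The measure, interval and dates are assumptions, not the actual QOPIS rules.]

    from datetime import date, timedelta

    # Hypothetical desired clinical event: a diabetic eye exam at least
    # once every 12 months.
    INTERVAL = timedelta(days=365)
    TODAY = date(1998, 1, 13)

    # Most recent eye-exam encounter per diabetic member (None = none on file).
    last_exam = {"M1": date(1997, 11, 2), "M2": date(1996, 9, 15), "M3": None}

    overdue = [member for member, seen in last_exam.items()
               if seen is None or TODAY - seen > INTERVAL]

    # In the real system these members would get a mailed reminder and the
    # primary care physician would be notified.
    for member in overdue:
        print("send reminder to member", member)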

It also is a major source for most of our quality reviews. It is used in our HEDIS reporting, it is also used in the development of what is called hybrid methodology for medical record reviews, which is also part of the HEDIS process, but one which we also use for a variety of focused studies which are not in the HEDIS environment.

Membership data. Membership data is supplied entirely by the state, which may go to one of the questions about race and ethnicity and goes to some of the questions about unique identifiers, and a lot of the questions that people addressed either tangentially or directly this morning, at least in Arizona.

We are not supplied racial information. We have a field for it, we have requested it, it would be very helpful to us, but we cannot receive it at the present time from the state. The state however does collect that information in their eligibility system, and has it available to them.

Initially, it was not transmitted in the membership file, because I believe -- at least the local logic has it -- that there was some question about adverse discrimination by race. So as a result, they were withholding that information on purpose in the front end. Now that we would like very much to discriminate by race for positive reasons in a variety of different ways, we cannot in effect do that. We are hopeful that that will change, and we have several health plans, including my own, that have tried to do that.

We cannot collect that information on our own and have it in our information system, which I know is the next logical question, because we receive a file transfer of the membership data every night from the state. This enrollment data is coming down every single night electronically from the state, and would wipe out any information that we would put in our membership files by hand. So we would have to build a completely separate file just to contain racial and ethnic information which we frankly are not willing to do. We believe the state has a responsibility for that, and you have just now heard the argument that held sway, that was mentioned earlier around that particular issue.
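[Illustrative sketch: why a locally entered race and ethnicity field would not survive the nightly download, and what the completely separate file mentioned above might look like -- a small side table keyed on the member ID and joined to the refreshed state file, rather than edited into it. The field names are assumptions.]

    # The state's nightly membership file replaces the plan's copy wholesale,
    # so any race/ethnicity value typed into it locally disappears overnight.
    state_membership = {"M1": {"name": "Member One"},
                        "M2": {"name": "Member Two"}}

    # A plan-maintained side file, keyed on the same member ID, survives the
    # refresh because the download never touches it.
    plan_side_file = {"M1": {"race_ethnicity": "Hispanic"}}

    def refreshed_view(membership, side):
        """Join the nightly state file with the locally kept attributes."""
        combined = {}
        for member_id, record in membership.items():
            merged = dict(record)
            merged.update(side.get(member_id, {}))
            combined[member_id] = merged
        return combined

    print(refreshed_view(state_membership, plan_side_file))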

However, we do collect data against the membership enrollment files from that downloaded transfer. We do an initial health assessment that is based on the PRA; some of you may be familiar with that. It was the predictor-of-readmission assessment that was developed, I think originally under Medicare, out of the University of Minnesota. We have adapted it for our own uses with Medicaid, and it is basically a health screening tool that we send out to every new member. That data comes back and is loaded into a specialized data set, and based upon the responses, the data is sent either to customer service or to case management for follow-up, and in all cases the data is sent to the primary care physician.

Some of that data indicates, for instance, chronic health conditions, utilization prior to coming onto the plan, admissions to the hospital, the number of times they saw their doctor last year, a variety of questions of that sort, which can alert the health plan that there are problems or potential problems with that member's health.

Also, I use member satisfaction surveys as a source, and we use member focus groups as a source. We run focus groups in a variety of different areas. Those focus groups deal with, for instance, member information: whether or not we are communicating what it is that we want to communicate to members in membership materials and so on. We also do member complaint tracking and analysis. Every single time a member contacts our plan with a complaint, it is logged and tracked through to resolution. Those complaints are all collected, categorized and trended, and used to design programmatic responses if necessary.

They are used for program design. The membership files are also used for the HEDIS and QOPIS, QOPIS being the quality outcome performance indicator system that I talked about. Membership materials I mentioned, case management and physician notification.

So those are the major ones. We also have provider files, obviously, credentialing files, and a variety of other kinds of files, which come together and link into a single source.

We have been submitting encounter data since day one; that is, 1982. It has been required by the state, and we have been reporting encounters to the state since that time. So the state has a complete record of all encounters going back however many years that is.

It is only recently, however, within the last three to five years, that we have really seen the use of that data at the state level. The state was collecting and reporting it, perhaps to HCFA, but using it in a proactive way around quality indicators and so on has really been a recent development at the state.

I might also say, a fairly recent development of the plan, because we as a state went through that same cycle I mentioned about the promise. We too focused on utilization and it wasn't until really the last five to seven years that we put the kind of emphasis that we need to, and we need to do more, in the quality arena.

Let me just now end with future issues and problems. There is a huge issue coming up for this committee which I think would be incredibly important to be dealt with intelligently. That is the advent of the electronic medical record, the re-use of results data and patient confidentiality all in one loop.

There is no question that it is coming. It is only a function of time. There are states working on it now; the state of Maine is working very hard to try to get medical record availability across lines. We are prohibited at the moment from getting results data from a laboratory by conflicting state laws. We have one state law which says, for instance, that we are entitled to all medical information on our members as a Medicaid managed care plan. We have another state law that says laboratory information may not be given to anybody except the provider of the service, that is, the doctor, and the patient.

So as a result, I'll tell you how that really works itself out in practice. Probably one of the very best tests we have today that is very close to an outcome is the glycohemoglobin A1C. Anyone who looks at glycohemoglobin A1Cs can, with some measure of reliability, look population-wide at all of your diabetics and tell who is in control, who is out of control, or who is going to be out of control, based upon the results of that test.

Effective management of that population at a health plan level might mean gathering that data, arraying it across physicians, taking a look at which physicians are doing the best job of keeping their diabetics in control, finding out what they are doing, and making sure that that happens in those physicians' offices where it is not happening. But we are prohibited by state law, for confidentiality reasons, from getting that information.
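[Illustrative sketch: the analysis just described, which the plan says it is barred from doing -- arraying glycohemoglobin A1C results across physicians to see whose diabetic panels are in control. The control threshold and all values are assumptions for illustration only; the plan does not actually receive these results.]

    from collections import defaultdict

    IN_CONTROL = 8.0  # hypothetical A1C threshold for "in control"

    # Hypothetical lab feed: (member, treating physician, latest A1C result).
    results = [
        ("M1", "DrA", 7.1), ("M2", "DrA", 9.4), ("M3", "DrA", 7.8),
        ("M4", "DrB", 10.2), ("M5", "DrB", 8.9), ("M6", "DrB", 7.0),
    ]

    panel = defaultdict(list)
    for member, doc, a1c in results:
        panel[doc].append(a1c)

    # Share of each physician's diabetic panel that is in control; the plan
    # would then look at what the best performers are doing differently.
    for doc, values in panel.items():
        share = sum(v <= IN_CONTROL for v in values) / len(values)
        print(doc, f"{share:.0%} of panel in control")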

There are very good issues surrounding that. On a commercial basis, I could make a strong argument with another hat on that that data should not be available, because it is a predictor variable, and under the current rules of the game, if plans could drop people as a result of knowledge that they gained in that particular way, it certainly would not be in the best interest of health care in America or for the patient.

So with that, I'll end and take any questions that you might have, either now or after Kathy gets through.

DR. IEZZONI: Great, that was very interesting. Why don't we have Kathy give her presentation and then we'll see if there are questions for you.

DR. COLTIN: I have a few overheads, so I am going to stand for part of my presentation and sit for the rest, because my battery on my laptop ran out as I was printing my last few slides, so I'll have to go by my notes on those.

What I did is very similar to what Don did. I took the list of questions that the subcommittee had put together, and I went through them and determined which I thought were probably most relevant to a managed care plan and to me personally, in terms of issues that I have encountered in various committees and venues that I have participated in.

I thought I would start with your first one, which had to do with what I thought were some of the more important questions. I started out saying what are the things I would be interested in knowing about or already knew about but thought maybe others would be interested in knowing about.

One thought occurred to me as Don was presenting, that in fact my first few questions revolved around utilization as well, because I think that when people express concerns about managed care, a lot of those concerns are framed around utilization issues, whether or not they will be able to get access to particular types of services. Some of the answers to those questions can be gotten by looking at utilization rates for certain kinds of services.

So I asked questions like whether differences in utilization rates between patients who have coverage through a managed care organization or fee for service coverage are greater or less for patients who have lower socioeconomic status.

Now, one could read Medicaid there, but I am actually being a little broader in my question, though it certainly includes Medicaid. And does this vary by managed care organization or by primary care provider? I think variability is really a key with some of these measures, because we don't have an absolute standard, and the only way we can try to assess over- and under-utilization is by looking at comparative norms and looking at variation. And of course, the quality improvement literature tends to support the notion that too much variation is not a good thing in most cases.

A second question is whether managed care organizations in reducing utilization rates, as compared with fee for service, are doing so for recommended services or just for discretionary services. I think that is where some of the fear comes in, that plans may be indiscriminately reducing utilization as opposed to targeting those areas where in fact there is considerable discretion, and where in fact there may be evidence that services have been overused.

Again, are reductions in recommended services occurring when you look at managed care versus fee for service? If they are, are they greater in patients with lower socioeconomic status, who may or may not be as well versed in what they should be getting and able to challenge physician decision making.

Are reductions in discretionary services greater for lower socioeconomic status patients? Again, is there variability either at the plan or the provider level?

Are there differences in compliance rates for recommended services between commercial and Medicaid enrollees in the same managed care organization? So basically, very similar if not identical provider network, same policies, but in fact do you see some systematic differences? Does this vary across managed care organizations, across primary care physicians? And do these kinds of differences persist after you control for the opportunity to treat?

A lot of our measures are population-based, and we believe that is appropriate. But when you then start saying, well, how do we improve a rate, you begin having to dissect it and ask: is the rate for, say, cervical cancer screening low because patients aren't coming in at all, or are they in fact coming in and the provider is not doing something he or she should be doing? The quality improvement strategies would be different, depending on what the problem is.

So what we would like to know is whether differences between Medicaid and commercial rates have to do with patients not coming in, and therefore needing more in the way of outreach and educational interventions, or whether we are finding differences in provider behavior when faced with a patient of lower socioeconomic status versus a commercial enrollee, in terms of what services they do or do not recommend for that member.
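[Illustrative sketch: the decomposition just described -- splitting a low screening rate into members who never came in versus members who came in but were not screened, since the two call for different improvement strategies. The counts are made up.]

    # Hypothetical cervical cancer screening measure for one population.
    eligible = 1000
    had_visit = 700   # came in for any visit during the period
    screened = 420    # actually received the screening

    overall_rate = screened / eligible
    no_visit_gap = (eligible - had_visit) / eligible        # outreach problem
    missed_opportunity = (had_visit - screened) / eligible  # provider problem

    print(f"overall rate: {overall_rate:.0%}")
    print(f"never came in: {no_visit_gap:.0%} -> outreach, transportation, education")
    print(f"came in, not screened: {missed_opportunity:.0%} -> provider-focused intervention")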

The other thing is to consider what kinds of services would be necessary, if it is outreach, what kinds of services are necessary to try to bring those rates up on a par with commercial population. Often, those are services that include outreach, transportation, social services, things that in some cases are not covered under the capitation payment that the Medicaid agency makes to the plan.

We heard about the issue that managed care organizations are doing a lot of population management and performing tests or activities that might have been done by some of the public health agencies. But in fact, the reimbursements to the managed care organizations are generally based on historical claims information in the fee for service Medicaid arena, and those in turn reflect the benefit package for Medicaid. So they are really based on medical services.

We've got different funding streams here at the state level, and yet the activities that the managed care organization is taking on in some cases are replacements for activities that are funded at the state through a different mechanism, and the managed care organizations are not being reimbursed to perform those services. So whether or not they will make that investment in order to improve on these measures is a question, and therefore, looking at variability and which organizations do and don't step to the plate I think is an important issue.

Another set of questions has to do with access to different types of services, in particular specialty care services. Are there differences in rates of specialty care initiation, or in primary care or specialty care visit lengths? Do you find that Medicaid patients in fact require a higher percentage of complex visits, longer visits and so forth in order to address their needs, to deal with translation issues and so forth, as compared with patients in fee for service? Does this vary, and are they in fact getting the amount of time that they need with the provider? Some measures of this might be obtainable from administrative data; others may require survey questions of members.

Do risk adjusted outcomes for Medicaid enrollees in managed care organizations differ from those for commercial enrollees or from Medicaid fee for service patients? And does this vary by condition or by treatment procedure? And does it vary across managed care organizations or across providers?

A lot of the data that we need to answer these questions are in fact very problematic, and I will talk about that a little bit. This is one of the areas that I think is extremely important, and where the data are in fact the weakest.

Are Medicaid managed care enrollees more or less or equally satisfied with their care compared with fee-for-service patients or with commercial enrollees? Which aspects of care show significant differences in satisfaction? Again, does this vary across plans and across providers?

Does the quality of care or service provided to Medicaid managed care enrollees by the managed care organizations' contracted suppliers of medical or social services differ from that provided to their commercially enrolled members? And is the MCO taking any steps to correct this? So looking at things like hospital services, most plans don't own their own hospitals, they contract for those services. Or extended care facilities, or home health providers, and so forth.

Not only do you want to look at whether the care that your members are receiving in those settings is up to standards that the plan may have established, but you want to be able to look at whether there is variability for different population subsets. Here again, patients with lower socioeconomic status might be more vulnerable in these other settings as well.

So now you ask the question, do we collect data that would enable us to answer these kinds of questions? Well, I think my answer to that would really almost be identical to Don's, so I'm not going to run through all of the various types of data that we collect. But certainly we do collect enrollment eligibility, encounter claims, et cetera. We also collect satisfaction data and so forth.

What I thought I would focus on more is some of the kinds of things we report out, using that data. So in answer to your question about what we collect, I've actually talked about what we report more, using that type of data.

We do report on all of the HEDIS measures, including utilization rates, membership and dis-enrollment rates, and quality indicators. We also have a number of other internal measures: some utilization measures like physical therapy visits, which are particularly important in the SSI population, and condition-specific use rates. We will look at asthma care and look at the rates of hospitalization or emergency room utilization for patients with asthma. We are doing the same thing for patients with major depression and patients with congestive heart failure and so forth, so we have a number of condition-specific utilization measures, as well as matching quality indicators for those same conditions, because we want to look not just at the utilization but also at the quality. So we have several internal quality indicators that we look at for each of those same types of conditions.

We report out on satisfaction surveys. We do probably more satisfaction surveys than you can possibly shake a stick at. I was trying to sit down and count them all. One of the things that we are seeing as a result of this is declining response rates from patients. It is extremely important to get information from the patient's point of view, but if you keep battering them with questionnaires, over time you start to kill the goose that laid the golden egg. They are sometimes the only source of information on certain kinds of topics, and you have to really be careful how many times you are going to that well or you are going to start to find that it is drying up on you.

So we do satisfaction with the plan and that is based on just pulling a random sample of enrollees, whether or not they have used the plan's services. We do satisfaction with the primary care provider. We also have done selected specialty satisfaction rates, collecting data at the level of the individual physician and reporting it back to them.

Satisfaction with mental health and substance abuse services, with care received in emergency departments of hospitals or as in-patients, with care received in extended care facilities. We just completed that survey and we're about to start one on home health agencies.

We also participate with the division of medical assistance in Massachusetts, which is our state Medicaid agency, in reporting on targeted quality indicators that have been identified as part of our annual review process, and it actually now has become a semi-annual review process with our state Medicaid agency, where by using comparative data they identify opportunities for improvement in the various plans, as well as looking at responses that plans have submitted to their requests for information around their purchasing specifications.

So for each contract renewal period, the Medicaid agency will send out a list of purchasing specifications, and the plan is required to respond and to provide evidence that it can in fact meet those specifications. Based on the quality of those responses, identification of areas where there may be opportunities to strengthen what a plan can or cannot do in addressing a particular specification can result in the development of a quality indicator for that standard or specification, which the plan must then collect and report on a regular basis and show that it is able to improve.

Some of the kinds of things that we have looked at include rates of member participation in new member orientations, in whatever form they may take -- and for the Medicaid population we have had to develop a number of different forms in which to provide that kind of information to members -- as well as health needs assessments, initial visits within three months of enrollment, assessments of psychosocial problems and referral rates to mental health and substance abuse treatment, emergency department utilization rates and so forth.

So there are quite a few things that we have looked at. I thought I would just show you an example of the actual reports we have done to back that up. This one looks at new member screening and outreach. An HNA is a health needs assessment form; all new members enrolled in the plan are sent one.

What you see here is that we have a pre-intervention phase, and we had to show improvement in the same quarter a year later. Actually, this is a new process; it had not been getting done, which is why there were no numbers in the pre-intervention phase. We will now be tracking this on a regular basis and looking to improve.

But what you can see here is that the response rates that we were getting are in the 30s for children, 32 percent roughly, and 44 percent for adults. So this is an area where there is a real data limitation. We go pretty aggressively after this data. We go after it by phone, and after three attempts we send it by mail, and then we follow up the mail by phone, and we are still finding that we often do not get a higher response rate for these needs assessments. What that means is that there is a fairly significant portion of the population where there may be unmet need that we are not able to identify.

We also in looking at those who do respond are seeing a pretty high percentage that have a health need that needs to be addressed. Again, just as Don said in his plan, this information gets transmitted to different individuals in the organization who are responsible for acting on it. So it might be a specialized case manager in the case of someone whose need falls around a particular disease management activity. It might be the primary care physician in the case of a lower urgency need or a more routine kind of issue like a screening interval having been passed, something like that.

But anyway, this is an example of the kind of information that we produce and report to our state Medicaid agency, to look at whether we are improving on some of these kinds of goals. We now are reporting those every six months, so we will soon be reporting on the second half of 1997.

This is a HEDIS report. I think what is interesting about this report, what this shows in the bars is data for the commercial population from 1993 through 1996. But the boxes at the bottom show you the commercial and Medicaid rates, so you can see down here, for instance, how they are comparing.

One of the things that is interesting is, we have broken this data out by different delivery system components within our organization. People have asked a lot about physician compensation, and its effect on quality. This group is a group of salaried physicians. At the time they were employees of the health plan, a staff model. This is capitated groups, capitated at the group level, not the individual physician level. These are groups where they are capitated for primary care but not specialty care. These are joint ventures which are groups without walls that have been organized around physicians who practice around a particular hospital in a particular geographic area. The joint venture is capitated.

These are physicians in the IPA who are paid fee for service. There really are not great systematic differences here that we observe. Here again, these are staff model health centers; the sample is too small. But we are looking at that, and we are trying to see whether in fact some of the incentives that we have built into our payment structures are having an impact on quality. So all of our quality measures are in fact broken out by these types of factors.

Here again you see the same sort of thing for prenatal care, only this is a measure just for the Medicaid population. It looks at frequency of prenatal care and the number of weeks of pregnancy at the time they enrolled.

We also have done the same thing with checkups after delivery. Here is a measure that is produced for both commercial and the Medicaid population. Here we do see that there are some differences that are showing up between the commercial population and the Medicaid population in terms of coming in for a checkup. So this is an area that we have targeted for quality improvement.

Again, you see the same kinds of information for cervical cancer screening. The data you would be interested in for Medicaid are in the table at the bottom. We also have information for Rhode Island Medicaid over here, and GRIR is the Greater Rhode Island Region. So you will see that we have broken out the Rhode Island Medicaid as well for that measure.

Rhode Island does not require us to report HEDIS measures. We did this for ourselves.

I'm not going to show you all of these, but one of the things that --

DR. IEZZONI: I hate to say this, but we are kind of --

DR. COLTIN: We're already up?

DR. IEZZONI: No, not already up, but I want to allow time for questions.

DR. COLTIN: Okay. One of the things that I wanted to show is what we do in addition to a measure like this, which is a HEDIS measure of what percent of children got the recommended well-child visits. This happens to be the adolescent one.

We would complement this with some of the focused reviews that are done by the state, where they actually do chart reviews. This information was not done by the plan; this was produced by the state Medicaid agency, and it is comparing the plans around various aspects of performance based on focused record reviews. I chose adolescence so you could see how it related to the earlier slide.

They also have looked at particular selective screening procedures, and whether those were done. Again, these are based on focused reviews. So you have a mixture of data from a number of different sources that can be brought together and combined to provide a broader picture. It is not just, are they coming in and getting a preventive visit, but are the things that should happen at that visit happening?

There are other measures for emergency room visits and so forth; I'm going to skip over all of that, and talk about what I think are some of the data concerns and what we would like or need that we don't now have. I'll try to be really brief on this.

On the sociodemographic data, I would echo Don's recommendation on race and ethnicity data. We have exactly the same situation in Massachusetts that was described in Arizona. We do not get that data.

We would love to have information on the primary language spoken, so that we could target materials more appropriately in the member's language.

We would like to know something about the literacy level or years of education, so that in some of our interventions we could begin to again target them appropriately.

In the area of health risks or needs or health status, I showed you that we tried to collect that information. Our response rates are not particularly high, so this is an area where we would really like to have better data on disease burden, on risk behaviors, on functional status and so forth.

In the area of health services use, the biggest gap we have for Medicaid is pharmacy. The state does not include pharmacy services in our Medicaid contract. Therefore, we do not have data on the pharmacy claims. There are an awful lot of quality measures that really depend on having access to pharmacy data. So we are now working on how we can link up data between the health plans and DMA to see whether we can work together to produce measures jointly by linking our claims data with their claims data for pharmacy.

Lab results is another one that I circled here as a really important one. We have the same problem. Things like glycohemoglobin levels are something we would like to be able to measure across the board and cannot. We can measure it in our sites that have computerized patient records, but only that.

We would like to see some sort of an indicator for new diagnoses that appear on a claim, a flag: is this a new problem for this patient, as opposed to something ongoing? One of the things that we turn ourselves inside out around is identifying incident conditions to target patients for particular interventions or education and so forth. We end up having to look back through historical claims and see if they had this before, and that depends on how long they have been a member; in the case of Medicaid beneficiaries, oftentimes their enrollment periods are not very long. This is a very imperfect way of trying to do this.
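[Illustrative sketch: the lookback that has to be done by hand today for want of a new-diagnosis flag -- treat a diagnosis as incident only if the same code does not appear in that member's claims during a prior lookback window, which is imperfect both when older claims fall outside the window and when the enrollment history is shorter than the window. The codes, dates and window length are assumptions.]

    from datetime import date, timedelta

    LOOKBACK = timedelta(days=365)

    # Hypothetical claims history: member -> list of (service date, ICD-9).
    history = {
        "M1": [(date(1996, 5, 1), "250.00"), (date(1997, 12, 1), "250.00")],
        "M2": [(date(1997, 12, 10), "250.00")],
    }
    enrolled_since = {"M1": date(1996, 1, 1), "M2": date(1997, 10, 1)}

    def looks_incident(member, claim_date, dx):
        """True if dx is absent from the member's claims in the lookback
        window immediately before claim_date."""
        window_start = claim_date - LOOKBACK
        return not any(code == dx and window_start <= seen < claim_date
                       for seen, code in history[member])

    for member, rows in history.items():
        claim_date, dx = max(rows)  # most recent claim for this member
        # Note: M1 appears incident even though an older claim exists
        # outside the window; M2's history is shorter than the window.
        apparently_new = looks_incident(member, claim_date, dx)
        short_history = enrolled_since[member] > claim_date - LOOKBACK
        print(member, dx, "apparently incident:", apparently_new,
              "| enrollment shorter than lookback:", short_history)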

Among the most important limitations of the data that we do have in the claims area, I would say claims diagnoses are probably the biggest problem: the number that get reported and the specificity of the codes, both in terms of false positive diagnoses and non-specific disease codes that put things at the most general level.

This leads to problems in identifying the appropriate target population for measure and intervention, and in evaluating disease severity as it may relate to the appropriateness of clinical processes or the assessment of outcomes, risk adjustment or severity adjustment.

In the area of claims procedure, the biggest problems we have are global codes like the prenatal care codes that don't allow us to use administrative data to identify when the episode actually started or how many visits there were within that episode.

Physician or plan bundling of events, like immunizations into well child visits; there the coding isn't the problem, it is the documentation practice. On the other hand, CPT bundling of certain types of events into a single code also creates a problem. That is particularly true in areas that involve cognitive screening, as opposed to screening of blood or urine, where there are probably six codes for everything you might want to do.

We can't identify who has been screened for alcohol, depression, tobacco, whatever; they are all rolled into one code, which is very non-specific. That is the kind of thing that leads to some of the problems we heard about yesterday, where you might give credit for something in an administrative data set because the general code is there, but when you go to the medical record review, what you find is that they were counseled on sexuality, but not on tobacco. It was all rolled into one code. So that is a problem.

Response rates and response bias for information that we collect from patients is another one I mentioned earlier.

What are we doing about it is a question I don't think you asked, but I will try to answer. We do a number of data validation studies, and we build algorithms for trying to cross check diagnoses and make sure that they are accurate. We feed the information back to providers. We feed imperfect information back to them intentionally to stimulate them to document things better and to want to show improvements. Some of that improvement is hopefully real improvement and some of it is documentation improvement, but both of them are worthy goals.
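[Illustrative sketch: one simple form of the cross-checking described above -- before accepting a claims diagnosis as real, require corroboration such as a second claim carrying the same diagnosis or a disease-specific service. The rule and the corroborating codes are assumptions for illustration, not the plan's actual validation algorithm.]

    # Hypothetical rule: accept a diabetes diagnosis only if the member has at
    # least two claims carrying it, or one claim plus an A1C test.
    DIABETES_PREFIX = "250"   # ICD-9 diabetes family
    A1C_CPT = "83036"         # assumed corroborating lab code

    # Member -> list of (ICD-9, CPT) pairs pulled from claims.
    claims = {
        "M1": [("250.00", "99213"), ("250.02", "99214")],  # two dx claims
        "M2": [("250.00", "99213"), ("V70.0", "83036")],   # one dx plus A1C
        "M3": [("250.00", "99213")],                       # single, uncorroborated
    }

    def confirmed_diabetic(rows):
        dx_claims = sum(icd9.startswith(DIABETES_PREFIX) for icd9, _ in rows)
        has_a1c = any(cpt == A1C_CPT for _, cpt in rows)
        return dx_claims >= 2 or (dx_claims >= 1 and has_a1c)

    for member, rows in claims.items():
        print(member, "confirmed diabetic:", confirmed_diabetic(rows))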

Data linkage. We are working with the state to try to link our data to some of the DMA files, as I talked about in the case of pharmacy. We did that for the diabetic eye exam measure, and we are doing it for low birth weight and a number of other measures that we have identified.

We are encouraging the adoption of computerized patient records, putting them in at the core medical groups that you saw in the former staff model health centers, and helping to subsidize their adoption in what we call highly aligned medical groups, where we have heavy penetration into their practice and where they may be contracting with us on favorable terms.

How are we doing it, how are we trying to achieve improvements in these areas? Contracting provisions: we are putting information and requirements around data quality into some of our contracts. Strategic partnerships with suppliers, vendors and partners, and with coalitions: we have something called the Massachusetts health quality partnership, the New England HEDIS coalition and the Massachusetts health assessment partnership. Hopefully you will hear about all of those in April when you come to Boston, because all of them have a role around data collection, data improvement and monitoring. And through advocacy at the state and national level: I participate in a lot of different data-related and measure-related activities, both at the state and at the national level.

That's it.

DR. IEZZONI: Kathy, thank you. We knew we would learn a lot from you. And you're one of our own, so we can ask you a lot of questions. Yes, George?

DR. VAN AMBURG: I have two questions, one for Kathy and one for Don. Kathy, do you collect race and ethnicity on your commercial enrollees?

DR. COLTIN: No, we don't have it on anyone. The reason for that is very much what we said; it was viewed as potentially discriminatory information. So there were concerns that if plans were collecting this information, that they might be using it for untoward purposes. So no plan wanted to take the lead in really being out there and collecting this information, and then perhaps being the brunt of accusations of that type.

So if it were federally legislated and everybody was doing it, we would jump on that bandwagon, because we really see great value in having that type of information. But right now, it is very risky to try to do that.

DR. VAN AMBURG: Kathy's plan is doing some matching to state files, some for outcome measures, if I'm reading this correctly. Is Arizona doing any of that?

DR. UNLAW: Well, the state files are our files.

DR. VAN AMBURG: No, public health files.

DR. UNLAW: Oh, public health files. In Arizona, I think you will enjoy your trip out there, because I hesitate to say it, but we are a unique state.

DR. IEZZONI: Of course you are.

DR. UNLAW: For instance, all of the federally qualified health centers, which make up a lot of this group of public health services in Arizona, are under contract with us in every area. Only the county health systems are outside the loop for our members, and as a result we do not get that data. As far as I know, the state is not using the data across health plans. For instance, if person A was in health plan B last year and is now in health plan C, they are not linking the two data sets together in terms of putting that in one picture.

But I don't think we have the gap problem that was mentioned earlier, except in the area of the county health departments. Frankly, county health departments are trying in Arizona to find a new role, or a differing role. They took a role of providing health care for the poor. Some counties have almost gone bankrupt trying to do that. There has been a movement of getting these people into Medicaid plans either as state qualified or federally qualified, so we don't tend to see that problem.

They don't use the data very much, except in focused studies and in some of the areas that Kathy was talking about, for instance, looking at specific indicators. They do that as well as here.

DR. VAN AMBURG: No, I want to know the reverse. I want to know if you are taking your enrollees and matching them to like the state cancer registry to find out stage of diagnosis for breast cancer.

DR. UNLAW: No, we are not.

DR. VAN AMBURG: And relaying that to your mammogram program.

DR. UNLAW: No.

DR. IEZZONI: Hortensia, I'm sorry, but we're really running short on time.

DR. AMARO: Okay, I just have a couple of questions. You started off reminding us about how the idea that it would be in the managed care provider's interest to keep people healthy, that whole notion we started with. I was wondering -- and again, as somebody who works in mental health and substance abuse and knowing that mental health and substance abuse problems are a major cause of hospitalization and cost to the health care system, do you know whether any managed care providers have undertaken studies to document changes in cost to managed care provider as a result of increasing coverage for mental health and substance abuse services?

DR. UNLAW: I'm glad you asked that question. We just did that. Last year we finished a study that was partially funded by the Flynn Foundation, which is a local foundation in Arizona that funds medical research in a variety of areas. Arizona Physicians IPA participated in the development of something called a REBHA, which is a regional behavioral health authority in Arizona. It covers the Tucson area and four additional counties in the southeastern part of the state, five counties in all.

What we did was, we looked at members that we have in common. Behavioral health is carved out for the most part in Arizona, so we looked at those members that we had in common, and we looked at their primary care. We looked at the different medical diagnoses which they had. We looked at their utilization rates, and we did the same in terms of mental health. We also had a control group of people randomly selected from the population, matched to our members primarily on age and sex characteristics, to take a look and see.
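
(For illustration only: a minimal Python sketch of drawing a control group matched on age band and sex, as described here. The member records and matching strata are invented, not the actual study design.)

    # Illustrative only: drawing a control group matched on age band and sex.
    # Member records and matching criteria here are hypothetical.
    import random

    cases = [
        {"member_id": "M1", "age": 34, "sex": "F"},
        {"member_id": "M2", "age": 52, "sex": "M"},
    ]
    population = [
        {"member_id": f"P{i}", "age": random.randint(18, 80),
         "sex": random.choice("MF")}
        for i in range(1000)
    ]

    def stratum(person):
        """Matching stratum: 10-year age band plus sex."""
        return (person["age"] // 10, person["sex"])

    # Index the candidate pool by stratum, then sample one control per case.
    pool = {}
    for p in population:
        pool.setdefault(stratum(p), []).append(p)

    controls = []
    for case in cases:
        candidates = pool.get(stratum(case), [])
        if candidates:
            pick = random.choice(candidates)
            candidates.remove(pick)          # sample without replacement
            controls.append(pick)

    print(f"Matched {len(controls)} controls for {len(cases)} cases")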

So we looked at that. What we found was a number of fascinating kinds of things. People who were in the mental health system got less primary care overall than people in the control group. People who had chronic conditions such as asthma, diabetes and other chronic diagnoses got less care for them overall, ended up in the emergency room more, and ended up with more hospital admissions overall. It is always nice to prove out all those things that we used to talk about years ago.

What we are doing about that is, we have a secondary follow-up, where we are co-locating primary care in a demonstration project within the same facility as a large at-risk provider for seriously mentally ill care. We are working at the combination -- it is a huge project in many ways, even though it is relatively small, combining medical records, looking at coordination of care across the two systems.

Even though they are co-located in the same facility, there are still two cultures operating. Getting those cultures together to work together has been an interesting and challenging task. But we are continuing to work at it, and we've got an evaluative design around the project, and we'll continue to do that. If it works, we'll try to see what we can do to expand it in similar other kinds of arrangements.

DR. AMARO: And then the other question I had was for Kathy. I couldn't tell from the data you presented, but I wasn't sure if you were comparing Medicaid to people of higher socioeconomic status in other kinds of health plans. So you have the confounder there of, --

DR. COLTIN: In other health plans or just in our plan?

DR. AMARO: In your plan, but -- well, if you are comparing them to people of higher socioeconomic status, the confounder there being socioeconomic status, where you would expect to see higher rates of disease, so then it makes it a little harder to evaluate differences in utilization as a result.

DR. COLTIN: Most of the slides I put up were looking at preventive care measures, not looking at like hospital rates and things like that. Obviously, we expect and do see differences when we are looking at some of the utilization statistics. But we would prefer not to see differences on a lot of the measures that I have put up there, the preventive care screening rates, early prenatal care rates.

We feel that it is entirely appropriate to use our commercially enrolled members as a comparison group for our Medicaid enrolled members, because we want to be providing a single class of care to members. It just means we have to work harder in that population to try to achieve that.

Now, for some of the measures, we are actually doing very well, and there are actually no differences. There are some where we are actually doing better in the Medicaid population, because they tend to be emphasized that much more in that population. Then there are others where we are not doing as well, and we are trying to target those for improvement.

DR. AMARO: The last question I have is, you pointed out the fact that in most capitated rates, some of the supportive services that are needed to provide good access, like outreach and transportation and good translation and babysitting and whatever else, are factored in. I was wondering whether any of the managed care organizations have done any studies to estimate these costs for different populations, and are trying to negotiate this? Is there any research going on in that?

DR. COLTIN: I think it varies by state in terms of what is in and what is out of the contract. Some transportation is in and some is not, so it kind of depends on what kind of transportation you're talking about. But --

DR. AMARO: But outreach is a good example.

DR. COLTIN: Yes. We have looked at the costs of outreach. We obviously need to do a lot more outreach in the Medicaid population than we do in the commercial population. But the infrastructure supports outreach across the entire membership. So the fixed cost of maybe having an outreach nurse or whatever is spread across the entire membership, because we do have to outreach to the commercially enrolled population as well. It is just that Medicaid members make up a larger segment of people who get to the point of requiring outreach because they didn't come in on their own for a service.

So I haven't looked at the marginal costs of doing that in that population.
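
(For illustration only: a small worked example, with invented numbers, of the fixed-versus-marginal cost distinction being made here.)

    # Illustrative only: the fixed-versus-marginal cost point above, with
    # invented numbers. A fixed outreach cost spread over all members looks
    # small, while the marginal cost concentrates where outreach is needed most.

    fixed_outreach_cost = 60_000        # e.g., one outreach nurse (hypothetical)
    total_members = 100_000
    medicaid_members = 20_000
    outreach_contacts_medicaid = 6_000  # invented volumes
    outreach_contacts_commercial = 2_000
    cost_per_contact = 5                # hypothetical marginal cost per contact

    fixed_per_member = fixed_outreach_cost / total_members
    marginal_medicaid = cost_per_contact * outreach_contacts_medicaid / medicaid_members
    marginal_commercial = (cost_per_contact * outreach_contacts_commercial
                           / (total_members - medicaid_members))

    print(f"Fixed cost spread over all members: ${fixed_per_member:.2f} per member")
    print(f"Marginal outreach cost, Medicaid:   ${marginal_medicaid:.2f} per member")
    print(f"Marginal outreach cost, commercial: ${marginal_commercial:.2f} per member")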

DR. IEZZONI: I am unfortunately not going to be able to take questions from the audience, because we don't have that much time to actually physically get lunch, since we are reconvening at 1:30. So we'll be back at 1:30, and then we'll wrap up this session. Thank you, Don, for coming from Arizona. We learned a lot. Hang around because people might want to ask you a few questions.

(The meeting adjourned for lunch at 12:52 p.m., to reconvene at 1:30 p.m.)


AFTERNOON SESSION (1:35 p.m.)

DR. IEZZONI: I think we should get started for the afternoon session. I thank everybody for staying.

Eileen, Michael Collins, and we have an addition to our afternoon panel, David Baldridge. So why don't we start with Eileen, and if the panelists could introduce themselves and let us know where they are from, that way the people who are listening to us on Internet will be able to know who we are.

DR. PETERSON: I'm Eileen Peterson. I lead a group called the Center for Health Care Policy and Evaluation at United Health Care. I am not a Medicaid expert.

I think the reason I am here is because first, I was going to be in Washington from Minneapolis for another meeting, and so it didn't necessitate an extra trip. But I guess in terms of my qualifications to be here, it is because I have spent the last 10 years of my career in managed care understanding and exploring and developing administrative data systems for population health management.

I spent the first few years of that developing systems for internal use across the United Health Care health plans, very much on the operational side. I developed a quality surveillance system that is still in use in the health plans today, participated in the original development of HEDIS, and now sit in a research group that still does a lot of work on methods and administrative data, but is focused more on doing policy research and putting findings in the public domain.

One of the things that I think has been consistent across those endeavors of mine is the public health principle of the greatest good for the greatest number. I have learned that it applies to data as well as to services, and that the need to have good data, and to be very selective about what you invest in and how you use it, is very important.

So my remarks today are really going to be in two parts. The first set will really be information that I have gleaned from other people in our health plans and in our corporate offices who deal a lot with Medicaid specifically, and then some recommendations based on a blend of that information and my own background and experience.

United Health Care serves Medicaid populations in 10 states, hundreds of thousands of people in a variety of health plan models. Some of those plans are very old; they have been around for almost a decade in some instances, and some are very new in different locales.

A lot of the detail that I got from people about the data that we collect and some of the challenges that we face have already been covered by the previous speakers, and particularly Kathy and I share these things on an ongoing basis, some of the trials and tribulations, so to speak. We collect the same kinds of data. We are dominated by the IPA model of health plan in United Health Care, so we use a lot of -- we have good administrative data, fee for service, the same things people have mentioned, pre-certification or authorization data, claims data. We do have very good out-patient pharmacy data, which has been one of the things we have used quite a bit in our health plans.

We have some plans that do collect encounter data, but similar to what Don said: our standards for that data are the same as the fee for service data, matching the HCFA 1500 model.

In terms of initiatives outside of that that we are involved in, we remain committed to the collection of HEDIS data. In Medicaid, the biggest frustration for everyone involved has been coming up with measures that are good performance measures for that population. The need for many of the measures to go to medical records to get the data has been difficult. It is a burden in the sense of the cost of going out and getting it, but also because when you do get it, you have complete data on only a sample of your population; and if the content of that data is very critical in terms of being able to intervene and improve the population's health, then you are left with data covering only a small proportion of your population.

So we are involved with academic researchers and people trying to explore and develop new measures, looking at for instance the content of prenatal care and trying to expand in those areas.

One of the things that wasn't mentioned previously was the EPSDT data. This is another area of concern for the health plans. It is another data collection effort that -- and again, I have the perspective of looking across states and across health plans -- is very non-standard. We applaud the move that I was talking to Rachel about, the effort to develop guidelines to standardize EPSDT collection and map it to administrative data, so that perhaps there could be some synergy and some efficiencies there; that would be very helpful.

One of the new things that our health plans have gotten involved in is health risk assessment data and collecting that for both Medicaid and Medicare populations. As an epidemiologist, it is exciting to me. I think it is really critical if you are trying to get into looking at outcomes and so on, but I am sobered by the fact that there is a lot of expense and effort being put into this, and people don't really know exactly what to do with it. They want to give it to case managers, they want to use it to prioritize who they should spend their scarce resources on and who needs the most attention, but they don't know what to do.

Even as a small research unit in a large organization, it is quite a task for us to help them address that, and so I think standardization of those tools is a big issue. Actually, the creation and the standardization of those kinds of tools and information about how they can best be utilized in a real world setting would be very helpful.

I think those are the specifics that come out of our organization, in addition to all the ones that were mentioned previously. I think I just have five recommendations that I think are a compilation of information from people across our organization and my own experience.

The first is that for the data structures that you support or recommend, we would really encourage you, to the extent possible, to build on the existing data structures that are embedded in health plan operations. I think that when managed care came into being, the whole notion was to link financing and delivery of health care, and so I think that the most efficient systems within those organizations will also link those kinds of data.

I think as Kathy demonstrated in some of the data she put up, the impact of organizational strategies and financing strategies is critical to understanding quality. I think we will do a disservice if we try to create a clinical or a quality data system or data structure that is not able to be linked at the person level and the provider-specific level for those things. So I think continuing to refine those standards, to add to them selectively, is critical.

I would also support investment in the development of methods to enhance that data and to really understand how to use it. In our center, we have spent years and have published on this idea of validating administrative data systems, coming up with algorithms and techniques for using that data and trying to exploit it to the largest extent possible before you have to go out and get additional data, because of all the issues that are involved with that.

That is my first point. The second point is, if you want to go out and collect additional data or require it, you need to have a use defined for it before you request that.

A couple of other people have talked about this. Rachel talked about value. Mary Jo O'Brien talked about how much data is out there. I was at a HRSA meeting earlier this year when they presented all of the surveys that are done at the federal level, and Kathy talked about all the surveys that are done in the private sector. It seems to me that there is an incredible amount of data out there relative to the amount of meaningful analysis and direction provided by analysis that comes out of it.

I think that in health plans, what they respond to is being asked to collect data when it has not been demonstrated what the need is. I don't think you have to have the exact specific report that is there, but I think to have that in mind is very important.

Then there is the other issue people talked about, developing infrastructure and capacity -- the analysts who look at this data. People here have asked, shouldn't health plans be responsible for prioritizing and doing all of these kinds of studies and so on? I have to say there is an incredible need for further development in that whole area of analytic expertise.

One of the things that we have done is to create expert analytic systems and to embed our analysis and our research in software, because it is the only way that you can disseminate it out to people who don't have that training -- particularly a population-based mentality -- because they come out of the medical care model. When you are talking about responsibility for populations, that requires a change in thinking.

The third point I think is providing some capacity to instruct plans and providers on how to implement the systems, the coding systems, the structure. I think that the devil is really in the details when it comes to these things.

The people from NCQA talked about audit systems and those kinds of things. I think you can put those in place on the back end, but it would be better to have some organization or some mechanism to be able to help people with that on the front end when they are developing those kinds of systems.

A fourth point is that I think that when it comes to these systems and these data structures, you need to really be able to understand the shortcomings of them, because there will be shortcomings, and that there won't be any perfect solution, and then have a plan to deal with them.

When Dr. Rosenberg was up here and Dr. Weed, I was struck by how well they understood the implications of changing just that one little data element and how it was going to impact other kinds of reporting systems and so on. I just shut my eyes and thought about the hundreds of data elements that we are required to collect and report on and change over time.

I just shudder to think of how little we know about those things and how hard it is to get funding to understand those things in terms of research and so on. I don't have an answer for that, but I just want to raise that issue.

My last point is really the need to demonstrate how all of this data and these analyses and things that we require of health plans can really be used for protecting and improving the health of the population. I think that we can all get caught up in the structures of these things and worrying about accountability and compliance and those kinds of things. I think what would be more helpful in incenting health plans and providers to do these things are the kinds of demonstration projects that people have talked about here, the prototypes, the coming together of the community and those kinds of things.

I think if you have to demonstrate those things, it has a couple of good benefits. It keeps everyone honest, I think. The regulators have to justify what they are asking for, the health plans have to face up to what is really necessary. They can't just cry unfair burden. I think it educates everyone and really creates an opportunity for the public and private partnerships that we hear about so much.

DR. IEZZONI: Thank you. Any really quick, quick questions of clarification? Michael Collins, next, and can you introduce yourself and tell us a little bit about yourself?

DR. COLLINS: My name is Michael Collins. I'm the deputy executive director of the Center for Health Program Development and Management at the University of Maryland, Baltimore County.

We are an applied research, analysis and consulting group within the university structure in Maryland. Our primary client now is the department of health and mental hygiene of Maryland, a unified state health department that includes public health and Medicaid functions.

I am here today because we played an important role in helping the state design and implement its current Medicaid managed care program called the Health Choice program, which is just finishing up its implementation, or rather, its roll-in phase, its startup phase, and has now enrolled about 330,000 persons into capitated managed care programs.

It is a relatively comprehensive program that most populations are in, including the disabled. Managed care organizations are capitated for all services except for a mental health carveout and a few other very tiny exceptions.

I think the Health Choice program is notable among other things because Maryland made a heavy bet on encounter data. They believe that encounter data is absolutely essential to operation. I think that goes back to our fundamental belief, shared by the policy makers, that there is really no alternative.

This reminds me of the story about Margaret Thatcher, who in her first term was known as TINA, because she used the phrase so often: there is no alternative. We think that in managed care, there is no alternative to collecting information. After all, if managed care is now the dominant theory of health care delivery in this country, it is about management, and there is no other industry that would pretend that it could manage itself without information about the production process and delivery of what it does. So there really is no alternative to collecting information about that process through encounter data and other means.

But it is equally important -- and I think I can start by basically saying amen to Eileen's presentation, which is that too often, we see people collecting data but not using data. It goes back to Collins' Law, that use creates quality in data, use begets quality. So data are only as good as they are used.

That also means however that poor data quality is not an excuse not to use data. So you have to use the data in order to make it better.

Maryland is betting heavily on encounter data, because they need it for really two major purposes. The first is quality measurement, a lot of which has been talked about today. I was happy as I waited here, being in the cleanup spot; I have taken a lot of notes and crossed out things that other people were saying, so that I don't have to say them. But we use the data for quality measures. We intend to replicate some of the HEDIS measures. Plans are also required to report HEDIS measures directly.

But we in particular think we need to look at some variations on HEDIS measures, particularly around other sorts of eligibility issues. We will be reporting and using, as part of the quality measurement of the plans, basic utilization measures: admissions and days per thousand, ambulatory care visits, ER visits. We have done an analysis and made up our own state list of ambulatory care sensitive conditions that will be part of our reporting process.
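
(For illustration only: a minimal Python sketch of computing admissions per 1,000 members and flagging ambulatory care sensitive admissions. The diagnosis prefixes and counts are invented and are not Maryland's actual list.)

    # Illustrative only: computing admissions per 1,000 members and flagging
    # ambulatory care sensitive (ACS) admissions. Codes and fields are hypothetical.

    members = 330_000                      # enrolled population (denominator)
    admissions = [
        {"member_id": "A1", "principal_dx": "493.92"},   # asthma
        {"member_id": "B2", "principal_dx": "428.0"},    # heart failure
        {"member_id": "C3", "principal_dx": "820.8"},    # hip fracture (not ACS)
    ]

    # A state-specific ACS list would be much longer; these prefixes are examples.
    ACS_DX_PREFIXES = ("493", "428", "250")

    acs_admits = [a for a in admissions
                  if a["principal_dx"].startswith(ACS_DX_PREFIXES)]

    admits_per_1000 = 1000 * len(admissions) / members
    acs_per_1000 = 1000 * len(acs_admits) / members
    print(f"All-cause admissions per 1,000: {admits_per_1000:.2f}")
    print(f"ACS admissions per 1,000:       {acs_per_1000:.2f}")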

Eileen mentioned EPSDT standards, and that is something that is part of our group process. We have a series of ongoing groups within the program, including providers, advocates, state analysts and so on, about special populations of different types. Those groups are in the process of coming up with some special-population-specific measures that can be analyzed through the use of administrative data.

The second major area in which we are using encounter data and other claims from the mental health system -- all types of administrative data -- is in our system of health-based payments, risk adjusted, or the payments formerly known as risk adjusted payments. And of course, since this is now about to be a requirement of the Medicare program, it is something that is heading for virtually all managed care organizations. It is impossible to have health-based payments without a record of health care encounters, since most of the systems are based on diagnoses.
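
(For illustration only: a toy Python sketch of a diagnosis-based payment adjustment. The condition categories, weights, and base rate are invented; actual health-based payment systems use far more elaborate diagnostic groupers than this.)

    # Illustrative only: a toy diagnosis-based payment adjustment. The condition
    # categories, weights, and base rate are invented for demonstration.

    BASE_RATE = 150.00                      # hypothetical monthly capitation
    CATEGORY_WEIGHTS = {"diabetes": 0.40, "chf": 0.90, "asthma": 0.25}

    def categories(dx_codes):
        """Map ICD-9 codes to coarse condition categories (toy mapping)."""
        cats = set()
        for dx in dx_codes:
            if dx.startswith("250"):
                cats.add("diabetes")
            elif dx.startswith("428"):
                cats.add("chf")
            elif dx.startswith("493"):
                cats.add("asthma")
        return cats

    def monthly_payment(dx_codes):
        """Base rate plus additive weights for each condition category found."""
        risk_score = 1.0 + sum(CATEGORY_WEIGHTS[c] for c in categories(dx_codes))
        return BASE_RATE * risk_score

    print(monthly_payment(["250.00", "428.0"]))   # member with diabetes and CHF
    print(monthly_payment(["401.9"]))             # member with no weighted category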

So the plans know and have known since we began that they wouldn't be able to meet quality standards, nor would they get paid efficiently, if we didn't have decent encounter data.

Now, having said all that, and as I say, the program started in July, the state of encounter data production in Maryland is really pretty poor at the moment, although I think it is not worse and probably better than most other states that have implemented encounter data requirements as part of Medicaid managed care. The overall record I think is really pretty lamentable.

Maryland, working with HCFA, made an attempt to standardize its encounter data requirements. Although they are pretty comprehensive, they tried to make them as much like claims data as possible, based on the UBs, the HCFA 1500s, and so on.

Some exceptions to that are unavoidable in the Medicaid realm; EPSDT indicators are one good example. But having tried to standardize the data set as much as possible, Maryland also took a proactive approach and has geared up and offered technical assistance to all the plans in helping them meet their encounter data requirements, technical assistance paid for by the state. Several of the plans have already taken up that offer, and others will soon.

But in the first round -- now the first two rounds -- of encounter data submission, things have not gone very well. We are having predictable problems, I think. We are seeing them in what we would call the basic areas you want to look at in encounter data testing: timeliness, completeness and accuracy.

Most states have looked at the first two of those, timeliness and completeness, and over time have been able to reach standards fairly acceptable to them. There is a lot more question in my mind as to whether accuracy -- in other words, whether the number that is in the field is a correct number -- is being handled as well. We have a lot more to learn about that.
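
(For illustration only: a minimal Python sketch of automated edits for the three basic tests named here -- timeliness, completeness, and field-level accuracy. The field names, thresholds, and code patterns are invented.)

    # Illustrative only: simple automated edits for timeliness, completeness,
    # and field-level accuracy of an encounter record. All details hypothetical.
    import datetime
    import re

    REQUIRED_FIELDS = ["member_id", "provider_id", "service_date", "dx", "procedure"]
    DX_PATTERN = re.compile(r"^\d{3}(\.\d{1,2})?$")   # crude ICD-9 shape check
    TIMELINESS_DAYS = 180                             # hypothetical lag limit

    def edit_encounter(rec, received_date):
        errors = []
        # Completeness: every required field present and non-blank.
        for field in REQUIRED_FIELDS:
            if not rec.get(field):
                errors.append("missing " + field)
        # Timeliness: submitted within the allowed lag from the service date.
        svc = rec.get("service_date")
        if svc and (received_date - svc).days > TIMELINESS_DAYS:
            errors.append("submitted late")
        # Accuracy (field level): does the value even look like a valid code?
        if rec.get("dx") and not DX_PATTERN.match(rec["dx"]):
            errors.append("implausible diagnosis code")
        return errors

    record = {"member_id": "A1", "provider_id": "",
              "service_date": datetime.date(1997, 7, 1),
              "dx": "XYZ", "procedure": "99213"}
    print(edit_encounter(record, datetime.date(1998, 1, 13)))
    # -> ['missing provider_id', 'submitted late', 'implausible diagnosis code']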

To try and characterize some of the current issues, as somebody who is in the position of working with the state and helping the state, and also working with the plans and trying to be a bridge and communications channel between them: plans are having persistent problems in communicating with their own providers. I think this is something that goes far beyond data submission as a problem for managed care organizations, but that is the subject of another meeting, maybe.

But in terms of data submission, they are definitely having problems with lack of data, first of all, from capitated providers, to whom they are used to pushing the risk downstream without really worrying about accounting for every service that was done.

This has not been a very high priority for managed care organizations in the past. I think we can see that in general, IT operations of managed care organizations are pretty undercapitalized. They tend to have small staffs dedicated to information technology initiatives. Our experience in Maryland right now is, even though there are small staffs, they are having tremendous problems recruiting enough people to run their information technology operations.

From the provider side: in Maryland Medicaid, before this managed care initiative started, the average provider was part of six managed care organizations, and I'm sure that number has not gone down in the two years since we did that survey. It is hard to see, from the rendering provider's viewpoint, how they can go to the effort of keeping up with differing demands from every managed care organization that needs data in some different way. Many people have come to our meetings and said that at the point where they are seeing a patient, they really don't pay attention to whether that patient is in United this month or in Blue Cross or in some other managed care organization.

So there is a definite pattern of disconnects in the chain. Also, at the time that the managed care initiative started -- although we have regulations in Maryland where other states would have contract provisions, and although the regulations spelled out what had to be done and what the responsibilities of the managed care organizations were -- many of the organizations did not have, in their contracts with their own providers at that point, provisions requiring those providers to submit data in the formats and with the content that was required by the program.

An area that I didn't hear discussed today, one that I think is important in the work of the committee is to work with health information system vendors and look at their role in this whole process.

The first answer of a managed care organization that is faced with new requirements for submitting data to the state is, that is not in our HIS system, or that was in version three of the system but we're only on version two, or we haven't bought version three yet. These are not really unfair responses on their part. They have to live in an environment where health information systems are very major investments and hard to change once implemented.

As a person who follows some level of the health information system business myself, I know that many of the vendors out there are working hard and rapidly to implement many items as part of their system that could be part of the quality measures -- HEDIS modules, for example, but those are only beginning to be disseminated out into the provider community.

I have not seen a very good connection between those people who are out there in the vendor community and the federal government, let's say as a beginning, about, here is what we are going to require two years from now, so it would be a good idea to put it in your product development plan. I think that is a step that needs to be taken somewhere down the road, the sooner, the better.

I do want to come back for a moment to this question about the issues of what we should be monitoring in what we are getting. All the states seem to be doing a pretty good job of looking at timeliness and completeness, finally.

Accuracy is one area where that is not so; things have not been done so well there. There are many things that could be done without too much difficulty on the state level to try and audit accuracy before going to the medical record review level. In other words, there could be more and better automated measures. Amazingly, some states that we have reviewed in our work also have not gone to the point of really having good feedback loops with the data submission people, with the MCOs. They say, we edit the data, we send it back, we say it is not right, but we never tell them what was wrong with it or what systematic things we are seeing. Closing the loop in every step of the data edit and the data capture process is really important in this.
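
(For illustration only: a minimal Python sketch of the kind of feedback report that closes the loop -- summarizing edit failures by submitter rather than simply rejecting records. The plans and error counts are invented.)

    # Illustrative only: closing the feedback loop by summarizing edit failures
    # per submitter instead of silently rejecting records. Data are hypothetical.
    from collections import Counter, defaultdict

    # (submitter, error) pairs as they might come out of an edit run like the sketch above
    edit_failures = [
        ("Plan A", "missing provider_id"),
        ("Plan A", "missing provider_id"),
        ("Plan A", "implausible diagnosis code"),
        ("Plan B", "submitted late"),
    ]

    report = defaultdict(Counter)
    for submitter, error in edit_failures:
        report[submitter][error] += 1

    # A per-plan error profile that can be returned with each rejected batch,
    # so systematic problems are visible rather than just a reject count.
    for submitter, counts in sorted(report.items()):
        print(submitter)
        for error, n in counts.most_common():
            print(f"  {error}: {n}")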

Then another thing that came up this morning was the issue of connection with state maintained data. The particularly important part of this that we see -- as the people who maintain the data warehouse and decision support facility for Maryland Medicaid -- is the link with denominator data.

This is a very difficult issue, because given the way in which most states that we have looked at maintain their eligibility records and so on, there is not a simple link between those files and their use as denominators in analytic projects. As the group that maintains our decision support facility, we probably spend more time and investment on that issue, the linkage issue, than on any other. The data are of very limited usefulness without the ability to link to the denominator data.
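
(For illustration only: a minimal Python sketch of turning eligibility spans into member-month denominators that encounter-based rates can be linked to. The spans, counts, and month-counting convention are invented simplifications.)

    # Illustrative only: building member-month denominators from eligibility
    # spans so encounter rates have a usable denominator. Spans are hypothetical.
    import datetime

    eligibility_spans = [
        ("A1", datetime.date(1997, 1, 1), datetime.date(1997, 6, 30)),
        ("A1", datetime.date(1997, 9, 1), datetime.date(1997, 12, 31)),
        ("B2", datetime.date(1997, 4, 1), datetime.date(1997, 12, 31)),
    ]

    def months_covered(start, end):
        """Whole calendar months touched by the span (a simplifying convention)."""
        return (end.year - start.year) * 12 + (end.month - start.month) + 1

    member_months = sum(months_covered(s, e) for _, s, e in eligibility_spans)

    encounters_in_1997 = 42   # invented count linked to these members for the period
    rate_per_1000_mm = 1000 * encounters_in_1997 / member_months
    print(f"Member months: {member_months}")
    print(f"Encounters per 1,000 member months: {rate_per_1000_mm:.1f}")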

The state of Maryland does maintain race and ethnicity classifications, but we are actually planning a project right now to go look at the department of social services, which originally records that data as part of the eligibility process. We are not very sanguine about how good a job is being done -- not that there is anything about them that is worse than anyone else, but this is an area that has not had a lot of careful attention. So even in cases where it is being reported, I think it is important to try and audit how that process is taking place, in my experience.

I think I have used up my time, having said all that. So perhaps it would be better to answer questions after this.

DR. IEZZONI: Thank you. Dave, can you give your talk, and then we'll have questions?

DR. BALDRIDGE: Good afternoon. I'm Dave Baldridge. I'm the executive director of the National Indian Council on Aging, a 501(c)(3) nonprofit based in New Mexico.

This afternoon I find myself in a very familiar situation, which is being in a room full of people who know a great deal more about the topic at hand than I do. Yet it is I think one of the ironies of our health care system that here you are with all your degrees and expertise and knowledge, and you are trapped sitting for the next 15 minutes listening to a guy who barely understands standard deviation. Ain't this a great country, I think?

Our mission is to bring about improved comprehensive services for this nation's older Indians and Alaska Natives. That is not too tough, because the 200,000 of them or so have very little in the way of services. They are suffering from generally rural isolation, and many, many other barriers. They tend to not benefit from the health care services that apply to many of the other elder citizens of our country.

So for the last 23 years, my nonprofit, which was formed by a group of tribal chairmen, has looked for ways to better serve our elder population. One of the things that is striking us more and more recently is how interrelated their needs are. They are caught in a vicious cycle.

For example, this year we just got two million dollars more in the 228 Title VI nutrition programs that are their key service points on reservations. Yet, when reservation roads are so poor, when they average a sixth grade education, when they have got three generations of people living in substandard HUD housing on the reservation, when there are no jobs, we find that going in and trying to just fix one item, like a rifle shot, isn't working for them.

So we began to think, what could we do to help these people, these elders, that would really make a difference in their lives.

In 1992 and 1994, we held national Indian aging conferences. In '92 we sent a questionnaire to 800 tribes, elders, Indian service providers, and we developed a national Indian aging agenda for the future. It was the subject of a Senate committee hearing, a significant document.

Three of the top five needs that our elders expressed were about health and long-term care. They are very concerned about their health.

So we looked at the health care system that serves them; we call it the ITU system. The I stands for the Indian Health Service. Health care for Indians began in the 1880s, as those Indians living around forts in the Midwest were ravaged by disease, so it was Army physicians who first treated them. But more recently, Indian health care was moved from the War Department into the Public Health Service, and is now administered by the Indian Health Service, which has 145 service units scattered through Indian country in the United States.

Health care for Indian people is not just health care for another minority. On the basis of some 800 treaties, Indians ceded 500 million acres of land in this country, and many of those treaties specifically provide for health care for Indian people. There is also health care on the basis of statute, beginning with the Snyder Act in 1921; there are many, many cases of statutory provision for the health care of Indian people. The third leg of the tripod is case law. Again and again in U.S. courts, we have seen health care for Indian people upheld, and subsequently the United States federal trust responsibility to provide that health care.

I have a handout which is Health and Long Term Care for Indian Elders. Chapter 1 has a very fine explanation of the basis for Indian health care in this country.

So health care has been provided top down by the commissioned corps of the Indian Health Service. Fifteen years ago, if you looked at the tribes, the number doing this themselves would have been zero. Currently, the big news in Indian health care is that under Public Law 93-638, we have more than 300 tribes which are contracting, meaning taking some portion of Indian Health Service monies and providing their own health care, or compacting, taking a big pot of money and providing all their tribal services.

It is a massive and very new venture in Indian country. It is a real-time exercise in tribal sovereignty, which is the single most important issue to Indian tribes. We think it is a great exercise.

At the same time, we are really scared for them, especially for the small and medium sized tribes. As they begin to take over Indian Health Service clinics or start their own, they don't have facilities, they have a backlog of three generations of neglect of their health care facilities, if they had them. They don't have trained medical personnel, including hospital administrators, physicians, nurses. In many cases, they don't even have trained clerical help and the labor pools to draw from.

So as we see them coming to the forefront, it is real nice, but they are doing it at great risk, and they have got a very great deal at stake.

The U, the third leg of the Indian health care delivery system, is the 40 urban area clinics scattered generally in larger metropolitan areas around the country. They are funded by state grants, whatever they can scrape together locally, and some funding from IHS. Some operate as FQHCs or have in the past, and all of them are struggling desperately for money. Many of them provide little more than information and referral services for the most indigent, transient Indian populations.

So my little organization thought we would maybe try to empower this Indian health care system, the ITU system. We perceive there are three big trends in this country that are affecting Indian health care.

The first is the decentralization of the ITU system. The Vice President's REGO II initiative has resulted in an Indian health design team which for the past five or six years has been re-inventing the Indian Health Service's role in health care. They are retreating about as fast as it is possible to in every respect, including from the retention of a national data system, which I will talk about briefly in a second.

But the IHS is really backing out of direct service provision. They are empowering those 300 tribes by giving them decision making authority, by giving them what money there is, and telling them, you are pretty much on your own.

Secondly, because of a Presidential directive, of course, there is a national, even a worldwide move toward the standardization of data. We are seeing that be very effective in many cases. Yet at the same time, we are seeing in federal databases almost no Indian flag data being pulled up. Indians are not benefitting from this national trend.

Third, of course, is the major trend of state managed Medicaid. Where once we had a monolithic Indian Health Service system that could go in and negotiate for durable medical goods and negotiate with states for health care contracts, we now have a very fragmented system, with the smallest end users in each of the 38 states where Indian tribes reside. Those 38 states either have enacted or very soon will be enacting Medicaid managed care plans. These states are benefitting from HCFA data. The tribes that reside there are not.

The Indian health care delivery system is traditionally responsible not just for the health and direct care that an HMO might provide, but for public health and certainly for maintenance of facilities. So their responsibilities are very much larger than those of an MCO. Yet, they find themselves now at a new negotiating table in each of those states, dealing with new players and dealing with new terms that the Indian Health Service delivery system has never faced -- capitation. The interface between the traditional Indian delivery system and the new managed care system is a very, very complex one, not just for clients, but for the Indian providers as well.

So we see Indian country moving at various degrees of speed toward managed care. In some states -- Washington, Oregon, California, Idaho -- the 42 tribes in the Pacific Northwest have reached or are reaching very acceptable managed care contracts with those states. It is working out well. In other states, including three of our top five Indian states -- Arizona, New Mexico, Oklahoma -- HCFA Medicaid data is not available.

The state managed care plan in New Mexico is very tenuous and teetering. The interface there has been very difficult. It is unusual to see the kind of very good state-tribal cooperation we see in the Northwest; the tribes there worked at it for five years. That is certainly not the case for the rest of the U.S.

So we see these various components of Indian country -- the providers, the IHS, the tribes, the urban clinics, but also our clients -- all moving toward managed care with the states, not because they have chosen to but because they have to. They don't have data. They are in a very precarious situation.

So we perceive that it is quite true that data will follow dollars and vice versa, and that these 300 compacting tribes have every bit as legitimate a need for good health data as do the states and the MCOs that they are dealing with.

We perceive that if we could create a data bureau -- that is my demographer, Mario, from Malta -- we could perhaps access federal data: not just HCFA data, but Indian Health Service RPMS patient data, HUD housing data, Bureau of Indian Affairs data, state data, U.S. Census data, USGS data. None of this is filtering down to the tribes. I think you would be horrified if you knew how little data they have access to. They don't have it; it is like another world out there. They need it desperately.

So we felt that if we could somehow access federal data, we could perhaps provide it to each of the components of the Indian delivery system, the IHS, the tribes and the urban clinics.

So this past year, I am very happy to say, we competed for and won federal grants totaling well over a million dollars now. Our first grant was with the Administration on Aging, based on this floor in this building, to access HCFA data and try to provide Medicaid data to those tribes to meet their needs.

We have been working for some months on developing an agreement with HCFA. It is on the verge of final signature now. We find it fairly extraordinary that the world's largest health reimbursement organization would provide Indian Medicaid flag data to a small nonprofit in Albuquerque, and we are very proud of that, and we are very grateful for the cooperation we received and continue to from HCFA.

It has involved strenuous considerations of the Privacy Act. We passed HCFA review boards, Indian Health Service national review boards and the IHS national Privacy Act coordinator. It has been a very complex set of arrangements, and very difficult for us to gain the cooperation at the level we needed from HCFA.

They are going to supply us all the Medicaid data for those 38 states or any state that we request, beginning with North Dakota. I've just come from a meeting with four tribes there, to try to learn what their data needs are. They are thrilled to death, and they hardly even know what questions to ask about health data. But we are going to try to interpret it from HCFA, give them back analyses.

So for every tribe in this nation, hopefully within a couple of years, we will begin to supply federal and state data that will help meet their needs -- a queryable health data service at no charge to them. Also based on this floor is the Administration for Native Americans. Gary Kimble, the director, sends out some $50 million or so of grants each year; they go to tribes. It took us four tries, but this year we succeeded, and we have got a sister grant to supply identical data from HCFA or IHS, wherever we can get it, to the urban Indian clinics. We are going to provide each of the 40 clinics with a GIS map of their city.

In Albuquerque, by the way, we have just completed a GIS project. For the first time, we think, we have applied this technology to Health and Human Services. We chose a population of Indian elders suffering from complications of diabetes. We were able to map the Indian ghetto and find the IHS clients in that neighborhood, compare their health status with their socioeconomic status, using factors from the 1990 U.S. Census, and with their proximity to services.
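
(For illustration only: a minimal Python sketch of one small piece of such a GIS analysis -- computing each client's distance to the nearest clinic from coordinates. The locations are invented, and a real project would use geocoded addresses and GIS software rather than this hand calculation.)

    # Illustrative only: great-circle distance from each client to the nearest
    # clinic, one ingredient of a proximity-to-services analysis. Points invented.
    import math

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in miles."""
        r = 3959.0  # Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    clinics = {"Clinic 1": (35.08, -106.65), "Clinic 2": (35.11, -106.62)}
    clients = {"A1": (35.06, -106.70), "B2": (35.13, -106.60)}

    for client, (lat, lon) in clients.items():
        nearest, dist = min(
            ((name, haversine_miles(lat, lon, clat, clon))
             for name, (clat, clon) in clinics.items()),
            key=lambda pair: pair[1])
        print(f"{client}: nearest is {nearest}, {dist:.1f} miles")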

We think it is a really ground-breaking project. I did not bring the final report; I should have, but it has gained a lot of attention.

The mapping arm of our project, the earth data population center, just won a first-place ESRI award for the application of GIS in the United States, so we have a very first-class technical team.

With the IHS, we won a competitive grant for five years. We are going to revamp their entire data system. It is one of the best data systems of its kind in the world, the resource patient management system. It has got incredibly good data going back many years on ICD-9 codes. It is not user friendly; it is very difficult to access, data rich, information poor. It takes skilled epidemiologists to penetrate this difficult system. We are going to convert it to a GIS. It will become PC based, networkable, user friendly, queryable, all those things that it is not, hopefully for the first time.

In year three, we are going to create an interactive atlas of American Indian elder health, led by the IHS baseline measures work group, a group of physicians. We are going to map the top 10 diagnoses affecting Indian elders by every zip code in the nation. We are very excited about this project, and hope that we can begin to assure the survival of this very fine data system.
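
(For illustration only: a minimal Python sketch of aggregating visit records into the top diagnoses per zip code, the kind of summary an interactive atlas could map. The records are invented, and the real RPMS data would be far richer.)

    # Illustrative only: top diagnoses per zip code from visit records.
    # Records and codes are invented.
    from collections import Counter, defaultdict

    visits = [
        {"zip": "87101", "dx": "250.00"},
        {"zip": "87101", "dx": "250.00"},
        {"zip": "87101", "dx": "401.9"},
        {"zip": "58538", "dx": "428.0"},
    ]

    TOP_N = 10
    by_zip = defaultdict(Counter)
    for v in visits:
        by_zip[v["zip"]][v["dx"]] += 1

    for zip_code, counts in sorted(by_zip.items()):
        top = ", ".join(f"{dx} ({n})" for dx, n in counts.most_common(TOP_N))
        print(f"{zip_code}: {top}")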

I think you all will have to interpret your own message from what I am saying, because obviously my perspective is that of a user and of a service provider and an advocate. We are very concerned that the state data is not going to help. Our two target tribes originally were Cherokee Nation and Chickasaw Nation, both in Oklahoma, and they both have good data systems; they both are progressive tribes, very cooperative. Yet, as we met with Sooner Care, we learned that that program has many of the problems that you have heard. The data is very inconsistent, it is spotty, it varies from person to person, county to county, month to month. There are no racial flags. All of these things mean the data is pretty much useless, so we have shifted our focus now to North Dakota.

We don't know where all this is going to lead, but we do feel that the states certainly do not feel the federal trust responsibility about Indian health care. We are very cynical that MCOs, HMOs and the states will continue to pour money into data when it is not economically profitable, just as it is difficult for an HMO to serve rurally isolated residents of any ethnicity.

So this interface between states and tribes in terms of data is certainly a critical one. We think there will be some good models that will evolve and that some very good possibilities are there. Yet, I hope this if anything shows to you what huge gaps there can be in health data for ethnic populations.

Thank you.

DR. IEZZONI: Thank you, Dave.

DR. BALDRIDGE: It is a lot to think about.

DR. IEZZONI: Yes, it sure is. Elizabeth, do you have any questions?

DR. WARD: David, you are under an aging entity, but it sounds like the data work you are doing is for all of IHS, is that correct?

DR. BALDRIDGE: Yes. It was a problem that my board had to wrestle with. We are an elder advocacy organization; what in the world are we trying to do, getting all of Medicaid data from HCFA?

We are trying to take a more comprehensive view of aging, that it truly does affect all generations. And certainly we are all progressing toward elderhood, and especially in Indian country, the inter-generational factor is not just something romantic; literally, we have grandparents raising grandchildren to a huge degree, and three generations living in one household.

We feel that Indian elder problems are interrelated not just in terms of transportation, education and category, but also, they are vertically integrated age-wise with the rest of their communities.

It is also, I think, a truism that once we focus on Indian elders and pull out data statistically for age 55 and older, it is only a matter of a few keystrokes, relatively speaking, to gather that same data for all Indian age groups. Really, the work is getting access to the data for any age group. Once you have it for 55-plus, it is very easy to get the same data for younger populations.

DR. IEZZONI: Hortensia?

DR. AMARO: A number of us have highlighted through the course of the committee's work our concern with the impact of not only data reporting requirements for managed care providers, but the changes that are coming in general with increased reporting requirements and the sophistication of the systems, and their impact on what I call the resource poor providers. And certainly, providers of the population that we have been listening about in your testimony fall within that.

I was wondering, have we had anybody in the -- when we have gone out, reporting on Indian Health Service providers or other providers of the Indian population on the issue of technology? Or have we not had any testimony on that, and the problems that requirements might represent?

DR. IEZZONI: I have only been involved with this committee, Hortensia, for about a year. It has a very long history, the committee, so I don't know.

DR. AMARO: Just in the last year.

DR. IEZZONI: No, we have not.

DR. AMARO: So we haven't heard specifically --

DR. IEZZONI: We have not.

DR. AMARO: -- on how the broader changes that are being considered might impact the providers that you are talking about. So I think it would be useful -- tell me if I'm wrong -- for us to have some kind of statement or articulation about concern about that.

I've been really interested in hearing from people and how the requirements are going to impact them, especially if we don't put any resources into helping people build capacity, providers build capacity to have these systems in place so that they can report data out as required, and the data can then be useful to them.

DR. BALDRIDGE: Thank you. We would be thrilled with that. In the Indian Health Service, it is estimated by many people that their funding, which is discretionary, is about 50 percent of the level they need just to provide adequate primary care. We are quite concerned that one of the first things to suffer will be data acquisition by the tribes. They just won't be able to afford to.

DR. AMARO: Right.

DR. BALDRIDGE: And for the first time, we are seeing them interfacing with states. So hopefully they would be fairly measurable. Thank you.

DR. AMARO: Okay. The second question I had is, earlier this morning we heard a lot from states saying that they didn't have race and ethnicity available, from providers saying that they didn't have race and ethnicity data available to them, and that they would like to have that.

I was wondering whether you know if the Indian Health Service has advocated or requested that this information come down to providers. Is there any effort in that -- has there been any effort in that among those providers to get race and ethnicity data?

DR. BALDRIDGE: The IHS has a good legislative staff, but to my knowledge, they are so overwhelmed with other, larger issues that I don't think they have paid much attention to this. We are trying to help them pick up that cudgel a little bit, because we think it is going to be very, very important.

At a national level, the IHS is even discussing contracting its entire data system. We feel that would be devastating. As the tribes break away from the Indian Health Service and begin to interact with states on their own, we are afraid they will stop contributing to a larger IHS data system unless there are some good inducements.

DR. AMARO: Well, there is a whole set of issues that are distinct, that we need to really be aware of.

DR. IEZZONI: And I will confess honestly, Dave, do people who self identify as American Indian or Native American, are they all associated with tribes? Or are there many people who self identify in that way who are not associated with tribes, who you from your organization are also interested in? Or are you only interested in those who are associated --

DR. BALDRIDGE: There are more than 550 recognized tribes in this country, and hundreds of Alaska Native corporations. There are also state recognized Indians, executive order recognized Indians and many Indians who no one recognizes except themselves.

DR. IEZZONI: Right, right.

DR. BALDRIDGE: The states report to me that their intake procedures vary from self report of the client to perception of the intake person, or nothing at all. So it is extremely inconsistent across the board.

The Indian Health Service lets each tribe determine its own membership qualifications, and the IHS uses tribal recommendation. More than half the Indian population in this country is now urban, and they tend to not live in ethnic neighborhoods. They are very difficult to assess. Many traditional Indian people feel that the IHS has always taken care of them and by golly, they are going to keep doing that, and I don't need to report if I'm Medicaid eligible. Other Indians, as the gentleman from Arizona, Don Unlaw, said, feel that they would be discriminated against by reporting ethnicity, so they hide it.

It is a very difficult, complex question. California, we are told, does not utilize ethnicity, and so urban Indian clients are assigned to the nearest HMO by automatic enrollment, rather than perhaps to an Indian provider that they may prefer. We feel the Indian providers are in as much or more jeopardy than the clients under managed care.

DR. IEZZONI: Interesting. I'm not even going to ask -- and this is kind of a technical question -- if there are CPT codes for some of the services that some of you who are native providers might view as very important to have information about. Are there coding systems for Indian Health Service for some of those --

DR. BALDRIDGE: Is it the ICD-9 type coding?

DR. IEZZONI: Well, or CPT for procedures.

DR. BALDRIDGE: I believe the IHS system is very sophisticated and probably does have those. I don't know, but I would suspect that it does. However, the degree to which the tribes adopt it may really vary, and that is a real unknown. These demographic changes are brand new.

DR. WARD: I think what you are talking about is what might be considered complementary services, that might be native healing?

DR. IEZZONI: Yes, alternative medicine type of services.

DR. WARD: Alternative medicine, that is certainly a minuscule fraction of what actually is provided in terms of health service. The RPMS system is as sophisticated as any of the other medical information systems. It is ahead of many of the other systems, and it has many of the same coding built into it in terms of CPT and diagnostic ICD-9 coding.

DR. IEZZONI: Interesting. Well, it is certainly something that we should as a committee think about, how we might want to address this further.

I have a question for Eileen. You weren't here yesterday, but a number of the speakers raised concerns about the implications of proprietary data. Many managed care organizations, even though they are basically doing work for the government through Medicaid contracts, are in fact proprietary organizations. Some of the speakers yesterday expressed concern about the willingness of proprietary organizations to share not only data, but especially the methods that they are developing, and to share the ways that they go about evaluating quality for themselves, so the rest of the world can learn.

Can you just speak to that a little bit, about how you might feel about that?

DR. PETERSON: Well, I can just speak for our organization. As I said, we were one of the original architects of HEDIS. What we did was take algorithms that we had developed for our internal quality system and basically put them in the public domain in HEDIS.

Our center was created to provide a way to access our database. We have contracts from the CDC and the Food and Drug Administration, and we partner with academic researchers who come into the database.

So I think that in the past, there has been more the notion of protection of proprietary methods and things, but I think as standards progress and these things become more accepted standards, there will be kind of a falling away from that perspective. For those of us that come from a public health background, I think we are very excited about that. I think it opens up a lot of possibilities.

I think that right now, one of the big issues though is the privacy and confidentiality issues surrounding the data, and what are the expectations and the responsibilities of managed care organizations to protect that. I think that there needs to be some more consensus and understanding around that, and that would clarify some of those issues, I think.

DR. COLLINS: Can I follow up on that? I am interested in what those groups are doing now. Before this job, I was associated with one of these proprietary health information groups.

I think that on the analytical side, we are actually seeing a lot of movement in the right direction. Eight or ten years ago, when I got into the health information business, there were still a considerable number of groups selling essentially black box systems of measurement of one type or another, whether they were severity adjustment measures or so-called quality measures, and I really think that is almost inconceivable now.

So on the analytic methods behind quality measurement, there won't be any more black box approaches of the sort, we have a quality measure, but we won't tell you what is in it. What I think is going to stay very strongly proprietary is the implementation of those measures into software. For groups that are selling their own software packages, the value added from the vendor is now really in integrating these various measures into one package they can sell, as opposed to the content of any particular measure.

A lot of companies are now coming out with the data module of their software system, but they are no longer coming out with a proprietary module that they won't tell you what is in.

DR. PETERSON: Just to follow up on that, I think you have to go back and think about what, historically, the private sector is good at. I think the private sector is good at finding opportunities where innovation is rewarded. So as you increase standardization and make more things public, the private systems will fall away from those kinds of things. What you are really seeing -- and I think Michael said this very well -- is that implementing and embedding these things in software, making them easy for people to use, and providing access are very good areas to get the private sector focused on, and that will increase the dissemination of the methods and the data standards that you want to have out there.

DR. COLLINS: I think the point was made recently -- I believe in David Blumenthal's recent article in JAMA -- that as more of these measures come to light, it is a very expensive process to put health information to use, whether it is a quality measure or some other kind of measure, and he believes it is very likely that it will be the privately capitalized health systems who are able to do this. That is both an opportunity and a threat from his point of view. I think that is important to look at.

DR. IEZZONI: Yes? Please come to the mike and identify yourself.

DR. MELMAN: Mike Melman from HRSA. I was wondering whether any of you had thoughts on how to collect enabling services data. When we have done case studies of Medicaid managed care plans, they can report on the scope of enabling services, transportation, outreach, translation, but they can't report at all on the utilization, who utilizes it, how much is utilized, what is the nature of the utilization, what is the intensity.

These are hallmarks of safety net providers. If you are interested in whether they do anything different and what different effects it has on outcomes, then you need to track it. But it is not clear to me whether encounter data is an appropriate mechanism, or whether we should be collecting this on enrollment forms. The only systematic effort I have ever seen is actually the Indian Health Service, when they had the community health representatives. They collected extremely detailed data on how many of these services they were providing and where they were providing them.

DR. IEZZONI: Kathy, you can answer that one, too.

DR. COLTIN: Well, I don't know if I can answer it. I can try.

A lot of the data collection systems that we have for some of those things are really ad hoc. Take for example child care services, which are available in most of our health centers. Tracking who is using them is not so difficult. We have logs, people have to sign in and you have to know where the mother is, so that if something happens you can reach them, what doctor are they seeing, what is the appointment time.

But that stuff doesn't necessarily get automated. It just remains in logs, and they use it to create tallies of service volume and so forth -- what the peak hours are, so that staffing is appropriate -- but it is not necessarily tied back to the individual mother who left her child there, in the form of a claim or any other type of database record.

I think that that is true for most of those services, translation services, other types of services like that. They are tracked, but not in ways that are readily tied to an individual user.

DR. COLLINS: We have had discussions about this with MCOs around the rate setting process. Of course, they are making the argument that Medicaid enrollees need more enabling services than their commercial populations do.

Since these items are for the most part not paid for on an item-by-item basis, encounter data is really not a very good approach to collecting information about them. Some combination of logging and looking at capacity is more realistic; groups like FQHCs, which of course were part of this discussion, do keep track of the capacity they are creating for these services.

They don't know exactly how many people are using it -- or maybe they know how many people, but not how much each one costs. They do know how many people they have on the payroll providing these services. So there is some sense in which you can create a population-level forecast of costs, but an individual record-based analysis is not a very promising approach for this.

DR. AMARO: I don't know of this being applied at the level of the managed care provider, but certainly in some of the clinical programs that I have in the health department, we collect that information on every client. So we are able to look at the composite of services an individual got, and, for example, the number of people who are recruited into a program by outreach workers and things like that.

But I think when those systems have been developed, they have been for individual programs, interventions or demonstration projects, and there is experience there in how to do that.

DR. MELMAN: Well, I just want to say, there has been a horrendous public policy problem. When there was an enabling services grant in the health care reform effort, we tried to estimate the cost, the utilization and the rationale for using it, and we simply had to go to individual sites and look at that kind of data.

When you get into discussions like the ones around the child health home that we had with the child health insurance program, it is difficult to make the case for these kinds of services, because nobody is tracking who needs them and who uses them.

DR. AMARO: We have been able to use the data we have gathered on those support services to calculate our capitated rates, for example for substance abuse treatment for pregnant women, and then we negotiate with payors. So we are able to include that in the model, because it was part of the information we collected.

So I think it is very useful, even when you have a capitated rate, because you can go back and compare what you projected as costs for those pieces of service with what you are really spending on them. So I think it is good for planning.
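The planning use Dr. Amaro describes -- rolling support-service data into a projected cost that feeds a capitated rate and then checking the projection against actual spending -- can be illustrated with a minimal sketch. All figures, service names and the per-member-per-month framing below are hypothetical; this is one plausible way to do the arithmetic, not the health department's actual method.

    # A minimal sketch, assuming hypothetical per-client records of support
    # services (outreach, transportation, child care) with unit costs. It pools
    # projected costs into a per-member-per-month (PMPM) figure for rate
    # negotiation, then compares the projection with actual spending.
    from collections import defaultdict

    # Hypothetical service log: (client_id, service, units, unit_cost)
    service_log = [
        ("c1", "outreach_visit", 3, 40.0),
        ("c1", "transportation", 5, 12.0),
        ("c2", "child_care_hour", 20, 8.0),
        ("c2", "outreach_visit", 1, 40.0),
    ]

    member_months = 24          # hypothetical enrolled member-months
    actual_spending = 900.0     # hypothetical actual spending on these services

    projected_by_service = defaultdict(float)
    for _, service, units, unit_cost in service_log:
        projected_by_service[service] += units * unit_cost

    projected_total = sum(projected_by_service.values())
    pmpm = projected_total / member_months

    print(f"Projected support-service cost: ${projected_total:.2f}")
    print(f"PMPM add-on for rate negotiation: ${pmpm:.2f}")
    print(f"Variance vs. actual spending: ${actual_spending - projected_total:+.2f}")

Running the sketch prints the projected total, the per-member-per-month add-on, and the gap between projection and actual spending, which is the kind of comparison described above.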

DR. IEZZONI: Yes, Rachel?

DR. BLOCK: I have a question for Kathy and Eileen, and actually I'd like to ask Dave if he has given this any thought.

It relates to what I usually call the small numbers problem. HEDIS specifications and other systems like that tend, through their methodology, to rely on having a large enough population base to measure.

One of the concerns that a lot of folks interested in special needs populations have had about HEDIS -- and actually, I hadn't thought of it in these terms before -- is that if you look at it not so much from the tribal perspective but from the health plan perspective, a plan advocating for its quality improvement efforts for that population may, in any given location, have relatively small numbers, assuming it could identify them at all.

So I guess I would like to ask Kathy and Eileen what they are doing within their own health plans in terms of identifying and trying to come up with alternate means of assessing quality improvement efforts for populations or health needs that show up in their plans, that represent otherwise small numbers that wouldn't get picked up on the HEDIS radar screen.

DR. COLTIN: Well, we probably have more small numbers problems than you do. I think everything depends on what your level of disaggregation is, what is the meaningful level at which to look at the data.

In our case, you saw in some of the data that I presented how we broke it down into different kinds of aggregations of groups. We have broken the data down by particular care sites and by physicians. When you are trying to look at quality improvement efforts, depending on the nature of the problem, any one of those could be the target.

So you may have a terrible small numbers problem at the level of the individual physician. It may not be a whole lot better at the level of the center. As you aggregate up, the numbers sometimes do get better, and you can track your improvement overall, but you won't see significant differences down at the level of the individual provider or the individual site. You may see some changes, but you can't test them for significance.

In many cases, we don't even think statistical significance is what really matters. It is more a question of being able to say, what level are you able to achieve and did you achieve it, whether you have got a small numbers problem or not.
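A minimal sketch of the kind of roll-up Dr. Coltin describes, using hypothetical numerator and denominator counts for a single measure: physician-level counts are aggregated to the care site and to the plan overall, and each level is compared against a hypothetical achievement target rather than tested for statistical significance.

    # A minimal sketch with hypothetical counts; not the plan's actual system.
    from collections import defaultdict

    # Hypothetical records: (site, physician, numerator, denominator)
    records = [
        ("center_a", "md_1", 4, 7),
        ("center_a", "md_2", 6, 9),
        ("center_b", "md_3", 2, 5),
        ("center_b", "md_4", 5, 6),
    ]

    target = 0.70  # hypothetical achievement level for the measure

    site_totals = defaultdict(lambda: [0, 0])
    plan_totals = [0, 0]
    for site, _, num, den in records:
        site_totals[site][0] += num
        site_totals[site][1] += den
        plan_totals[0] += num
        plan_totals[1] += den

    for site, (num, den) in site_totals.items():
        rate = num / den
        print(f"{site}: {rate:.2f} ({'met' if rate >= target else 'below'} target)")

    plan_rate = plan_totals[0] / plan_totals[1]
    print(f"plan overall: {plan_rate:.2f} ({'met' if plan_rate >= target else 'below'} target)")

The design choice here is simply to carry raw numerators and denominators upward, so rates can be recomputed at whatever level of aggregation has enough cases.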

DR. PETERSON: Yes, I think we have had slightly better luck with this, because we can go across multiple health plans. We just actually submitted a paper to Pediatrics about looking at the special health care needs population, kids, just really describing the prevalence and care patterns and so on, and did some work with some people at Hopkins, comparing it to Washington State Medicaid population, actually.

But it is still a problem, even when you go across Medicaid plans across states, because the benefits are different, the fee schedules are different, what is covered and what is not in there. It makes comparisons very problematic.

I think that is an issue for Medicaid HEDIS as well: do you really have apples to apples? Even when you are the organization that contracts with the providers -- community health providers, for example -- are they represented accurately in administrative data? I think it is the same issue: do they have the infrastructure as providers to be able to report these measures, and all the technology and everything else that makes it feasible?

So I think that is an issue that relates to both rural providers and traditional community health providers. So I think that is a challenge.

I think it is dealt with at the case manager level more than the population statistic level, for people with relatively rare conditions, adults and children.

DR. COLTIN: I don't know if people are that familiar with Medicaid HEDIS, but we do break a lot of the data down by the particular rating class within Medicaid. So we have looked at some of these measures by whether they are AFDC equivalent or SSI equivalent.

When you get into the SSI population, the numbers get particularly small. Yet, when you are trying to look at some of these things across populations or across plans, you end up having to combine. Then you have a problem with whether you truly have comparability, because depending on the mix of those patients in the different plans, we see very, very different patterns.

I have some slides I didn't have a chance to show, looking at what percent of the population used mental health services, for the AFDC and SSI populations across the plans. They really are quite different.

Then, our state was one of four -- along with Ohio, Washington and Minnesota -- that contracted with Mathematica to do a comparative look at the HEDIS data from 1993, which was the first year that any of the plans started doing this in any of those states. The problems there are even worse, because now you are comparing states that might define their categories a little differently and have different benefit packages and so forth.

So looking at mental health penetration across states is even more complex than looking at it across the plans within a state.

DR. BALDRIDGE: Indians are less than one percent of the national population. Even in a state with a high Indian population, like North Dakota, it is five percent; that is one out of 20. If you took out all those Indians who are still going to IHS, it might be only one out of 100 HMO clients who is American Indian.

I think this consideration is probably at a level of sophistication we haven't addressed yet. The IHS docs are telling us that where the effort is really needed the most is where the rubber meets the road, which is families, how do we deal with Alzheimer's when Grandma is washing the dishes three times after lunch.

So we have embarked with HCFA on an educational program to develop educational materials to help the clients, but we haven't tried to deal with keeping statistics within states or HMOs, sorry.

DR. IEZZONI: Have we run out of questions?

DR. AMARO: I was just going to add a comment, not a question, around the small numbers issue. This isn't the only place it comes up; in dealing with some of the national data systems, we have this problem also. One of the strategies that has been used is to collapse data over years and things like that, to at least start to get somewhat of a picture of what is going on. So it is similar to your comment about collapsing up as much as you --

DR. COLTIN: The problem with collapsing over time for quality improvement is that change over time is exactly what you are studying.

DR. AMARO: Right, it certainly doesn't allow you to use it for those purposes, but it may allow you to just get a general picture of how that population might compare to other populations over a similar number of years, if there is nothing else you can do.
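A minimal sketch of the year-pooling strategy mentioned here, using hypothetical event counts and enrollee denominators for a small subpopulation: single-year rates are too unstable to interpret, so the years are combined and an approximate confidence interval is put around the pooled rate. The normal approximation to the binomial is an assumption chosen for illustration, not a recommendation from the panel.

    # A minimal sketch with hypothetical counts; years are pooled to stabilize
    # a rate that is too noisy in any single year.
    import math

    # Hypothetical data: (year, events, enrollees)
    yearly = [(1993, 3, 180), (1994, 5, 210), (1995, 2, 190), (1996, 4, 205)]

    pooled_events = sum(e for _, e, _ in yearly)
    pooled_enrollees = sum(n for _, _, n in yearly)
    rate = pooled_events / pooled_enrollees

    # Approximate 95% confidence interval (normal approximation).
    se = math.sqrt(rate * (1 - rate) / pooled_enrollees)
    low, high = max(0.0, rate - 1.96 * se), rate + 1.96 * se

    for year, events, enrollees in yearly:
        print(f"{year}: {events}/{enrollees} = {events/enrollees:.3f} (unstable)")
    print(f"Pooled 1993-1996: {rate:.3f} (95% CI {low:.3f}-{high:.3f})")

As noted in the exchange above, a pooled figure like this describes the population in general terms but cannot show year-to-year improvement, since the change over time is exactly what gets averaged away.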

DR. IEZZONI: Do I hear any more questions in the room? No? Well, with that I want to really thank the panelists for a provocative and interesting discussion. We have learned a lot from you and we have learned a lot from these two days. Why don't we adjourn? Thank you all.

(Whereupon, the meeting was concluded at 2:55 p.m.)