[This Transcript is Unedited]

DEPARTMENT OF HEALTH AND HUMAN SERVICES

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

WORKGROUP ON QUALITY

June 25, 2004

National Center for Health Statistics
3311 Toledo Road
Hyattsville, Maryland

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway, Suite 160
Fairfax, VA 22030
(703) 352-0091

P R O C E E D I N G S (9:00 am)

Agenda Item: Call to Order - Bob Hungate, Chair

MR. HUNGATE: The first thing I would like to cover today is that I think we should probably try to finish at noon. So, hearing no objection, that's the first unanimous vote of my career at this job.

My sense is two things. In his remarks to the full committee, Ed Kelley made a statement about the distinction between health and health care, saying that they are not equivalent concepts; they are measured differently; they are different things.

We are talking a lot about health care. I have the feeling that our primary mission really ought to be around health in the long haul. We are part of the National Center for Health Statistics. Health care has a big impact on health. And in the eight candidate recommendations we are really talking about health care more than health, I think, but that's not made explicit.

The second thing that hit me in the middle of the discussion is that we have this preponderance of emergency room doctors in the group. I'm trying to think about whether I understand that mentality. I think that mentality would say there's an episode here with the measurement system, which we are grappling with; you could almost say that this is a visit to the emergency room by the health care measurement system.

We are either going to help this system in the context of our work, or do nothing, or harm it. But I think we need to ensure that at the end of this session we have some output that goes on to something else, a tentative conclusion of some sort. Something that is going to go forward to the next discussion.

Yes?

MS. GREENBERG: I did check about the National Uniform Billing Committee. They are meeting on August 3-4, or the 2-3, that week, and then again in November. But right now the plan is for the committee to vote on the final UB04 at the November meeting. But if this group is able to come to any tentative conclusions, or summarize what they heard, it would be a good idea to take up the offer of reporting to them at that meeting in the beginning of August, particularly, for example, if you wanted to weigh in on the indicator for secondary diagnoses, which seemed to be the least controversial, and was referred to as low hanging fruit.

We have to get John Lumpkin's approval. I guess we would have to say this has not come to the full committee yet. So, Simon has an opinion on this.

DR. COHN: I just think that we are sort of missing a step. I thought that we were actually going to have them come and meet with us. Isn't that in September?

MS. GREENBERG: But that's only like the chairs.

MR. HUNGATE: That's only two people. I think if we can get to a level of summary of the hearing that we are comfortable with as an accurate characterization, then I think that task will largely fall to Anna.

MS. GREENBERG: We'll have a transcript, and we will have minutes prepared.

MR. HUNGATE: Right, so there is some substance to work from.

MS. POKER: Could you please say it again, because I was just looking at the agenda.

MR. HUNGATE: I understand. We're just saying that the task of summarizing what we do is largely yours.

MS. POKER: And by the way, just to say, I thought there was a consensus so far on the recommendations. What differed was the area of emphasis for different groups, which makes sense, because their work is such that they would find different areas of a bit more importance. But you're right, it was low hanging fruit that was identified. And I think the recommendations are widely accepted. That was my perspective.

DR. CARR: I think that's what we want to talk about today.

MR. HUNGATE: Let me come back to that, because I thought about do I want to try to say what I concluded? And I don't think I do. I would be willing to, but I have the feeling it would be better if we went around and let everybody say what they took out of it. Have the discussion. I don't think I should start that way.

MS. GREENBERG: One thing that struck me, and I was thinking about it, because you can get kind of lost in the minutiae, or certainly in the data elements. But the thing that was sort of remarkable, or struck me as someone who for 15 years has been asking the question, and Kathy has certainly spent more time at it than I have: are there some data elements that one could add to administrative data that would really help with quality? I mean we asked that question when we revised the UHDDS back in the early nineties.

I felt that despite differences on different elements, and different emphases and all that, there was a general acceptance or support for the fact that some additional information could be added to administrative data to assist with quality assurance. And that this was a route that should be pursued, because just that by itself, to me, was important. Some people may not have even heard that, but I did hear that.

MR. HUNGATE: Okay, the way I think we can use our time this morning is to spend this time between now and 10:15 am, before we take a break, in going around and collecting impressions. It will probably take us that long to get it all collected.

Since we have only two testifiers, that should be a little bit shorter than it appears on here. And I think we can look for whether we get new information in those presentations that is germane, and whether it alters any of what we got beforehand. So, I would hope that individuals could come to their own kind of tentative sense of where we are going with this, if you will, a hypothesis of what we are going to come out with, and then confirm that in a sense in the next panel. And then we'll come back to it, and see if we can come to some consensus.

Does that make sense to you, Simon, from your experience in chairing these activities?

DR. COHN: We're on the record, I presume, and I do want to just go on record as sort of indicating that I think it's like way premature to start making conclusions based on what was basically a panel of those that we have self-selected as people who most want the data. I'm happy to have the conversation, but you just need to be aware of that.

MR. HUNGATE: And I would go on record as that's a predictable position for you. And my position is predictably different.

DR. COHN: Oh, do you feel we heard from a greater group of people than that?

MR. HUNGATE: No.

DR. COHN: Okay, that was all I was referring to.

MR. HUNGATE: But I think that the structuring, the describing of what the business case is will leave some unanswered questions which will only be covered by that next step of information. Now, the issue that I think is going to come up inevitably somewhere in here is the difference in face validity required by the physician, the 99 percent I think it was Chris referred to, versus the 75 percent that the purchasers are saying they are happy with.

MS. GREENBERG: Why don't we just go around?

MR. HUNGATE: I agree. Let me start to my left and go around.

Agenda Item: Workgroup Discussion

MS. HANDRICH: I made a chart where I summarized what I perceived to be the most significant feedback from each of the groups for each of the recommendations. And I concluded that the reactions of the quality measurement organizations were predictable, in that they want everything, or as much of everything as they can get.

But I felt particularly with Chris Queram, whom I know very well, that his praise was tepid, because he made it clear that although the recommendations were often in alignment with the recommendations that have been prepared by the group he represented, the recommendations of his group are much more specific.

So, then we heard from purchasers, and later from providers, this constant kind of concern about the need for more specificity with this or that recommendation. I ended up concluding that the only recommendations that at this point I would feel comfortable going forward with would be 3 and 7, the diagnosis modifier and the development of standards regarding functional assessment. And there were concerns raised about the general nature of the other recommendations, which, if they move forward without lots of caveats, might cause mischief.

Further, I concluded that perhaps a good resolution for this committee would be to have this report go through another iteration where we simply summarize the feedback that we heard from all the testifiers, and not work any more on the ones that we're not comfortable with moving forward, except as we all decide as a next stage, whatever. So, that we wouldn't spend say another year trying to refine the recommendations even more.

So, those are my conclusions about the testimony and where we go.

DR. STEINWACHS: And I wish I had been here.

DR. CARR: Well, a myriad of questions came up yesterday, and it struck me that although we are focusing on what seem to be focused ones, there are some major questions that haven't been answered. Some of them, for example: is the quality definition in the public domain? Is it a bottom-up approach? In other words, are we saying what can we say with the data that is currently available?

Alternatively, there are others raising the issue of shouldn't it be a top-down approach? What is quality? And even in terms of how we define quality, is it top performance by percentile, top performance by an absolute number, or the process of continuing improvement? So, that's what is quality.

And then the other question that became clear is when and where are we looking at it? Is it inpatient only? Is it outpatient? Is it both? Is it an episode of care? I think those are fundamental questions that we have yet to come to grips with; not just us, but the whole community from whom we heard.

I think there are challenges that became clear from the different testimony: your best data capture is direct from the provider, with high accuracy, but at very high cost to have physicians entering data instead of seeing patients.

You can have data that is coded by a coder. It's uniform, but it's subject to interpretation. You can have chart reviewers. It's resource intensive, time consuming, subject to interpretation. Or you can have electronic transmission, which is efficient for electronically captured elements.

Just to follow up on things that Stan was raising yesterday, when we say a lab test or a vital sign, the questions that came to mind are: which one? The first one, the last one, all of them? If it is all of them, what field are we thinking of capturing it in on a claims form? There is the issue of insignificant variation: normal range variation among different methodologies, physiologic stresses, a person with a fast heart rate or high blood pressure from anxiety.

And there is also a concern that wasn't so much articulated yesterday: is there then an implicit requirement of when it's measured? So, if I go to the dermatologist, and I don't have a blood pressure, is my claim going to be rejected?

So, a third area was where does this data reside? Actually, hospital administrative databases currently are a rich source, with all patients across payers. That data can be manipulated internally, or it can be sent out to an outside vendor for either internal reporting or external reporting, like ACC, Premier, and so on.

Another aspect is the issue of when we send data externally: CMS has national data, but only on the Medicare population. States have Medicaid information. And the payers have their proprietary information. But these are all fractured databases.

One of the things that struck me, based on how I spent Wednesday morning before I came down here Wednesday night, was the experience of taking the Leap Frog safety criteria in our institution, sitting down as a whole, and scoring ourselves by their criteria. It was the most structured quality activity we have done in a long time, because it made us think. It made us score ourselves. And we'll send back to them a score. It won't have the data elements, but we'll have a work plan for what we need to do for safety.

So, I'm raising the issue that focusing on how you get the data element in may be the wrong way. So, I'm wrapping up, but in terms of the roles in the quality pursuit, it sounded to me yesterday like AHRQ and NCQA are defining quality. They are getting at what it is, both top-down and bottom-up.

We have now a new day with the electronic health record being developed. And we heard the importance of having fields developed within the electronic health record that are critical quality elements. They are not surrogates, but are the actual elements, properly defined and easily capturable for electronic transmission.

So, it comes to the issue of what is the role of the NCVHS. I agree with Peggy that recommendation 3 seems like a no brainer. Everybody wants it. It has worked in California and New York. It would be a good thing.

It doesn't sound like anybody has taken up the mantle of defining the performance status, and that's something that I think we would support in terms of creating a definition. I believe this has been a negative study, and we can help by saying that we would want to retract 1, 2, 4, 5, 6, and 8. And that we would rather use NCVHS to fill the gap of that interface with the electronic health record.

That the thinking about what are the fields, what's the flexibility, how can the electronic health record move us toward what John was talking about at our last meeting, the quality transaction?

I think the biggest thing that I heard yesterday in the morning, everybody loved everything, apple pie, motherhood. In the afternoon, the people who have to implement it were speechless at what was being put forth, and very concerned that there was not a clear articulation of how we would be better.

So, as we look at the cost/benefit, I think we have to say, number one, when we suggest something, how will quality be better? And number two, what will it cost in time and reconfiguration? And number three, what is the durability of these changes? After we have invested a couple of years, is it going to be satisfactory, and is it going to be flexible going forward for the continuous evaluation of quality?

So, to summarize, I agree very much with Peggy that I think we heard two very valuable perspectives yesterday. The afternoon, I thought, was a bit of a show stopper. And living that experience of what it means to get the people in the organization to find the data element, capture it, enter it, transmit it, and then do all the rework when it doesn't work: if we don't have a clear idea of how much we are going to be better because of it, I say it's meddlesome, and I'm in favor of retracting it.

MR. REYNOLDS: I would agree with what has been previously said. I would use some specific testimony to make a few other points. Trent stated that any administrative data changes would be a bridge only to the EHR. So, CMS has set the EHR up as where this data is going to reside.

Karen mentioned that hospitals are being asked for much quality data now, and I know they are reporting to states, and they are reporting to others, a lot of quality data now, which does not come through any administrative process. It goes directly from that institution to the state. So, you've got data flowing already.

The American Hospital Association is worried about adoption, worried about cost, worried about picking it up. And I think that the two speakers before me aptly put it: how do you get it, where do you get it? And if you make it part of the financial transaction, which is the administrative transaction, any errors or anything tied to it are going to be sent back and are going to interrupt the smooth flow of payments. Keeping things moving, especially in prompt pay environments, was one of the things the whole HIPAA effort was focused on.

As Justine just said, Stanley said he wasn't enamored with the vital signs. And so, as you look at all these things that are going on, you have these situations where -- and I mentioned yesterday I had drawn this picture of our current flow. And most of our flow of information now, although everybody takes some of it out and does quality with it, is really the financing of health care. And that's a lot of what the 837 is.

And a lot of the quality work that is going on is from other types of databases. I know in North Carolina, the hospitals were sending it directly to the hospital association to build a full database, and then anybody who wanted to could get to the database.

So, playing off the two comments: if you listened to Dr. Brailer, whom we heard the other day, and whose direction we kind of agreed to support, then if you use the electronic health record to capture the data, you put the data in a situation where everybody can get at it.

I was using the example: take anybody, any of the payer types. If we had 10 heart surgeries done in a hospital, and 2 of them went bad, and you're only looking at the quality data of ourselves, which is all you're going to get through the administrative transactions, you would say they had a 20 percent fail rate. Whereas they may have done 600, of which they had 2 bad, which might actually pay them for high performance, if you looked at their overall performance.

So, when you look at the current legs that are transferring administrative data, they stay within kind of a structured environment, rather than being shared. So, I think the modifier is something that is clear. I think the other things fall into the realm of how do we capture quality data, and what is the home for it, which would appear to be the electronic health record as it's moving forward.

Had Dr. Brailer not been put in place, had some of these other things not been put in place to fast track that, then I think we're just all kind of throwing it off somewhere. But that's a thought, so that would be my input.

DR. COHN: A lot has been said that I agree with. I think I would just sort of briefly reinforce the concept that I think we all heard wide variation in testimony yesterday. And certainly, where you sit in some way impacts where you stand on some of these issues.

I wanted to point out a couple of things, as well as potentially a framework on which to think about all of this. One was that I heard that there was an ongoing discussion about quality versus cost and value, which is I think really one of the fundamental conundrums. And I was really very pleased that AHRQ was going forward doing some work looking at these specific issues.

And I would bookmark that as something that the workgroup or the full committee -- somebody within NCVHS needs to both potentially support, find out more about, track, and as results come back, begin to incorporate it into NCVHS thinking. One of the things I was hearing was this concern about the quality versus cost and value issue.

Now, fundamentally, I agree with Peggy and Justine and Harry and others about this issue that 3 and 7 are very low hanging fruit. And from my view as I was listening, I tend to think that in these recommendations there is sort of obviously low hanging fruit that we should be talking about, or just supporting, or that somehow needs to go forward as very specific recommendations.

And certainly in functional status, Marjorie and I were talking afterwards about the fact that there have been some furtive attempts to sort of move things forward since 2001, but there needs to be something that tells us what it is we need to capture. And it's really beyond just a terminology selection issue.

It's really more what is functional status? How might you want to use it? What is the business case? What sort of questions might you ask? And then the terminology upon which to answer that stuff. And somehow, it sort of got stuck back in 2001, and CHI really didn't move it forward. But having said that, I think that's something where we somehow need to try to see if we can push that one forward, as well as the other piece.

Now, there is another piece here that I would call, to my view, sort of mid-term values. And I don't know whether they fall into claims transactions or claims attachment transactions, or whatever. But I tend to think of those as maybe being some of the other items. But I think what we need is the cost and value equation coming out of AHRQ to help us with that.

And then of course, there is the long-term piece, which is the EHR, which I mean even though I know Brailer wants to have it done tomorrow, and Mark McClellan wants to have it done yesterday, when I started working on this in 1991, I thought that I would have it done in two or three years. When I came on to oversee Kaiser's EHR activity in 1991, I thought I would have it done in about 18 months, and then I would go back to practicing full-time emergency medicine. It didn't quite work out that way.

So, anyway, that's my comment.

MR. HUNGATE: Yesterday you said something about how just getting the EHR alone isn't going to do it. I wish you would articulate a little more what you were thinking about in terms of what is going to have to happen to make that work. There were some things that you implied needed to be done now in order to make sure that when we had the EHR, it would do the things we wanted.

DR. COHN: Do you want me to comment on that a little bit further?

MR. HUNGATE: I think it was an important observation.

MS. GREENBERG: It was really Peggy who made the first comment. And it was really a critical comment I thought.

DR. COHN: It's a fundamental issue that Peggy O'Kane was commenting on: if you don't start from first principles in terms of the development and evolution of your EHR, with a strong quality focus influencing it as a major factor as you evolve, and I know Kathy, I'm sure, agrees with my comments, you don't wind up being able to get the quality information out of it.

And I used the example of, well, I had done this great emergency department system in the mid-eighties. Now of course, I didn't know about NCQA studies. At that point I barely knew about JCAHO stuff. But what you select to capture as discrete data elements determines a lot about what you come out with.

I mean it's one thing to talk about administrative data, but even in these systems the difference between coded data versus something that is just lost in free text is the difference between efficient capture and use of the data, versus back to our chart abstraction. And chart abstraction is easier in an electronic environment, but only slightly.

DR. STEINWACHS: It's the interpretation that fails sometimes.

DR. COHN: Yes, exactly. For example, we were talking about the arrival time in the emergency department versus the admission time in the emergency department, which are two very different things. And clearly, for example, if you are doing thrombolytics, well, admission is interesting, but you often have given the thrombolytics by the time you have actually gotten the patient admitted to the emergency room. It's just an obvious example there.

That's why I was sort of thinking that a lot of these obviously should impact somebody's view of how they develop an EHR, and what we develop the EHRs into. That was really my comment.

MR. HUNGATE: I think the development of that is part of what this group might talk about a little bit. To me, that's a deliverable from our total work, is to make that clear, and articulate it. And how we choose to do that I think is part of our discussion.

MS. JACKSON: Having contacted these people's offices in the short time it took in pulling this hearing together, the intensity of their presentations and their focus on the recommendations seem to reinforce what I was getting in talking to them: that they appreciated being here. They appreciated this forum. They listened to each other. Some went from one panel to the other.

And the range and nexus, it seems to me, of the issues being covered. It seems to be a great time, and people are looking forward to some statement, something definite coming from this group.

DR. EDINGER: I have a couple of comments. One of which is I think I agree with Simon, the electronic medical record would help us a lot. But I also agree with Simon, it's like waiting for the messiah to come.

Twenty-five years ago I remember going to a meeting with Elmer Gabrielli(?) and one of the members of this committee, Clem McDonald, and sitting there talking about some of these issues, and developing the LOINC and everything else. And they were working on that before I started joining that workgroup. So, in 25 years we are still waiting. Well, we got LOINC at least, but we are still waiting for something to put it into.

I guess part of the problem is if we wait for the electronic medical record, we may wait for a very long time, and we still need something to do in the intermediate time. I think the stay with Mark McClellan is probably also dependent on the outcome of the election and next year's political realities. This year's enthusiasm is sometimes next year's less enthusiasm.

I have worked on projects before which were very high profile, and now nobody knows they exist any more. One of them I should mention, because I did bidding for lab services; I worked on it at CMS. Linda Monu(?) was at AHA. We worked on it over 20 years ago. It's now back again in the spotlight, and the same problems arose. They decided to implement in certain states, and the members of the delegations of those states weren't too happy about it.

Now, I think we need to work on the issue a while anyway. I don't think we can wait for the someday electronic medical record. I agree with you too that if we don't develop the elements that go into that record, and you try to retrospectively put them in there, it doesn't work very well, as FDA found out with some of its device databases when it tried to retrospectively fit and collect data after the fact.

Our agency and a lot of others are working on it; who knows how long some of those efforts will take. I think they need help though in some of their efforts, and they have again said, we're looking for guidance too on what elements and which conditions. So, I don't think they are mutually exclusive efforts.

I think some of the issues, like doing the nursing homes that Barbara Paul mentioned -- I think Beverly is not necessarily typical of all the nursing homes in the United States, for various reasons. As for capture of data in OASIS and the MDS system, various people have different ideas about the quality and utility of the information.

But if you wanted to, say for the nursing home, maybe capture lab data, that would not be quite as simple as it may appear, because some of that data comes from independent labs, some of it comes from the nursing homes' own facilities, and some of it comes from physicians with arrangements with the nursing home. Not all of that is electronically available all the time. Some of it is very retrospectively entered into the systems.

So, I think there is a lot of work that needs to be done in all of these. I spoke with some of the people, like Nancy Boston, separately, and one of the issues is how much does it cost to put in? Who's going to pay for it? It's nice to talk about the electronic health record, but I was telling Nancy that about 30 years ago I worked in a medical center, and we went to the hospital administrator and told him how we wanted to bring in computerized automated systems. And he told us, come back in another few years, and we'll talk about it.

We came back six months later and told him, well, I can put one of these things in the outpatient department, I can put one here. Look how much money we can make. He said, this is the greatest idea you ever had, this is brilliant. Let's go ahead and buy it. So, to make a good business case for it, I think it's very difficult with the provider versus the insurer versus the regulatory agencies. I think they have different viewpoints.

I think you've got to sell it to the hospital administrators and the people in the hospital who make the financial decisions. You've got to make a good case for them. I'm not sure that case has been made yet. If you give financial incentives and tell them we're going to give you a lot of money to buy it for you, we're going to pay for all the administrative costs and staffing and all that, I think they would be thrilled.

The taxpayer is basically paying to put systems into DoD and VA. So, it's a different story. It's our money that paid for it, versus telling them to find it someplace. It's like telling them, don't build a couple of aircraft carriers, and don't buy a bunch of tanks, and put it into the electronic health record; I think the response would be very different. And I think it's the same thing with our hospital administrators.

So, I think we need to work more on the business case. Also, we haven't really brought to the table, the consumers. I'm not sure their view of what is needed -- it may be different than everybody else's view. We are all consumers in the sense that we all use the services.

But in a sense we're all part of the health system, or have been at one time or another, so maybe our view is a little more tarnished or different than people who have never been part of the health system, and basically just view it as a patient or a consumer of health care. Maybe we should get some input from them.

Part of the problem, I guess, is that their level of knowledge of the intricacies of the system is different; you would have to find that out through consumers and major consumer organizations, but that's something to still explore. Maybe patients want something different than any of us think they want. I don't know.

But I think you're right. I think the low hanging fruit we should probably attack, and we have to work a little longer on some of the other issues.

MR. HUNGATE: Let me hold you until your platform time.

DR. HOLMES: I heard what I would consider to be slightly more optimistic reactions than have been talked about. And I think that where there was perhaps less enthusiasm for some of the recommendations, for example in particular the vital statistics and the lab, it was from the people on the provider's side, as well as, for example, a couple of the folks from the quality measurement side.

And I think that came from a misunderstanding of the focus of this committee and this workgroup. And the misunderstanding was thinking that we were about defining quality measures. And we are not, and we never have been. The purpose of this quality workgroup and of our recommendations is to say what are the data elements that we need to support quality measurement. It is not that we are in the business of defining quality measures.

So, for example, for us to get involved in a discussion of well, okay, if we add lab values, what should those be? And what conditions should we focus on? That would involve us in a series of activities that I don't believe is the purpose and focus of this workgroup.

Rather, we are to say we need a placeholder. We need to suggest a placeholder for data on lab values and vital statistics in a standard transaction form, which is primarily probably a claims form or a claims attachment. And then have other entities, those who are involved in quality measurement, define what those elements should be that go into the placeholder, if you will, on a transaction form.

And actually, if you look back for example, at Stanley's testimony, he was very positive about the addition of lab values and vital statistics in terms of being able to support quality measurement for providers. That physicians would actually be happier, because it would enhance our ability to risk adjust, and provide true comparative data on physician performance, which is currently, as we all know, kind of a hot issue.

So, that as I say, I think that if you look back at the testimony, that we come away with a bit of a more positive spin than some of us have been hearing thus far.

MS. GREENBERG: I would say I definitely have the advantage of being the third to the last here. I have just been taking notes, and I agree a little bit with everybody. This is just among friends here, but I have to say I'm kind of stunned by Justine's suggestion that the workgroup retract the majority of the recommendations, particularly because of their more generic nature, as you suggested.

I think it would be devastating to the quality improvement world if the workgroup did that. That doesn't mean that I'm saying we should just put blinders on, go ahead and say, make it happen, recognizing all the problems. But I frankly think it would be a real step backwards for standardization, and for the committee having any credibility really in the quality data arena if the committee were to just retract the majority of its recommendations, after having heard over a period of about six years, so many of these concerns. That's just my feeling.

DR. CARR: Except that the world has changed. The electronic health record was not an option six years ago.

MS. GREENBERG: I do believe we have been talking about the electronic health record since 1959. I think I just saw something on the HL7 Website which attempted to give credit to Clem McDonald for coming up with the idea of the electronic health record, and other people said, Clem is wonderful, but it goes back to --

DR. STEINWACHS: He's not old enough.

MS. GREENBERG: Actually, it may go back to Florence Nightingale, but we won't go there.

When I hear people who have actually gone and looked at people trying to use electronic health records, and what the business is, et cetera, I fervently hope that -- and I'm very happy this administration is putting a focus on this. I think it's terrific that they are.

But I just think for us to put all of our eggs in that basket: (a) is not a good idea, because we have got the Quality Chasm report, et cetera. I just don't think people are prepared to wait as long as it's going to be before we have universal electronic health records, let alone several really good examples out there.

And furthermore, I don't think the purchasing community is going to put up with this. We can brush off the purchasing community and say, oh, yes, they all loved it. But they are paying for health care. And I think that's non-trivial. And I think that this is an opportunity to try to get some really intelligent thinking on all of this, because otherwise, one way or the other, the purchasing community is going to just say -- yes, I did sort of hear from them -- just give it to us, for God's sake.

I don't necessarily agree with them, because some data can be worse than no data. And data can be dangerous, and I would agree with both Peggy and Justine on that. But on the other hand, they are really frustrated not to be getting any information that they feel is really going to help them, and help their consumers and their customers. And I don't think they are willing to sit and wait for the electronic health record to be ubiquitous.

Obviously, I have come to the conclusion certainly about myself, that one hears to some degree, what one has already concluded. But on the other hand, I was very sobered by a lot of the caveats and concerns that were mentioned. And so, part of me is just despairing that we could ever improve quality in my lifetime based on some of the caveats that I heard.

But then I tend to be an optimist by nature. So, I come back to not so much what I heard, but what is the role of the National Committee? And here I think that -- and that's my responsibility as the executive secretary, to ask that question first and foremost. And here I really do support what Julia said, that it is not our role, and it never has been our role to agree on the right quality measures.

And in fact, I sort of heard some of the quality group asking us to do that with them -- definitely with them -- but I still think it is beyond our capacity. I don't think we have enough of the right people, the experts on the committee. And we can't get all those experts, because we have only 18 members, and we've got to do e-prescribing, and we've got to do populations, and we've got to do all these other things. But I think the NCVHS has a really good bully pulpit, and that's why I think retracting most of these would be so devastating to the field.

And I think that one area that the committee has always been a leader in, and should continue to be pushing for is this idea of standardization. You heard from the AHA, yes, a lot of push back from the providers. But at the same time, and I hear this all the time at the NUBC, they even say, we would be willing to produce some of this, but we don't want to do it this way for this one, and this way for that one, and that way for this one.

Not only is that killing us from a burden and cost point of view, but also you don't end up with data -- this was Harry's point -- you don't end up with data across settings and vehicles and everything else that you can do anything with.

So, I think they were all actually crying out for us to push standardization. And I think the train had definitely left the station on the need for that, and on the fact that people are going to be doing pay-for-performance, they are going to be collecting information. I think we can do something important related to standardization.

I did hear that administrative data is an acceptable vehicle, not the only vehicle, and a limited vehicle, but we can accomplish something with administrative data, and more than we are currently accomplishing.

I was very excited about the AHRQ study. I absolutely agree that we should be supporting that, and partnering with them on that. And I was sobered by AHRQ's not at all being of the same mind as some of the other quality gurus. But part of this is political and everything, but nonetheless, that's the environment we live in. And I thought that was sobering.

I absolutely agree that 3 and 7, I think are no brainers. I think though, Harry, as to your point, if we can't share administrative data, we won't share electronic health record data either. It's no different. Every hospital has its own electronic health record. So, it's the same issue of coming up with mechanisms such as they are working on in Massachusetts and other potential LHIIs to be able to share data in a way that does not compromise privacy, and that does not compromise the solvency of organizations.

But I would hope -- and I kept probing about this idea of just enabling the collection of the data, and they kept saying, no, that's not good enough. We need to know what data. Well, I agree we need to know what data, but I don't think we're the group that can say what data. But I think we could encourage a process, and support a process, that would take, say, the four lab values that just make a huge amount of sense, like hemoglobin A1c, and maybe a few others, and do some pilot tests on this, some of which are already going on, and support those.

Because if you tell the purchasers we're just waiting for the electronic health record, we are retracting all this, I think it does not encourage them to work with us, or to work with the community in a sensible way. So, that's my thoughts.

MS. POKER: Hi, everybody. Yesterday I was really blown away by the panel and everybody who presented here until this morning, until I heard all of you, and I'm more blown away. This is really a wonderful group to be a part of, and being very new I think my vision is somewhat limited, and probably not to the same depth.

And again, I need to define where I'm sitting, so you know where I stand. And I'm part of the patient safety team. And we have been looking very carefully at the IOM recommendations of the last report, the patient safety one. And in it, on page 5, paragraph 4 -- as you can see, I've read it often -- they actually use a term that patient safety is synonymous, even though I'm not quoting them, with quality.

And I guess looking at it from that perspective, that's where I guess I'm sitting. And I would even hazard to suggest, and I think I have said offline, that maybe we should call it the quality/patient safety group, because that's one of the goals that we want to pursue.

I guess my understanding of the goal of this group, of the subcommittee, is to support quality. And Bob had told me on the very first day I met him that he hopes -- and I hope I'm not taking you out of context, Bob -- that quality will drive the EHR versus vice versa. And I think that's a very important idea to consider, because we are going to be populating the EHR, or doing something in the interim because there isn't an EHR yet, and hopefully quality is going to be driving it. Now, our only problem is we haven't defined what quality is, other than that everybody is in agreement on it.

What I heard yesterday, and again, my hearing may be somewhat limited, I thought I heard people pretty much accepting most of the recommendations with some very important caveats, that hopefully we can add or whatever the group thinks, into the report, because there was some very important points put out.

One of the ones that really struck me, and I heard also -- Justine kind of talked about it -- was about not looking only at outcomes. And I think one of my fears is, are we going to be starting to look at quality only through outcomes? Outcomes I think we need to look at, but we also need to look at the process, whether we are looking at physician performance, or whether we are looking, like Nancy Foster was talking about, at hospital performance. I think that's going to be a real important part of quality.

These are some of the things I walked away with. I could not identify what were the lowest hanging fruit. I thought they were all pretty much taken, so that shows you the caliber of my hearing maybe.

What I also heard from the lady who came from Beverly, and I also heard it from Marjorie, which I was very taken with also, is maybe the focus of this group doesn't have to be only in the EHR. That's one of the things we have to think about.

But I was struck when Marjorie talked about bringing in long-term care, and especially when the lady at Beverly talked about how the long-term care population is really the one who gets hospitalized. And when you think about it, they are the 20 percent of the population who are using 80 percent of our resources. Maybe that's a good place for us to start zeroing in on, as far as identifying what are the data elements, or some of the data elements, that would be needed to serve them. They have the high chronicity, et cetera.

These are just some of my ideas that I got from these panels. Thank you guys.

MR. HUNGATE: Don wanted an option. Now he wants to talk. Now he's got some information.

DR. STEINWACHS: Let me just make a couple of comments and sort of summarize what I think I'm hearing from you. One is I am on sabbatical right now, even though it doesn't quite feel that way. It's not going to last longer than the end of the month.

But I have been spending some time trying to formulate a new agenda for the next five years for stuff that we do with addressing quality of care issues for people with severe mental illnesses, schizophrenia, bipolar, and so on. And one of the things that struck me, and I thought I would just share with you is that I was trying to figure out how to organize this presentation.

So, I put together my sense of sort of a hierarchy of quality concerns, and again this was having to focus on a specific population. But I had safety at the top of that, sort of Anna's issue. First, it seems to me we cannot do harm. We ought to be having systems to protect that.

The second, and again, this reflects bias maybe, was evidence-based practice. So, it talks to process and outcomes, but in areas in which we know what works.

The third was coordination of care. And this sort of talks to the issues, we keep saying what do the EHR and patient health record, but most of the people who have one serious condition, have another. And so, part of the safety issue, as in many other things is this coordination of care. But it also talks to the patient perspective. It's not just the physician who has to figure it out, patients and families.

And the fourth was risk factor management, whether it's obesity in America, smoking, and so on. And at least for me, it was sort of helpful as I thought about those four, and I guess in part the reason I'm raising it with you is to think about, well, what are we doing in making it possible to capture relevant information across that spectrum that goes from our concerns with obesity in America, to not killing people in the process of health care.

The discussion I heard made it sound like our recommendations were sort of being divided into two groups, and you can correct me if I'm wrong. One is I do agree with people who say that the criteria used for mandating the addition to an administrative record, those criteria to me are very different, and have a higher standard than the criteria that says let's make it possible to capture something, but not say that you have to.

And so, I saw many of our recommendations trying to say let's make it possible to capture things. We aren't mandating the capture, but we're saying it has to be built into the structures so it's enabled. Where you say you have to capture something, then the costs and benefits -- and you can talk about the business case and other things -- I think have to be laid out very clearly.

And so, I was tending more toward Marjorie in the sense that I thought one of the things we were really trying to do here was to make it possible to capture critical items. We still have to look at the costs and benefits on that. I'm not saying that there aren't those that go with that. But that, to me, is a different set of criteria than mandating.

The other point that I don't know if it came out in the AHRQ presentation, but I have found very useful recently, is the IOM report that AHRQ commissioned on the 20 "conditions" or 20 things around which we ought to be doing quality measurement. Coordination of care is one of them, but there are major disease categories and so on.

And again, if we feel we have to put down something that begins to talk more to the business case, that may be a reference, but at least to talk about what are the major conditions around which you would think about trying to make that case.

What bothers me the most, I guess, and I am one who gets up and talks to employers, so I use the business case, is that fundamentally it seems to me the business case is not the patient's perspective on that. It's not necessarily the population perspective on this, because usually the business case is -- I co-chair the benefits advisory committee, which people at Hopkins expect to cut benefits. I'll be dead if we cut benefits, so I may not be back.

But Johns Hopkins University has a very different view on benefits. It's how do we recruit and retain a workforce? That doesn't necessarily talk to the health of the children and the population that is there. And so, I would love to see us, in whatever we do, try to do two things. One is the business case that says how do you talk to the key decision-makers out there about their interests.

But it seems to me we have a responsibility to keep coming back to the population. And within that also are concepts of patients, and what do they need, and how does what we do facilitate their getting access to that. And Bob has been eloquent about that.

The last little point was I enjoyed your conversation about EHRs. I'll share my bias. I have assumed for a long time, since I have lived through all these recurrent efforts to get the EHR going, and then things sort of fizzled a little bit, and then it goes a little bit further, that the administrative systems would be the ones sharing clinical information, in an interoperable form, before the EHRs were sharing information.

And so I have, for a number of years now, sort of felt that the more we could do to make it possible to capture key elements, and we have done that by adding diagnosis and other things, the more you would begin to build a structure where a patient could have a summary of their electronic data. There are other things we could do in patient health, in communication, in the coordination of care record; you could build elements of that off of administrative data.

And so, what we do in administrative data may actually help push those things. That's not our primary purpose here, but it seems to me there is another reality that is out there.

Thank you.

MR. REYNOLDS: When I mentioned the EHR, as I was talking about that, some people played off of that. Let me be a little more succinct. Marjorie, I very much agree with what you had to say.

If you look at the potential options to be considered, and that's one of the reasons I'm here today, it is that we have driven right to the 837. And if you look at the list here, and that is kind of what my drawing was trying to say and other things, you've got the 837, which is the claim. And just to give you a little picture of how health care financing is going forward, you are seeing more and more case payments.

And on a case payment, you don't necessarily transfer all the detail, because the industry is allowing the provider to give the care. So, you are actually reducing the amount of true care information that flows for purposes of financing. That claim record is all about the financing.

Whereas, if you look at the attachments, where you are talking about specific situations in which more data is needed to understand what went on, whether that is a spot request or another transaction, that truly does take everything that we heard yesterday and have it presented in a way that says this is the quality data about that episode. This is the quality data about what went on, wherever it goes. And wherever it goes is back to your point. We're not going to direct where it goes.

But I guess I see us driving quickly to recommendations -- and every recommendation here has that, but we keep focusing only on the 837. So, I personally don't have a bit of a problem if the committee were to say what the appropriate mechanism is based on the things that are coming out of standards, because attachments is the next one coming out. And the attachments, which I have actually been studying -- trying to learn more so I could actually sit in these hearings and know what's going on -- break things down.

It may not break it down as far as it needs to go or whatever, but the point is it adds more data for specific things, whether it's ambulance or whether it's this or whether it's that. And so, I guess all I'm feeling is I went too far in saying electronic health record maybe. But I think we went too short in saying the 837, because I just think it's a different game, it's a different process.

MR. HUNGATE: I need to have a few minutes.

DR. COHN: This is very short.

MR. REYNOLDS: I want to finish.

DR. COHN: Oh, I'm sorry. Please finish.

MR. REYNOLDS: Purchasers absolutely will not stop. But it's also important to understand that purchasers are not necessarily coming through the standard health care channels to deal with quality. Having faced GE from the other side of the table a number of times, I can tell you they go directly to the hospitals and doctors where their plants are, and they take anybody who is handling their health care out of play. They go deal with quality directly.

So, again, back to this whole idea of what is this pipe, and how this works, it's not necessarily a claim-based situation. It's called I am going to make a difference, because I am paying. Back to your point, I am paying and I am going to make a difference.

So, I think that's probably what I should have more appropriately said up front. And sitting here as a member of standards --

MS. GREENBERG: Can I just say one sentence? That's why number 3 is sort of a no brainer, because it is the one that really does make sense to put on the claim, because the diagnoses are already on the claim, and it's already in the standard.

MR. REYNOLDS: I totally agree.

DR. COHN: What I was going to describe just very briefly is the fact that the role of this workgroup, as we have commented, is not to do quality indicators. It's really data development activities, which is really the role of the National Committee.

And I think we are getting to the point where we, you, whatever, need to understand some of the tools of the trade. And so, for example, claims attachments, as Harry was describing, is something that this workgroup really needs to be briefed on.

I was also going to suggest, just because of my own awareness of what's going on out there, and I know Kathy is aware of it, that for example there are CPT Category II codes, which are performance measurement codes, which actually do sit on the 837, and which, at least to my understanding, actually begin to address some of these issues.

And so, you probably need to know about some of this stuff. Some of these things are really low hanging fruit, but your view of what becomes easy and hard may change as you discover all the tools out there. I'm trying to point you in a couple of directions, and all I'm saying is that it's one thing to talk about really low hanging fruit, but we may, as we go forward, re-evaluate what's really low hanging fruit.

And it may be that part of the role of the workgroup is to point things out, saying, gee, that's a really good thing. We want to write a letter supporting that, as opposed to having to invent it. And that was really just going to be my comment.

MS. HANDRICH: One thing I wanted to add is from the NCQA testimony, the conclusions that Peggy summarized in her presentation did not include her very last dot point, which I read last night.

MS. GREENBERG: I noticed that myself.

MS. HANDRICH: The rationale for the adoption of -- this is from NCQA, where more is better, whatever. The rationale for the adoption of the recommendations could further be enhanced if it is clear which additional performance measures could be computed with the inclusion of such parameters, if it is determined how these measures relate to national, regional, or other important health care quality goals, and if the health benefit gained by improving performance in the areas to which these newly deployable performance measures apply is quantified.

MS. GREENBERG: And I think partly that's what AHRQ is trying to do with their study.

MR. HUNGATE: Thanks for all the inputs. I congratulate this group on its engagement with the content. We come away with different views of it, and we represent our constituencies and our own intellectual knowledge and understanding.

It seems to me that our unique position/character is to look across this system, recognize the tensions between the purchaser community that is saying we don't have any performance measures, and the provider community that says we've got tons of them, that we're measuring all sorts of things, if I am correct in my articulation of what I think I heard.

So, there is a disconnect. And it seems to me that our task in a sense is to work on how do we reduce the disconnect, not in what we are taking evidence on, but in how we affect the system that is taking place. And it seems to me that's a twofold thing. Part of what I take out of this is it says we don't just say we are dealing with the 837.

We're saying we're dealing with a need for the collection of information that is valid information, and valid means that it has meaning to the delivering physician, the patient, and the purchaser. So, it's information that is useful to all, in some way that is evidence-based, that stands the test of a broad system.

And I heard, as Julia did, more support. I was especially impressed by Stanley's comment of two thumbs up on lab data. That surprised me. But I started to think about it, and I think it reinforced what someone else said, and I don't remember who, that when we get measured on things that we think are the wrong measures of our performance, it just increases frustration.

And purchasers are going to make some decisions, many of which are wrong measures. And so, if we can help the movement of right measures as judged by providers, in what we do here, not by picking them necessarily, but by maybe talking about how we visualize the system making its choices, by articulating the things that have to be dealt with, then I think that's our contribution.

And that isn't per se saying that we take this candidate recommendation and move it to a full recommendation, or that candidate recommendation. It's saying we kind of take the grouping of them and say how does this content fit, with some kind of structuring, where the flag for present-on-discharge diagnosis is low hanging fruit. So, we can put that in the forefront and say this can be worked on most effectively first.

And functional status I support. But you know, to say we are studying it, okay, everybody could agree with that. That's easy. That's not a hard decision. So, I don't take that as a big win. We have said it before. And I have seen publications in Health Care Financing Review that purport to be studies of the measurement of functional status. Well, what we are talking about is research that is other than what's there. I don't know.

So, I get a little frustrated when we feel good about a recommendation which is really not very much. And so, I come back and take two cuts at the information, one of which is to say, okay, let's take the evidence-based label that everybody says they want, and let's take those specific example measurements that have been articulated as being evidence-based, and I think there are maybe four of them in selected lab results. There may be two, there may be four in vital signs and objective data. The flag is clear.

Down in the category of recommendation 5, I think there are some things about thrombolytics and prophylactic antibiotic use in surgery for deep wound infection management where there are science bases that are valid. There isn't an agreement on how to get that data to be useful.

But there are these specifics where I think there was more commonality and support for that being a measure of quality. Now, that's my sense of it, and we might be able to articulate the spectrum of that, and articulate that it's not us that will determine those measures. It's a combination of NQF, AHRQ. There are other folks that will do that.

But I think we could say there is promise in these measures if there are mechanisms, whether it be claims, claims attachment, whether it be the continuity of care record, or a new quality transaction, I don't know. But there is a piece there that could be a construct to say here is the agenda for the reconciliation of all these things in the way we see it.

Now, does that make any sense? Have I said anything sensible?

DR. CARR: So, okay, my role will be devil's advocate. I don't mean to be disrespectful of the history of it. But I was very taken with what they said yesterday. What is our vision? Because we are saying today that everybody has to do a mad scramble -- like we have 30,000 operations, and we have not collected the times that the antibiotic began and finished.

We are trying to do it with vascular. We have recruited four students, five pharmacists, and ten nurses to get it on one subset of patients. So, now multiply that across multiple divisions. We are asking them to do a quick scramble for something that is asynchronous with the electronic health record.

If our goal is to make all these things so many that they are all just going to say fine, the only thing we can do is go out and buy an electronic health record, I guess that's where it will go. But I don't think that is a respectful way to do it. And I just don't know how you can go across the country -- like we heard from the American Hospital Association, if we put this out, it's naive about the resources that are required. People do not have date and time for these details.

MR. HUNGATE: I'm not recommending doing that.

DR. CARR: But by putting that on there and making it available, it then opens that highway to whatever vendor wants that done.

MR. HUNGATE: If I'm trying to diminish discordance between the purchaser and provider community, it seems to me I've got to find ways to make progress in measurement that help do that. And I think that's time-phased -- Stanley said be careful about the implementation, and I agree with that. But I'm using these as samples. I'm not trying to say this is the measurement that needs to be done. I'm trying to articulate how we make progress.

DR. CARR: But I see two things. I see Leapfrog and payers and GE have made huge progress in substantive areas, and they have a recognition on the Web page, and anybody can go and see who is Leapfrog compliant. And as people strive to do that, processes are getting better. It's not constrained by how are we going to get this data element. But it's a thoughtful process that takes all of the recommendations from IOM and NCQA and all of that, and it is delivering something that people believe in, and it engages the care givers, and there is a recognition piece.

So, I say quality is being moved ahead by that initiative in a way that is leaps and bounds. And I still think the electronic record has more traction today than it ever has before. And I think because of that, we need to be flexible and reflective of the fact that we can pigeonhole onto a claims form, or we can take two steps back and say a decade has passed, a generation has passed. We're in a different place.

So, let's let Leapfrog and all of them continue to put their Web page, their recognition, and their process measures out there, and let hospitals engage. And let's work toward accelerating a quality claims transaction that will be quantitative, but such that people will only have to build that structure once in their institutional experience.

The amount of work it's going to take to capture the A1cs is huge. Blood pressures, unbelievable; that is going to be huge work on the part of an institution. And at the end of it, you don't have an electronic health record.

MR. HUNGATE: Part of what has to be articulated is how much that huge work really is. It seems to me that's what AHRQ is talking about doing. And we need to help structure that discussion so that the right decisions are made. I'm not trying to say -- do you see what I mean?

I don't think these cross-measures are necessarily the best ones. I think they are making a difference.

DR. CARR: There are 1,000 points. The first three were simple, but the last 1,000 points -- if that were our agenda for a decade, we would be phenomenal at the end of it. We couldn't do any more than that.

MS. GREENBERG: Is every hospital doing this Leapfrog?

DR. CARR: Well, that's the thing. Some people are saying it's too hard to do. But I don't know. I heard people yesterday saying just filling out the form in Bridges to Excellence already made places better by organizing, structuring, focusing, and providing evidence-based care. So, I would say how do you engage more people in Leapfrog, rather than how do you create a field on the claims form for A1c, and find a way.

DR. HOLMES: Remember too, Leapfrog only pertains to large employers, and patients who are either the dependents or the workers. So, you have a whole population that is not included in that. And for example, Bridges to Excellence, as we heard yesterday, only pertains at this point to four cities in the United States. So, it's a very time-consuming process, and it will never cover the whole population.

MR. HUNGATE: I think we better break, 15 minutes. We'll start at 20 till with the next panel.

[Brief recess.]

MR. HUNGATE: I would like to welcome our next two person health plan panel. Let's start out by going around the table and introducing ourselves so that Jeff knows who we are, and we'll appreciate your telling us who you are.

DR. HOLMES: Hi, I'm Julia Holmes, and I work here at the National Center for Health Statistics. I work in the Division of Health Care Statistics, and we run the large establishment-based surveys such as the National Hospital Discharge Survey. And I have been involved closely with the development of the National Health Care Quality Report, and the National Health Care Disparities Report, led by the Agency for Healthcare Research and Quality, and I'm staff for this group.

MS. POKER: Hi, my name is Anna Poker, and I'm from AHRQ. And at AHRQ I work on the Patient Safety Team, and also write for the quality report and the disparity reports as well.

MR. HUNGATE: Bob Hungate, chairman of the Quality Workgroup. I go under the label of Physician Patient Partnerships for Health. That's just me, but I also chair the Group Insurance Commission in Massachusetts.

MS. HANDRICH: Good morning, my name is Peggy Handrich. I'm a member of the committee. I'm from the Wisconsin Department of Health and Family Services, with a special interest in Medicaid, and the state level collection of health statistics and health data.

DR. STEINWACHS: I'm Don Steinwachs, Johns Hopkins University. I chair the Department of Health Policy and Management, and do health services research.

DR. CARR: I'm Justine Carr. I'm a physician in health care quality at Beth Israel Deaconess Medical Center in Boston, and I'm a member of the committee.

MR. REYNOLDS: Harry Reynolds, Blue Cross, Blue Shield of North Carolina, member of the committee.

DR. COHN: Simon Cohn, Kaiser Permanente, member of the committee.

MS. JACKSON: Deborah Jackson, NCHS, staff.

DR. EDINGER: Stan Edinger, AHRQ. I work with quality improvement and patient safety, and also staff to the committee.

DR. KAMIL: I'm Jeff Kamil. I'm the Corporate Medical Director for Blue Cross of California. A little bit about Blue Cross of California. We have about -- I think we're the largest health plan in the United States. We have 7 million members overall, about 6.5 million of them are in California. We compete directly with Kaiser. We both claim to be the biggest. One of us is right.

We take care of all sorts of customers, MediCal, individually insured members, all the way through the second largest health purchaser in the United States, which is CalPers. So, we have experience with the quality needs and expectations of purchasers of all types.

I'm glad to be here today, and will answer your questions as you need them.

MS. COLTIN: I'm Kathryn Coltin. I'm Director of External Quality and Data Initiatives at Harvard Pilgrim Health Care in Wellesley, Massachusetts, also a managed care organization.

MR. HUNGATE: Okay, the way we want to try to work this is to ask you each to make 15 minutes of comments, and then we'll hold questions until you're both done. Then we'll get into a general discussion.

Kathy.

Agenda Item: Health Plans and Insurers, Panel 4 - Kathy Coltin, Harvard Pilgrim Health Care

MS. COLTIN: Well, thank you for the opportunity to comment on the candidate recommendations on behalf of Harvard Pilgrim Health Care. Harvard Pilgrim serves almost 800,000 managed care enrollees in Massachusetts, New Hampshire, Maine, and Rhode Island, and prides itself on the quality of care delivered to our members, in collaboration with an extensive provider network comprised almost entirely of independently contracted medical groups, physician hospital organizations, and independent practice associations, as well as hospitals and other facility-based providers and ancillary service providers.

Harvard Pilgrim also owns and manages one medical group in Nashua, New Hampshire.

The administrative data submitted by individual providers in the form of claims or encounter transactions, when combined with the enrollment transaction data provided by public and private plan sponsors and individuals, are used to support virtually every business function that distinguishes a managed care organization from an indemnity insurer or TPA.

If the intake and processing of electronic enrollment and claims encounter transactions within a health plan can be thought of as the respiratory and circulatory systems of the organization, then data warehousing and analytical tools would represent the central nervous system.

The particular data elements that have been proposed for development and inclusion in the standard administrative transaction in the Quality Workgroup's candidate recommendations would enhance operations in several critical business areas from both an efficiency and effectiveness standpoint. Let me talk a little bit about the business case for the candidate recommendations, as you requested.

The business areas in our health plan that would be most likely to benefit from the inclusion of these data elements are: quality measurement and oversight; medical management and clinical programs, including both population health management and disease management; financial planning and analysis; product development and marketing; and provider contracting and reimbursement.

I would like to speak briefly to the potential benefits of these data elements for each of these areas. In the area of quality measurement and oversight first, health plans in Massachusetts are required to submit HEDIS measurement results to the state department of public health under a provision of the Massachusetts Managed Care Reform Act. Similar reporting requirements exist in each of the New England states where Harvard Pilgrim does business. Several other states have similar regulatory reporting requirements for submission of HEDIS measurements.

Submission of HEDIS measurements to NCQA is required for accreditation, since 30 percent of the health plan's accreditation score is based on HEDIS results. This creates an added incentive for plans to improve their performance on the HEDIS measures. NCQA accreditation is a condition of doing business with many large national employers. It is also a factor in the recommendations of health benefits, consultants, and brokers to small and mid-sized employers.

Several large employers and health benefit consultants have tied performance guarantees to HEDIS measures that involve substantial financial penalties for failure to meet performance targets. In general, the HEDIS measures that represent proximate clinical outcomes are preferred for such incentives, because they are closer to the ultimate outcomes of interest, reduced morbidity and mortality and their associated costs.

Yet, improvement on these measures is far more challenging, not only because outcomes are influenced by factors other than health care, but because measuring the performance of medical groups and IPAs and PHOs within the provider network, not to mention individual physicians, is prohibitively costly and burdensome to both the plan and the provider offices, even when based on limited random samples. Further, sampling would not identify all members at risk of poor outcomes who could benefit from intervention.

The availability of lab test results for the HEDIS measures of glycohemoglobin control in diabetes and LDL cholesterol in patients with diabetes, and those with coronary artery disease would greatly enhance the ability of health plans to target members and physician groups that could benefit from quality improvement interventions.

Likewise, the availability of blood pressure measurements would aid in improved identification of patients with hypertension, and the control of blood pressure levels, another HEDIS measure. In each of these conditions, such improvements could prevent secondary complications and their associated costs, and decrease the risk of mortality.

Similar benefits could be realized in other health conditions where measures have not been implemented due to the unavailability of laboratory data or vital signs needed to identify the target population for measurement. Examples include measures of follow-up of abnormal laboratory findings, and weight management in obesity.

In the area of medical management and clinical programs, I'll talk first about population health management. Harvard Pilgrim currently uses predictive modeling software to identify members at high risk of future health crises based on enrollment demographics and historical claims or encounter data.

These tools have resulted in considerable cost savings by enabling Harvard Pilgrim to target case management services to high risk members, and to help avert health crises that might otherwise have required hospitalization or intensive resource use. They have also enhanced member satisfaction.

The addition of a limited set of pertinent lab values and vital signs and/or information about functional status could potentially enhance the predictive value of these tools, and help to realize even greater benefits due to preventable morbidity. However, the added value of capturing additional lab values or vital signs for the purpose of risk adjustment and predictive modeling needs to be balanced against the cost of collecting these data.

In the area of disease management, Harvard Pilgrim offers several disease management programs to members with chronic illnesses, the most common of which are diabetes, asthma, and cardiovascular conditions. Disease management interventions are costly, and need to be targeted to the members most likely to benefit.

Those who are already well managed are less likely to need intensive disease management interventions, and can benefit from simple, educational information and support tools, while those whose risk factors are not well controlled require more intensive care management services.

Many of the risk factors that could identify those members in need of disease management can be derived from lab results or vital signs such as blood pressure or body mass index, which in turn is derived from height and weight measurements. The absence of such information means that more resources must be expended on case identification, and that interventions could be misdirected or under provided to those who could benefit.

The next area is financial planning and analysis. This area examines medical cost trends, and develops models for setting health insurance premium levels. They also evaluate the contribution of various medical management and product design changes on observed medical cost trends. Changes in medical costs can be due to many factors, including changes in the population case mix or disease severity levels of patients.

The availability of selected test results, vital signs, or functional status could potentially help to identify changes in case mix or disease severity that may have contributed to observed changes in medical costs. However here again, the value of these data elements should be balanced against the cost of collecting them.

In the area of product development and marketing, managed care organizations are increasingly developing so-called consumer driven health plans. These are products which rely on providing consumers with information to enable them to choose health care providers and services on the basis of both cost and quality.

This is particularly true of tiered network products. Harvard Pilgrim will be launching a new tiered network product in January 2005, that tiers primary care physicians on the basis of the cost performance of the contracted medical group, PHO, or IPA with which they are affiliated, and provides information about the quality of care in the groups that are included within each of the cost tiers.

A similar product has been offered in Minneapolis for the past three years, and various permutations of tiered networks or value networks, which are more exclusive and limited to the high performing providers, are increasingly being offered across the country. Consumers generally prefer outcome measures to process measures. But as previously discussed, the measurement of outcomes at the provider group or physician level is prohibitively costly, given the current data limitations.

Some tiered products focus their tiering on hospitals rather than physicians, but the data limitations are just as problematic. Many health plans are offering decision support tools such as Sabemo(?), Health Grades, or Select Quality Care, that compare hospital care on a number of quality dimensions, including the AHRQ quality indicators.

The addition of a flag for secondary discharge diagnoses that were present on admission would improve the accuracy of the measures presented in these tools, and reduce the likelihood of misdirecting patients on the basis of flawed measurements.

Likewise, the addition of operating physician could potentially enhance some of the proxy measures of outcome based on volume. However, caution must be used in attributing accountability for actual outcome to any individual care giver, as the systems in which they practice exert an important influence as well.

In the area of provider contracting and reimbursement, Harvard Pilgrim began implementing pay-for-performance incentives in parts of our provider network in 2001, and has expanded these programs over time, such that 80 percent of our physician network now has a pay-for-performance incentive based either on clinical quality, patient safety and/or patient care experiences.

Implementation of pay-for-performance incentives in our hospital network also began in 2001, but has only recently seen modest expansion. I'm going to talk a little bit more about pay-for-performance, because that was your second question.

The IOM has called attention to what health plans and providers have long known, that providers who deliver high quality care have been paid the same as those providing marginal or even substandard care. And under some payment arrangements such as DRGs, they have actually been paid more.

While some providers have been able to negotiate higher reimbursement levels on the basis of reputation, the legitimacy of such payment differentials has not been established. Pay-for-performance strategies have the potential to rationalize reimbursement to the benefit of patients, high performing providers, and health plans. These strategies are being adopted by health plans at a rapidly increasing rate.

A recent survey indicated that about 65 percent of health plans either had or were planning pay-for-performance incentives tied to quality measures. Now, incentives tied to utilization measures have existed for a long time, but tying them to quality measures is a relatively new occurrence.

Given the importance of HEDIS measures, most plans have implemented pay-for-performance incentives that have included HEDIS measures among those tied to such incentives. Plans have the same motivation as purchasers to tie such incentives to outcome measures, but data limitations have resulted in most pay-for-performance incentives being tied to process measures.

The availability of lab results and selected vital signs in administrative transactions would greatly enhance health plans' ability to implement pay-for-performance incentives tied to clinical outcomes such as control of hypertension or hypercholesterolemia. Most early pay-for-performance incentives with hospitals were tied to Leap Frog standards. The availability of publicly reported hospital quality measures, such as those soon to be published by the Joint Commission, should advance the implementation of pay-for-performance incentives for hospitals.

The ability to generate JCAHO core measures from administrative data could potentially reduce measurement differences and promote standardization, but must be balanced against the costs of data capture. Several of the core measures depend on arrival or admission times and procedure times, but these are clearly difficult data to capture on a routine basis. Improvement of the AHRQ quality indicators to account for conditions present on admission will probably do more to promote such incentives.

Patient safety measures that use volume as a proxy for better outcomes for certain surgical procedures could be improved by using surgeon volume. This would require the collection of the operating physician for the principal procedure on both inpatient and ambulatory surgery transactions, but again, the cautions I mentioned earlier need to be taken.

You asked about risk adjustment. Among the data elements included in the candidate recommendations, those that would be most helpful for risk adjustment vary somewhat by care setting. For hospitals, the most valuable additions would be the flag for secondary discharge diagnoses that were present on admission, selected laboratory values and vital signs at admission, and perhaps functional status at admission.

For physician office visits, selected laboratory values, particularly those that measure organ functioning, and selected vital signs, particularly blood pressure and height, weight, or body mass index would be the most valuable additions for risk adjustment.

Regarding the mode of data collection, I have high confidence in the quality of the data that could be collected for lab results and vital signs in hospital transactions, and good confidence in the collection of these same data elements on physician transactions. The quality of lab values on lab claims is actually very high, although the data are not coming in on claims; they are coming in on a separate data feed, which in itself is an issue, because there is no standardization regarding the way that those types of data are obtained when they are available.

I also have high confidence in the quality of the data elements for operating physician and secondary diagnosis flag on hospital transactions. I have high confidence in the quality of date and time information for admission, somewhat less confidence for arrival time, and much more limited confidence in the quality of the time information for procedures.

I believe it is premature to speculate about the quality of data on functional status until a coding system has been adopted, and the collection of data using this system has been adequately pilot tested.

The adoption of electronic health records or electronic medical records will facilitate the collection of all of the recommended data elements by enabling more accurate and less burdensome transfer of electronic data from EMRs to claim transactions and/or claim attachments. However, it may not improve the completeness of the data submitted, as this will depend upon the development of protocols and interfaces for generating claims transactions from electronic health records that can affect the quality of the administrative data so generated.

Harvard Pilgrim's experience with one very large multi-specialty group practice that submits electronic encounter transactions that are generated from a top tier electronic medical record system is that they are less complete with respect to secondary diagnoses than those submitted by other groups that submit claims transactions based on paper record systems.

When the provider is seeing the problem list, they need not enter, for a given visit, everything that may have been pertinent to that visit, and they tend to enter for a given encounter only the problem that they are focusing on. If the protocol for generating the claim only looks at that one problem, and ignores other problems on the problem list that are pertinent, the claim will be incomplete. That tends to be what we see, and that's the explanation that they have given.

So, this results in a distorted assessment of the case mix and severity levels of their patients, and can disadvantage them in the calculation of measurements that require risk adjustment. It is absolutely true that there are different values that could be assigned to the different recommendations. And my order of preference personally would be 1, 3, 2, 4, 5, 7, 8, 6.

You also asked about correlations of recommendations. There are different ways to look at this. I think that recommendations that pertain to hospital transactions are correlated. That would be 1, 2, 3, 4, 5, and 8. And measures that pertain to physician transactions are correlated. That's 1, 2, 6, and 8. The other way to look at it is that lab values and vital signs are related regardless of setting, but I really think that when we are talking about data capture, setting is a very important factor to consider, because where and how the data are collected will vary.

The recommendation to conduct research on functional status coding could potentially be pursued independently, but recommendation 8 would depend upon the outcome of that research.

I have a couple of parting comments that you didn't ask about, but I'm going to tell you anyway. I don't believe that there is a one size fits all solution for implementing recommended data elements in a standard HIPAA transaction. Some data elements might be accommodated within an existing 837I or 837P transaction through a modification of the implementation guide, such as operating physician or admission time, which are already there.

Others might better be collected as part of a claims attachment, for example arrival time. Or in the case of hospital-based data elements, through the public health data standard. In the case of lab test values, any of these options might work.

One potential issue with claims attachments will be the current priority attachment types, which may or may not be the most appropriate vehicles for some of the data elements. If a new quality attachment were to be created, its order in the priority list for attachments could be an issue. If the public health data standard were to be the focus, its applicability to outpatient settings would need to be evaluated.

The public health data standard may be the best vehicle for capturing recommended data elements in the hospital setting. But the vast majority of health care is delivered in outpatient settings, particularly physician offices and clinics. At present, quality measurement within these settings is extremely limited, and public reporting of these measures is voluntary and focused primarily on qualifying for various recognition programs, which tend to attract those physicians who are already delivering recommended care.

Only a small minority of physicians in any given market have submitted quality measures to such programs, although incentive programs such as Bridges to Excellence have increased participation rates. By virtue of their access to large administrative data sets, health plans and self-insured plan sponsors are in a strong position to shed light on the comparative performance of a much broader and more representative segment of the physician community. But the only data they can base their measures on are administrative data.

The adoption of electronic health records will not change that situation. What it will do is enable more accurate and less burdensome transfer of electronic data from EHRs to claims transactions. However, it may not improve the completeness of the data, as I mentioned earlier.

Thank you for the opportunity.

Agenda Item: Health Plans and Insurers, Panel 4 - Jeff Kamil, MD, Blue Cross of California

DR. KAMIL: Just a little bit about -- I'm from Blue Cross of California. We have, as I said, about 6.5 million members. About 1.4 million of those members are in HMO products, and another 1 million are in HMO products from Medicaid programs. The rest, about 4 million members, are from our PPO programs.

We have had pay-for-performance in place in the HMO program for about six years, and in the last two years have substantially increased the amount of money that physician groups can obtain from pay-for-performance in the HMO system to approximately 10 percent of what one would call capitation in the HMO program. It takes a strong incentive for physicians to really work for pay-for-performance, and I'll come back to the importance of that momentarily.

In the PPO program we have been experimenting with pay-for-performance in approximately six counties in California that cover approximately 2,500 physicians. These physicians are primarily in primary care, but we have experimented with quality measures in specialty physicians over the last two years as well.

Your question about the business case is the critical one, and I think all of your questions really fall back on the business case. Is there a business case for quality measurement? Because without a business case, quality measurement and improvements in quality probably are going to be much slower.

My experience in California in pay-for-performance is especially in the HMO programs, and I'm speaking specifically about Blue Cross' program, but also about the Integrated Health Care Association's program. I saw Peggy O'Kane mark that on her map that she showed you where things were happening.

But in California approximately five health plans have come together and agreed upon standard quality measurements, and these quality measures are agreed upon by approximately 150 medical groups, which cover, to look at the Blue Cross network, about 26,000 physicians.

There has never been a time in the history of health care in California where physicians have invested more money in systems to improve care. This has happened probably due to two features, one being the lower cost of systems to improve care. When I speak about systems to improve care, I'm talking about electronic medical records on an ambulatory basis. Electronic health records is another way of speaking about this. Medical groups have been increasing their investments in these systems over the last two years to a remarkable extent.

To the extent that the largest association of medical groups has actually asked health plans to come together and help them develop a comparative data registry that can compare performance of a number of medical groups simultaneously. So, the medical groups can actually compare each other's performance.

None of this investment, I believe, would have occurred without pay-for-performance programs. Pay-for-performance programs are dependent upon having quality measures. Quality measures are dependent upon having valid, measurable items that can be compared across provider groups.

I think the most striking evidence of pay-for-performance may be in IPAs, physician groups who are actually not connected together. Independent practice associations, as you probably have experienced, in California's model are groups put together primarily to contract with health plans. And these IPAs can contract for membership. Our largest IPA probably has about 30,000 Blue Cross members, and probably over 200,000 total members contracted with it.

These IPAs have demonstrated their ability to compete with integrated medical groups on any quality measure. And enabling them to purchase systems to acquire and compute health care data, and transfer that data for their own independent measurement and comparison is essential.

In the PPO programs, where we have probably a unique program in California, we have used administrative data to collect approximately 12 quality measures. We use these measures to provide information to independent physicians -- these programs are focused on individual physicians, not medical groups.

So, in these programs we collect data on individual physicians, and score them and provide their comparative scores across their specialty or across their geographic area, where they can compare themselves to physicians of their own type, in other words, internists comparing internists in San Mateo County, or they can compare themselves to physicians in any county in California.

It is hard to reach individual physicians about quality measures. Individual physicians have a hard time looking at data and comparing themselves. That's not something they are used to doing. So, you have to essentially put it in front of them for them to look at this data, because they are bombarded with information from health plans and from purchasers, and from their individual members.

Notwithstanding that, we have delivered checks to individual physicians of up to $5,000 as a bonus, which these physicians have been highly surprised about, because physicians aren't used to getting bonuses from health plans and PPOs based on quality. And the delivery of these checks along with their physician scores has generated significant positive feedback from individual physicians to Blue Cross of California.

Health plans are the nexus between your providers, who are telling you there are already enough measures and we probably don't need any more measurement, and purchasers, who are probably telling you that they do not have enough measures, and they need more measures. I think the truth of the matter is we haven't fully utilized the measures that we currently have and deployed them.

For example, there are very few measurements in PPO programs. But there are plenty of measures to measure physicians against in PPO programs. The mere fact that those events have not occurred, in other words, measurement of physicians in PPO programs, nor have physicians accepted these measures as valid performance measures, and that most patients in our health plans are in PPOs, speaks to the gap between the measurements that currently exist and the use of those measures for individual physician improvement and incentives.

So, I have already told you that I think the business case has already been made. There is a business case for pay-for-performance, and quality measures are essential for that pay-for-performance system. And improvements to systems of care can only occur through the acquisition of data registries, which are dependent to some extent on electronic health records.

Without electronic health records, pay-for-performance systems probably are not going to be ultimately successful at the individual physician level, nor will performance actually be able to be improved. But through pay-for-performance systems, physicians have a striking increase in their desire to acquire these systems.

WellPoint Health Networks has delivered to physicians -- at least created an opportunity for physicians across the country to acquire either a computer-based system or hand-held physician order entry systems for ambulatory care. And in California for example, 7,000 physicians have taken up the opportunity to acquire electronic equipment in their office.

The mere fact that 7,000 physicians have decided that they need a basic computer system in their office is a statement in fact of the lack of these systems in physicians' offices. So, pay-for-performance systems are very critical for physicians to invest in their own practices. Physicians have very little incentive to invest in their practices and change their work flow.

It is important to develop quality measures that go beyond HEDIS. It is important to do that, because there are other important factors that can be used in managing health care that are not captured in HEDIS quality measures. For example, looking at a physician's ability even to order lead levels, and to react to abnormal lead levels in children, is an example of one measure for which having lab values would be important for a health plan and physician groups to review.

We also would use laboratory data for our disease state management programs. It is important to understand that disease state management programs are very expensive. And to have disease state management programs that focus on patients who really need care, and work with physicians who can improve their care through the use of laboratory data would be highly valued by our health care system.

We would also use laboratory values for predictive modeling, and to risk adjust payments to physicians and medical groups. One of the areas physicians and medical groups are beginning to speak to us about is their ability to receive capitation payments based on the risk of the patient population that they care for.

Also, having lab values actually in hand would help health plans work with physicians who do not have electronic health records to improve health care. We can make lab values accessible to physicians on the Internet, where they could pull down lab values for their individual patients if they wanted to see them.

So, there are a number of ways in which health plans could use current technology and Internet technology, together with acquired laboratory data, to improve health care. Having laboratory data would also improve care management, for the reasons I have already stated.

For patients who do not fit into disease state management programs or traditional quality measurements, the additional value of having lab data would help us work with physicians to make sure these patients receive care at the appropriate time and place.

I have already spoken to you about physician payment. Clearly, if more health plans used available clinical data to pay and bonus physicians, physicians would invest more readily in systems to improve health care.

The other business case for this data is the mere fact of improving relationships between health plans and providers. Traditionally, health plans have paid physicians, and we have never paid them enough. But having quality measures to talk to physicians about is actually common ground where health plans and providers can come together to determine how to improve health care, and paying for performance actually creates that common ground between providers and their patients and health plans.

We would also use the data that you have listed for systems to measure quality in hospital care, and provide that information to our members. Currently, we provide very high-level information to our members about hospital care. And the measures that currently exist for hospital care are limited to only a few disease states.

Hospitals are very complex organizations, and they take care of many different types of patients. And so, having more valuable data on both procedure types and outcomes related to these procedures would be very beneficial, so that patients at least have the opportunity to understand where they can receive care that they are most likely to benefit from without complications.

Hospitals are very interested in having valid data to compare themselves against. They really do not like individual health plans coming up with their own individual measures to measure them independently. It's very confusing to them and to the public to have one health plan rate a hospital as very good, and another health plan view it as very bad. Objective measures of performance would be very helpful in enabling hospitals to compare themselves with each other.

Blue Cross of California and WellPoint would use quality data to develop networks. Employers are asking us to develop networks based on efficient providers. Efficient providers are providers who deliver high quality at low cost. We clearly have the cost information, but we lack quality information.

Having quality information, especially about hospitals would enable us to talk to hospitals more concretely about their value. Currently, we work in hospital markets where hospitals are now very much in chains. They have reacted to health plans by forming organizational systems.

About 10 years ago in California one-third of all hospitals were in chains. Today, two-thirds of all hospitals are in chains. Some of the largest hospital chains are Sutter, Catholic Health Care West, and Tenet(?) Health Care System. Some chains have demanded concessions on price that may or may not be worthy of what we are paying them based on the value they deliver to our customers.

It would be very important for us to be able to look at hospitals, and for hospitals to agree upon measures that would really differentiate them in terms of quality, so that we could differentiate them based on price. That would enable real comparison to take place, and may, in the end, lower health care costs.

Clearly, having quality measures may actually lower health costs both in the hospital arena and in the physician arena. That would enable us to provide accessible health care for more members, especially as technology advances.

If I had to rank the various factors 1-8 that you listed, I would clearly think that 1, 2, 3, and 4 were the most important, almost in the order they were listed. I question whether physicians will accurately report health status. I think health status needs to be reported by patients independently, and I question the value of having health status reported by the providers themselves.

However, clearly having lab values, listing the providers who perform surgical procedures in hospitals, identifying conditions that existed pre-procedure would be highly valued and used for quality systems, ratings, and potentially payment.

I think that's it.

MR. HUNGATE: Very good, thank you very much. That was very helpful testimony.

Discussion, questions, follow-up elements?

DR. STEINWACHS: I enjoyed both of them, mainly because it all fed into my own predisposition. So, now that we know there are two places in this country where you can go and get -- I'm not sure computers are always the answer -- actual work that contributes to improving quality performance, what does it take to get this to catch on more broadly?

Because part of the business case I think that both of you talked to is certainly well thought out within your organizations. And Jeff, you commented that when you go into certain other kinds of PPO arrangements or other things, it just doesn't exist. And in part, maybe you are suggesting they haven't thought through the business case, or they've been reluctant to get involved in that.

But it would help me to get your insights. What do you see as factors that would help make the business case more broadly in the positive ways in which both of you identified this?

MS. COLTIN: Well, I'll take a stab. Just thinking about the Massachusetts market, one of the things I didn't speak to in my testimony is the collaborative efforts that are occurring among the health plans to share the data that they have. Any one health plan I think, as Mr. Reynolds appropriately pointed out, only has one narrow view of the performance of an organization or an individual care giver, and may not even have sufficient sample size to understand anything about a large number of care givers.

And so, by pooling the information, and making it available to all of the health plans, and in turn to the physicians, we have created far more value from these data than any one of us as an organization had individually. We know how the medical groups, the PHOs, the IPAs, whatever, where the numbers start to get a little larger and a little more reliable, are performing with respect to our members, compared to the pooled population.

And if we see differentials there, we start asking questions about why. Does it have something to do with our contracting relationship? Does it have something to do with the patient mix? Is there some other factor, and what can we do?

The other thing is the data they have been receiving. Because I think, as Dr. Kamil pointed out, they get a lot of this data from different health plans, and the challenge is getting their attention and getting them to look at it and compare themselves. One health plan is using one set of measures, possibly a subset of HEDIS, but not the same subset of HEDIS, possibly not scored the same way against the same benchmark, certainly not formatted the same way. They kind of learn how to read one, and then they have to start again and learn how to read the other.

In our case, one format, one measurement system, one way of reporting the data, one set of interpretations. And they get this data, and they get it sent under all our names, and it gets their attention. And they actually love it, and some of them have implemented their own pay-for-performance within the IPA based on this, without our having to even do anything with it. So, it can trickle down and spread through that kind of just an information dissemination situation.

The other thing I think is in terms of being able to make the business case for collecting this information. The business case has to be made on both sides, the receiver of the information and the giver of the information. And one of the things that I was reminded of as I listened to Dr. Kamil is we actually give a lot of this information back to the IPAs and PHOs.

We populate disease management registries for them. So, we'll say here are all your diabetic patients. And here are the dates of their last tests according to our records. And they are either overdue, or here is when they are next due, so you could use it as a tickler system.

We would love to be able to put lab results into those for them, because most of these physicians don't have electronic medical records. The ones in the big integrated delivery systems may have them, or some of the more sophisticated large medical groups, but the vast majority don't. And they really appreciate getting this information back.

They have it if they go to the chart, but they don't have a place where it's all pulled together. So, the value of sending us the lab result is that we could actually get it all pulled together and send it back to them -- here is your asthma registry, here is your diabetes registry.

And one of the things we are talking about with the other health plans is can we all agree on a common format for those registries, so they can then just append each health plan's registry and create a single one, even in something as primitive as Excel?

So, I think when you start to create value for the organization that is sharing the information, the cost to them of doing it is more tolerable, because they are getting something in return.

DR. KAMIL: I think you have to make the business case. And the business case unfortunately is always going to be an economic business case, or a quality business case that is profound. In the programs we have developed, we have tried to insert an overt business case. So, in our quality measurement system and our quality incentive systems, we have put in measures not on utilization per se, but let's say generic prescribing.

So, we have tried to make the business case carry its own weight, so that we can see increases in generic prescribing which will pay back quickly, at least hopefully pay back quickly, the investment in these systems.

It is very important to understand that whatever is collected has to be very pragmatic. We cannot afford to have quality metrics obtained that are very expensive or unreliable. And we can't use those systems. So, anything that is reported has to be very pragmatic.

In our PPO system and our HMO system we really have very low administrative costs for these quality programs. And the only way to keep that cost low is through administrative data. And the administrative data has to be easy for providers. It has to be not extra work. If it becomes extra work, if physicians have to do something extra, they will be really very resistant to doing anything. The worst thing you can do to a physician, at least today, is ask him to change his work flow.

If you ask him to change his work flow even a little bit, you get unbelievable resistance, especially in unorganized settings. But even in organized settings the reason we don't hear it as much is because someone is hearing it. Someone is hearing it.

So, I think to have performance systems grow, they have to be administratively easy. They have to have a return on investment. And people who are actually putting up the capital for health care, or at risk for health care, or accepting premium for health care at risk, have to get a return. I don't believe these programs will survive if they exist solely on quality; it will be harder for them to take hold. They have to have bite both economically and on a patient satisfaction and quality framework.

MR. HUNGATE: Questions?

DR. EDINGER: I'm curious, over time do you feel that there will be problems as the pay-for-performance or value-for-performance systems, whatever they become, evolve, and providers express concern that instead of paying for performance, you get payment for non-performance so to speak, or exclusion for not meeting certain performance goals? And at that point, there might be more resistance. Do you see any of that as a problem over the longer term?

DR. KAMIL: Well, I think any time that any provider thinks that they are going to be excluded, there is a problem. As you know, health care is special. It doesn't run like any other business, and providers have rights that seem to be beyond normal business relationship rights. So, yes, I definitely think that there will be a problem.

DR. EDINGER: I'm just wondering how that might impact the kinds of data you collect. As you push for data, there might be more resistance on certain parts of it, because the data might be used for other purposes than just giving somebody a $5,000 bonus.

DR. KAMIL: Well, I think as long as the data collection is part of normal health care, you won't run into that problem. If you are asking them to do something that they are not ordinarily doing, I think you are going to run into that problem of collection, because there is an incentive not to collect the data, and therefore, it won't be collected. So, I think that is true.

And any collection effort that requires extra work will not yield data. For example, if you try to get quality data from a hospital and they already have the data, but you ask them to fill out your own form for that data, they will not do that. There is massive resistance to doing anything extra. So, if they have to do it for their license, if they have to do it for their own business case, then they will do it.

If they have to do it just for you, it is unlikely to be obtained, even in systems -- we dominate the California market, and we cannot demand data.

DR. HOLMES: I wonder if you all could comment on how your provider community, both physicians and hospitals, have responded -- and whether you actually have, for example, risk adjustment mechanisms built into your pay-for-performance? When I was at Blue Cross/Blue Shield of Michigan, we were trying to institute some sort of provider performance profile, and we hadn't even gotten to the pay part yet.

Physicians and hospitals were very reluctant to look at the data and participate, because of the fact that they said we had an insufficient, inadequate risk adjustment system. And of course their patient population was different than everyone else's patient population.

MS. COLTIN: Our pay-for-performance programs that are built around clinical quality incentives have only to this point, been based on process measures. And so, we have not needed risk adjustment, nor have the physicians asked for it. They felt comfortable that the process measures are reasonably applicable to their patients.

They are evidence-based process measures. There is tremendous consensus I think in the community about the measures that we are using. And that's why we chose them, rather than creating a lot of homegrown measures. We do with one large organization, have a couple of homegrown measures, but we developed them jointly, and they too are process measures.

If we get into wanting to use outcome measures, which I think we would like to do, and which, if we had laboratory data, we would probably start doing, then I think it will be necessary to take account of differences in the patient populations. I think that's a reasonable thing to do. And it will also afford some flexibility in how the incentives might be designed.

If a provider has a sicker population, setting an absolute target may not make sense. Whereas, looking at some percent improvement from where they start might be much more reasonable. So, I think that it would be a combination of being able to do reasonable risk adjustment, and to design the measure in such a way that it accounts for the fact that there are likely to be some differences that even the risk adjustment isn't necessarily able to account for.

DR. KAMIL: I totally agree with what Kathy just said. We are getting pushed by purchasers to find these efficient providers. Finding efficient providers would require some risk adjustment methodology to be able to truly identify them. I think the purchasers on this basis are way ahead of our ability to deliver what they want, but that's only my personal opinion.

But I do think that risk adjustment is important from a health plan's perspective if you are really going to segregate networks. If you are going to segregate a physician group -- include or exclude a physician from participation based on anything that has to do with cost, then risk adjustment for legitimacy needs to take place.

If we are going to identify physicians and say they are more effective or efficient, clearly risk adjustment is critical. If we are going to ask whether or not physicians take care of patients appropriately and provide services that they should be providing, those are usually process measures, and risk adjustment is probably not as important.

It's interesting that with risk adjustment, at least in our market, physicians are happy if you say you have done it. But I don't know that we know the best way to do it right now. But it seems to be very important to them.

MS. GREENBERG: Thank you to both of you.

Dr. Kamil, I have a two part question. One, you said that you use administrative data for about 12 quality measures. I wonder if you could just give us an idea of what some of those are, or the administrative data that you are using?

DR. KAMIL: We try not to invent any measures. We're not measure invention people. Nor could we invent measures that would be very credible. So, we tend to use standard performance measures, that is, HEDIS. And for the PPO program, we made adaptations to those measures so that they would work on administrative data at an individual physician level.

We did develop a few measures that seemed to be important where there were none. We developed a colon cancer screening measure that just measured the percentage of individuals from 50-55 who had any form of colon cancer screening marked on a claim, figuring that screening should start during that period. Any indication that it didn't start was an indication of a potential lack of quality.

So, that's an example of a measure that we developed. But we try to stay away from measure development outside of whatever the National Quality Forum would develop, or NCQA would develop, or someone else like the Centers for Medicare and Medicaid Services -- they have developed a couple of quality measures which we steal. So, we really don't want to develop measures.

MS. GREENBERG: Thanks. So, basically, those established measures for which administrative data are available.

You said if you ask for one change, one iota of change in work flow or whatever, you are not going to get the data. And yet, you supported several of these recommendations, including lab data as number one. And the recommendation is actually that that would be collected through administrative data, some mechanism, maybe a claim, maybe the claim attachment, maybe a separate transaction. But it's not a recommendation for electronic health records. So, I think we are clearly seeing that these are very much linked.

Because even when you have electronic records, you are going to have to put it into some kind of a transaction, because people aren't just going to send their whole electronic records, I don't think, to the plans. I don't think HIPAA would even allow that.

So, what is your vision as to how you would get this laboratory data onto an administrative record, given your observation that you can't ask the providers to do anything other than what they are currently doing. I'm missing something here, I guess.

DR. KAMIL: Well, we currently already do get laboratory data. But we get laboratory data from the labs. We don't ask the physicians or the providers to give us lab data. They are not going to do that. It's not going to be on the claim form.

MS. GREENBERG: Okay, so what do you get it on?

DR. KAMIL: We get it directly from laboratories. Now, it's very hard to get the data from the laboratories. Laboratories don't want to give it to you, because it's work to them, and they haven't really been asked to do it. And each lab seems not to have a standardized way of delivering the data, although we have tried in California to have standardized methodologies developed for the transfer of laboratory data.

But to answer your question, it has to come from the person who has the data. And the laboratory data is not going to be synchronous with care. You are going to care for the patient here, and the laboratory result is often going to come back at some later date, that day or a week from now. So, that data has to come from the lab directly.

At PPOs and HMOs that pay claims, we already see directly that a lab event took place. So, when a lab event took place, we can ask the labs to give us the actual data. And I failed to mention this earlier, and I wanted to: the two most important things that need to be developed are a common physician identifier and a common patient identifier.

If you just had a common physician identifier and a unique member identifier, a lot of this other stuff would be much easier to deal with. I think those aren't on your list, but for us to identify the physician who performed the operation, having that physician identifier just makes things go much easier, and makes integration of the ordering physician, the laboratory data, and the patient come together. We do that. It's not expensive to do that.

MR. HUNGATE: More broadly, they are on our list.

Let me put a couple of qualifying comments here, and then let Justine ask her question, and then mine. We are going to start to lose some of our participants in about 10 minutes. So, we've got a tight window to move to the next activity.

Justine.

DR. CARR: So, Marjorie asked the same question I was going to ask. And I think it's important for this committee to understand that you are supportive of lab data, but not if anybody has to do it manually. And I think that is a key thing that we need to understand. And the fact that you have relationships with these labs and can get it, the burden is somewhere, and it's not on the provider.

But whether that kind of relationship exists everywhere, I think your point is extremely well taken, that a physician is not going to now sit and enter into a separate database, yesterday's hemoglobin A1c.

MR. HUNGATE: Let me ask my question.

MS. COLTIN: This is specifically on the lab from our perspective. And that is we too would go first to the labs for the information. But some of the IPAs or particularly PHOs where they have the relationship with the hospital, use the hospital lab. And so, if the hospital lab is the lab provider, we would go to the hospital to get the lab data. And it would come in on a bill for the lab services, or it would come in as an attachment to a bill for a lab service.

DR. CARR: But I think we need to think specifically about what venue that result appears in, and how it gets from here to there.

DR. KAMIL: To build on what Kathy just said, our biggest lab does 25 percent of our laboratory work. Our second biggest lab does 6 percent. The next biggest lab does less than 1 percent. So, there are a lot of labs, and there is a lot of data.

MS. COLTIN: A lot of different formats, and a lot of nothing.

DR. KAMIL: And there are some physicians who have their own labs in their office for whatever reason. And so, there is just a lot of stuff to put together.

MR. HUNGATE: I hear you. My question is a little different. Listening to NCQA, and listening to JCAHO -- and NQF was not here, unfortunately; they planned to be, but didn't get here -- this committee has no business developing measures.

I'm not clear in my own mind what is not covered by JCAHO or NCQA as an arena for the political discussion and the building of a business case that needs to take place between plans and institutions in the process of getting information standardized. So, it seems to me that those two are, in effect, the de facto standardizers of information.

MS. COLTIN: I think you have to include the National Quality Forum in there as well, because there are all these little independent measure developers who submit their measures. The NQF decides it's going to develop a measure set, or rather endorse, or consider endorsing, a measure set in a particular area -- like one they are working on right now is cancer care. Well, NCQA doesn't have any cancer measures yet, and neither does JCAHO.

So, they are out there scanning the field and saying, who has developed measures that have been validated, that are based on evidence and so forth, and reviewing those. And then if they find that there is a good set of measures, they will endorse them. And if we want to implement cancer measures in our network, we would look then to the NQF and look at those measures, whether or not NCQA or JCAHO picked them up.

DR. KAMIL: Exactly. NCQA is sort of the Pac-Man of measures. They gobble up measures. Through CAQH, the Council for Affordable Quality Healthcare, I was surprised that we actually developed a couple of measures that NCQA took from us. We are very proud of the development of some of these measures.

And there are many good measures out there that haven't been vetted or described clearly. And there will be many more measures as it's clear that there is evidence that a major clinical improvement occurs from a drug or a therapy.

MS. COLTIN: One thing you might want to consider is that AHRQ has developed the National Quality Measures Clearinghouse. So, they are at least standardizing what we know about each of these measures: what it measures, what evidence it is based on, how it was developed, who developed it. And one of the fields they have in there is whether the measure has been endorsed by the National Quality Forum.

And I think it's important that some organization, be it NQF or NCQA or JCAHO, has really adequately vetted these measures. I mean, JCAHO and NCQA have a public comment period in the same way that the NQF does. And in fact, they have a set of criteria for saying what's a good measure. And there are a lot of measures floating out there that may or may not meet those criteria.

So, being able to go to at least one of those three sources that has a good process for evaluating a measure, and saying it's a good measure, and then vetting it publicly I think is the right way to begin to look to standardization, and to look to what data elements are needed to create those measures.

MR. HUNGATE: Thank you.

Other questions?

MS. GREENBERG: I just had one question about that. To what extent do the criteria for their endorsing a measure have to do with the availability of data? I'm thinking of like with Healthy People 2010. There are some things that they would really like to measure, but since they know the data aren't there, like at the state level or whatever, they just don't even articulate the measure.

MS. COLTIN: Well, in my experience they are not mandating production of the measure. They are saying it's a good measure. So, they don't have to contend with what JCAHO and NCQA have to in terms of the whole burden issue, and the push back around collecting the measure. And so, several of their measures are chart review-based measures, and difficult to collect, and certainly could potentially be collected far more efficiently were the necessary data elements available in electronic transactions.

But the unavailability of the data in electronic form isn't necessarily something that hinders a measure from going through that process. I think it may hinder measure developers in some way, in that measures don't always get developed in some areas. How does one develop a measure of obesity? How do you go in and find the target population without some mechanism?

You could go to Weight Watcher clinics and places where you are likely to find high concentrations of overweight people, but it's a challenge. So, you don't see measures in some of those areas, because you can't identify the target.

Agenda Item: Next Steps -- Plans for Future Hearings

MR. HUNGATE: Let me reverse my question, since we are about out of time, and ask back to the committee: where do you see that we have value added in this process beyond what NQF, JCAHO, AHRQ, and NCQA do? I'm asking us, is there anything that we need to do? Because it seems to me that's where we might have a unique contribution, and I'm not sure it's clear to me what it is.

DR. COHN: Let me give my view on this one. What I described as data development activities -- the role of the NCVHS is not to develop quality measures, but it's along this issue of this element doesn't exist, this element can't be transmitted, whatever; things like that. That is, I think, really our role. So, I don't think our role is to try to validate NQF pieces or anything like that, but it really is that fundamental data development issue.

MR. HUNGATE: So, how do we capture that effectively in a way that it informs what the Quality Workgroup does, because we have to go forward and try to make a contribution in this business. That's what I'm trying to grapple with a little bit here.

DR. STEINWACHS: Well, let me see if I'm picking up on it correctly, Bob. It seemed to me that both Jeff and Kathy talked to a set of recommendations that we have on the table. The next step is how do those move? And those all talk to some combination of potentially mandating certain items -- in the sense of, say, a specific item as to a pre-existing diagnosis, whether it was present on admission or not.

Or making it feasible, for instance, to routinely capture and standardize laboratory results on administrative data. So, I would think that our next step, having heard this testimony and synthesized it -- the plan in September was to hear from the groups -- is to say this is the kind of business case we heard, and the rationale for each of these recommendations, and its potential importance in health care in America and in advancing the quality agenda.

And get them to talk to us about what the issues are as they look at trying to incorporate this within the standards for electronic claims transmission or attachments. So, I thought that our job was really to pull this information together before that, and then listen to them.

MR. REYNOLDS: One possible modification is to include in that group some of the reporting vehicles we heard testimony about -- whether it's the public health record that Kathy mentioned or some of the others that are already in existence. Because again, if you think of the providers, if they are already doing it in states, or they are already doing it for other reasons -- and I'm naive as to whether that works -- we have heard some ones that were mentioned a number of times.

But whether or not hearing from them would be worthwhile -- if they've got a vehicle that they are currently reporting through, then, back to Jeff's comment, you are not changing their process. You may have to go somewhere else, but you are not changing their process; you are capturing it where they are already doing it. And whether that needs to be added to the list is another question.

MS. GREENBERG: I have actually talked to Bob Davis, who developed for the consortium, and maintains, the Health Care Services Data Reporting Guide, about being at the September hearing, so I think that would be good. I think we have to decide -- I think there is general agreement that we should go ahead with the September hearing, bringing in these groups.

But what about invitation from NUBC and NUCC, which is even sooner? Do you want to meet with them, and who wants to do it? And what do you want to talk about?

MR. HUNGATE: I'll ask what the sense of the committee is on that. But before I do that, I want to come back to one more piece that I think I heard from our testimony, the need to talk to the vendors of practice management systems as well. Did anybody else hear that as a piece? I did, in terms of what the available information was: if it doesn't come from my vendor's system, I don't have it.

DR. COHN: Yes, I think that's premature. This is an approach we have taken in sort of the standards view of the world, is that you sort of start by deciding X is important. You find out how in the world of the standard these things all work, and then you build from there. Given really that the vendors are all looking to some sort of a standards-based solution to all of this stuff generally. And so, trying to ask them if they are collecting things in a non-standard fashion currently is useful. But I don't know what other conversation you can have with them about that.

MR. HUNGATE: I'm exploring. I like parallel processes, because they move faster. And I'm trying to think, are there any things that we can do that help facilitate improving the information.

MS. GREENBERG: One thing that would be interesting, and actually with the standards group I would think, and maybe it's not the same hearing, but is to talk with vendors of electronic health record systems to see the extent to which they are building in the functionality to say report quality measures, and what the challenges are and what the barriers are.

And one big barrier we know is all of these different quality measures. And that's kind of a variation on Bob's question: wondering if you think, Kathy and Jeff, that these four or five groups or whatever who are out there vetting measures, et cetera -- do they have the -- it seems to me the answer is no -- but do they have the capacity to move the field towards a more standardized set of measures?

So, you've got 100 vetted or 1,000 vetted measures. Those are good measures, but it doesn't help the provider that much if you are reporting 20 to this one and 30 to that one, and 15 to that one. Is there a process out there that is trying to move towards some type of consistency? Because we heard that yesterday from everybody. And also, if we only had one set of measures, we would be happy, but it's the 30 that we have trouble with.

MS. COLTIN: It's difficult to give an absolute answer to that. I think that there are areas where measures are needed, and don't yet exist. So, there will continue to be new measures developed. The issue is when an area is well covered. There are conflicting, duplicative measures out there, and the need is to say this is the best one. This is the way to do it, and to achieve some consensus around that.

And my understanding is that is the role the National Quality Forum is trying to play. And so, I would let them play that role, and I would use them in that capacity. And to the extent that there are areas that haven't been through NQF review yet, but that are on the IOM's top 20 list in areas where we really want to encourage measurement and development to occur, you can either take a wait and see approach -- wait until the measures get to the point where they can be vetted and are through there -- or you can identify the most promising ones, and figure out how to enable them. But I think that's something you have to consider.

MR. HUNGATE: I think in a sense that poses a good question for us, and one we will have to pursue. The recommendation from the NCVHS has been that HHS take a leadership role on development of the NHII. We have not made a similar kind of recommendation on quality measures I think.

MS. GREENBERG: HHS is part of the Quality Forum.

MR. HUNGATE: So, there is a mechanism, and so what is it that we should do is part of what I'm trying to get at.

MS. COLTIN: I think it would be helpful for you to follow-up on some of the offers that you got yesterday. I took notes as I was sitting there when people were making suggestions about how they could help, or what the future was. And it seemed to me that what I heard, and what I would expect the NQF would say as well is that they could tell you what measures they currently have, that have been vetted, been through this process, that depend upon data elements that you are considering, so that you would know what's riding on this right now.

And most of them, in the process of developing and vetting these measures, have developed the business case for the measure. They have talked about morbidity and mortality gains. You've got the value side of the equation. You don't necessarily have the cost side: what is it costing to collect the information now using chart review or whatever, and what it would cost to transition to an electronic mechanism.

There, I think what AHRQ is doing with its task orders can provide you with some really good insights on that side of the equation. And I would take that and run with it.

The other offer I heard them make is that there are areas that are important areas. They relate to the IOM top 20, and they want quality measures in these areas, but they can't develop them because of data limitations. And they should tell you about those as well, because to the extent that they could in fact develop a measure for obesity management, if in fact they had a data element to identify patients who need intervention, and if there was evidence of actions that could be taken, then I think that you want to know that as well. So, that's how I would proceed.

DR. EDINGER: Well, I think it might be helpful to have somebody come down and do some presentations on what the various forms and data collection elements are, to enlighten some of the committee members about what the problems are.

And also maybe have somebody like Jack Needleman -- several of you know Jack. Jack has done some work with the basic Medicaid databases, looking at how you can get hemoglobin A1c measurements in quality improvement. We could have somebody like him go through some of the problems in collecting this kind of data, so we are more informed on these issues.

MR. HUNGATE: A good suggestion. I'm trying to think about the process that this committee has to follow now to deal with our agenda, which partly is taking these inputs, feeding them back into a proposal for the next hearing in effect, so that our next content discussion relates to what we say as input for the next year.

MS. COLTIN: I think the other thing you should hear about is what Simon mentioned, these category two performance measurement codes.

MR. HUNGATE: Yes, that's part of the hearing.

DR. COHN: You've got that. You've got learning about the 837, the DSMOs, talking to them.

I guess, Marjorie, to answer your question about presenting to the NUCC or NUBC, I don't know whether it matters whether one formally presents to them, or whether one addresses sort of what's going on from their perspective about these eight items. Which is, I think, the agenda there -- just to find out if they are doing anything.

MR. HUNGATE: I think we could shrink the list a little bit right now, actually.

DR. COHN: Well, I don't even think I would start talking quite yet. I think you are still data gathering. And I think one of the problems in trying to make conclusions before you have data is that sometimes you come to the wrong conclusions.

I think that you will find that there are surprising things going on in some of these areas that may cause you to rethink. With low-hanging fruit, there are things that need a little push or a little help to become successful. And we may discover that there are other areas that may cause us to rethink what's low-hanging. It's just a suggestion.

MS. GREENBERG: What about 3 and 7? Do you want to wait to take them -- 3, 7, 8 -- to the full committee until at least after the September 14 meeting?

DR. COHN: I guess once again, only speaking from my own conservative -- I think it might be useful to hear from those that are responsible for the standards, their views on these things to see if indeed they are a no brainer, they are going to do it anyway, in which case we just need to say go forward and prosper. Or whether there is some barrier that then we can offer either some support, help them overcome the barrier, or whatever.

And I think if we were to come forward with a letter talking about those things in September, we wouldn't have that insight at the beginning of September. So, we might find ourselves blindsided. I think it will help inform the letter.

MS. GREENBERG: I would agree. Number 3 is only a hospital issue, and we could go and discuss it with the billing committee, I guess, in August, or we could wait until after the -- take it to the November meeting. The thing is, if you want to support this on the UB04, I don't think you can wait to get an endorsement from the National Committee at the November meeting.

DR. COHN: Maybe I'm misunderstanding here. I thought that we were generally conceptually in favor of all of the items we were bringing forward. We had recommended that some of them were things that could be acted on more immediately than others, and that some might actually have to wait until the electronic health record. Isn't that where we are?

MS. GREENBERG: I'm saying this modifier for diagnoses specifically has the potential of adding it to the UB04, which is going to be approved at the November meeting.

DR. COHN: Okay, I hear what you're saying. I guess the question is whether it needs to be discussed with the NUCC in August, or whether when they come here, we talk to them about it here in September.

MS. GREENBERG: Yes, because there are two NUBC -- this is an NUBC issue, because it's just for hospital data. So, there are two NUBC meetings, and there are two NCVHS meetings. And the NUBC meeting is like August 3-4, and then the NCVHS is meeting September 1-2. Then there is another NUBC meeting and NUCC meeting within a week of each other in November. I don't have my calendar to say which is first.

Since a committee recommendation has to come out of the full committee, it can't just come out of the workgroup or a subcommittee. So, if you actually want to say to the NUBC, the National Committee would support this, then you've definitely got to take it to the National Committee in September. I think you have to get some kind of reading back from the National Committee.

MR. HUNGATE: And that will be before our next hearing.

MS. POKER: Is it possible to just tell the NUBC that this is what the subcommittee is doing, but it hasn't been formally approved yet? Just be aware of it, and they can take it wherever they want. Is that feasible?

MS. GREENBERG: Well, you can, except it would have more weight if it was endorsed by the full committee.

MR. HUNGATE: See, I don't have a sense of the importance of our weight in those discussions.

MS. GREENBERG: It's significant, actually. It was the NCVHS that really got the e-code element on the UB92. Look at the core health data elements work of past years. It's not the only factor, but it has some weight, I think.

DR. COHN: I don't have a strong feeling about it either way. I think that regardless, since Marjorie is going to actually be at the NUBC in August, we need to have her send a strong message to them, which I think the committee supports, that this appears to be low-hanging fruit; this appears to everybody to be pretty obvious.

MR. HUNGATE: About our process, what we are trying to do.

DR. COHN: And one of the things we would be asking in September, is there some barrier to this. I would defer to Bob about a letter about the whole committee's panel or a recommendation. Of course I get confused about what that means exactly.

MS. GREENBERG: Well, whatever. I will be there.

MR. HUNGATE: And the dates are before our retreat as well.

DR. COHN: What I see from a very pragmatic basis is that it depends on what happens at the NUBC, whether or not we feel like there is a need to create a letter to go to the full committee in September. If they hear from Marjorie, and Marjorie comes back to the Executive Subcommittee and says they all agree, it's going to be in there, no big deal, that's one action. If they sort of go, well, I don't know, then Bob may need to quickly have another draft and have the subcommittee approve it.

MS. GREENBERG: That's fine. I guess it would be really great if before that meeting, we can have some synthesis of what we hear here.

MR. HUNGATE: Yes, you got that right.

Okay, any other comments? I want to congratulate everybody on an excellent meeting, good discussion. I don't think we could make more progress in this amount of time than we have made in this context. So, thank you all.

[Whereupon, the meeting was adjourned at 12:15 pm.]