Best Practices for Reducing Transfusion Errors - OBRR/CBER/FDA Workshop

FOOD AND DRUG ADMINISTRATION
CENTER FOR BIOLOGICS EVALUATION AND RESEARCH
AND
OFFICE OF BLOOD RESEARCH AND REVIEW

Natcher Conference Center
Building 45
National Institutes of Health
Bethesda, Maryland

Thursday, February 14, 2002

8 o'clock a.m.


CONTENTS

A.M. Moderator: R. Lewis

Welcome and Introduction, J. Epstein

HHS Patient Safety Task Force Goals, AHRQ's Research Portfolio, J. Battles

CMS Activity In Patient Safety, S. Kellie

Sources of Transfusion Error

Death from Transfusion: Sources of Error, K. Sazama

Transfusion Errors, J. Linden

Error Surveillance in Blood Donor Infectious Disease Testing, M. Busch

Major Causes of Transfusion Errors, H. Kaplan

P.M. Moderator: J. Linden

Addressing Systems

Nurses, Pathologists and the Laboratory:
Working Together Outside the Blood Bank Walls,
K. Sazama

The Systems Approach Analysis of Error:
Application to Transfusion Medicine,
M.S. Bogner

The Organizational Culture Necessary to Identify and Correct Systematic Errors, D. Marx

Mistake-Proofing Your System, J. Grout

Problematic Risk Assessment, J. Grout

Problematic Risk Assessment, D. Marx


PROCEEDINGS

DR. LEWIS: My name is Richard Lewis. I am the Deputy Director of the Office of Blood Research and Review with the FDA. I will be moderating the first session this morning, but to welcome us all here, Dr. Jay Epstein, who is the Director of the Office of Blood, will be attending all day today and has offered to introduce our session.

Opening Remarks

DR. EPSTEIN: Thanks, Richard. It is my great pleasure to welcome everyone to this workshop on best practices for reducing transfusion errors.

[Slide]

This is a cosponsored workshop that has been hosted by the Department's Agency for Healthcare Research and Quality and the Food and Drug Administration, both of which are heavily involved in this issue of transfusion error and management of error.

I just want to take a moment, perhaps of whimsy, to note that this is also Valentine's Day--I wish you all a Happy Valentine's Day--and to remark that this is a very fitting day to discuss issues related to transfusion. After all, transfusion and blood donation is recognized as the gift of life and, indeed, it is, therefore, a gift of the heart. Unhappily, however, as we all know, transfusion, like all other medical therapies, can cause harm, including harm due to medical error.

[Slide]

We have an interesting history in examining error in transfusion medicine. FDA began an initiative, which we called the Quality and Safety Initiative for Blood, in about 1991. One of the outgrowths of that was contract support from the National Heart, Lung, and Blood Institute to fund development of systems to track and report medical errors related to transfusion.

Some time later, in November of 1999, the Institute of Medicine published a more global report, "To Err is Human: Building a Safer Health System," which reported that as many as 98,000 people die every year as a result of medical errors occurring in hospitals.

In the report, it was recognized that the problem is not that there are bad people in healthcare. It is that there are good people but they are working in bad systems. That is to say, the systems, themselves, dispose toward the errors or at least fail to prevent them. Therefore, it is the systems that need to be made safer. In order to become mindful of where the problems are and develop rational approaches, it was emphasized that there should be development of better systems for error recognition and reporting.

[Slide]

In the area of transfusion medicine, we have been aware of the significance of errors for rather a long time. As you all know, the FDA has had systems in place not only for reporting adverse events related to transfusion but also for reporting errors and accidents that we now call product deviations.

But perhaps the most telling finding is that when we look at the death reports which, of course, are required reports to the FDA, we have discovered that one of the leading causes of reported fatalities is ABO incompatibility. ABO is a well-established science where we can prevent mismatch.

Dr. Sazama is going to tell us that, looking at historical data, this represents about fifty percent of the reported deaths. These deaths include errors of testing as well as errors of giving the wrong unit to the patient, and roughly half occur in the blood bank and half on the ward. Jeanne Linden, in one of our early sessions, will review medical errors in transfusion in general, and Mike Busch will present some brand-new data measuring the error rate in testing.

[Slide]

The departmental response to the IOM report led to several efforts, the first of which was the establishment of the interagency coordinating task force, which went by the acronym QuIC and was charged with figuring out the most rapid responses needed. In the area of issues related to blood, it was recognized that we, first of all, needed to expand the mandatory reporting requirement related to product deviations, previously called errors and accidents, so that it would apply to registered as well as licensed blood establishments. This resulted in an accelerated time frame for publishing a final rule on biological product deviation reporting, codified at 21 CFR 606.171, which was implemented last May. Sharon O'Callaghan, from the FDA, will speak about observations since we have had this more comprehensive reporting system.

Additionally, the department's Advisory Committee on Blood Safety and Availability met to discuss medical error, and made a number of recommendations in support of efforts to achieve the highest possible quality standards for blood collection and transfusion, and the department established a Patient Safety Task Force under the Agency for Healthcare Research and Quality. Jim Battles, who is one of the leading figures in that organization and in this area, will describe for us what is going on and the structure of a grants program that has been brought to bear to try to move these issues forward.

[Slide]

FDA participates in the Patient Safety Task Force, which is composed of agency representatives from FDA, the Centers for Disease Control and Prevention, and CMS, together with the leadership of AHRQ.

[Slide]

So let me just end with a brief note on the collaboration and on the agenda. I think that it is important to thank the individuals who have been responsible for this event. These are many of the same people who are also involved with the work of the Patient Safety Task Force, and they are Jim Battles from AHRQ, Kay Gregory from the American Association of Blood Banks, Harold Kaplan, now at Columbia University, and Jeanne Linden with the state public health department in New York. Let me also thank Mr. Joe Rocheck who has done yeoman's work arranging the logistics and support for the meeting.

[Slide]

Briefly, the scope of the agenda for the next two days--we are going to spend some time looking at systems, and we are privileged to have Drs. Grout and Marx, who are well respected experts in this field, to add to our knowledge. There will be a lot of discussion about reporting systems and, in particular, Hal Kaplan will have the opportunity to describe the medical error reporting system for transfusion medicine, the so-called MERS-TM, which has already been developed, as I say, under the NHLBI contract support.

We will talk about one of the Secretary's special initiatives, which is to expand the use of barcoding to affect potentially all drugs and perhaps also devices, and other new technology fixes that are just over the horizon.

So, with that brief introduction, I am going to turn the podium over to Dr. Lewis again who will be the moderator of the first session, and I hope that we all enjoy a very exciting day; we have some very good presentations to look forward to. Thank you very much.

DR. LEWIS: Thank you very much, Dr. Epstein. Let me just ask one thing of the speakers, to remind you of our full agenda and that we are going to try to move along to stay on time as much as we can so that everybody has an opportunity to present all of their facts.

To lead our discussions today, I am pleased to present Dr. James Battles. Dr. Battles, as you have heard, is a leader at AHRQ in the Patient Safety Initiative and in particular, has a long-standing interest in errors in blood transfusion. Dr. Battles will tell us both what the Patient Safety Task Force has been doing as well as some of the research projects that AHRQ is sponsoring. Dr. Battles?


HHS Patient Safety Task Force Goals

DR. BATTLES: Good morning. It is a pleasure to be here and see all of you coming out for the CBER-AHRQ party on best practices in transfusion.

[Slide]

I am going to talk about the Patient Safety Task Force which Dr. Epstein mentioned. This is an activity of the Department of Health and Human Services. Secretary Thompson created the Patient Safety Task Force to bring together AHRQ, CDC, FDA and CMS to begin to coordinate activities within the department relative to patient safety, and particularly to look at the various reporting systems, both required and voluntary--euphemistically speaking, when we talk about voluntary for the federal government--and to coordinate across and within the agencies to improve the existing federal reporting systems, both at the front end and at the back end, where the data are integrated once they are in. CMS also has a companion piece, the Medicare patient safety monitoring system, and Shirley Kellie will talk a little bit about that following my discussion.

[Slide]

I think we all recognize the tension between public accountability and learning from errors that affects any reporting system--the balance between the regulator and its requirements and how we get information so that we can learn from it as well. So this is a constant issue that permeates any of our discussions of any of the reporting systems.

[Slide]

This is one of the things that has been helpful that we have been looking at, and some of you may have seen this from previous presentations, about what we mean by different types of events. We usually have the disaster, or harm event, where a patient is actually harmed by some iatrogenic activity. But we also have a lot of no-harm events where, but for the grace of God, pure luck or the robustness of human physiology, there was no manifestation of an injury to the patient even though the event actually happened; and then a lot more near misses where the item was caught and never did reach the patient because somebody recovered. I think this continuum of the types of events within the iceberg helps us to know which goes to which bucket: accountability for harm, and learning for no harm and near miss.

[Slide]

The concept of the iceberg was first introduced by Heinrich, back in the '40s, talking about accidents: for every major injury there are 29 minor injuries and 300 no-injury events. So, you can imagine that the pool of events we have to work from is quite large, and if we take some of the data that Jeanne Linden will present and project it against the iceberg, there is an awful lot of opportunity to learn without having patients be harmed.
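To make the ratio concrete, here is a minimal sketch, in Python, of how Heinrich's 1:29:300 proportion implies a much larger pool of learning opportunities; the count of reported major injuries used below is purely illustrative.

```python
# Heinrich's 1:29:300 proportion, scaled from a count of reported major injuries.
# The count of 10 reported major injuries is hypothetical, for illustration only.
HEINRICH_RATIO = {"major_injury": 1, "minor_injury": 29, "no_injury": 300}

def implied_event_pool(reported_major_injuries: int) -> dict:
    """Estimate the full pool of events implied by the reported serious injuries."""
    return {kind: reported_major_injuries * weight
            for kind, weight in HEINRICH_RATIO.items()}

print(implied_event_pool(10))
# -> {'major_injury': 10, 'minor_injury': 290, 'no_injury': 3000}
```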

[Slide]

One of the problems that faces people who have to report is that current safety reporting is this major activity of providers, hospitals and facilities all having to report to multiple agencies, with multiple things on multiple occasions. So, we have state-required systems, various federal agencies, accreditors and then a whole host of ad hoc reporting systems.

[Slide]

It is kind of like this spaghetti wiring diagram of where all of this goes, and the problem is when you try to take the data that we have--and I apologize to Shirley because we didn't revise that; I guess I owe a dollar basically because every time you use the old word you get charged--one of the recognitions is that we get all these data and they end up in these data silos, and then when you try to look across the data you can't. We even tried within the Patient Safety Task Force to look at one area, end-stage renal disease, and we had lots of data, but when we tried to put it together from the different links from the CDC, CMS and, in this case in the device area, CDRH, it didn't fit. It couldn't couple together. So that is the kind of thing that we are trying to solve.

[Slide]

Our vision is to create a knowledge system for accountability and exchanging of information, and to protect patients is our goal. So what we have to do is integrate the data so it is useful.

[Slide]

So these are some of the concepts. It has to be a knowledge system. Ultimately, it is learning from the data to make changes to make healthcare safer for patients. That is what it is all about. It also has to be locally useful. The data that comes has to come back and have meaning for those people who make changes on the ground every day in blood centers and hospital transfusion services, and integration is essential. We are looking at it as a modular approach for evolving and expanding as it grows.

[Slide]

Again, looking at that concept of harm, no harm and near miss data, there are certain responsibilities that have to be accounted for with harm data, and these are some of the things that we are beginning to look at. These are the never events, things that should never happen; sentinel events; and usual events. One of the things is that we are trying to get some sense of some numerators and denominators and also look at some best practices, some benchmarks and lessons learned. Then, of course, no harm and near miss data.

[Slide]

We need increased information about potential risks and hazards, and we need to begin to identify those. And, we need to prioritize information based on its relative risk and, hopefully, reduce that risk over time and be able to monitor it. So what we want to do is increase and maintain reporting levels.

[Slide]

Some of you may have heard this before. As we begin to look at reporting data, detection is really the first step in error management. If you don't know a risk exists, or nobody lets you know that it happened, you can't take any action. So, from a management point of view, both at the local level and within the Patient Safety Task Force, from an organizational point of view at all levels we want error detection to be high. So the goal in error management is really to increase error detection and reporting rates.

[Slide]

The number of events reported is an indicator of an organization's detection sensitivity level, which we have classified as DSL. High reporting rates represent an indication of an organization's sensitivity to detecting things that have potential or actual risk, and we have to look at what structure, in an organization and at the federal level, encourages reporting.

Some of you have seen this, the DSL over time, the lessons learned from other indices. We want that level to be high. Remember, this is information. What we want to change over time is the risk and the severity of events reported. We know from other industries that this is how it begins to operate. Over time you should have a major increase if you have done everything correctly in your reporting, and over time the severity of the events will change, and we are beginning to find that out as people begin to actively increase their reporting.
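The DSL is described here only qualitatively. As a rough illustration, and not the Task Force's actual metric, one could track it simply as the volume of reports per period alongside the share of high-severity events, expecting volume to rise while the severity mix shifts downward over time; the data below are made up.

```python
# Toy illustration of tracking a detection sensitivity level (DSL) over time:
# count reports per quarter and the share that were high-severity events.
from collections import Counter

reports = [  # (quarter, severity) -- hypothetical reports
    ("2001Q1", "high"), ("2001Q1", "low"),
    ("2001Q2", "low"), ("2001Q2", "near_miss"), ("2001Q2", "low"),
]

volume = Counter(quarter for quarter, _ in reports)          # DSL proxy: reports per quarter
high_share = {q: sum(1 for qq, s in reports if qq == q and s == "high") / n
              for q, n in volume.items()}                    # severity mix per quarter

print(dict(volume))   # {'2001Q1': 2, '2001Q2': 3} -- reporting volume rising
print(high_share)     # {'2001Q1': 0.5, '2001Q2': 0.0} -- high-severity share falling
```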

We have indications from the VA, and even at the clinical center here at NIH they have found an 800-fold increase in reporting once they restructured their reporting, and they are beginning to see some of the shifts in the risk both to patients and risk to the research protocols. So we are getting evidence that this kind of model works but it is important in a conceptual way. When Congress says, "what did you do with the money we gave you?" We have to be able to say, "oh, look at the increase in reporting. This is a good thing. Thank you, Senate, for giving us the money and look what we did with it." We found out a lot more information. We have to be careful in these early stages that we create the right expectations.

[Slide]

As a first step in progressing toward the vision of this integrated reporting system, we began, in December of 2000, to get preliminary input from our users. It was very important, we felt, to get a lot of stakeholder input, because having user involvement in both the design and the conceptualization of the system was essential. Granted, we are the federal government and we can tell everybody what to do, but you know how that works, not very well.

Then we began to create some of the issues and outline some visions in a planning meeting in March of 2001. Last April we had the data reporting summit which outlined the directions in which we were going. Once we got the user input and some general directions of where we wanted to go, we had to get some details. So we issued a contract to do an implementation planning study, and that was awarded to MEDSTAT who is currently developing the plan, and we hope to have that report at the end of June.

[Slide]

Then, from there, we will take their input and issue a contract to develop the system. So there is funding in the budget both for FY02 and FY03 to move this forward, and there is a commitment at the departmental level to move forward. Secretary Thompson is quite excited about the fact that agencies within his department are actually talking together, playing nicely together and working on common goals.

[Slide]

So we hope to begin to create this system in which users will have a common interface. The data then will go to where it is needed for the purpose it is needed. There will be no change in the regulatory requirements of any of the agencies, but there will be a way to commonly report, and eventually data will come into a common federal data pool which then can be shared. To do that, we have to make sure that at the front end and at the back end the data can be matched. Our goal is to make sure that it is easy to report, and then we want to share that data.

[Slide]

So these are some of the interactions. We also want to have the federal system be able to link to states and to voluntary reporting systems within that portal. We really want to make it an interactive system.

[Slide]

What are our next steps? The next step is to review the MEDSTAT report; issue an RFP by July; and award the contract for further development no later than September 29 or, like magic, the money goes away. So, we have our work cut out for us. And, we will begin the system design on some pilot basis during fiscal 03, and then full operation in fiscal years 04 and 05, again, depending on funding levels.

[Slide]

I would like now to talk a little bit about what is being done at the federal level in terms of patient safety research.

[Slide]

Jay mentioned the IOM report. It really started things moving back in November of '99, with congressional action following. Of course, the big numbers were the 44,000 to 98,000 deaths a year, and then you compare that to automobile injuries, workplace deaths and injuries, and aircraft accidents. What is interesting about this slide is that there is kind of an inverse relationship between the amount of funding and the amount of accidental death or injury.

The other thing that I think is good to note is that it is not just transfusion anymore. I think transfusion as a field has been a leader in terms of safety and now the rest of healthcare has some things I think to learn from the transfusion field.

[Slide]

There is a lot of interest in other countries. Great Britain followed "To Err is Human" with their own report, "An Organization with a Memory." If any of you are interested, if you have read the IOM report, the British report is very interesting and has a lot of similarities but some very important differences. There is similar activity and studies being done in Australia.

What is interesting is that there is a desire to keep this as an international activity. Last week we had all our grantees together, including representatives from Great Britain, and we were actively engaged in how we can bring our patient safety research agenda and theirs in line so they are compatible. They are very interested in sharing reporting, how the information from the SHOT system and other things can be integrated in the larger system and integrated with things that are going on. So there is a lot of international interest in this area.

[Slide]

I think one of the important things we have to keep in mind is that the goal of patient safety is to reduce the risk of iatrogenic injury to patients. The way we can do that is to minimize the hazards that increase the risk of injury, and that requires identification and taking action. I think a lot of the topics that are being talked about today are about how we do that, how we take that action.

[Slide]

A little bit about AHRQ's mission within the department--our focus is on patient safety and quality through research.

[Slide]

We have benefited in terms of all the pressure. Congress did give fifty million dollars to do research funded in FY01, and we scurried around actively to try to spend that and get it out in the field.

[Slide]

Congress wanted the funds to be used to develop guidelines for the collection of uniform data related to patient safety; to establish a competitive demonstration program for healthcare facilities and organizations; and to determine ways to improve provider training in order to reduce errors.

[Slide]

You can see that Congress was fairly specific. They never give anything without strings. I think the important thing though is that some of the messages that people delivered to Congress about what was needed--they paid attention and so it came back in our authorization legislation.

[Slide]

One of the big things was health systems and providers participating in demonstrations to utilize available and appropriate technologies to reduce probability of future medical errors.

[Slide]

AHRQ is one of many federal agencies particularly associated with the QuIC, as talked about. We also wanted to make sure that the research agenda was user driven, bottom up. We wanted to take a cooperative approach. I think one of the exciting things about the QuIC was the interrelationship among the various federal agencies. The Patient Safety Task Force within HHS allows interaction with the DOD and VA, and it is truly exciting on a working basis. It is quite unusual in some sense to see agencies of the federal government actually playing together, getting work product out together and enjoying it in the process. So I think that has been a really important lesson.

[Slide]

Here are some of the things that we have done in our reporting summits, different summits and activities to get input. AHRQ was fortunate. We had some patient safety grants programs in the planning, and we released that RFA two weeks after the IOM report, so we looked like we were responsive. Since any of you in the federal government know how long it takes to get an RFA out, we certainly were willing to take credit for being responsive.

[Slide]

So we funded some systems best practices and we spent the bulk of last year trying to get the money out. One of the things that helped shape the research agenda clearly was our re-authorization language and the congressional intent, as well as the national advisory committee for AHRQ, and then, obviously, ongoing interactions with our partners.

[Slide]

One of the things that was important was the agenda-setting research summit, which began to identify some of the research items that we should pay attention to, and this is the kind of laundry list which helped us in terms of the mix of requests for applications and how we determined which proposals to fund to meet these kinds of questions.

[Slide]

We issued a total of six specific patient safety RFAs, requests for applications. Probably the largest one was systems errors demonstration projects. Centers of excellence--we recognized that there was a need to create a stable source of funding for those centers where there was a concentration of patient safety expertise. We also recognized that there was a need to develop a new capacity within the country in terms of doing patient safety, so we have a program for developing centers. Then there were RFAs on the use of informatics; on working conditions and patient safety; and on research dissemination and education.

[Slide]

For the demonstration projects, which are probably the largest component, Congress was quite specific. They said nearly half of the money that we give you, you have to spend on demonstrating reporting systems. So we funded in various ways 24 demonstration projects, totaling 24.7 million dollars. So you can see that reporting was a serious issue, at least in terms of Congress, and we are very excited about our portfolio of reporting programs, and I think we have some representatives who were funded through that here, from New York State and Columbia. I don't know if anyone else here is funded under that.

[Slide]

The CLIPS project portfolio is looking at the application of informatics in a variety of ways. We have funded 22 projects for 5.3 million on the applications of technology.

[Slide]

Clearly, working conditions have been a very important issue, and this was one of the congressional mandates that came through, that we should begin to study the relationship of work and its effect on safety and quality. We have funded eight projects, totaling three million dollars, to examine patient safety aspects specifically, and another twelve projects which look at quality and patient safety in working conditions. So, what is it in the nature of work that represents a hazard to patients? We are really excited about the projects on working conditions.

[Slide]

Again, we recognize that there are a number of established centers across the country that already have the capacity to do really good patient safety research. So in our centers program we ended up funding three centers of excellence across the country. I think there is in this room a representative of one of our centers, the one in Houston, Texas--and hopefully, Kathleen, if you are not, you should be involved in the activities going on. We also had a total of 16 of the developing centers, looking at a variety of different aspects of patient safety.

The major focus of the developing centers was developing capacity to provide the funding for an infrastructure to begin a patient safety research agenda, and then some demonstration projects to demonstrate that capacity.

[Slide]

The other area was dissemination and education, where we funded seven projects totaling 2.4 million dollars.

[Slide]

I think one of the bottom lines that we recognized in relation to patient safety is, as Tip O'Neill once said, "all politics is local." Really patient safety, when it comes down to it, is a local issue. We, at the federal level, can facilitate; we can help fund research and we can share data, but it is states, individual hospitals and healthcare organizations and institutions that make patient safety happen every day, and what we have to do at the federal level is facilitate the capacity of every local institution to make healthcare safe for patients.

[Slide]

The next steps for AHRQ are to implement the agenda and keep those 94 grants and contracts that we funded with our 50 million moving; keep building the partnerships at both the federal and state level; keep the collaboration going among stakeholders; start disseminating the information so it can be used; and maintain the momentum.

For those interested in what AHRQ is funding and doing, our web page at www.ahrq.gov will track both present funding activity reports and funding opportunities as they come available.

[Slide]

So, thank you very much.

DR. LEWIS: Thank you very much, Jim. I am going to ask that people remember their questions. At the end of our sessions this morning, just before the break, all of the speakers will make themselves available to answer any particular questions you have about their presentations.

As has been alluded to by Dr. Epstein and Dr. Battles, CMS has had a large role in looking at patient safety and gathering information and data from their records. Dr. Shirley Kellie will tell us about that--first of all, we have to say in parentheses "formerly HCFA" when we say CMS; not only that, PRO is now QIO, so I have one more acronym that I have to learn. Dr. Shirley Kellie is the medical officer and the national lead for patient safety at CMS, and she is in the Division of Clinical Standards and Quality. Dr. Kellie?


CMS Activity in Patient Safety

DR. KELLIE: Good morning.

[Slide]

I think I probably have the least connection to blood safety of anyone in this room, except to say to you that in our programs at CMS we really came upon what you folks have done in blood safety and we are using that in many ways to inform programs that we are working on at CMS and it has been very instructive to us to know about your work.

[Slide]

What I want to do is go through this very, very briefly. One of the things that we had to do at CMS, because we have been working on our quality improvement programs and on quality projects for many years, was to sort out what we mean by patient safety, and we still go through that on a regular basis. We have been working largely on errors of omission and one of the things we had to decide was what does this new field bring to us in learning about these other types of errors. So we did adopt James Reason's conceptual framework early on so that folks would have a common vocabulary.

For those of you who don't know, there is a QIO, formerly called a PRO, in each one of the states in the nation, and they work very closely with hospitals and outpatient providers, so we had a lot of demand coming from the hospitals in various states wanting to get patient safety projects going after the IOM report. Unfortunately, we couldn't focus on just one area; we had to deal with safety in a more general framework.

I can tell you that, based on Dr. Hal Kaplan's work and James Battles' input, we have really been moving away from this concept of reporting into an error management framework, and we have really found the MERS-TM system to be exceedingly useful to us. We have a couple of local projects. One is in dialysis centers, where we are really trying to build on what has been learned in that system.

I will just very briefly talk a little bit about the Medicare Patient Safety Monitoring System which is being developed under the auspices of the Patient Safety Task Force that Jim spoke about. I have to say that we really are working across agencies, which is really a lot of fun actually. To have this input from the other agencies has been very instructive.

I will just mention very briefly a couple of special studies. The Medicare Patient Safety Monitoring System will become part of that network that Jim is building where we will be tying some of that information together and bringing it into that larger data set.

I will say a couple of words about these QIO projects, local projects, and also we have started what we call a patient safety community of practice because, for one thing, we want to be an implementation arm for all of this good research that AHRQ is funding so that we can roll it out through the QIOs into hospitals. Then I will say a word about future activities.

[Slide]

This is the model that everyone here knows. I put it up simply just to let you know that this is the model that we are working from. We had QIOs and staff that are not familiar with some of the patient safety work. So, what we did, we got everyone to sort of use this framework which I think in the long-term has been very helpful to us because we can share some of the work that we are doing.

[Slide]

With regard to the Medicare Patient Safety Monitoring System, the aim here actually is to monitor what we call adverse events, or what I think we can more plainly call harm to patients, and the risk factors in the Medicare population. We are beginning in the hospital setting because that is where we have data. We really don't know what the rates of adverse events are. The data are old, and we don't have good denominator data, but we will have denominator data for the Medicare population.

This is a collaborative effort, as we have already mentioned. In addition to our federal partners and the Patient Safety Task Force, we are working with the VHA, the National Surgical Quality Improvement Project.

[Slide]

We have a support PRO because the support PRO is what provides confidentiality. So any information that is collected in this program will be confidential and cannot be subpoenaed, which is very useful. We also have clinical data abstraction centers that will be doing the medical record abstractions for us, our federal agency work group, and we have technology expert panels, one of which was convened last week. The technology expert panels involve both technology experts as well as stakeholders like the AMA, the American College of Surgeons and the Hospital Association.

[Slide]

The data source is medical records, and we are also using Medicare claims data to identify medical records for abstraction.

[Slide]

For our sample we will be abstracting approximately 93 records per month per state each year, which is about 60,000 records per year.
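As a quick arithmetic check of those sampling figures (the number of reporting jurisdictions is not stated in the talk; roughly 54, the 50 states plus DC and the territories, is assumed here):

```python
# Rough check of the quoted sampling volume. The jurisdiction count is an assumption.
records_per_month_per_state = 93
months_per_year = 12
jurisdictions = 54  # assumed: 50 states plus DC, Puerto Rico and other territories

annual_records = records_per_month_per_state * months_per_year * jurisdictions
print(annual_records)  # 60264, consistent with "about 60,000 records per year"
```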

[Slide]

These are the adverse events that we are going to be looking at. First are adverse events associated with CVCs, central venous catheters, which are devices, and that was an area FDA had a great deal of interest in. The adverse event there is largely bloodstream infections. I think that at some point we could probably look at transfusions as well; it didn't come up on this first list. We will also be looking at adverse events associated with joint replacements and revisions, primarily hip and knee, and we are working with the American Academy of Orthopedic Surgeons as well because they are establishing a registry, and at post-operative complications, which we all know are the ones that a lot of people look at but for which we don't really know the national rates, and we hope to have those rates for the Medicare population.

[Slide]

In addition, we are looking at adverse drug events. Here we will be using a trigger tool that was developed by Dr. David Classen and the folks at the Institute for Healthcare Improvement for use in some of their projects. Because adverse drug events are not the easiest thing to detect--they are not in the nurses' notes or the physicians' notes--you have to look at triggers to highlight them.

We are also going to be looking at sepsis syndrome and blood stream infections, and there we are starting with the outcome and going backwards to look at exposure, but that is another method.

[Slide]

The MPSMS--I am just putting this up to give you some idea. We define an adverse event as a harmful event made more likely by hospital care. We are interested in collecting those adverse events.

[Slide]

Each adverse event in addition has an outcome, which is largely the severity of the adverse event. It is the hospital care that represents the exposure. We can conceptualize how we talk about that hospital care in terms of exposures in the old Donabedian model and the quality model, or we can also begin to talk a little bit about it in the context of patient safety, sort of picking up on the MERS-TM model where we would begin to look at the technology, organizational and human factors.

We also have patients who come to the hospital with various conditions, demographics and other characteristics, and those variables are what we would call effect modifiers: if someone has a CVC and it turns out that they are diabetic, then having diabetes could easily modify the effect that the CVC has, either in causing the adverse event at all or, if it does occur, in making it more or less serious. Maybe at some point, rather than the CVC-specific adverse event, we would like to think about doing something around blood transfusions. Dr. Kaplan is on our technology expert panel and can, hopefully, help us with some of that.
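The effect-modifier idea described above can be illustrated with a toy calculation; all counts are invented and this is not MPSMS data. If the CVC-to-infection association is stronger in diabetic patients than in non-diabetic patients, diabetes is acting as an effect modifier.

```python
# Hypothetical illustration of effect modification: the relative risk associated
# with CVC exposure differs across strata of diabetes status. Counts are invented.
def rate(events: int, patients: int) -> float:
    return events / patients

# (adverse events, patients) by exposure, stratified by diabetes status
with_diabetes    = {"cvc": (12, 100), "no_cvc": (3, 100)}
without_diabetes = {"cvc": (6, 100),  "no_cvc": (3, 100)}

rr_diabetes    = rate(*with_diabetes["cvc"])    / rate(*with_diabetes["no_cvc"])     # 4.0
rr_no_diabetes = rate(*without_diabetes["cvc"]) / rate(*without_diabetes["no_cvc"])  # 2.0
print(rr_diabetes, rr_no_diabetes)  # different stratum-specific risks -> effect modification
```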

But when we go to the medical records we are not going to abstract 60,000 records for each one of these measures. What we are going to be doing is identifying records in which there is a more likely possibility of CVC exposure. How we will be doing that is by using the Medicare claims data and looking at things like revenue codes, and looking for folks who have been in an intensive care unit. We have Dr. Tom Bubotz, at Dartmouth, who is actually working with us as our biostatistician on this project. He has a lot of experience using Medicare claims data. We could also use the Medicare claims data, as we were talking about earlier, to identify the number of transfusions that are given to Medicare beneficiaries.
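A hedged sketch of that screening step, flagging claims likely to involve CVC exposure before charts are pulled; the field names and the revenue code used below are illustrative assumptions, not the actual MPSMS specification.

```python
# Hypothetical claims screen: select records worth abstracting for possible CVC exposure.
import pandas as pd

claims = pd.DataFrame({
    "claim_id":      [1, 2, 3],
    "revenue_codes": [["0200"], ["0450"], ["0360", "0200"]],  # "0200" assumed to mark intensive care
    "icu_days":      [3, 0, 5],
})

likely_cvc = claims[
    claims["revenue_codes"].apply(lambda codes: "0200" in codes) | (claims["icu_days"] > 0)
]
print(likely_cvc["claim_id"].tolist())  # [1, 3]: records sent on for medical record abstraction
```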

[Slide]

I will just say a word about these other projects. We have a project with NYPORTS, which is the state-based mandatory reporting system in New York. Do you have reporting of transfusion errors in that or is that a separate reporting?--separate reporting in New York. We are looking at surgical adverse events. In NYPORTS our QIO is looking at that, and looking at what is reported to NYPORTS and then what is found in the medical records and the hospital discharge data.

The more important question there, though, that we wanted to look at is whether, with all of this root cause analysis being reported for the serious events in NYPORTS, hospitals can actually use that information in any way to improve quality.

[Slide]

Also, our QIO in Utah and Nevada has actually been a leader in patient safety and has had a journal club in patient safety going for providers in Utah for some years now. They are working on translating some of the error management principles into hospital safety improvement interventions. They are piloting tools like various root cause analyses, failure mode and effects analysis, and culture surveys. They are also supporting what we call our QIO community of practice, which is to disseminate what we are learning in these projects to a wider group of QIOs, or, when some of the findings come out from the AHRQ research, we will have a mechanism in place so that some of that can be disseminated.

[Slide]

In Ohio we have local projects, and these range from things like falls in hospitals to implementing medication best practices. The statewide coalition in Wisconsin also has state funding to look at safe practices in Wisconsin. We have a high hazard area approach. One of the PROs is working on AMI care in a high hazard emergency department, and has been really quite successful with it. The Alabama QIO is working on medication safety in dialysis, and this is where Dr. Kaplan and Jim have been very helpful in terms of working with this QIO and giving them advice about how to go about this project.

[Slide]

We think about several approaches that QIOs are taking. One is this high hazard area. A second approach would be focused on an adverse event and use all those tools to go backward to the hazards. A third is just putting best practices in place. I can tell you that the third approach has not worked real well because, standing alone, it does not necessarily address the problems that a given hospital might have.

[Slide]

This would be one of our projects that is being carried out in Ohio. I wanted to illustrate to you how important having some models that everyone is working from has been to our program so that we can share across the programs.

[Slide]

We are also very enthused about a community of practice, and Dr. Nancy Dixon, who wrote the book "Common Knowledge" and is a knowledge management expert, is actually working with us, trying to get us to work across these QIOs on how we can share information and tacit knowledge. We are also working with QIOs to do peer assists, so that one QIO, having solved a problem, can go to another one, and they can request little teams to move across the QIOs to help out. We are also learning how to do problem clinics on conference calls. So, we find this approach very useful.

[Slide]

Future QIO activities in CMS--we have a national project that we are doing. It is starting this summer. It is surgical infections prevention, and that is in collaboration with the CDC, the Institute for Healthcare Improvement and CMS. The outcome there is getting the right antibiotic in a timely manner.

We are going to be continuing the MPSMS. We hope to be in production on this in July. We are currently just finishing our alpha test and beginning our beta test. Most exciting, we are thinking very seriously about initiating error management pilots in various states for the seventh scope of work.

[Slide]

So, I would say that overall, thanks to the folks in this room and what you have done in blood safety, we are able to build on that and to move away from what we originally thought of as reporting and more into an error management, what-we-do-with-the-information approach. We certainly do have a strong interest in translating what the folks at AHRQ and anyone else is learning in patient safety and bringing that to the QIOs, who have a real opportunity to implement things in hospitals. Thank you very much.

DR. LEWIS: Thank you, Shirley. Before we move on to what the sources of errors are maybe I can ask if anyone has any specific questions for what the federal government is doing, both at AHRQ and at CMS. Do you have any questions for Dr. Kellie or Dr. Battles? Yes, Dr. Epstein?

DR. EPSTEIN: For Dr. Battles, you showed us how the fifty million dollars was allocated, but is there any new money in 02 or 03?

DR. BATTLES: There is a small increment of funding in the 02 budget that we have. There is approximately two million dollars available set aside within the 02 budget for patient safety research projects. Rather than have a special RFA for patient safety, the applications are through our regular grant applications. So, anyone interested in applying for our funding, the guidelines for how to apply for those funds are available on our web site under grant applications. But funding is somewhat limited. The fifty million does move forward in each forthcoming year so that what we funded in 01, those projects will continue. They are part of the sort of fifty million base.

The budget for 03 is another approximately five million dollar increase, and some of the increase of money is for the Patient Safety Task Force activities. So that will continue and there are opportunities to get new applications.

DR. LEWIS: Any other questions? Well, hopefully, you have a good idea that the federal government is doing something about patient safety and have a feel for the direction that it is going in.

In order to set the stage for the rest of the conference, we are going to start to look at where do transfusion errors occur. We are fortunate to have a number of people who have looked into this and who are going to present us various aspects where, from their perspective, transfusion errors occur. To begin these discussions, I would like to introduce Dr. Kathleen Sazama. Dr. Sazama has published regularly on fatalities that are associated with blood transfusion. Besides her doctoral degree, she is also a lawyer and she is a professor of laboratory medicine and vice president of the faculty of academic affairs at the University of Texas, M.D. Anderson Cancer Center. Dr. Sazama?


Sources of Transfusion Error
Death from Transfusion: Sources of Error

DR. SAZAMA: Thank you very much for the kind welcome from the government.

[Slide]

It is a real pleasure for me to be a participant in this workshop because, as most of you know, errors in transfusion have been a source of interest to me, and I am pleased that we are now at a point where transfusion is looked upon as one of those areas where improvements will, in fact, improve patient safety.

[Slide]

I am going to report to you on my review of the FDA reports between the years 1976 and 1995. Most of you know, of course, that 1976 was the first year the FDA required reporting, and I only chose a 20-year time period because it was the data that I had at hand. The total number of records that were reported to have been collected during that time was 754. At the time that I made the FOI request for these records, 25 were not provided; 141 were excluded because the deaths were either not related to transfusion or they were related to a viral infection. So I initially need to make a disclaimer for those of you who don't know: the data about to be presented have nothing to do with the infectious deaths from transfusion, other than bacteria. The viral deaths are not included in these data; those are collected by the CDC. There were 29 donor deaths reported among these records. I also excluded them for purposes of this discussion. So that left a total of 559 records, or 74 percent of that total number. As I mentioned before, the exclusions included the viral infections.
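As a quick consistency check on the counts just quoted, and assuming the three excluded groups do not overlap, the arithmetic recovers both the analyzed total and the 74 percent figure:

```python
# Consistency check using only the counts quoted in the talk.
total_reported = 754
not_provided = 25
excluded_not_transfusion_or_viral = 141
donor_deaths = 29

analyzed = total_reported - not_provided - excluded_not_transfusion_or_viral - donor_deaths
print(analyzed, round(100 * analyzed / total_reported))  # 559 74 -> "74 percent of that total"
```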

[Slide]

Just a quick word about the problem with the reporting system, and I mean this as a criticism of us in the profession and not of the government necessarily although we may share this, that is that the data are quite variable. They range from having a short email of less than a paragraph to a record sheet that the FDA completes which is a transcription of data, to a letter from a reporting facility, or a letter plus accompanying reports, or letters plus reports, plus autopsies, plus quality improvement reviews, course of action and so forth. So it runs the whole gamut of information that you might want to have. A parenthetical note, if you ask for this information to be copied for you, the government will charge you by the page and there are lots of blank pages. That is just a note for my friends in the government.

[Slide]

The number of reports per year has increased slightly, with an average of about 37 reports per year. Again, there has been a slight increase over time but nothing really statistically significant.

[Slide]

The causes of death by decade--I chose to report them in this manner since many of you have seen the report of the first ten years published in 1990, and that is the first column that you see here. I am then comparing the new data from the second decade of reporting. You can see only very slight differences in the numbers of reports between the decades. You will see an increase in bacteria in the bag as a reported cause of death. Everything else has remained virtually the same proportionally.

[Slide]

The sources of error that I am going to discuss today are ABO acute hemolysis; the non-ABO acute hemolysis; the non-serologic hemolysis; delayed hemolysis; bacterial contamination; graft-versus-host disease; and TRALI. These are important for us to look at, I think, as sources of error so that we understand where there are opportunities for change.

The ABO hemolysis data are the ones that are of the greatest concern to me because they still represent the greatest number of reports. Now, I want to caution about that. Just because it is the highest percentage of reported instances, it doesn't necessarily mean it is the highest reason for death from transfusion. The reporting system we currently have is voluntary and is flawed because it is voluntary. We have some data that suggest that as few as five percent of all deaths are being reported. Perhaps the number is higher than that; one can only guess since it is voluntary.

I would draw your attention again to a topic near and dear to my heart since I happen to be a group O patient. You will see that the highest risk is for patients who are group O. So A to O, B to O, or AB to O transfusion is much more likely to be found in terms of deaths. So group O patients are those that we are most concerned about. The rest of them--obviously you can have A to B; or AB to B deaths. You can have B to As and so forth. However, the most important thing to remember is that when ABO errors occur it is only a problem when the incompatibility is harmful to the person receiving it.
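For readers outside the blood bank, here is a minimal sketch of the standard ABO red-cell compatibility rule behind those combinations; the helper function is illustrative only. Group O recipients carry both anti-A and anti-B, so any non-O unit is incompatible, which is why A-to-O, B-to-O and AB-to-O errors are the dangerous ones.

```python
# Standard ABO red-cell rule: a unit is incompatible if it carries an antigen
# against which the recipient has antibodies. Illustrative helper only.
RECIPIENT_ANTIBODIES = {"O": {"A", "B"}, "A": {"B"}, "B": {"A"}, "AB": set()}
DONOR_RED_CELL_ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def red_cells_compatible(donor_group: str, recipient_group: str) -> bool:
    """True if the donor unit carries no antigen the recipient has antibodies against."""
    return not (DONOR_RED_CELL_ANTIGENS[donor_group] & RECIPIENT_ANTIBODIES[recipient_group])

print(red_cells_compatible("A", "O"))   # False: the A-to-O error discussed above
print(red_cells_compatible("O", "AB"))  # True: group O red cells lack A and B antigens
```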

[Slide]

We have to look carefully at where ABO errors occur in order to understand what we can do to intervene productively. You will see from the data that in both time periods drawing and labeling samples represented between 12 and 14 percent of all errors that resulted in fatality. About 25 percent of the errors occur within the blood bank itself. That is a technological or interpretive error. About 9 percent of the errors occur when the blood is being issued, a hand-off point, and between 34 and 54 percent, or an average of 46 percent, at the time of transfusion.

[Slide]

That is important, and I will take a little exception to the comment that Dr. Epstein made because it actually is 59 percent of the time that an error is made outside the control of the blood bank. Fifty-nine percent of these deaths are due to failure to properly identify the patient either at the time the sample is collected or at the time the blood is being transfused. That is the opportunity for improvement.
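The 59 percent figure is simply the sum of the two patient-identification failure points quoted from the previous slide, using the approximate averages given there:

```python
# Arithmetic behind the 59 percent figure, using the averages quoted above.
sample_drawing_and_labeling = 13   # percent of ABO fatalities (quoted as 12-14%)
at_time_of_transfusion = 46        # percent (quoted average of the 34-54% range)

outside_blood_bank = sample_drawing_and_labeling + at_time_of_transfusion
print(outside_blood_bank)  # 59 -> errors outside the blood bank's direct control
```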

Now, I will not deny that 25 percent of the time the errors are being made in the blood bank. Those kinds of errors include the one that is most troubling: in 41 percent of this small number of cases the wrong sample was used for the crossmatch. If you actually start with the wrong specimen, you are obviously not going to issue the right blood. Twenty-six percent were identified as serologic misinterpretation--this is within the head of the technologist--and 26 percent were recording errors. Again, these are small data, but perhaps some opportunity for us to look at systems for improvement.

[Slide]

To reemphasize the sources of error for ABO hemolysis, 59 percent were failures to properly identify the patient. I will tell you that my personal bias about this is that if we were to focus on objective, enforced systems of patient identification, we would probably avoid at least 50 percent of all errors that lead to patient fatalities in hospitals. This doesn't apply just to transfusions; it applies to every other service that is provided to a patient when the patient is not properly identified for that service.

So, in transfusion medicine for ABO deaths, it is when the samples are obtained or labeled and when the transfusion is given and, of course, we have the 25 percent blood bank error--wrong sample; wrong interpretation and recorded incorrectly. These are ripe areas for improvement.

[Slide]

For the non-ABO acute hemolysis, these are the antibodies of particular concern. We do not routinely test for Kell in the blood banks in this country; that is contrary to the practice in Europe. You will see that if an anti-Kell antibody is present and you don't detect it, an acute hemolytic event, very much like an acute ABO hemolytic event, can occur; anti-Kell accounted for 30 percent of the non-ABO acute events. Again, small numbers, only 33 over this time frame, but the most common was an anti-Kell. Also, the Rh system, Duffy A and Jkb have all been recorded as having caused acute hemolytic deaths.

[Slide]

What are the sources of error here? Well, either the antibody is not detected, meaning the antibody was there but was serologically missed, and that may be an opportunity for improvement in our available reagents and methods, or it could be from technical error which could be addressed systematically, or the antibody was present but wasn't detectable. This may be a combination of not checking historical records, or checking them but not recognizing that the antibody had previously been identified, or having no historical records.

Just a side note comment on that, a comment was made at a conference I was attending yesterday about what happens when hospitals close. Hospitals are closing at a very rapid rate around this country. What happens to the patient records when a hospital closes? Who owns them? Where do they go? How do we access them? In transfusion medicine this is a critical question since it is very important to know what has happened previously. I think this is an area where some attention should be paid at a higher level in our society.

[Slide]

Non-serologic deaths also occur. In the decade of 1986 to 1995 these were, in my opinion, egregious events. Infusion of 10 percent glycerol in the operating room--this happened because the anesthesiologist insisted on infusing the fluid from the temperature control device, which was ten percent glycerol, into the patient, killing the patient instantly on the table. There were none of these reported between '87 and '90, but from '91 through '93 there were a lot of failures to warm blood properly, using devices that were not approved for such purposes or defective devices. Again, these are areas where either education or quality control of devices would help. So for non-serologic hemolysis the sources of error seem to be lack of knowledge or judgment, that is, human error--which fluids and additives can be transfused safely; how you handle cellular blood components; the devices and physical conditions necessary.

[Slide]

Over this 20-year time period there were 52 deaths due to delayed hemolysis and a whole host of antibodies were implicated in these deaths.

[Slide]

Typically, these are patients who are very seriously ill and the delayed hemolysis is sort of the straw that broke the camel's back. In most cases there is no error that can be detected, and this may be due to limitation of existing tests. That is to say that the antibody implicated as the final straw was not detectable prior to the transfusion that resulted in the hemolysis that resulted in the death of the patient.

[Slide]

Switching to deaths from bacterial contamination, there were a total of 75 such deaths reported between 1976 and '95, and the distribution of implicated organisms looks like this. Obviously, the pattern is different for platelets than for red cells, and cumulatively gram-negative bacterial contamination contributes the most.

What is interesting is if you look at the data in the decades you will see a very different pattern looking at the first ten years of reporting where gram-positive contamination of platelets was clearly much more common, and as frequent in red cells as the gram-negatives. In the second decade of reporting a very different pattern emerges.

[Slide]

The lessons to be learned from this? I am not quite sure, and I welcome any comments from the rest of you, but I think we can understand that bacterial contamination almost always has an unknown source. We can speculate about the potential sources, and they can occasionally be documented, but these are donor related, meaning a subclinical bacteremia; a coring of the skin when the unit is collected, because the needle has to go through skin to get it; or incomplete sterilization of the skin surface. These have all been implicated in published articles, and there have been a few instances implicating the manufacturing or storage of the collection bags, tubing, needles, etc. themselves.

Again, a lot of research has been undertaken in the last ten years or so to look at these issues of bacterial contamination, and I think that will continue to be fruitful.

[Slide]

Looking just at the second decade of reporting that I called to your attention today, it is interesting that graft-versus-host disease was virtually not being reported at all between '86 and '90--and I guess I will take some credit for the fact that reporting has increased slightly since then, when I pointed out that there were 19 reports in the literature up to 1995 and only one reported to the FDA in that period of time. The lesson here is that, for many of these conditions, this is an avoidable disease. What we learned from this is that it is not enough to irradiate blood from biological first-degree relatives; you have to irradiate blood donated by all relatives, and that is what the standards now require.

I am not sure what we could do about the solid tumors, such as the pancreatic cancer and the lung cancer but, clearly, for any malignancy that involves the immune system itself, those recipients should and deserve to receive irradiated blood because it is a very safe practice and one that is easy to implement.

[Slide]

So the sources of error for these deaths appear to be failure to identify certain at-risk patients. We rarely see a report of a death by this mechanism in patients with solid tumors. There were just two in that decade, a pancreatic tumor and a lung cancer, but failure to recognize the immune consequences of an immune malignancy should not be a cause of death. There is also a lack of systems for known diseases: there are still hospitals out there that cannot reliably identify a leukemia, a lymphoma or an immunocompromised patient. Failure to irradiate when you know you should could be either a physical problem or a technical failure, and both of those are amenable to system improvements.

[Slide]

Looking just at a five-year set of data with regard to respiratory deaths, you can see that there has been fairly constant reporting of both TRALI and anaphylaxis at a very low level across this time period.

[Slide]

The sources of error for respiratory deaths, if I can call these errors, are that we are concerned about multiparous female donors with antibodies that affect the recipients. We know that most donors have normal levels of IgA, and if you infuse that into a susceptible recipient it can cause death. Who knows what other factors are involved. The sources of error in patients may be failure to detect an IgA-deficient patient with anti-IgA antibodies who then receives an IgA-containing product, and so forth. This is an area I think is ripe for additional research. We really don't understand all of these deaths. I just heard a death described to me yesterday that doesn't make any sense; there didn't seem to be any basis for the anaphylaxis that was described.

[Slide]

What lessons have we learned? Group O patients are at greatest risk of fatality from an avoidable transfusion error. Think about that. What do we do in our hospital systems or our transfusion services today to guarantee a higher level of protection for group O patients if they are at the greatest risk?

ABO fatalities still most often occur from failure to properly identify the patient. Errors in patient identification are correctable today. This is a resource issue. Systems exist or are well along in development that would provide us with the means by which we could objectively identify every patient for every procedure. We simply haven't done it.

[Slide]

Probably related to this comment, it is professionally and politically insensitive to suggest that we cannot afford to institute measures that increase the safety of patients, but it is shortsighted to overlook the fact that hospitals and medical groups will quietly decide whether they can afford to invest in new safety measures. The government at least has stepped up to the plate and said we are willing to invest at that level. I think the rest of us have to think about this.

[Slide]

Significant error reduction in the short term will depend primarily on the education of healthcare workers. People have to care that it makes a difference whether you have identified the patient correctly at every step along the way. In the long run, error reduction depends on system changes. Current research efforts into reducing bacterial contamination and TRALI are worthwhile, but they are unlikely to have as great an impact as systematic patient identification to avoid ABO errors.

[Slide]

Final thought, not much has changed over twenty-plus years. I think the essential message is that reporting alone is not a means by which to improve patient safety. I really applaud the organizers of this workshop because I think we are moving in the right direction. Thank you very much.

DR. LEWIS: Thank you very much, Dr. Sazama. I think that you have taken an important next step in looking at what the reports have told us and publicizing that information.

Our next speaker this morning comes to us from the New York State Department of Health. Most of us at the FDA know Dr. Linden very well for her activities in various advisory committees, particularly the Blood Products Advisory Committee where she has been an important contributor. Dr. Linden is a physician as well as a public health professional, and is currently the director of Blood and Tissue Resources, as I said, at the New York State Department of Health. Dr. Linden?


Transfusion Errors

DR. LINDEN: Good morning.

[Slide]

Today I am going to be speaking primarily about our experience in New York. I think you will see a lot of very consistent findings with what you just heard from Dr. Sazama in terms of the FDA findings as well.

[Slide]

Mamma Hobbs is at the laboratory and they say, "oh darn, the trouble with blood cells is they are all the same color." That is the fundamental problem that we have. The blood does look alike. The blood samples all look alike and that is why the labeling and keeping track is so very important and why it is prone to human error.

[Slide]

I would like to start out by giving a few examples that illustrate the very consistent findings we see in a lot of these types of cases.

In this particular case there was a 30-year old man who was in a motor vehicle accident and was admitted to the surgical intensive care unit. Although he was group O, the highest-risk group, he actually received a unit of group A red cells. He had an acute hemolytic transfusion reaction which was not recognized by the resident, who attributed it to his underlying condition. Then he received a second unit. In looking at how this happened, it was found, first, that sequential identifiers had been assigned, both the numeric identifiers and the alphabetic identifiers given in lieu of a name for this particular patient, WM and WN. We can certainly see that those would be very easy to confuse.

In this particular case, because of time, the blood bank tech bypassed the computer system that could have helped detect the confusion and ultimately the nurse failed to identify the patient properly. In this particular case at least six different people made errors or could have helped avert the adverse outcome. And, this is something that we do see frequently.

[Slide]

Case number two involves a unit of blood collected by a hospital but the testing for infectious diseases was done elsewhere. These results were not reported back electronically but were reported back by fax, which still happens today in many situations.

[Slide]

If you look at panel A, this result here looks like O positive, and it was considered to be O positive and released as an O. However, in a re-faxed transmission using the high-resolution setting instead of the standard setting, you can see that this was actually supposed to say B positive. You can also see that the unit number, which I have truncated here, is actually 89, although it looks like 09 up here. So, it is very clear that fax transmissions are not very reliable. They can be very easily misread because of the limited clarity.

[Slide]

Case number three is a 41-year old man who was undergoing a laminectomy. A postoperative blood recovery device was used, as surgeons seem to like to do, and 150 ml of sanguineous, reddish fluid was collected, which probably didn't have that many red cells in it, along with air. This particular device was such that there was air in the bag which needed to be manually removed. In this particular case there was a shift change, with a new person coming on shift who was not familiar with the device and was not aware of the need to manually evacuate the air, and the two-minute in-service training by the other staff was not sufficient. This person infused the fluid under pressure and the patient, in fact, suffered a fatal cardiac arrest due to a massive air embolism. This was an example of insufficient knowledge on the part of staff in terms of how to use a device.

[Slide]

Although this is not transfusion related, it is a case I would like to talk about because it is very illustrative. There were two patients undergoing in vitro fertilization with embryo transfer on the same day. The embryos were stored, and on day three they were ready to be transferred. They were both on the warming stage at the same time. At this point the embryos are graded for whether they are suitable for transfer or not. In this case patient B had a group of embryos that were satisfactory and a group that were to be discarded. Those discarded embryos, by mistake, were transferred to a different patient. The embryologist actually recognized the error at the time and gave the patient a second catheter, which is not at all unusual in these types of situations. So, the patient, nine months later, actually had twins with different parentage. In this particular case it was a Caucasian woman who actually had a Caucasian and an African-American baby.

What is interesting to note is that the SOPs in this particular facility did call for only handling embryos from one patient at the same time which is the proper way of keeping things straight. But the embryologist, with a Ph.D., felt that he knew better and he could keep things straight and didn't have to follow the SOPs.

[Slide]

I would like to present some findings that we have made in New York from our mandatory error reporting system. Although it is mandatory, obviously ultimately everything is voluntary; it is a passive system. However, when the surveyors go on site they do, in fact, ask staff about errors generally. You can't ask everybody, so we do sometimes find things on site that have not been reported, but not many. Reporting compliance is actually very good.

What we found is that one in 38,000 units transfused was, in fact, ABO incompatible. These figures are very similar to the findings of the Serious Hazards of Transfusion reporting system in the United Kingdom. We also found about one in 40,000 erroneously transfused units that happened to be ABO compatible, for an overall rate of one in 19,000. We made an adjustment for ABO compatible units that were erroneously transfused but may not have been detected, to estimate an overall rate of one in 14,000 units going to the wrong patient. When you think about the current risks for infectious diseases related to transfusion, clearly the risk of getting the wrong unit exceeds all of the combined infectious disease risks, even though it is not what most patients are concerned about. We did have fatalities in this particular series, a risk of about one in two million.
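
To see how those figures fit together, here is a minimal arithmetic sketch in Python. The detection fraction used for the adjustment is a hypothetical placeholder, chosen only so the output lands near the 1-in-14,000 estimate quoted above; the actual adjustment method is not described here.

```python
# Sketch of how the reported wrong-patient rates combine (illustrative only).
abo_incompatible = 1 / 38_000   # erroneous transfusions that were ABO incompatible
abo_compatible   = 1 / 40_000   # detected erroneous transfusions that were ABO compatible

observed_overall = abo_incompatible + abo_compatible
print(f"observed overall rate: 1 in {1 / observed_overall:,.0f}")   # about 1 in 19,000

# Hypothetical detection fraction for ABO-compatible errors (placeholder value,
# back-solved so the result approximates the quoted 1-in-14,000 estimate).
assumed_detection_fraction = 0.55
adjusted_overall = abo_incompatible + abo_compatible / assumed_detection_fraction
print(f"adjusted overall rate: 1 in {1 / adjusted_overall:,.0f}")   # about 1 in 14,000
```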

[Slide]

As Dr. Sazama said, the number one problem is at the time of administration. We found that 56 percent of the errors accounting for these erroneous transfusions occurred outside the blood bank which is, again, very consistent with what Dr. Sazama just told you. Twenty-nine percent were in the blood bank alone, and 15 percent had a combination.

Outside the blood bank, identification error at the time of transfusion, at 37 percent, is the number one problem. The person administering the unit, in most cases a nurse, does not adequately identify the patient and gives the wrong blood to the wrong patient. Thirteen percent are phlebotomy errors at the time the original sample is drawn. If you have the wrong sample in the first place, you are going to issue the wrong blood group; it is going to be wrong all the way through.

Within the blood bank there was a mixture of things: testing the wrong sample; making a technical error in testing; issuing the wrong unit; making a clerical or transcription error, what Dr. Sazama calls a reporting error; tagging the wrong unit; or making a clerical error such as reporting information on the wrong slip.

[Slide]

There were also compound errors, primarily that the blood bank issued the wrong unit and the nurse on the floor could have detected the discrepancy but failed to do so. In one percent of cases the wrong unit was tagged.

[Slide]

How were these discovered? Well, in our series about 28 percent were discovered because the patient had a hemolytic transfusion reaction. Another 21 percent were caught at the bedside; the nurse sort of did one of those "oops" and realized that he or she had done something wrong. Twenty-two percent basically went unnoticed until there was a subsequent blood request, and the error was noticed at that time because, for example, the patient's blood type had changed. A small number, five percent, were found through supervisory review, and then there was a large, sort of miscellaneous group of people realizing things at some point for some reason.

[Slide]

The good news in part is that when looking at the patients who are known to have received ABO incompatible blood, almost half of them actually had no adverse effects at all. Of course, there could also be death and that could be from a very small amount, as low in our group as 30 ml. Dr. Sazama said even less than that in their series. But there also were some symptomatic hemolytic transfusion reactions in 41 percent, and seven percent had serologic findings only. As I mentioned, we did have about two percent that had a fatality due to this. We also found that four percent of patients died coincidentally. Just because they got the wrong blood and died doesn't mean that they died from the incorrect blood so we looked very carefully at the medical findings in those cases.

[Slide]

We also looked to see if there were different frequencies of reported events in different sizes of transfusion services. We did find, in fact, that these events of giving the wrong blood to the wrong patient were statistically significantly more frequent in smaller facilities that transfuse fewer than 2000 units a year. It is possible that such facilities do not transfuse as frequently and people are not as proficient, but it is also possible that this could be an artifact of actually better reporting and better error detection at small facilities. Perhaps there are more occurring at larger facilities that aren't being detected. It is very difficult to know why this is.

[Slide]

I would also like to mention that in this particular series I am only talking about the most significant events, those that led to a patient getting the wrong blood. There are obviously many, many more minor types of errors that are not discussed in this particular series.

We tried to identify contributory factors: what did some of these events have in common? Some of them tend to be very similar; we saw the same things over and over again. We tried to get at some of the underlying systems factors that might be subject to change or improvement. One thing we saw is that the safeguards that were in place were often bypassed. For example, a patient in the OR may have the wristband removed, and then you don't have the wristband to check, so you have to do things a little bit differently and not follow the usual matching procedures.

We also found that at the time of phlebotomy using pre-printed labels was problematic because it is very easy to grab the wrong one. It seemed that on-demand labeling at the time would be a way that some of those could be prevented.

We saw very frequently that the patients who got mixed up were patients who had either the same name or very similar names, who were either in the OR at the same time or who were on a medical floor at the same time by coincidence.

Also, consecutive identifiers, as in one of the examples I gave--if medical record numbers are assigned in a sequential fashion, then the two patients who come in next to each other are only going to differ by a single digit. This occurs in the neonatal unit. For example, you may have twins that wind up with sequential identifiers and they may not even have first names at that point so there may be limited ways to distinguish.

We also found that telephone and verbal communications, as we all know, are very prone to be misinterpreted and misunderstood. As in the example I mentioned, fax communications were a problem: such errors did not occur frequently, but where fax communications were used they tended to be a problem. Where there was a computer system in place but it was not used and a manual system was used instead, that was a problem we observed.

I also have here inadequate consideration given to patient input because in a very small number of situations the patient said that is not my blood type and it was ignored.

[Slide]

Some of the systems factors that we identified included lack of delineation of responsibilities: people didn't know whose job was what, and as a result something didn't get done that should have gotten done. In some cases there were no proper SOPs, or they were not followed, as in the embryo example I gave. In some cases there was a lack of proper training, such as with the postoperative blood salvage device that I mentioned. In some cases there was insufficient training in recognizing and handling an acute transfusion reaction, which could at least mitigate the adverse effects if an error does occur. And in some cases there was unapproved equipment available for use, such as the creative ways of warming blood that Dr. Sazama mentioned. We observed some of that as well.

[Slide]

I would just like to make the point that autologous blood is not completely safe. It can also be given to the wrong person. In fact, if you are accepting seropositive units there is not only the risk of ABO incompatibility, there is a risk of transmission of an infectious disease as well. In one particular series that we had, we found a 1 in 16 risk of erroneous transfusion of autologous blood. People were clearly not being more careful with autologous than they were with allogeneic blood.

[Slide]

This is similar to findings of others. The American Association of Blood Banks found that 1.2 percent of facilities had one or more erroneous autologous transfusions during a particular year. Bear in mind, this is 1.2 percent of facilities that had at least one event; this is not 1.2 percent of units. And one in five facilities had situations where blood was transfused in the wrong order, that is, autologous was available but allogeneic was used instead.

[Slide]

CAP had a similar survey with similar findings.

[Slide]

I would also like to just mention briefly testing errors. In this particular case I am really thinking of infectious disease testing, although some of the same things can apply to hematology testing as well. Testing errors can occur in one of three phases of testing. The first is the pre-analytic phase, that is, at the time of phlebotomy: sample collection, labeling, handling and preparation. These errors still occur; a major blood center in New York just had one of these. We are still not good, at the time of collecting from donors, at keeping the samples straight.

[Slide]

There can also be errors at the analytic phase. In our experience these are much more frequent when there is not automated testing, as you would expect. We previously published a series in which one in 20,000 units had errors in testing. They were, however, mostly at the post-analytic phase, that is, at the time of reporting. Many blood centers have automated reporting, so that the results are transmitted electronically back to the computer system, but many hospitals that do their own collections do not have such automated reporting. So the results can come back by fax, which is a problem, or there is a transcription step. We have, in fact, observed that that transcription step is a significant opportunity for error. That does, in fact, contribute to the risk of transmissible diseases in blood. At one point several years ago we actually reported that we had more HIV transmissions related to error than we did to window period transmissions. This has been a significant problem, so I just wanted to mention it as something to be aware of. Thank you very much.

DR. LEWIS: Thank you, Dr. Linden. Are there any questions that anyone would like to present to either Dr. Linden or Dr. Sazama? Yes? Thank you for coming to the mike, and if you will identify yourself I would appreciate it.

DR. MCCARTHY: Leo McCarthy. I just wanted to ask a couple of questions first of all to Kathleen. When you went through your data for this, touching on what Dr. Linden said, did you find the incidence higher in smaller facilities than in larger facilities? That is the first question.

DR. SAZAMA: Leo, as you may or may not know, when the data are reported to the FDA they don't necessarily release the name of the institutions from which the data comes. I made no effort to try to tabulate that at all.

DR. MCCARTHY: The other question is about the fatalities. Were you able to determine what number of those occurred in the operating theater, where anesthesia was involved? That is a point in question, at least for me, because we find, where I am from, that we have an awful lot of errors in the OR by our colleagues who are giving the blood during surgery, and I am not sure I have ever seen that data.

DR. SAZAMA: In the original report I published, the ten-year summary, some data were reported in terms of the likelihood of that occurring in the OR and ER. Again, these are both places where the normal systems for transfusion often are modified or altered in some way. I don't recall the data from the second decade. I have it collected; I just didn't choose to report it here.

DR. MCCARTHY: Well, we all know the CAP survey years ago showed the tremendous error in armbands. Then a question correlated to that would be how many of these deaths were actually in children where kids can't be asked their blood type, and so forth and so on? Was that data in some of your original twenty-year --

DR. SAZAMA: I don't think I have ever published the data, but the number of fatalities in children is relatively small. The ABO deaths are a vanishingly small percentage of those. Several of the others, the non-serologic hemolysis cases, involved pediatric patients, the Pedia lamp use and so forth. But for the ABO deaths there were a few, a handful.

DR. MCCARTHY: Last and not least, since in the last five years I suspect you have at least glanced at the data that has been reported, is it basically the same now, or has it gotten a little better now that error and error reporting have become such a high-profile objective?

DR. SAZAMA: Well, in terms of the numbers of reports, it has been relatively stable. I think there has been a slight increase in recent years. I think there has been a slight change, although the numbers are so small you can't do any sort of meaningful statistics, but I do think what happens is the Hawthorne effect, which is when we call attention to things and start looking at them we start reporting them at a higher rate, and I think there has been a slight shift in terms of the nature of the deaths that are being reported.

But it still is just a very small percentage of the overall picture, and I think it is important to remember and pay attention to Jeanne's data, because I think what is really helpful to us is to understand the near-misses and learn from those data: where in the system were the interventions applied, and where did they actually help to avert errors? I think that is another fruitful area for discussion. I only wanted to be sure we looked at the fatality data, one, because the FDA has diligently collected it and, number two, because it does give us at least some picture of how death occurs when only life is intended.

DR. LEWIS: Before we move to other sources of transfusion error, let's take a break. We have coffee and food outside in the lobby. Let me remind you that they don't want food in this particular auditorium. Thank you.

[Brief recess]

DR. LEWIS: We are fortunate to have Dr. Michael Busch present some new data. I understand that he has been honing his presentation up until yesterday, when he was working on it in Atlanta. He is a very busy person. Those of us at the FDA appreciate a lot of his input on infectious disease risks with transfusion and blood donors. We see him frequently providing information to our advisory committees. To give testament to his energy and focus, he is the vice president of research at Blood Systems, Inc., Scottsdale, Arizona, and Blood Centers of the Pacific, San Francisco, California. He is an adjunct professor in the department of laboratory medicine at the University of California, San Francisco, and acting president of Blood Systems Foundation, Scottsdale, Arizona. We are looking forward to Error Surveillance in Blood Donor Infectious Disease Screening. Dr. Busch?


Error Surveillance in Blood Donor Infectious Disease Testing

DR. BUSCH: Thanks very much.

[Slide]

Actually, when this meeting was first scheduled for last November, I think, and I was asked to present, I wasn't very enthusiastic about presenting. I have done some work, as I will show you, on measurement of error rates in blood screening, but I didn't feel we were really on top of it. But over the course of the three or four months since the meeting was postponed because of 9/11, we have developed something I am very excited about, which I really think is a new strategy to systematically detect errors in blood screening; track those errors; respond to them; implement system improvements to further reduce blood screening error rates; and also quantify the impact of these errors in terms of risk.

This was the result of a collaboration and discussions, actually over the holidays, with Sue Stramer and Roger Dodd. There has been a lot of work over the last few months both by Susan and Roger at the Red Cross, Sally Caglioti and Joan McAuley at Blood Systems Laboratory, and Leslie Tobler and my group at BCP.

This is data, I should emphasize, that is from large blood screening systems. Some of the earlier comments about test errors, for example, with ABO typing attributed to blood banks--I think it is important to recognize those are all transfusion service errors, hospital-based blood distribution sites. Obviously, there are smaller blood banks and transfusion services or hospital-based collection systems, and I think it probably would be inappropriate to directly extrapolate some of the findings I will present to those smaller systems. These are the large blood screening systems.

This program is also coordinated through the NHLBI REDS NAT study group, which is a collaborative study group under NHLBI support that is doing a variety of studies related to the implementation of nucleic acid testing.

[Slide]

Actually, I want to start by showing a couple of slides that I pulled from the CBER web site, which Kathy Zoon presented just a month or so ago at a CBER program, I think the New Orleans quality program. At first glance it presents a bad story. In the orange bars you are seeing the number of blood product deviation reports attributed to licensed blood banks versus the unlicensed, sort of hospital collection facilities, versus transfusion services, versus plasma centers over time. You see this horrible increase in the number of reported blood bank-related problems.

[Slide]

However, on further analysis you see that almost all of these are due to what are considered donor suitability problems, post-donation information reports and recall activities that are really not, to my mind, the critical events in terms of blood risk. Things like adding vCJD deferrals: when donors come back and acknowledge a history of having been in Britain, it triggers a formal notification and recall. Obviously, a lot of this is important, but the real emphasis is down here, in the issues of labeling and testing, which actually have all declined in parallel with this increased overall reporting driven by the post-donation information report activities.

[Slide]

This is also a slide that Kathy didn't show, but it is data that we have generated through a variety of studies, mostly NIH-funded NHLBI programs, that have monitored the risk of these major agents over time. They show the dramatic logarithmic reduction in the risks of the major agents as a consequence of improved donor selection and, particularly, the development and implementation of very high sensitivity assays.

Most recently, as you all know, over the last several years we have introduced nucleic acid testing into routine screening for HIV and HCV, which has resulted in another almost log reduction in HCV and a marginal, sort of incremental reduction in HIV risk. We are now talking about risks of blood transfusions in the one in two million range for these major agents. So really a dramatic, dramatic success that contrasts with the errors you heard about earlier in terms of in-hospital transfusion errors.

[Slide]

In terms of the risk, for a while we have realized that the residual risk of the screened agents can be broken into four major categories. A lot of focus has appropriately been on window period reduction; there is concern over viral variants like group O HIV; and there is the possibility, which we have now confirmed in rare cases, of people who don't form serologic responses and can be chronic carriers. But relevant to this discussion, we have realized for a while that testing errors are one of the four major sources of risk.

[Slide]

When we talk about testing errors, as Jeanne Linden pointed out, a lab testing error can, in a sense, be narrowly defined, but the truth is that there is a whole chain of events related to how a donor gets tested. There is the collection and labeling of the tubes and the blood bag, all the way through the processing of the sample; the reagents and their manufacture; the performance of the assay; how the results are interpreted, manually or automatically; and how those results are eventually transferred into a computer system that enables labeling and release of the blood. What we need to try to do is to monitor this entire process to the extent possible.

[Slide]

We did a study, as I mentioned, about four or five years ago, published in 2000, that I think for the first time offered a strategy to measure routine error rates. Obviously, one can do proficiency surveys and have coded samples coming from CAP or others tested in blood bank labs, but those are always handled specially and really don't monitor the whole chain of the process. They simply evaluate whether the technicians are proficient at running a series of control samples.

Rather than that narrow approach, what we thought of was an approach that could actually track samples that were coming to the labs routinely, in essence monitoring the whole process. The approach that we took in this study was to look at a large database of over five million donations by one and a half million donors. We asked how many times were there donors who tested confirmed antibody positive and then gave a subsequent donation. You would say how could that be? These people should be deferred. Well, sometimes allo donors do come back and give again even though they should have self-deferred and that is not picked up and the sample goes through. The vast majority of allo's do get interdicted. They don't come back. They are notified or, if they do come back, they are not allowed to give.

What we did here, in order to make an informative analysis, was to include autologous donors, who in most centers are allowed to give again if they are infected with HCV; in a number of centers even HIV-infected individuals are allowed to give for themselves. What we ended up being able to look at was over 2000 donations that were given subsequent to a confirmed antibody positive donation by about 1200 donors. Some of these donors actually gave auto donations twice subsequent to their first confirmed positive.
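
The underlying query is straightforward. Here is a minimal sketch of that logic in Python with pandas; the column names and the toy data are hypothetical, not from the study's database.

```python
# Sketch: find donations given AFTER a donor's first confirmed-positive donation,
# then flag any whose antibody screen was non-reactive (candidate testing errors).
import pandas as pd

donations = pd.DataFrame({
    "donor_id":           [101, 101, 102, 102, 102],
    "draw_date":          pd.to_datetime(["2001-01-05", "2001-06-20",
                                          "2000-03-01", "2000-09-15", "2001-02-10"]),
    "confirmed_positive": [True, False, True, True, False],
    "screen_reactive":    [True, False, True, True, True],
})

# Date of each donor's first confirmed-positive donation.
first_pos = (donations[donations["confirmed_positive"]]
             .groupby("donor_id")["draw_date"].min()
             .rename("first_pos_date")
             .reset_index())

follow_ups = donations.merge(first_pos, on="donor_id")
follow_ups = follow_ups[follow_ups["draw_date"] > follow_ups["first_pos_date"]]

# Each non-reactive follow-up from a known-infected donor is investigated by hand;
# most turn out to be borderline samples rather than frank procedural errors.
candidate_errors = follow_ups[~follow_ups["screen_reactive"]]
print(len(follow_ups), len(candidate_errors))   # 3 follow-ups, 1 candidate error
```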

In this analysis we identified initially 11 cases that were negative on a follow-up sample EIA. So the computer system indicated that they were non-reactive. When we investigated those cases, and I will illustrate this, ten of them were really not what we would consider frank procedural test errors. They were borderline reactive samples picked up, in the first place, just over the cut-off and the subsequent donation was just under the cut-off. These were all associated with earlier generation relatively insensitive assays.

[Slide]

This just summarizes that of these 1224 follow-up donations, 19 donors were HIV-infected and gave 33 subsequent units, all of which were reactive on the subsequent donation. Most of these, as you see, were HCV, because it is a much more prevalent agent. So we had 1800 subsequent donations by HCV-infected donors, and nine of them on follow-up actually tested antibody negative: an overall initial error rate of 0.5 percent.

[Slide]

But then we looked at these again, as illustrated here for these donors. These were HTLV-II infected donors who were screened by HTLV-I assays. You can see in these two examples autologous donors giving only a couple of weeks apart, and these donations were initially detected based on a very borderline reactive result, a signal-to-cutoff just over one. In the follow-up sample, although technically non-reactive, the result was just below the cutoff. So this is not really a test error; this was a bad test, and it was subsequently fixed.

[Slide]

For HCV, just to give some other examples, it is similar. In the first example an allo donor gave about four months apart; the first donation was reactive and the second was just under the cutoff.

Here are a few more autos. In this case the initial donation was picked up as reactive and repeated reactive. The follow-up was initially reactive but the duplicate repeats were just under the cutoff. Again, these are HCV 2.0 results; these were earlier generation assays by Ortho. We are now on the 3.0 assay, and all of these individuals would be off scale on signal-to-cutoff.

But the important case is down here on the bottom. This was a donor who was an autologous donor, giving serially over the course of a few months who had two sequential blazing reactive donations followed by one that was flat negative. We got this donor back. We were able to confirm the infection and there is no question this was simply a test error.

[Slide]

If we then look at that test error rate, which from that one case is 0.05 percent, in this paper we quantified the impact of error by multiplying the error rate by the prevalence of infected donations coming through the allogeneic donor pool. From that we could calculate the risk per ten million donations, or roughly the number per year in the U.S. of predicted infectious units that would be released erroneously due to the failure to accurately test a prevalent infection.
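
A minimal sketch of that arithmetic in Python. The prevalence figure below is a hypothetical placeholder, not a number from the talk; the error rate is the 0.05 percent quoted above.

```python
# Risk of erroneous release = testing error rate x prevalence of infected donations.
error_rate = 0.0005       # ~0.05%: seropositive donations missed by the screening test
prevalence = 1 / 5_000    # hypothetical placeholder: infected donations per allo donation

risk_per_donation = error_rate * prevalence
print(f"predicted hot units per 10 million donations: {risk_per_donation * 1e7:.1f}")

# Multiplying by the annual U.S. screening volume (roughly 12-14 million donations)
# gives an order-of-magnitude estimate of erroneously released units per year.
annual_donations = 13e6
print(f"predicted hot units per year: {risk_per_donation * annual_donations:.1f}")
```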

[Slide]

Then we took those numbers and plugged them into our overall risk compilation, which compiles the estimates of risk for each agent from window period donations, from the variants, from atypical or immunosilent carriers, and then totals out the risk. What you can see is that there are some theoretical risks due to erroneous release of positive units, but prior to NAT they really only accounted for a small fraction of the risk, about two to ten percent depending on the agent. The bulk of the risk was the window period. Of course, now we have introduced NAT, and we have introduced NAT predominantly to catch these window phase infections. So one might be concerned that errors have become a relatively more important source of risk now that we have closed down the window period risk.

[Slide]

Actually, what we have come to understand is that a parallel NAT and serologic screening system, such as now exists, is an extraordinarily efficient, redundant testing strategy that detects these errors and prevents the release of any product that might be erroneously tested on one system or the other. Moreover, it has given us a strategy to actually monitor the errors themselves. If we identify discordant cases, where there is a NAT positive, antibody negative donation, and we investigate what initially might be interpreted as a NAT yield case, a viremic seronegative, by routinely retesting that sample for antibody, we can detect false-negative serology errors. All the systems routinely do this. So we actually have a comprehensive system to detect all false-negative serology errors for HIV and HCV through investigation of NAT positive, antibody negative cases.

On the other side of the coin, we also identify donations that are confirmed seropositive that are negative for the RNA assays. Our expectation is that most of these are individuals who have resolved HCV infections or, in the case of HIV, we know that there is a problem with false-positive Western Blots. So, our assumption was that a lot of these were simply expected, resolved infections.

By investigating these samples, performing individual donation NAT on samples that are antibody positive but negative by minipool NAT, we are able to detect the viremic subset of these samples that were missed by minipool NAT. Then, through study of those samples, we can determine whether they were missed due to an erroneous performance of the minipool NAT system or due to a low viral load. By doing the work not only to catch these numerators but to quantify the denominators corresponding to these cases, we are able to actually quantify the rates of these events and also initiate corrective action to reduce the probability that they would happen in the future.
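
A minimal sketch of that classification logic, with hypothetical function and parameter names (the viral-load threshold is an assumed, illustrative value, not one quoted in the talk):

```python
# Classify discordant NAT/serology screening results, following the surveillance
# logic described above.

def classify_discordant(nat_pos, antibody_pos,
                        antibody_on_retest=False, id_nat_viral_load=None,
                        minipool_detection_limit=5000.0):
    """id_nat_viral_load: copies/mL found on individual-donation NAT of an
    antibody+/minipool-NAT- sample; minipool_detection_limit: assumed level
    above which the minipool NAT should have detected the virus."""
    if nat_pos and not antibody_pos:
        # Presumptive NAT yield case: retest the sample for antibody.
        if antibody_on_retest:
            return "false-negative serology error"
        return "true window-period (NAT yield) donation"
    if antibody_pos and not nat_pos:
        if not id_nat_viral_load:
            return "resolved infection or false-positive serology"
        if id_nat_viral_load >= minipool_detection_limit:
            return "minipool NAT testing error"
        return "low viral load below minipool sensitivity"
    return "concordant result"

print(classify_discordant(nat_pos=True, antibody_pos=False, antibody_on_retest=True))
print(classify_discordant(nat_pos=False, antibody_pos=True, id_nat_viral_load=7000))
```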

[Slide]

What I will do now is first present the experience with investigating the NAT yield cases to identify the serologic errors and then go to the use of the serology in NAT discordant cases to find NAT errors.

This first example is from a large blood system that over three years has tested 14.3 million donations by both NAT and serology. In the first year of experience, 2.3 million donations, this system identified 10 HCV donations that appeared to be RNA positive, antibody negative. However, those presumptive yield cases were retested serologically on the alternate tube source (in this case they actually tested the plasma components), and three of those presumptive yield cases were actually fully antibody positive. So, clearly, there had been an error in performing the serologic testing.

Importantly, we derived the relevant denominator for these three cases, and that denominator is the number of viremic RIBA-positive donations that were screened during the corresponding period of time. In other words, from how many viremic, antibody-positive donations screened were these three samples, which the antibody test failed to detect, observed? That yields a rate of 0.13 percent during this first period of time.

Over the subsequent two years of testing over 12 million donations in parallel, this system identified 59 initial apparent NAT yield cases, all of which were retested serologically and only one additional case has been observed that was antibody positive. So, a substantial reduction in the resulting rate to 0.12 percent.

[Slide]

A second system screened 3.5 million donations over this approximately three-year period and identified 14 NAT yield cases, none of which proved antibody positive on retest. So we have a zero numerator; 2,700 RIBA-confirmed positive, NAT-positive donations were screened during this period through the routine system, so our denominator is that number, with zero events.

Summing the two systems, in almost 18 million donations tested, 84 were viremic seronegatives. Four of these were found on further investigation to be false-negative antibody errors, for an overall rate of 0.03 percent relative to the viremic, antibody-positive donations screened.

[Slide]

In terms of HIV, the first system, 14.3 million, had six HIV yield cases, none of which were antibody positive on retest. During this period they observed 454 confirmed blot-positive viremic donations. So, zero out of 454 is the false-negative error rate for the HIV antibody test system.

In the second system, 3.5 million donations, there were two NAT yield cases. Neither of them was antibody positive on retest. However, there was one sample that was negative on the licensed EIA test of record, and repeated negative on the test of record, but was found to be strongly reactive on the alternate licensed HIV antibody screening test, which has better window-phase sensitivity. This was a very early seroconverting donor who was missed because of the test employed, not due to a performance system problem, so we didn't consider this a procedural test error.

[Slide]

So, in sum, for HIV, over 18 million donations there were 8 NAT yield cases, and none of them were determined to be serologic test errors. So the rate of error in HIV antibody screening was zero of 580.

[Slide]

Moving on to the question of how good the NAT system is and how many errors may occur on the RNA testing side, we kind of stumbled onto this. It started out with our interest in how frequently there were low-level viremic samples among persons who were confirmed antibody positive but NAT negative, and whether we should be repeating RIBA-positive, minipool NAT-negative samples by individual donation NAT for donor counseling purposes.

The first system investigating this question initially identified 906 HCV confirmed RIBA-positive donations that had tested minipool NAT negative. During the early phase of testing by this system, a lot of the confirmed positive or EIA reactive samples were tested individually before pooling was begun. So for our purposes we restricted our focus to the 357 HCV confirmed positive donations that had been screened in minipools and had tested negative by minipool NAT. Of those, 356 had samples available for individual donation PCR analysis, and seven samples were found to be PCR positive.

This is not a surprise, but what we expected was that these would be very low viral load samples. In these chronically infected seropositive people the viral load had been suppressed to a level that couldn't be detected by the pooled NAT, given the dilution factor.

[Slide]

But when we looked at the viral load in these cases, six of them were as expected, low viral load samples that were missed because of the sensitivity of the system in the context of the small pool. But one of these samples was blazing viremic, over 7000 copies. Not only was it readily detected and quantified undiluted, but had ample viral load in 1:16 dilution to be detected by the pooled NAT. So we interpreted this case as a false-negative testing error by the minipool NAT system.

[Slide]

This is a summary of that system's experience: 357 cases investigated; seven viremics identified, one of which was a high-level viremic that should have been detected by minipool NAT but was not. The relevant denominator was estimated as the corresponding number of antibody-positive viremic samples detectable by minipool NAT, about 1,400, which enabled quantification of the error rate due to NAT.

The second system, we looked at 177 samples that were RIBA positive and minipool NAT negative, and 11 of these, on duplicate retest by individual donation TMA, were found to be positive for RNA. Nine of these 11 were only positive on one of the two duplicate repeat individual tests, extremely low-level viremics, and the other two had low viral load also and were negative at 1:16. So none of these were test errors; they were very low viral load, seropositive carriers. So, this becomes zero over 578 for the error rate estimate.

In terms of HCV, this is something that, as I mentioned, we kind of stumbled into, and we began to realize that it is an approach to detect NAT errors. Right now we have over 2400 samples from the larger system in the process of having individual donation NAT testing done, samples that over the last several years have been confirmed RIBA positive but minipool NAT negative. We are going to be dramatically increasing this data set over the next several months.

[Slide]

With HIV though it is routine that when we get a donor who is HIV Western Blot positive but tests negative through routine minipool NAT, we investigate those cases through performance of individual donation NAT. One large system identified, over the course of about two and a half or three years of testing by NAT, 31 subjects who were confirmed Western Blot positive but negative by minipool NAT.

[Slide]

This summarizes the first 13 of those cases. It turns out that the vast majority of these are false-positive Western Blots. The way we know that is by the pattern of reactivity. It is this incomplete band pattern, lacking a p31 band. In addition, all these cases were borderline reactive on the screening EIA and were negative on repeat PCR both on the index donation and follow-up samples. There was a small number of cases that were truly seropositive individuals--the gold regions, and in some of those cases even individual donation NAT could not detect virus. In the few where we could detect virus, the viral load was very low copy number so the negative minipool was attributable to the low viral load and the dilution factor of the testing.

[Slide]

In this final series of these cases, the same story; a couple of low viral load cases, but the vast majority of these HIV-blot positive, RNA-negative samples represent false-positive Western Blots.

In sum, for the two systems we investigated 32 cases of Western Blot positive but minipool NAT negative findings; five were found to be low-level viremic. None of these had a viral load that would be consistent with a minipool NAT testing error. So, for HIV we identified zero out of 580 viremic donors in whom the minipool NAT failed to detect viremia that should have been detected, given the system.

[Slide]

This sums up sort of everything, combining the results from both systems and both viruses. The serology errors were identified through investigation of 91 NAT positive, antibody negative cases, the apparent NAT yield cases. Through investigation of those cases we found four examples where the donor was, in fact, antibody positive for HCV and should have been detected by the antibody test, but was found only as a result of the viremia in the absence of antibody reactivity. The relevant denominator is 14,000-plus, which gives an error rate of 0.028 percent with a confidence bound.

Similarly, in terms of NAT errors we had 566 investigated seropositive, NAT negatives. We had one apparent error in NAT with a 2500 denominator for this error rate. Again, this number will increase dramatically. There are about 2400 samples in the pipeline for testing now to better quantify NAT error.

[Slide]

In terms of the implications of these errors: in order for these errors to result in the release of a positive unit, you have to understand the rate of positive donations, the prevalence of viremic, infected donations entering the donor pool at the testing site. Then, in order for one of these units to get through, it needs to test erroneously negative on both serology and NAT. So the prediction for the number of hot units that would get out with combined false-negative results on both systems is simply the product of these rates, 63 per million times this fraction times this fraction. This is now expressed as the risk per billion. You can see that with HIV the probability that an infected donation would get through the system is something like six per trillion. For HCV, because the prevalence coming in is substantially higher, the risk becomes about one per ten billion.
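
The structure of that calculation, as a minimal Python sketch. The inputs reuse figures quoted in this talk (a prevalence of 63 per million, a 0.028 percent serology error rate, and roughly 1 in 2,500 for minipool NAT), but the slide's exact per-agent assignments are not reproduced here, so treat the numeric output as illustrative only.

```python
# Probability that an infected donation is released = prevalence of infected donations
# x P(false-negative serology) x P(false-negative minipool NAT), since BOTH independent
# systems must fail on the same donation.
prevalence_viremic   = 63e-6     # infected donations per donation screened (figure from the talk)
p_serology_error     = 0.00028   # false-negative serology rate per seropositive donation
p_minipool_nat_error = 1 / 2500  # false-negative minipool NAT rate per detectable viremic donation

p_release = prevalence_viremic * p_serology_error * p_minipool_nat_error
print(f"predicted hot units per billion donations: {p_release * 1e9:.4f}")
print(f"about 1 in {1 / p_release:,.0f} donations")
```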

[Slide]

Just to take it to the last step, this is with the current systems in the large programs. Importantly, we have made a number of advances, and are expecting further advances, in the capacity of the automated testing systems in blood centers. Sally Caglioti and her associates at Blood Systems Laboratory have done large-scale evaluations of both existing and newer systems. Blood Systems Laboratory tests over one million donations per year. In this analysis they evaluated two test-of-record systems: the earlier system that was in place in the mid-1990s, which combined the Abbott Commander and the Ortho Summit. These were quite manual systems with many, many manual steps, as I will show you. Then we switched to the Ortho Summit processor, which is a more automated system for six assays. We have also done large trials of the Abbott PRISM system.

[Slide]

What Sally did--I am sorry, this isn't too clear but I will just make a few points--is to compare the number of manual events per day as we have moved from that old, very manual system to an intermediate automated system and what we expect as we move to a fully automated system. Basically, this is the number of manual events in terms of documentation, review, the testing process, label checks and the totals.

In terms of totals, with the earlier system there were 82,000 manual events per day. With the current system it is 52,000, and with the new fully automated platform it will be fewer than 60 manual events per day. Those manual steps are where these errors were traced to: I didn't go through it, but when these errors were investigated, all were attributed to manual reagent addition or sample manipulation steps.

[Slide]

The other points here are just the number of deviations and the operational complexity in terms of the number of SOPs, the number of instruments, the number of trained technicians. It is a similar story, dramatic reductions in the number of people involved and in the number of instruments as we move to the more automated platforms.

[Slide]

In conclusion, I think we have come to appreciate that the existence of parallel NAT and serologic screening not only offers the ability to detect errors and prevent the affected units from being released, but also the ability to quantitate errors. I really think it will serve as a systematic error detection system that can be used to oversee error rates, track those rates, and institute corrective actions to reduce them. The error rates are extremely low, although, depending on your perspective, you could argue that missing two in 10,000 positive samples, with incorrect results, is a little disturbing. It is a lot better than all other testing because of the automation. But, again, for these errors to be significant they have to happen on infected donations, and when you run the numbers it turns out that the probability that an infected donation will test negative on both systems is well less than one per billion.

Finally, we do think that enhancements in automation that are soon to be available will further reduce this, and we are excited to be able to track the impact of those further reductions. Thank you.

DR. LEWIS: Thank you very much, Dr. Busch. We will hold questions, if you would, until after Dr. Kaplan's presentation and then we will give everyone an opportunity to discuss this.

As the effects of transfusion errors came to my attention and I was starting to learn something about this, I first learned of the MERS-TM system from some of Dr. Kaplan's publications. I subsequently became involved in a patient safety task force, as did Jim Battles. In all the discussions in that task force about error reporting systems, when a particular principle would arise, the first thing that would come to my mind was, well, MERS-TM has that in its system; to the point that when someone would point out a particular characteristic, they would then look at either Jim Battles or myself and say, "well, MERS-TM has that!" So, it is truly a model reporting system and a lot of information can come from it, and I am looking forward to hearing some of the information that Dr. Kaplan can give us today.

Dr. Kaplan is a professor and director of clinical pathology at the Department of Pathology, College of Physicians and Surgeons, Columbia University. Dr. Kaplan?


Major Causes of Transfusion Errors

DR. KAPLAN: Good morning.

[Slide]

Those of you who know me know that speaking about MERS is something I do all the time. So the good news is that instead of talking about it, I will have the pleasure of listening with you to Jeannie Callum's presentation later. Dr. Callum has been using MERS over a period of time, and we will see the kinds of information that she and her colleagues have gathered and the impact of MERS on their transfusion experience.

I am going to do some of the other things I do all the time. I am going to tell you about two books to read. One is a book called "Managing the Unexpected" by Karl Weick and Kathleen Sutcliffe, and the other is Jim Reason's "Managing the Risks of Organizational Accidents." Much of what I am going to talk about today draws from those two publications. Since Dr. Bogner is here, I do want to mention "Human Error in Medicine," one of the first key books that introduced me to the field, which is also available.

Since today is Valentine's Day, I think it is appropriate to recognize that the work done on MERS-TM has been supported by a grant from the National Heart, Lung and Blood Institute.

[Slide]

Major causes of error is the caption. If you can't read the header, it is "the watchdog group promotes strategy to end medical errors." This is from The Washington Post, "You will be happy to know we have new procedures that prevent mistakes, Mrs. Brown." "My name's Smith." Even in the public's eye this idea of identification is appropriately a critical one.

[Slide]

Jim Reason has pointed out that unexpected events or surprises are most likely to occur at the human-system interface, and he suggests three questions to assess where the unforeseen events would surface: the hands-on question; the criticality question; and the frequency question.

[Slide]

The hands-on question, what activities involve the most direct human contact with the system and thus offer the greatest opportunity for human decisions or actions to have an immediate adverse effect on the system?

[Slide]

This is just looking at a set of 423 events with multiple causes. The yellow piece of the pie is human factor related; the orange, technical; and the big red one, organizational. It is the interface between human error and what Reason refers to as latent error, the resident pathogens in the system, whether organizational or technical, where the surprises occur. That speaks to the opportunity addressed by the hands-on question.

[Slide]

The criticality question, the second question, what activities, if performed less than adequately, pose the greatest threat to the well-being of the system?

[Slide]

And, the frequency question, how often are these activities performed in the day-to-day operations of the system as a whole?

[Slide]

Reason says, essentially, that an activity scoring high on all three questions is more likely to be vulnerable to unexpected events. Much of what we do in the hospital in particular, where there is far less automation in transfusion, is where we see this.

[Slide]

Because Jeanne Linden's data was so well presented and MERS will be presented later, I thought I would just use SHOT data and make the point that this is a universal problem in all transfusion systems. I have a couple of slides, just quickly, on the Serious Hazards of Transfusion scheme in the U.K.

There were 366 reports over 24 months. This is a voluntary system, and 191 events, half of them, were due to wrong blood to the patient, multiple errors of identification, often beginning with blood pickup from the lab. There were 22 deaths, three due to ABO incompatibility. There were 62 ABO incompatible transfusions in this set.

[Slide]

This slide shows that there are really two data sets here. The one to the left, starting with 59 percent, is the distribution of incorrect blood components transfused. So the unit actually got through to the patient, and these are what you would expect, what everybody has already talked about: patient identification errors and the missed opportunity at the bedside check to trap the wrong unit.

The right-hand set of data, starting with 64 percent, is phlebotomy and request-generation errors in a set of near-miss data. All of these were detected and trapped but, although they were trapped, there was significant risk. The safest patient is obviously the one where there is something visibly wrong with the sample, or the patient has been seen before and there is a discrepancy. The very dangerous ones are those that are properly labeled but represent a one-time misdraw. I am going to come back to this a bit.

[Slide]

There are three distinct error types. They are largely predictable and happen in three different situations: skill-based, rule-based and knowledge-based. The skill-based error occurs where you know what you are doing very well. Rule-based, you think you know what you are doing but apply the wrong rule. Knowledge-based is where you don't know what you are doing. I will expand just a little bit on those.

[Slide]

The skill-based error is failure in the performance of a routine task that normally requires little conscious effort. You are driving a car and you are carrying on a conversation. Something distracts you when you are parking the car and you leave your keys in the car. This kind of thing happens to the person who is performing an act that they are expert at. They are kind of on automatic pilot. It is a very important ability that we have, the ability not to have to attend to all the details. That is the plus side. The minus side is we can get distracted and run off into a routine but not the routine we intended.

[Slide]

The rule-based error is the failure to carry out a procedure or a protocol correctly, or choosing the wrong rule. Example--you come to a stop sign, and you think it is a four-way stop sign so you presume you can proceed but it is a two-way stop. You apply the wrong rule and somebody coming at right angles might not stop.

[Slide]

Knowledge-based error is the failure to know what to do in a new situation. It is really experiential learning in a sense, problem solving at the conscious level. The driving analogy, again, is a busy intersection with the traffic light not working. So, you proceed a little bit; you check; you proceed a little bit more and get some feedback. It is trial and error learning.

[Slide]

The question is, does practice make perfect? Trainers take a decreased error rate as a measure of increased proficiency, but this is very much dependent on the type of error we are talking about. If you look at the red arrow going up, that is the skill-based error. As you become more knowledgeable and the KB, or knowledge-based, errors go down over time, the skill-based errors go up, so the expert makes more skill-based errors. The beginner makes more knowledge-based errors. The rule-based errors tend to go up and then down again, and are then subsumed predominantly under the skill-based.

[Slide]

If we look at these three distinct error types, we can lump the skill-based and the rule-based together because they have relatively low cognitive error potential. The knowledge-based are the ones with the high cognitive error potential.

[Slide]

This is adapted from Kirwan on risk probability assessment. I created this decision table. You can look at any task and ask: does it involve problem solving, yes or no? If yes, then you drop right down and you see it has a high cognitive error potential. If there is no problem solving associated with it, then the question is, does it require abstract knowledge, or is there some basic theory needed to carry out the activity? If yes, again there is a high cognitive error potential. If it is not problem solving and doesn't have abstract knowledge associated with it, does it have novel aspects that aren't covered in training or the SOP? If so, again, it would be a high cognitive error potential. If, though, it doesn't have the novel aspects, it is a routine procedure; there is an SOP and the SOP is understood. This is skill- or rule-based. This is the automated behavior, a different kind of error with different strategies to correct for it, and retraining would not be effective for this kind of activity.
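
To make that sequence of questions concrete, here is a minimal sketch of the decision logic in Python; the function and argument names are illustrative and not taken from the slide.

# Minimal sketch of the Kirwan-style decision table described above.
# Function and argument names are illustrative, not from the slide itself.
def cognitive_error_potential(involves_problem_solving,
                              requires_abstract_knowledge,
                              has_novel_aspects):
    """Classify a task's cognitive error potential from three yes/no questions."""
    if involves_problem_solving:
        return "high (knowledge-based)"
    if requires_abstract_knowledge:
        return "high (knowledge-based)"
    if has_novel_aspects:
        return "high (knowledge-based)"
    # Routine procedure with an understood SOP: skill/rule-based behavior,
    # where retraining is not the right corrective strategy.
    return "low (skill/rule-based)"

# Generic phlebotomy carried out normally, as in the example that follows:
print(cognitive_error_potential(False, False, False))  # low (skill/rule-based)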

[Slide]

If we take that cognitive error--I guess this is semi-readable, but all I did was take kind of generic phlebotomy--you start with getting the requisition for the collection, identifying the patient, the name, the ID number, etc; verifying the patient by the information on the wristband; collecting the sample; labeling it appropriately and verifying that the wristband agrees with what is on the label--our standard mantra.

By saying that, I am really saying it is not problem solving to the actor who is doing this. It is not abstract knowledge, not novel. It is a routine. There is a standard SOP that is understood. This is a skill-based activity. It can become a problem-solving activity if somebody cuts the wristband. But as it is carried out normally, people are doing a very well-established, known routine.

[Slide]

Again, as Reason and others have pointed out, the correct performance and error are really two sides of the same coin. The pro side is we can act automatically without moment to moment control. The down side of that is that we are vulnerable to absent-minded slips of action or distraction. If we are interrupted we may pick up the sequence in the wrong place.

Long-term memory--mini-theories allow us to make sense of the world. That is a good thing, but we are susceptible to confirmation bias and we sometimes ignore contradictory signals because we are locked on a pattern. We know what we are looking at and we carry forward, and we blank out. That is the down side of our very strong pattern recognition capabilities.

[Slide]

In the context of the discussion we have had this morning and what I have talked about, consider the level of awareness of medication errors, which are the big ones in hospitals, versus specimen collection errors, on the part of CEOs and heads of nursing. In a telephone survey done by the College of American Pathologists in '99, about 91 percent of the CEOs and nursing heads were very familiar with and felt they knew what they needed to know about the rates of medication errors. About 58 percent of CEOs and about 38 percent of nursing heads thought they knew what kind of phlebotomy collection errors had occurred in their hospitals in the last four months. Remember, we are narrowly focusing on transfusion, but every time we draw a blood sample to determine something about a patient, that labeling is critical. The blood bank has the dubious distinction of being able to have some feel for the rate of that collection error.

[Slide]

This is from the CBBS e-network forum toward the end of last year. I mentioned to Ira Schulman, who runs it, how helpful it is. This represented a communication on the network forum about a patient's full name, medical record number, time and date of phlebotomy, and the initials of the phlebotomist. Those were the things that had to be filled out on a sample for their blood bank, and five percent of the samples they received were missing one of these elements. Another correspondent said 3.0 to 3.5 percent were rejected because of something missing. Another said that if they included all those elements it would be greater than a 5 percent rejection rate. One site had been at a 2 percent level; when they moved to more focused patient care and switched from hospital phlebotomists to nurses drawing the samples, they went up to a 7 percent error rate. With very rigid adherence to labeling protocols, they were finally able to drive that back down to what they considered a floor rejection rate of around 2 percent. There was one reporter who had a 1.4 percent rejection rate.

I want to mention that there is the BEST study, an international study now geared toward establishing what this error rate really is.

[Slide]

A little while ago, I think about a year and a half ago or so, perhaps two years ago, the AABB chat room had a series of communications about transfusion specimen rejection rates. I just picked two. Site A had a 2.0 to 2.5 percent rate and they thought that was some kind of floor for them; 0.1 percent misdrawn rate. That is the group in which what is in the tube doesn't match the name on the tube, and there is the potential for ABO incompatibility. Site B had a 1.0 percent rate of rejection, a lower rejection rate, but they had a 1.5 percent or a comparable rate of mislabeled samples.

What is interesting in this chat room discussion is the issue of what you are going to focus on. One said, well, we talk to our nurses about all the error rates, and the argument was that you really don't want to do that; you want to focus them on the misdraws. I think this is a fundamental issue, because any time you have a reporting system one of the problems is the power-to-weight ratio: the information you get versus the lifting you have to do to support that system is critical, and if you get a lot of data coming in, how much of it is important? Where do you say this isn't all that important? There are lots of pressures, obviously, to define what error rate we are going to pay attention to.

[Slide]

I go to the Challenger incident. There, there was the experience of enlarging the definition of acceptable risk. The unexpected became the expected. The first acceptance of deviation was normal heat on the primary O-ring, which caused normal erosion of the primary O-ring. Then there was normal gas blowby, and finally normal gas blowby to the second O-ring, which had heat, and then normal erosion of the second O-ring. So there was a progressive normalization of deviation until there was mission failure.

[Slide]

In terms of O-rings and pre-transfusion specimen labeling policy, a Johns Hopkins study, done in '97 I think, is elegant. They took samples that had been rejected because the necessary elements were not completed correctly, tested them even though they had been rejected, and compared them to the historic record or to subsequent correctly labeled samples. They found that the specimens failing to meet the criteria had a 40-fold greater chance of having a blood group discrepancy.

[Slide]

I think this goes along with what Weick has written about in "Managing the Unexpected," where he talks about high reliability organizations and says a weak signal does not necessarily call for a weak response. So, nothing succeeds like success. In fact, I was going to give the talk that title because I think we are in this situation. The potential liabilities of success are complacency, a temptation to reduce margins of safety, and a drift into automatic processing.

[Slide]

The perception of failure relative to success was well pointed out by Weick when he talks about high reliability organizations, and by Tammoos and others in an article called "Learning from an N of One." In an HRO, a near-miss is seen as a kind of failure revealing potential danger. When you have a high reliability organization you don't have a lot of misses, so when you get one you have to look at it, and you have to use it as a surrogate for a really bad outcome if you are not seeing those, thank goodness. Other organizations see a near-miss as evidence of success. That is a very important difference in mind set.

[Slide]

With apologies to Jeanne, I never update this properly, but it is our famous iceberg model. Our concern is that people focus on the top, the stuff above the waterline, and we tend to ignore the stuff below the waterline. Well, I think it is more dangerous than that. I think the problem with the stuff below the waterline is not just that it isn't recognized.

[Slide]

If we take the standard pyramid and we look at that heavy bottom that we are ignoring, we really aren't ignoring it. It is telling us things are all right.

[Slide]

Look at the numbers: 50 packed red cell units a day transfused, six days a week, is about 15,600 units a year, and I am transfusing 15,599 of them correctly. That is my experience. That is the institutional level. One incorrect unit per year; one ABO incompatibility every two years; a hemolytic transfusion reaction maybe every four years, and obviously there are people on both sides of this curve; one fatality in about 115 years. That is the good news. On an individual level, spread among, let's say, 100 nurses, it would be not once in their career. So the point is that the stuff below the waterline tends to push our consciousness toward the belief that what we are doing is okay.
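
The arithmetic behind those figures can be reproduced roughly as follows; the rates are the ones quoted above, and the per-nurse figure assumes, purely for illustration, an even split of events across 100 nurses.

# Rough reproduction of the arithmetic behind the bottom of the pyramid.
# Rates are those quoted in the talk; derived numbers are approximate.
units_per_year = 50 * 6 * 52              # 50 units/day, 6 days/week -> 15,600/year
incorrect_units_per_year = 1              # one incorrect unit per year (institutional)
correct_units_per_year = units_per_year - incorrect_units_per_year   # 15,599

abo_incompatible_every_years = 2          # one ABO-incompatible transfusion ~every 2 years
fatality_every_years = 115                # one fatality in roughly 115 years

# Spread evenly over a hypothetical staff of 100 nurses, one wrong unit per year
# works out to about one event per nurse per century -- "not in their career."
nurses = 100
years_per_wrong_unit_per_nurse = nurses / incorrect_units_per_year

print(units_per_year, correct_units_per_year, years_per_wrong_unit_per_nurse)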

[Slide]

We talk about special cause and common cause when we talk about system error. Special cause is when you identify something that is an outlier in the system, not designed into the system; it is an assignable cause, and you remediate it by eliminating that cause. With common cause, once you have eliminated all the special causes you have a certain performance in your system that has some statistical predictability, and the performance you get basically reflects the system you designed over time. If you want to remediate that, if you don't like the variability in your system, you redesign the system.

[Slide]

The human is a critical part of the system. I am kind of summarizing now, and there are some analogies. There is an analog of special cause in human error: you can improve technique; you can train people better; you can motivate them better. But let's say you get them to the level where they are very well trained and very well motivated, all the good things, and you still get these random misses--low frequency, not very predictable.

You can't redesign the human; that is where the analogy breaks down. You can't redesign the human the way you can redesign a system, but you can automate intelligently. You can provide mistake-proofing. Both of those will be discussed today. You can provide redundancy, for example, drawing somebody's sample twice before you treat them as anything other than a group O. Thank you.

DR. LEWIS: Are there any questions for Dr. Busch or Dr. Kaplan? We have certainly heard a lot of information this morning, haven't we?

If there are no specific questions, let me thank both of you again for excellent presentations. I am excited about what we have heard today. Hopefully, as we go back to our jobs we can take some information back to apply.

We are going to take a break for lunch right now. There is a cafeteria up the stairs. That would certainly be the most convenient option for lunch and they are relatively quick. I will see you all back here at 12:45. Thank you.

[Whereupon, at 11:40 a.m., the proceedings were recessed for lunch, to resume at 12:45 p.m.]


AFTERNOON SESSION

DR. LEWIS: Without any extensive introduction, Dr. Linden, would you take this afternoon's session?

DR. LINDEN: Thank you, Dr. Lewis. This afternoon we are going to be deviating a little bit from the published schedule. We are going to be adding in one additional talk, and we will just be running a little bit later in the afternoon. You have heard one of the speakers already, Dr. Kathleen Sazama, and she is going to be discussing a joint initiative between the American Organization of Nurse Executives and the American Society for Clinical Pathology, entitled Nurses, Pathologists, and the Laboratory: Working Together Outside the Blood Bank Walls. Dr. Sazama?


Addressing Systems
Nurses, Pathologists, and the Laboratory: Working Together Outside the Blood Bank Walls

DR. SAZAMA: Thank you, Dr. Linden.

[Slide]

I am really pleased to be allowed the opportunity to introduce this topic to you. As Jeanne said, this is a collaborative effort and I am here actually as a spokesperson. Dr. Rosalind Antovian was the chair of this working group; she couldn't be here to make the presentation, so I am happy to do it.

The focus of this initiative was really to look at the issue of blood safety from the perspective that zero risk can only be achieved if you also look at the blood transfusion process, not just the blood components. We have heard a lot over the past couple of decades about how we have improved the components themselves. So this initiative was a collaboration, with representatives from the two organizations that you have heard, and it was an effort to define the processes and procedures that represent the complex interplay of activities occurring within the blood bank transfusion service laboratory, at the patient bedside, and at a variety of inter-departmental interface points between those two entities--something we have called the twilight zone, to represent the often ill-defined nature of these activities.

[Slide]

What I am going to show you, and I apologize to you because it is probably not as legible as it should be but there are copies of the handout that will have these flow charts--what the working group did was to define the process in a series of flow charts, such as this one which is the transfusion process from beginning to end.

[Slide]

Starting, for example, with the patient identification process, as you can see on this slide, this is the description of the process. You have seen something very similar to this in Dr. Kaplan's presentation just before lunch. Here is a column that depicts what kind of procedures need to be in place and who would be the intended parties that would be involved in developing those procedures. So, it is not just what has to happen but what is the supporting documentation and who are the obvious players that need to be involved in this.

[Slide]

I will just give you a quick example of a couple of the other flow charts. Here is the routine blood component dispensing process. Sometimes we use the term "issuance" here. Again, the flow chart is here. What are the supporting procedures, all defined here, and then, again, who needs to be involved in developing those procedures so that safe processes can be put in place.

[Slide]

This is the one for transport, an often under-appreciated and very fallible part of the transfusion process in hospitals.

[Slide]

I am going to apologize for this slide. When I copied this from Word, there was something wrong with the top part so just ignore that. In the document that you have you can see that. Again, you have the flow chart of blood administration with the necessary processes and the people who need to be involved.

[Slide]

These are accompanied by an inventory check list that can be completed by any institution that wishes to use this process, and it allows definition of what should be in place; whether you have it in place; who should be participating in it, or who didn't participate in it; then, who has the responsibility for making sure that you are actually following those SOPs.
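
As a purely hypothetical sketch of what one entry in such an inventory checklist might capture, with field names that are illustrative rather than taken from the working group's document:

# Hypothetical sketch of a single inventory checklist entry; field names are
# illustrative and may differ from the published tool.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChecklistItem:
    process_step: str                  # e.g., "Routine blood component dispensing"
    required_procedure: str            # what should be in place (SOP, policy, form)
    in_place: bool                     # whether the institution currently has it
    participants: List[str] = field(default_factory=list)  # who develops/follows it
    responsible_party: str = ""        # who verifies the SOP is actually followed

example = ChecklistItem(
    process_step="Routine blood component dispensing (issuance)",
    required_procedure="SOP for verifying unit, order, and patient identifiers at issue",
    in_place=False,
    participants=["Blood bank staff", "Nursing", "Transport"],
    responsible_party="Transfusion service medical director",
)
print(example)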

[Slide]

This work group felt that this was a comprehensive process that can be applied in any hospital organization or transfusing location, and this is a preliminary report. It is intended to be published very soon, and the idea is to make it a generic tool that is applicable in any site or location where blood transfusion is occurring.

As I said, the two organizations, the ASCP, the American Society of Clinical Pathologists, and the American Organization of Nurse Executives, have worked very closely on this project, and I would invite you to look forward to a publication very soon. Thank you very much.

DR. LINDEN: Thank you, Dr. Sazama. We will take questions for this entire group at the end of this session. The next speaker is going to be Dr. Sue Bogner, who is president and chief scientist at the Institute for the Study of Medical Error, and she has written a seminal work in this area. She is going to be speaking on the contribution to error by system and cultural factors. Dr. Bogner?


The Systems Approach Analysis of Error:
Applications to Transfusion Medicine

DR. BOGNER: Thanks, Jeanne, and for the plug for my book also.

[Slide]

You are probably sleepy from lunch and it is cool here and you are just ready to settle down and take a bit of a snooze; it is hard to stay awake, but what I am going to do is I want to take you on a journey out of the box, to a different way of thinking about error. So, buckle up your seat belts and come on this trip with me, to the systems approach analysis of error, and with the applications to transfusion medicine.

[Slide]

This is where we are going out of the box, that to err is human. This is the name of an IOM report. Is this an innate human characteristic? Do we have some kind of a gene that has us be error prone? Where this phrase comes from is "An Essay on Criticism," written in 1711 by Alexander Pope, and the rest of it is "to err is human, to forgive divine."

[Slide]

But we have somehow taken this and interpreted it as saying that error is a human trait. I looked in the literature and I can't find anything. I mean, error is a behavior, and psychology is the study of behavior. If you look in the psychological literature--nothing; nothing. In the physiological literature you can't find anything saying that we are predisposed to error. But when you look at what we talk about in healthcare, we are always looking at the person.

[Slide]

So, what does this say when we presume human error is the cause of things? Lo and behold, we find out that it indeed is, because we design our measuring instruments to look at what the human does, to have the person report what error they have committed. So we find that, and we direct our activities to change it at the human. And if we look to see who has caused an error, we find out that that "who" is a human, and we have supported our presumption, our hypothesis. Lo and behold, the human is responsible. Rasmussen has a theory on this, or a characterization, that once you have an assumption and you meet it, you don't look any further. Well, I am saying we are going to look further; we are going to look out of this box.

[Slide]

This isn't a brand-new idea. The "organization with a memory" report that was referred to earlier talks about a wider cause of error. Human error may be a factor, but a precipitating factor; it is not the only factor. There are usually deeper systemic factors at work which, if addressed, might have prevented the error or acted as a safety net. So what we need to do is find those out. Don't assume we know them; we need to find out what those factors are and address them.

[Slide]

Lo and behold, we have somebody else talking about a systems approach being needed, and this comes from the IOM report. Although most of the IOM report is directed toward the care provider as the entity causing the error, there is a statement on page 42 that I have almost done in cross-stitch and hung on the wall. That is, errors are due most often to the convergence of multiple contributing factors. We find that across domains, across industries. In healthcare, people tend to be ingenious at keeping errors from happening, but still the constellation converges and it does happen.

Here it is talking about preventing errors and improving safety which requires a systems approach. That is what we are going to talk about in order to modify the conditions that contribute to error. We have to change what is causing it.

[Slide]

What is a system? People talk about the healthcare system. They talk about all sorts of systems. I think the term "systems" is replacing the term "thing" as being a generic kind of repository. If you don't know what to call it, you call it a system.

But to look and see just what a system is, going into definitions, it is a set of components that are interdependent and they interact, and a change in one affects the other. This is important because if you call something a system, then you expect it to act as a system. If it doesn't act as a system we are still calling it the healthcare system. We are calling it a system and saying it is broken. Well, yes, it is broken because the healthcare system isn't a system as such if you look at all the components. So, if it isn't, of course, it is broken because you are not talking about what it is.

[Slide]

We are looking in a context. This is something we don't think about. You know, you would not be able to see this podium or my hand or anything without a context. You can see nothing in isolation. We can't perceive it. So often we hear a sentence. Well, you don't understand the sentence; it is out of context. But we can go one step further down on that. A phrase, can we understand a phrase without the sentence context? No, we can't. We really need to know a context in order to know what is going on. Therefore, we need to know the context of an error in order to understand what has happened to cause that error. We cannot effectively find out what is happening without the context.

This comes from some psychologists, the gestaltists, who came here during the Second World War from Germany to escape Hitler. They settled in the Midwest and made tremendous contributions to the World War II effort. They developed camouflage that moves lines so your eyes go across the tank, or what have you. That was a crisis in World War II. We in healthcare, I think, are coming close to a similar crisis in that it affects everyone. So why not look at some of these principles, these behavioral principles, and see how they apply to what we are addressing in error?

[Slide]

So, we look again at the importance of context in an industry that has studied errors for a long time, and that is the aviation industry. Charles Billings is a physician who has been working in this industry for ages, looking at error, and he talks about information concerning the context in which accidents occur. Without full information you just can't understand what caused the error, and if you can't understand what caused the error you can't change things so it won't happen again.

[Slide]

It is easy to say "the context" but, you know, you can go everywhere with the context. You think, we can't consider the whole world. Of course we can't consider the whole world, but we have to consider the person of focus, the person who we say causes the error, and look at what is affecting them at the time of the incident. I don't know what is affecting any of you at this time, but you do. You have this life space. What is it that is happening to you? What has happened to you that makes where you are now, what are you anticipating in the future, and how are things affecting you now? That is your life space. It is what you experience and it can only be known by you. It influences your perceptions and your interactions, and when you are talking about error it gives you a snapshot of the error context.

[Slide]

We talk about this so often: we blame the person, or say it must be the characteristics of the person, or we train them to change. You are experiencing stress; learn to deal with stress. Take stress reduction classes. Learn to breathe deeply. But stress is affected by things that come from the outside. If you look at engineering, I have a quote here that stress is pressure or tension per unit area. It goes from the exterior to the interior. You can breathe deeply until you are almost asleep, or actually asleep, and it doesn't make any difference. If you are inside a ringing bell of things that are happening, you just can't deal with it.

[Slide]

Error-provoking conditions arise when factors in the context in which you are functioning and the characteristics of the person performing the task are not in balance. They are mismatched. Weick has a theory saying it is complexity: when the complexity of the task is discordant, out of sync with the complexity of the person expected to do that task, there is going to be an error. It just won't work. That makes sense. You know, if something doesn't work, if it doesn't fit, if you just can't manage it, you are going to make an error no matter how hard you try. So the thing we need to do is find out which factors are discordant with the person and change those to bring things into harmony so the person can function, because we have certain capabilities and we can't train certain things away. Some of them just come with the package of being human. It is like trying to stop a puppy from chewing your furniture, or trying to stop an infant from putting things in their mouth. You can do that until you are purple and they are not going to change, because this is in the nature of the creature. We have characteristics which are in our nature as adults also.

[Slide]

So, we look at this context as systems of contributing factors. The systems that I have here are all based in the empirical literature, and I get them from the work of Rasmussen and Moray in nuclear power plants and in process control industries. They have conducted research on error and have identified factors that contribute to error, and this is what I am building this system from.

This is my artichoke, the artichoke model of the context for error. The heart of the artichoke is a staff member, the patient and the means of performing the task. This is a system because a staff member does something with the means of performing the task and it affects the patient. The patient reacts and it affects the others. But this doesn't happen in a vacuum. They are not just sitting there with nothing else going on. It happens in the broader context.

This broad context is rarely considered but it is very important: the legal, regulatory, reimbursement, and cultural factors. I don't think I need to elaborate too much on what effect reimbursement has on healthcare.

These are circles. We are back to the artichoke. Those are the outer leaves of the artichoke, and the next one is the organization. Within the organization you have a physical environment. Within that you have a social environment. People affect what goes on. Then there is something we don't consider a lot but that really can make a difference, and that is the ambient conditions--heat, cold, the temperature, the noise, the humidity, the dust in the air, the altitude. All of these affect what is going on in the heart of the artichoke.

They are affected in a way which I have called the reverse ripple. You know, you throw a stone into a pond and it ripples from the impact of the stone out. Well, this reverse ripple goes from the outer circle. Each tweak, the legal or the regulatory reimbursement, these factors--you change the reimbursement policy and that is going to affect the organization, and that is going to affect the physical environment, and that is going to affect the social. Maybe the ambient conditions on down to all these things ultimately are going to affect how the person, the staff member, performs the task with respect to the patient.

[Slide]

Nothing happens in isolation. One thing tweaks another to affect the life space of a staff member. So there are factors in the context of these systems that affect the person, and those factors can provoke error. Can you empathize with that person?

[Slide]

The importance of the systems approach is an analysis of error as it expands the consideration of the contributing factors beyond the person involved. You are not just going to say the person did it and then we have solved everything; we have identified that Mary Smith, the nurse, did the wrong thing with this transfusion. Shame on her! We will put something in her record and do some training. But why did Mary Smith do that? She didn't intentionally do that. Our staff members in transfusion medicine and healthcare providers don't mean to do it; they don't intend to. Why did they? What are the circumstances? What are the systems factors?

[Slide]

So, we look at this again just to make the point to say you have to keep making a point many times to really get it made, and this is the context as theater. The context is like a script and it has the other performers, and the props, and the cues. You can take an actor out of that performance, out of that script, and put another actor in, like you remove a healthcare provider or fire somebody in the blood bank and put a new one in, but if that script continues you are going to have the same performance. Maybe not immediately. Maybe there will be some variation, but it will happen. What you need to do is find out what there is in this script and the context and change those and that way you alter the performance not only for that person but, if you share that experience across comparable situations, you can really make a difference.

[Slide]

How do we identify these error-provoking factors? You have the staff member analyze the context by completing the systems approach analysis outline. This is a context as the staff member sees it, not as somebody coming in sees it because they are coming in with their own life space and interpreting what is happening. They can do that after the fact but the important information comes from the people involved.

[Slide]

This outline is very simplistic. People can just put in a few words at the time of the incident and fill it in later. You ask for the incident, the date and time, day of week and occasion. There is a reason why these are here, because they have been found to make a difference.

[Slide]

Then we have the systems. The systems are the same things, other artichokes. We are taking leaves of the artichoke and putting them down: the patient, the means of performing the task, and the staff member involved. Then we go out to the other leaves, from the heart out, exactly as in the concentric circles.
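
A minimal, hypothetical rendering of that outline as a blank form is sketched below; the section names follow the artichoke layers described here, and the wording of the published outline may differ.

# Hypothetical blank form for the systems approach analysis outline.
# Section names follow the "artichoke" layers; exact wording is illustrative.
systems_outline = {
    "incident": "",
    "date_and_time": "",
    "day_of_week": "",
    "occasion": "",
    "contributing_factors": {
        "patient": [],                   # e.g., similar names, ability to communicate
        "means_of_performing_task": [],  # devices, equipment, clarity of labels
        "staff_member": [],              # training, stress, fatigue, nourishment
        "ambient_conditions": [],        # noise, heat, humidity
        "physical_environment": [],      # location and arrangement of items
        "social_environment": [],
        "organizational": [],            # workload, staffing, policies
        "legal_regulatory_cultural": [],
    },
}
# The staff member involved jots a few words at the time of the incident and
# fills in the rest later, as described above.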

[Slide]

We can put on this sheet of paper possible factors to help stimulate the person's thinking and to stop them blaming themselves. Many staff members and healthcare providers blame themselves. We have to pull them out of that, to look at what other factors there are. We can give some hints. These are examples that have been mentioned earlier: similarity of names; the ability to communicate; weight; allergies.

[Slide]

Then we talk about the means of performing tasks. What are examples of that? Device and equipment. A lot of people don't look at the equipment and devices. People will try to adjust and jerry-rig things. It is unbelievable how they can make things work. This is a problem. Then, clarity of these different labels.

[Slide]

Then a staff member can talk about their education and training, their stress, their fatigue, their nourishment. Do you know how rarely anything comes up about what a person has eaten, or whether they have eaten at all? That makes a difference in the way we function, and so does our hydration.

[Slide]

Ambient environment.

[Slide]

Then, the physical, location and arrangement of the information about a patient; equipment; furniture; work space that is cluttered.

[Slide]

Then, the social factors.

[Slide]

Organizational factors and workload and how it is allocated. Policies.

[Slide]

Then, this legal-regulatory reimbursement and cultural factors.

[Slide]

Let me read you an incident and then we will fill in the outline, real quickly. A father and son, Anton Bolitski, Sr. and Jr., were in an automobile accident. They were taken by ambulance to the emergency room of the Bresti Community Hospital, where they were assigned patient numbers and admitted for surgery. Mr. Bolitski, Sr. was released from the recovery room to room 115, where Sue Drew was assigned to be his nurse.

After surgery to repair damage to his left arm from the accident, Mr. Bolitski, Jr. was released to the floor and placed in room 149. Nancy Barton, who was assigned to be his nurse, introduced herself to Mr. Bolitski, who responded with a groggy mumble. She observed that his wristband had been removed in surgery and not replaced. In reading his chart, which was on the bed, Nancy Barton noted that blood had been ordered. She went to the pneumatic tube to get it. On her way she was met by an older woman who was quite upset, crying and talking in a language she didn't understand. Nancy Barton knew that the woman was in distress and concerned over the patient's condition, but she couldn't figure out how to communicate with her.

She looked for the unit near the tube but couldn't find it, because there was no surface adjacent to the tube on which to put items when they arrived. When items arrived, whoever was there put them on whatever space they could find, which was typically the counter of the nursing station. At first she couldn't find the unit on the counter. After moving items, she could see the label, Anton Bolitski. So she took that. The older woman was still following her and trying to ask questions, obviously seeking information. Nancy Barton went to room 149, hung the unit, and started the transfusion.

Nancy Barton was exhausted. It was the fifth day of the work week and, because of the shortage of nurses, she was working a double shift and had a particularly heavy workload with over half the beds occupied by critically ill patients. She knew that as a good nurse she should search for someone who speaks Polish, which is the language she thought the older woman was speaking, but she didn't know where she could do that and there wasn't time. So she smiled and patted the distressed woman's arm to reassure her and proceeded on with her task.

As Nancy Barton was returning to room 149 to check that Mr. Bolitski's transfusion was completed, she met Sue Drew who commented how much Nancy Barton's patient looked like his father, Anton Bolitski, who was in room 115.

[Slide]

Now, what can we do about this incident? Here is the incident. We put down the time and date, the day of the week; what are the systems factors, the names, senior and junior, and the wristband is missing.

[Slide]

Means of performing a task--here we have units of blood and staff member involved. These are some things going on with that person that are affecting her life space.

[Slide]

What is the ambient condition? Sounds from the distressed family member, and that can be wearing over time, particularly if you are trying to help and you don't know how.

[Slide]

The physical arrangement--the father and son assigned to different rooms on the same floor. This gives an opportunity for Nancy Barton to make a note here that not having a surface at the opening of the pneumatic tube to place items can cause them to be misplaced. This is a way of her getting this information out.

[Slide]

Then, what are the social factors that are involved?

[Slide]

What about the organization? The heavy workload, the shortage of nurses; no support to help the distressed person; and no policy for notifying staff when family members are patients on the same floor. If you can get these ideas out, it can help the organization change things.

[Slide]

Then, here are the litigation fears and all these other factors that this outline allows the nurse to express.

[Slide]

Essentially, the systems approach analysis outline provides data that identify systems factors to report to the appropriate management for change, and say, look, these things are happening. By having data, it is not that you are a malcontent and you are complaining. You can say this is what happened, and if you keep seeing these, this is making a pattern to convince management that something needs to be done.

What is good about this too is that it involves staff in enhancing patient safety. It is not punitive. It is finding out what is there that has caused this incident. Why did this happen? The staff can figure out why; can contribute. And, if you find out that you can contribute to making things better for the patient, that can develop a safety culture because it becomes a concern. It is a positive way of addressing the "why" of medical error in the application of transfusion medicine. Thank you very much.

DR. LINDEN: Thank you, Dr. Bogner. The next speaker is David Marx, who is a principal at David Marx Consulting in Chaska, Minnesota. He will be speaking on the organizational culture necessary to identify and correct system errors.


The Organizational Culture Necessary to Identify and Correct Systematic Errors

DR. MARX: I would like to thank you for allowing me to come speak to you here this afternoon.

[Slide]

Just a little bit of background. I need to tell you who I am. Not only to err is human, I think to stereotype is human so you need to know a little bit about me so you can stereotype me. I am an engineer who got a law degree at night. My wife likes to say I went to engineering school and lost my personality. Then I went to law school and lost my soul.

[Laughter]

So, I am what is left of that process. I was a design engineer at Boeing, working on aging airplanes prior to Aloha. If anybody remembers the Aloha accident, a 737 lost the top of its fuselage in flight. I was working on those issues, and I read a book called "Blind Trust," by John Nance, and it changed my life. Because I read his book and looked at the problems we were having in aging airplanes, I realized it is not an airplane problem; it is a human problem that we had to fix. It is a wonderful book. I know John Nance has been on the healthcare side talking about error, but "Blind Trust" by John Nance got me into human error.

My wife also likes to say I make a lot of mistakes, and I think I have validated that because my two presentations, one of which I share with John Grout, are not in your packet. For fear of recrimination, I am not going to tell you how I made that mistake but the error occurred.

[Slide]

I am going to start with a rule. How many of you flew in? All right. You have expectations, right? You want a very safe flight. Actually, after September 11, even we aviation safety experts get a little knocking at the knees when it comes to flying. What is your expectation of the pilot who is flying that airplane, or of the co-pilot?

Let me take you back to the basic rule in aviation. No person may operate an aircraft in a careless or reckless manner so as to endanger life or property of another. That is the underpinning of our culture in aviation. It is the basic rule. All right? What do you think of it? It makes sense, right? That is all we want of our pilot. We don't want him to be careless or reckless. The compliance and enforcement manual of the FAA says if you do this, it is a fine. It is possible certificate action. The question is what is this? What is this we are looking at right here?

[Slide]

Well, let's see what the administrative law judge says about this rule: when I say careless, I am not talking about any kind of reckless operation of the aircraft, but the most simple form of human error or omission the Board has used in these definitions.

Look at the bottom. A simple act of omission, simple, ordinary negligence--a human mistake. A human mistake. What is there to say? Well, by rule there are two things. One, you can't recklessly fly an airplane, which means you can't knowingly put people's lives at risk. If that makes sense, why don't we just say you can't make mistakes either? Mistakes are against the rules. All right? So, I think in aviation our model is to err is not human because we can just tell you, you can't do it and that is how we are going to ensure safety. Is that a rational approach? That just doesn't seem to be too smart. This is the rule in aviation today.

[Slide]

Let's go and look at the healthcare industry. In Washington State we have a code for professional conduct, a code that says this is unprofessional conduct: acts of moral turpitude, dishonesty and corruption. Of course, we have expectations that healthcare providers won't do that. Misrepresentation and fraud--of course, you can't steal from me. Willful betrayal of a practitioner-patient privilege--that seems to be unprofessional. Abuse of a client or sexual contact with a client--these are horrendous things. Right? Incompetence, negligence or malpractice which results in injury or which creates an unreasonable risk that the patient may be harmed. What is that one? Is that what we are talking about, ordinary negligence, a human mistake?

Is it true in Washington State that a simple human error sits in the same code as acts of moral turpitude? The answer is yes. In the eyes of Washington State you can't engage in acts of moral turpitude with your patient, nor can you make any mistake. It doesn't have to harm the patient; it just has to create an unreasonable risk that you are going to harm him.

So, in the healthcare industry your model today is you, humans, can't make mistakes. In healthcare even in Washington State we are going to put a label around you, and that label around your neck is that you are unprofessional. It is that simple. You are not a professional physician if you make a mistake. That is the rule today. Is that what we think is a good model? No.

[Slide]

What is the model that best supports the system? What culture do we want to have? Do we want to legislate away error? Is that the way we want to go? I will give you three examples. One is the punitive culture, which I think we have seen in the regulation at least. Right? To some extent, what we talk about now is the other end of the spectrum, the blame-free. Right? It is the system's fault; it is not the human. Does that work?

I know in the healthcare industry you have heard that in aviation we have blame-free reporting systems. I have to tell you, it is a lie. There isn't one. You saw the basic rule, but it is a lie. We don't have blame-free. You can file an ASRS report, but you can do that only once every five years. Multiple errors are an indication that you are unprofessional. So we don't have a blame-free system.

What I am going to show you is what I think is the middle of the road that maximizes safety, and I think that is a just culture. I am going to talk about learning cultures and a responsive culture.

[Slide]

What do you see? I am going to put a slightly different hat on here, because the lawyers of the world don't even know what human error is. It is not defined in the law. There is no technical definition for it. So I want to talk about a behavioral model of error.

In the first column is normal error. It is a product of system design. It is, to some extent, what we buy into in the management of error, that the system leads to errors and we manage processes, procedures, training, design and environment. And, to some extent, I believe to err is human and error is normal. In any system you are going to have a normal rate of error which is going to be something other than zero. It is not a good model to believe that you can have humans not make mistakes. Humans will always make mistakes. No matter how good a job you do, you will make errors at some rate. That is normal error.

The middle column, at-risk behavior--this is what I call unintentional risk taking. Remember how you were taught to drive? Where were your hands supposed to be? Ten and two, right? I guess now it is nine and three, right? Anybody hear that? Because you hit yourself in the face when the air bag goes off with ten and two. So your hands are at nine and three. You look both ways. What do you do today when you drive? Are both your hands at nine and three? Is your hand on a latte? Is your hand on a cell phone? Are you eating a McDonald's Egg McMuffin on your way to work? We want error reporting programs--how many of you speed? Is that the system? What is it about the system that causes you to speed? Are the roads not wide enough? What is it?

I am here to say not only is to err human, but so is to drift away and deviate. You are going to drift away. Even professionals drift away. We just did an assessment of a major airline last week where a pilot focus group said that 80 percent of the time they don't do a particular task that we have found to be pretty risky if you don't do it. Why? They don't feel there is risk. Right? You get up to 75 on the freeway when the speed limit is 55--I can do that because there is not a whole lot of risk associated with that.

Do we see this middle column in healthcare, at risk behavior? Absolutely. If I am a nurse and I have met my patient, am I going to go in the room next time and confirm their armband? I know it is Mr. Smith. I have been seeing him for two weeks. Am I going to confirm the armband every time I go in, or am I going to say, look, I know who you are? The risk though is you confirm the armband for two reasons. You confirm it to know it is him, but you also confirm to know what you are bringing in matches him. All right? Quite often the nurse says, well, I didn't realize it is because of what I am bringing in. I just thought I know that is Mr. Smith so I am not going to confirm the armband.
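
A simplified sketch of that two-way check, with illustrative identifiers, makes the point: skipping the armband skips not just the "is this Mr. Smith" comparison but also the unit-to-patient match.

# Simplified sketch of the bedside armband check as a two-way verification.
# Identifiers are illustrative.
def bedside_check(wristband_id, expected_patient_id, unit_tag_patient_id):
    """Return True only if the wristband matches both the expected patient and
    the patient identifier on the unit's compatibility tag."""
    return (wristband_id == expected_patient_id
            and wristband_id == unit_tag_patient_id)

# Knowing the patient covers only the first comparison; the second is where
# wrong-blood events get trapped.
print(bedside_check("MRN-123", "MRN-123", "MRN-987"))  # False -> do not transfuse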

At risk behavior, this middle column, I think is normal too. We all engage in that risk behavior. Just look at how we drive. Look at what we do in all aspects of life. We drift away as we lose recognition of the risks associated with what we are doing. We try to optimize. We want to do things faster and quicker until finally we get bit.

The last column is the third behavior, and that is intentional risk taking. Do we do that? See, in a real error reporting program I should ask how many of you have driven intoxicated. Of course, this is videotaped and none of you will raise your hand.

[Laughter]

But I have, particularly in college. I mean, twenty years ago we would laugh about it if you got home safely in the morning. So, we even do the intentional stuff. Even in the aviation world, and I am talking outside 9/11, we have had pilots who have put down airplanes because they were committing suicide. So we have behaviors that are really on the far extreme.

The issue in a just culture is you have to recognize that all these three occur. The system is going to drive error. Even in a good system you are going to drift away from the system, which we call at risk behavior, and you are even going to have reckless people who willingly put people's lives at risk. I have never seen a system, from nuclear to railroad to aviation, that says we will let anybody who is reckless off the hook. There, we do draw a line.

[Slide]

Now, what is a just culture? A just culture is a set of beliefs. It is a belief that professionals will make mistakes. We are going to make mistakes. The regulatory model that says mistakes aren't allowed hurts us. It does not serve us from a system's safety perspective. There are a lot of people out there who believe, look, we have solved the problem. We just told people they can't make mistakes. Bad idea.

A second one is a recognition that even professionals will develop unhealthy norms. Ten years from now you will have a book, not only "To Err Is Human" but "To Drift Away and Deviate Is Human." That will be the next book, because I have to tell you that across multiple industries it is the second one that is the biggest risk of harm. In aircraft maintenance it is the biggest harm. In injuries it is the biggest harm. You, on the road--is it the system, or is it your at-risk behaviors, that is the biggest risk of you getting into an accident? Is it something about the street lights and the width of the streets, or is it you eating your McDonald's hamburger on the way to work that is the more significant risk? To drift away is human, and we have to recognize that people will deviate, and we have to learn how to fix that issue.

The third one is human error is a manageable aspect of the business enterprise. If we know the first two we can manage it. All right? I have spoken to attorneys who say, well, prove it. Prove that legislation doesn't work, that you have a better way. So part of the just culture is believing that it is a manageable aspect.

The last one though is a fierce intolerance for reckless conduct, reckless conduct being I know I am taking a risk; I am putting people's lives at risk. Do you ever see that in healthcare? Be honest, have you seen it in healthcare? Didn't we have a physician carve his initials in the side of a patient because he liked the good job he did in sewing her up? That is beyond mere error. Was it a lack of judgment? Yes, but I don't think we are going to call it human error. We are going to call it something more, and I call it reckless conduct.

[Slide]

A set of duties--the duties in Washington State are you don't make mistakes. If you do, you are unprofessional. What is interesting about that is that the Washington legislature said, well, should we have the person engaged in unprofessional conduct report their own error? What do you think? Write a rule that says if you engage in acts of moral turpitude you have to raise your hand and come forward.

[Laughter]

The legislature didn't want to look stupid, so what they said is, no, other people have to report on you. You don't have to report on yourself but you do have to report on other people who engage in unprofessional conduct. So the duties in Washington State are don't make mistakes and rat out those people who do. In aviation we said what happens in the cockpit stays in the cockpit. That was the professional duty. That is not the set of duties that we need. We need duties to say when I have made a mistake I am going to raise my hand and say I have made a mistake. I am going to raise my hand when I see risk in the system. I am going to resist what is very natural at risk behavior. I am going to participate in the learning culture and, again, absolutely avoid reckless conduct.

[Slide]

Jim has said, well, what is the culture we need to learn? We need a culture where people can raise their hand and come forward and, again, I believe a culture where that person coming forward can say, you know what, I didn't confirm the armband. In direct violation of hospital policy, I did not confirm the armband because I only do that the first time I meet my patient. We need to know that. We don't want the nurse saying where is the rule book and I will tell you that I followed the rule book. We want a culture that says I admit my error and I admit my violation.

The next thing is a learning culture: you learn from your own events; you learn from normal operations, auditing, and focus groups; you learn from others and from best practice. Ultimately, you have a model that says if an error is possible, it has something other than zero probability. I want you to think about that. We often get caught in the trap of saying we never do that.

What is the English case? The baby who went through the wash cycle, the preemie? You look at that event and you say, how can that happen? Anybody familiar with that event? Two weeks ago, in England, a baby was born premature and died. It was taken down to the morgue, and in the morgue they have a cabinet for dead babies. That is just sort of weird, but they have a cabinet for dead babies and next to that is the chute for laundry. This baby was put into the laundry basket and was later discovered at the laundry facility, on the conveyor belt, after the baby had gone through the hot water wash cycle. Most of the bones were broken. It was a terrible case, and it made national news in England.

Should that ever happen? Well, you know, we don't like the outcome but it is going to happen. There is a risk that it is going to happen. You have to model it and believe that there is a risk that something like that is going to happen. If your model is that certain errors are so egregious that they never happen I think you are destined to have it happen. You have to believe that every possible error has a non-zero risk.

[Slide]

A responsive culture--on this page I am going to put on my hat as a customer of yours because, you know, I work in multiple industries and I have never worked as a healthcare provider but I am your customer, just like all of us are. A responsive culture--Jim said what kind of culture do you need? You need a just culture; you need to learn; lastly, you need a responsive culture. Safety is a perception, as is economic value, and it is a perception that your customer owns. Right? I mean, I look at events of wrong site surgery and look at things that occur and say, you know what, I want more out of my healthcare system.

I am a big supporter of helping you guys create a system where people can raise their hand. But the flip side of that is I think you should do wonderful things and be able to do wonderful things. In a lot of industries we collect a lot of data but we don't act on the data. We collect it and it becomes a never-ending research project. I think the window for you guys is short. I have been in places where the attorney has said, look, I am held accountable for my mistake. Let's face it, if I were your attorney and committed malpractice and you lost your suit, are you going to hold me accountable for my mistake? Probably so. Right? You are not going to say, well, to err is human; I will forego the $100,000. No, you are going to say you have to be held accountable, Mr. Marx.

So even the just culture that you are going to set up in the healthcare industry is not going to reflect society as a whole. Society as a whole is not a just culture. We do hold people accountable for errors in every facet of society. If you create a culture in healthcare where people can raise their hand without having blame placed on them, there needs to be an expectation that you are going to fix the system, and you have to fix the system. If you don't, you will be right back to where you were previously, back in the blame culture. So my pitch to you is that you have a window.

[Slide]

What is the necessary culture? To answer Jim's question: first, a just culture with professional conduct. In my view, professional conduct isn't don't make mistakes and root out those who do. Professional conduct is you raise your hand; you say you have made a mistake. You recognize risk around you. You participate in a learning culture. That is professional conduct.

Second, a learning culture that identifies and prioritizes risks and, ultimately, a responsive culture. I have been to hospitals where I have had the whole quality assurance group of the hospital in the room and I have said can we meet our goal of fifty percent in five years? In some hospitals, I have had the whole group say absolutely not; it won't happen. I have to say I not only believe it is possible, I think the goal is low. I think you guys can do great things if you create the right culture and put the processes in place. So, as a customer of yours, my hope is that you do much better than that fifty percent in five years with the systems you are setting up. With that, thank you.

DR. LINDEN: The next speaker is going to be Dr. John Grout, who is an associate professor at the Campbell School of Business in Mt. Berry, Georgia. He is going to be speaking about mistake-proofing your system. Dr. Grout?


Mistake-Proofing Your System

DR. GROUT: Good afternoon.

[Slide]

Today I want to talk about an approach to error reduction that will not take a year to implement and will, in many cases, cost less than a thousand dollars to do. I think those are probably some bold statements.

The other thing that I want you to be aware of is, as you know, the Joint Commission is starting to require the use of failure mode and effects analysis. In other industries what I am going to talk about today has become a preferred follow-on to failure mode and effects analysis. So, with that, I would like to talk a little bit about what it is and what it isn't.

[Slide]

Mistake-proofing is the use of process or design features to prevent errors or their negative impact. Mistake-proofing is also known as poka-yoke, which is Japanese slang for avoiding inadvertent errors. It was formalized by Shigeo Shingo, whose picture is up here in the corner, and comes out of the Toyota Production System. I understand that there is a long way between the Toyota Production System and medical operations, but I think there are also some worthwhile similarities.

Mistake-proofing is inexpensive. Mistake-proofing is very effective for manufacturers that are aware of it. Not all manufacturers are aware of it. It is also based on simplicity and ingenuity. But let me be clear that simplicity, in its true sense, is rarely an easy thing to accomplish. Finally, mistake-proofing is something you probably already have in isolated instances in your organization. So, as you look around you will find examples.

[Slide]

It is not rocket science. It is detail oriented but, in fact, once you have done it, it will seem commonsensical to you. It is also not a stand-alone technique that will obviate the need for other responses to error. I often think that the efforts we make to get people to pay attention and to be at the highest level of attentiveness as part of their work life are very much like Sisyphus pushing his rock up the hill. When you stop focusing on it, it goes right back down to the valley again. I don't know any way to get rid of that completely. I think that some of these techniques can mitigate it to a point.

It is also not widely known or practiced in manufacturing, or in services in general, or healthcare specifically. So what is it? We are talking about design features that prevent errors. So, if you look in the upper corner there, you will see a file cabinet with a drawer open. If you were then to go and open some other drawer, it would be locked. When you close that drawer all of the others become unlocked. You can only pull one out at a time. Why? So that it won't fall over and kill you.

We have the three and a half inch diskette. You stick it in the machine any way except the right way, it stops in the position indicated. It is only when you flip it over that it will go in correctly. Somebody who designed that diskette felt it was important enough to get the orientation right that they put in a little stop.

How are we doing on our processes to make those kinds of features part of what we do? You have ABS brakes that allow the wrong action to become the correct action. The old standard operating procedure was pump the brakes. If you are in an emergency stop situation you have to have a lot of poise to pump the brakes. Nowadays the standard operating procedure is steady pressure, which means stamp on the brake.

You have the lawn mower where, if you let go of the little wire handle, the engine comes to a stop. Presumably that is so that you now have to really work to cut your fingers off. Can you spot the one on the sink? It is the little hole so that it can't overflow.

[Slide]

These tend to be very effective. AT&T Power Systems reduced their average defect rate by 70 percent. TRW went from 288 parts per million defective to two parts per million defective. For a manufacturer that is very, very good. We would like to see even better performance than that in medical systems but it is not clear that we do. Federal Mogul had 99.6 percent fewer customer defects than their nearest competitor and a 60 percent productivity increase by systematically thinking about the details of their operation and mistake-proofing. DE-STA-CO Manufacturing went from 800 parts per million omitted down to 10, and across all their defect modes they went from 40,000 parts per million down to 200 parts per million and, once again, there was a productivity increase as a result. These are some very nice kinds of outcomes.

[Slide]

The other half of this is that the devices tend to be inexpensive. This is a distribution of the cost of the devices as listed in Shigeo Shingo's book. What you will notice is that a quarter of the devices that he implemented were $25 or less. Fully half his devices cost $100 or less, so the median device is $100. More than 90 percent of the devices cost less than $1,000. Very simple, little devices that make a big difference.

[Slide]

The paybacks here can be substantial. Dana Corporation had one device where a worker came in one morning and said, "hey, look what I've got." The engineer said, "put it on the machine; let's see how it works." They eliminated a mode of defect that cost half a million dollars a year. The device cost six dollars.

Ortho-Clinical Diagnostics--some of you are here. This was done in the Rochester facility. An individual figures out a way to use Post-it notes to save $75,000 a year.

AT&T Power System implemented 3300 devices and each of those devices had a net savings of $2545, and a variety of others.

One of the issues that we found out at General Electric with the Aircraft Engine Division was that errors are pretty costly, perhaps not as costly as in your industry but costly nonetheless. Any in-flight shut-down of an engine, even one where they land the plane safely, costs a minimum of half a million dollars. How much are you willing to pay to get rid of one in-flight shut-down? Presumably anything less than that half million dollars, and we are charging between $100 and $1000. There are some very beneficial kinds of outcomes here.

[Slide]

Some examples from the medical industry. Here is Broselow tape. Essentially, you have the tape laying over the satchel, there on the right side of the screen. This is for pediatric trauma. You measure the child. You look at the color code on the tape. The doses are printed on the tape. All of the medical devices, fixtures, and what-have-you that you would use on this child are in color-coded packets. So, once you have measured the child's height you are ready to go and can implement treatment much faster.

[Slide]

Another example is the esophageal intubation detector. You intubate the patient; you take the bulb; you squeeze it; you put it on the tube; you let go. If it inflates it is in the right spot. If it fails to inflate fully, it is an error. You then re-intubate and try again but this way you don't have to use any radiology. You don't have to use touch or feel to decide if it is right or not. You know right away.

[Slide]

Healthcare applications are different than manufacturing and here are some of the ways: Both service provider and patient errors impact the quality of the service. The service provider is blamed for all errors. Can you imagine that, patients blaming you all for that or, in this case, perhaps donors blaming you for errors? It turns out that in other service operations as much as a third of customer complaints are related to problems they cause themselves.

[Slide]

We have two different sets of mistake-proofing devices, or poka-yokes as the Japanese call them. On the server side, as service providers you have tasks, treatments and tangibles that have to be appropriately dealt with and mistake-proofed. In some sense, you cannot perform the wrong task, treat the person in ways that are less than professional, or deliver tangibles (the actual items that you put into their hands) that are problematic in any way.

[Slide]

Here are some examples. A task poka-yoke would be to have your cash register in your fast food chain have buttons that have the item instead of the price. All of a sudden, you don't have to keep track of the mapping of what item goes with which price; you just point to the items on the tray; the prices come up automatically. Tags on vehicles to indicate which one came to the service department first so that you can maintain your first-in, first-out ordering.

Treatment poka-yokes are a little bit more rare. The bell on the door as you walk into a shop so that the storekeeper will come out from the back to take care of you is a treatment mistake-proofing device. Another favorite of mine is that when you go into the bank--there was a bank that actually put a line on your transaction form that said what is the eye color of the customer. So as the teller was filling out the form they had to look right into the person's eyes to see what color his eyes were so that they could then bring that personal service to the transaction. And, tangible poka-yokes like paper strips and envelope windows.

[Slide]

On the customer side, the customer needs to have mistake-proofing occur in their preparation for the transaction, the actual encounter and then resolutions. Preparation mistake-proofing deals with failures to bring necessary materials, understand their role or engage in the correct services.

Encounter poka-yokes involve inattention, misunderstandings or memory lapses so that when your donor comes in and says, no, I haven't had a cold recently and they have, that is inattentiveness.

Resolution poka-yokes involve failure to signal service failures; that is to say, you want to know when something has gone wrong. Likewise, you would like them to provide feedback, and you would like them to learn what to expect. So, we would like to mistake-proof all of these different aspects, which makes the service side of things much more difficult than the manufacturing side because you have a lot more degrees of freedom there.

[Slide]

Some examples: the preparation poka-yokes have appointment reminder calls to let people know that they are supposed to come at a particular time. In my line of work, if you can have a student degree requirement check list that they can work through before they come to their advising session, they actually might have some idea what course they want to take next semester, which is a good thing for me.

You have encounter poka-yokes. You go to the amusement park and you have the little bear with its arm out. It is a mistake-proofing device to allow you to determine whether this child can ride on the ride or not. Have you noticed recently with ATM machines that you no longer stick your card in and it is taken away for ever? Now you just swipe it. The reason that that is a good thing is because it never leaves your hand. It is very hard to leave it behind. It isn't inside the machine somewhere and asks do you want another transaction? You already have your money and you are ready to go.

Likewise, you have resolution poka-yokes which are ways to help people kind of close the loop on their learning from the experience.

[Slide]

Mistake-proofing in some sense puts knowledge in the world--that is Don Norman's term, not mine--in addition to the knowledge that we put in people's heads. Here are some examples. If you want to put knowledge in the head, you improve the standard operating procedure. I guess I ought to put quotes around "improve." You have to watch out then. If you get a standard operating procedure that is 30, 50, 80, 120 pages long it is not something people are going to carry around in the top of their brain all the time. You manage the knowledge in the head by retraining, by recertifying skills and by trying to manage and enhance attentiveness.

By putting knowledge in the world we provide clues about what to do. We change process design and embed the details into the process, which then frees the mind to consider the big picture. It will also facilitate the knowledge-based kind of work that has to go on.

[Slide]

Here is a quiz for you. Which dial turns on the burner? You notice there is a pan on each of the two stoves. Now, can you tell which knob turns on which burner? It is B. Right? Because there is a natural one-to-one mapping. The question is how many of our processes are stove Bs and how many of our processes are stove As? The point is that a little attention to detail, not a big change, can make a huge impact on our ability to use the system.

[Slide]

Even more challenging, how would you operate these doors? Let's take door A. Push or pull, left or right, and how did you know? My suspicion is that if I asked you about door A you would say that it pulls, and it would come out towards the right. Door B, yes, you would push and you would push in on the right-hand side and it would swing to the left. How did you know that? You know, you walk up to a door and it will have written on it "push." You know what that is. Right? That is the standard operating procedure. That is the process documentation. Could I propose that if you need process documentation to operate a door it is badly designed?

[Laughter]

Our processes should be that way too. How about door C? Don't know what to do. A gentleman in Wales told me "knock."

[Laughter]

[Slide]

Here is an example of a form. Up at the top you have "before" and down at the bottom "after." You have all of these people who are supposed to sign this engineering change notice and you really have to think carefully about who to have sign it. After the change, what we have is a grid. The grid now says what type of change occurred, and it blanks out the unnecessary signatures. So, now the form actually walks you through the approval process so that you can determine how to get the job done. That is putting knowledge in the world. You can see much more effectively what to do.

[Slide]

It is my belief that no system of barriers is ever going to be perfect.

[Slide]

However, I think at the moment there are a lot more barriers that could be put in place that would make a big difference in the outcome.

[Slide]

Having said all that, I would like to introduce you to another book that you can put on your conference reading list. It seems like we are all sending you off to do some reading. This book is Dick Chase and Doug Stewart, "Mistake-Proofing Designing Errors Out." There is only one problem. This book went out of print in 1995 and is currently nowhere to be had, except at this conference. I talked to Dick and Doug, and they have given me permission to hand it out to you. Regrettably, I didn't want to carry a whole big box of books, so as you go out the door at the end of the session, you will find a little compact disc. If you just drop it into your machine, it will take you from there. It will have the book. It will also have other books on the topic, and just a variety of other stuff that you may be interested in. There is a limited number. That is, I think about half of you will end up with a disc. The other half of you need not panic nor rush to the door. If you will just drop a business card in the little box lid that is out on the speaker's table, I will make sure everyone who drops in a card either gets a copy of the disc or gets an email with all the same files in it. That is courtesy of Dick Chase and Doug Stewart. It is a great book. It is the only book on the service sector side of mistake-proofing as opposed to the manufacturing side, and I think it is really pretty well done.

[Slide]

That concludes my remarks for this portion of the talk. Thank you very much. As it turns out, I am going to be the next presenter on the next paper. David Marx and I met this morning, and Jim kind of said you guys are both doing probabilistic risk assessment so why don't you talk together? So, this is the result of some collaboration. If it turns out good, blame it on technology. If it turns out bad, definitely blame it on technology.


Probabilistic Risk Assessment

DR. GROUT: To continue, we want to talk about probabilistic risk assessment. Would everyone raise your hand? Thank you. That is a functional test.

[Laughter]

[Slide]

Now, how many of you have seen fault trees in some form or another before? That is actually a very nice group. I will be relatively brief on the introductory materials there. I am going to put one twist on the material. David is going to put another twist on the material, but we do have the fault trees in common.

[Slide]

Henry Petroski says we rely on failure of all kinds being designed into many of the products we use every day. We have come to depend upon things failing at the right time to protect our health and safety. We often, thus, encourage one mode of failure to obviate a less desirable mode.

[Slide]

Failure is a relative concept and we encounter it daily in more frequent and broad-ranging ways than is generally realized. This is a good thing. Certain types of failures are desirable: those designed to happen are ones that engineers want to succeed at effecting.

[Slide]

I would like to talk to you about a failure that was created. You will recognize the Audi 5000 and the Jeep Grand Cherokee. The Audi 5000 is famous for one thing more than anything else, and that was uncontrolled acceleration. People would drive them through the backs of their garages and would have all kinds of other terrible crashes. And, Audi had the nerve to tell Mike Wallace on "60 Minutes" that it was operator error, that these affluent, well-educated people couldn't tell the difference between the gas and the brake.

Fast forward twenty years to the Jeep Grand Cherokee and what you will find is Stone Phillips, on "20/20" grilling the Daimler Chrysler folks on why there was a defect with the vehicle. They said no defect, just a mis-application of the brakes. They couldn't tell the difference between the gas and the brakes. To "20/20's" credit, they then measured and what they found was that these two vehicles, and no others, had the gas pedal and the brake pedal shifted over.

Now, think about your car. If you take hold of the steering wheel and put your foot on the pedal that lines up with the center axis of the steering wheel, what are you going to hit? Brakes. Okay? With these two cars, guess what you hit. Gas. They had a design problem on their hands, but was it a defect? No, it was designed the same way every other car is. Your gas is on the right and your brake is on the left.

Both Audi and the Jeep Grand Cherokee implemented a mistake-proofing device. The device was that they made it so that you had to put your foot on the brake before you took it out of park. The problem went away. Why did it go away? Because once your foot is on the brake you know where the gas is, and if you are on the gas you know where the brake is and you don't use some other approximation to get there. So, in this case they have created a system where you can't get the car to go into drive or reverse. It is stuck in park. That is a failure. It just happens that that failure is a lot better than the failure of driving through the back of your garage.

[Slide]

Here is a fault tree, a very basic one. You have some top event; it is a bad thing. It could be harm to a patient. It could be driving the car through the back of your garage. Then you have a representation which has something called an "or" gate, meaning that any one of those three things that lead into the "or" gate can cause that failure to occur. You also have an "and" gate which means that you have to have both of those, failure one and failure two, in order to generate that event. So, that is a basic description of how things go wrong.

The other half of what you have there is something called a minimal cut set. A minimal cut set is simply a group of items that all have to be present in order for the failure to occur. What you will also notice is that I have assigned probabilities here: the probability of failure one is 0.1 and failure two is 0.1, while the probability of failure three is 0.05 and failure four is 0.05. We then link them together. The way that they link is that with the "and" gate you actually have to multiply them together to get a failure. So, all of a sudden you are at the 0.01 level. Even though 0.1 is the least reliable individual item, the fact that you have that redundancy makes a big difference. The top event is just the adding up, or the union, of those three minimal cut sets coming together. In this case the probability of the top event is 0.11: 0.05 each coming from basic failures three and four, and 0.01 coming from the joint group of failure one and failure two. So, those are the basics of a fault tree.
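To make that arithmetic concrete, here is a minimal sketch, in Python, of the fault tree just described. The probabilities are the ones quoted from the slide, and simply summing the minimal cut sets is the "adding up" approximation the talk uses; none of this is drawn from an actual transfusion system.

# Fault tree from the talk: the top event occurs if basic failure three
# occurs, OR basic failure four occurs, OR failures one AND two occur together.
p = {"f1": 0.1, "f2": 0.1, "f3": 0.05, "f4": 0.05}

# Each minimal cut set is a group of basic failures that must all occur.
minimal_cut_sets = [("f1", "f2"), ("f3",), ("f4",)]

def cut_set_probability(cut_set):
    # "And" gate: independent basic events multiply.
    prob = 1.0
    for event in cut_set:
        prob *= p[event]
    return prob

# "Or" gate, rare-event approximation: add the cut-set probabilities.
top_event = sum(cut_set_probability(cs) for cs in minimal_cut_sets)
print(top_event)  # 0.01 + 0.05 + 0.05 = 0.11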

It turns out I am not really particularly concerned about those rates, from my perspective. David will tell you about how to deal with those rates or at least why we are interested in them. I am more interested in taking more than one fault tree and trying to move stuff around.

[Slide]

In particular, I have a table saw example here. What we have is a situation where the table saw could be turned on prematurely and that is an undesirable failure. A preferred failure is to have the saw not work. So, if I am working with this saw and it turns on I am in big trouble. I would prefer to have the saw not work. The question then is how can we take basic failures that would lead to this undesirable failure and convert them into events that will cause benign failures?

In this particular case I am going to try to do a couple of things. The saw can be turned on prematurely if the anti-kickback guard is not in place or the blade insert is not in place or the wrench is left on the spindle. That is the really bad one of the bunch. How do we fix that? Well, you tie the wrench to the electric cord near the plug so that you have an event where you can't do both things at once. You can put a switch in the insert cavity in order to make sure that that occurs. When you do that you end up with altered trees.

One tree, the undesirable failures, is now down to one event and you can then determine how you want to manage that. You may want to move it over later on. But in terms of the table saw not operating, you can have the wrench left on the spindle and it now ensures that the saw is not plugged in. In the case of the insert not being mounted properly, it is now an issue where you have broken the electrical connection and the saw will not operate.
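A small sketch of that restructuring, with made-up probabilities purely for illustration, might look like this in Python: the mistake-proofing re-routes two basic events from the undesirable tree (saw starts prematurely) into the benign one (saw will not run).

# Assumed, illustrative probabilities for the three basic events.
p = {"guard_missing": 0.01, "insert_missing": 0.01, "wrench_on_spindle": 0.001}

def or_gate(events):
    # Probability that at least one independent basic event occurs.
    prob_none = 1.0
    for e in events:
        prob_none *= (1.0 - p[e])
    return 1.0 - prob_none

# Before: all three events feed the undesirable top event.
before = or_gate(["guard_missing", "insert_missing", "wrench_on_spindle"])

# After: the tethered wrench and the insert-cavity switch make two of the
# events stop the saw instead of starting it.
after_undesirable = or_gate(["guard_missing"])
after_benign = or_gate(["insert_missing", "wrench_on_spindle"])

print(before, after_undesirable, after_benign)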

[Slide]

The goal here is to take what were very undesirable failures and turn them into benign ones. So, the process will stop but it will stop in ways that we have engineered in. We want our processes to fail but we want them to fail in the ways that we decide and not in whatever way happens by happenstance. So, we are going to stop the process--that is a failure--but it is the best possible failure in this case.

I think we are now ready for David to come up and talk about his half of this presentation.


Probabilistic Risk Assessment

DR. MARX: I had my lawyer hat on earlier; now I have my engineering hat on. I love to celebrate errors so I have to tell you a comical one. You know, John and I hadn't met. So when I get here at the conference I say, well, I have to meet this guy. So, I go looking through his presentation. He puts a picture in there. Well, it is Shigeo Shingo. I didn't read it; his picture is there.

[Laughter]

So I am out, looking around during the first coffee break for Shigeo Shingo. When I saw John, you know, I am thinking you don't look anything like your picture. That was really good mistake-proofing on my part!

[Slide]

I am going to talk about probabilistic risk assessment. We call it PRA. It has been used in aviation for some time. It has been used in the nuclear industry; it is used in aerospace. It has been around for 25 or 30 years. It comes out of the fact that airplanes became increasingly complex, so complex in their digital nature and their interdependent systems, sort of like healthcare, that you couldn't rely on just basic design rules, and we needed new analytical models to assess risk.

Severity of technology failure grew too. In aviation and the nuclear industry we don't get to do clinical trials. We don't get to say let's create a power plant and see the rate at which we have nuclear meltdowns and we will adjust from there. All right? What we have to do in those industries is to show, before we ever deliver, that we have an analytical process that tells us what the risk is, in a nuclear power plant one meltdown every 10,000 years. When I was an engineer, prior to reading "Blind Trust" all I thought about was parts and I worked on this airplane, the package freighter for UPS, the first 757. There is a big cargo door on the front side and that door opens outward. If it opens in flight we lose the airplane. The FAA says before we will ever let you deliver that airplane you have to certify, you have to show us analytically that the risk of that door opening in flight is one in a billion flight hours. If you don't do that, that airplane will never fly.

Think about that when you say let's bring in a physician order entry system, a computer system in a hospital. How do you certify what the clinical risk is associated with that system? Well, let's give it a whirl and see what happens and then let's collect event data later. Half of our research is event data. Let's put the system in and then we will just see what happens to people.

In aviation, thankfully for you that fly, we don't do it that way. We have to have analytical models up front. I thank God that for nuclear power plants we surely don't do it that way.

[Slide]

John showed us those trees and stuff. That is pretty complicated. I am going to talk to you about what I think is the first application of those trees in the operational environment in healthcare. My understanding is that on the equipment side, in doing pump designs and the like, fault tree analysis and probabilistic risk assessment have been used. We are going to talk about it on the hospital side, where it is not equipment that we are analyzing but an organizational system. Zale Lipshy was the first to do this, back in October. We believe they were the first; if anybody knows to the contrary, please let me know.

[Slide]

What they did is they said let's build a quantitative model of our medication delivery process where we can estimate the risk of the wrong patient, the wrong med--the five wrongs, the inverse of the five rights--the risk of any one of these top level events occurring. Actually, Zale Lipshy has a model that says if you come in here for a two-week stay we have a numerical estimate of how often you are going to get the wrong patient's med; how often you are going to get the wrong med; how often you are going to get the wrong dose; and, ultimately, how often you are going to get it put in the wrong place.

[Laughter]

This little piece down at the bottom is one of the five top level events. So, we had 175 individual errors that we modeled to say what is the risk that one of these top level events occurs. And, this is just the medication delivery process.
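To give a feel for the kind of number such a model produces, here is a minimal sketch, in Python, of the two-week-stay calculation described above. Both the per-administration probability and the number of doses are assumptions for illustration; neither comes from the Zale Lipshy model.

# Probability of at least one top level event (e.g. wrong med) over a stay,
# given an assumed per-administration probability that an error slips
# through all of the checks.
p_wrong_med_per_dose = 1e-4   # assumed, illustrative
doses_per_day = 6             # assumed, illustrative
days = 14

administrations = doses_per_day * days
p_at_least_one = 1.0 - (1.0 - p_wrong_med_per_dose) ** administrations

print(administrations, round(p_at_least_one, 4))  # 84 administrations, ~0.0084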

[Slide]

What did the model show? The model showed that there was significant redundancy in many aspects of the process, three or four errors that would all have to occur. I have to say, to the credit of the healthcare industry, your control of drugs in a pharmacy far exceeds the reliability of our control of aircraft parts in aircraft maintenance hangars. You are leaps and bounds better with the automated systems that you have than we are in aviation.

We saw some tremendous things, three or four errors that would have to line up. We heard in one of the earlier presentations about testing, I think for HIV, that two independent tests would have to fail before contaminated blood got through. That is the idea. Here we have sometimes four human errors that would have to occur.

What we did find is that there were a number of single failure paths. What was very interesting is that the documentation was much more believable than the physical movement of drugs. The medication administration record was really accurate, but the odds that you were going to get the wrong drug were not even close to the medication administration record accuracy. So, what was reliable in the process was the documentation. As a patient, it would be nice if it were the flip side. I want the right drug; I don't care what my documentation says. But it was actually the opposite.

The hospital said three specific behaviors could have considerable leverage. So we just built the trees and said what is driving the risk? What drives the risk of our top level event? In my earlier talk I said one of the issues is that we want to model things as non-zero risk. You couldn't say, oh my gosh, that will never happen. It had to be a quantitative model. Believe me, through focus groups you can get numbers. All right? You can ask nurses in a focus group how often do you check armbands. Now, in an event investigation where you are in front of the state board of nursing, it is hard to get the nurse to say, you know, I just blew it off, but in a focus group, outside that very punitive context, it is easy for a nurse to say the first time I meet the patient may be the only time that I really do it. You can start feeding that into a model.

So they found three things. The first was for the nurse to confirm name, med, dose and route at the bedside against the medication administration record. It is a wonderfully reliable document by comparison to the delivery. Let's take it into the room and, when we hand the drug over, let's confirm that my drug matches what is on the medication administration record. Ultimately there are technological solutions: when you get barcoding, you barcode your patient and your drug and the computer tells you that you got the right thing. But that is a little time off. Today, take the medication administration record in.

The second two involved taking verbal orders only when the chart is open. This actually addresses a lot of wrong patient issues, where a physician walks into six rooms, comes out, or is in a room and tells the nurse here is the drug that we want for this particular person. Having the verbal order occur when the chart is open closes that gap.

[Slide]

What did they get? Zale says in the delivery process, when we do these three behaviors, we get a 96 percent reduction in wrong patient events; 87 percent in wrong med; 97 percent in wrong dose. The clinical trial person in you says how do you know that until you really validate it? And, how can you really validate it until you get really good error data, and do you ever get really good error data? Sometimes, but it is pretty hard to get because we have a culture where we really don't know what is going on. But you have to find a way to work in the absence of data. How did we design the nuclear power plant and certify it when we couldn't try it out first? You need to find analytical tools that can allow you to model risk within your organization.

[Slide]

Zale's answer--and this was a pitch that we made to Jim, Jeff Kirkland and Beverly Allen--was that with the addition of the PRA they are confident they will exceed the national goal of fifty percent in five years. He said, as a matter of fact, in medication errors we are going to get 96 percent in the next 90 days. So, when I said earlier that I think there are things you can do, this is one of the reasons. We are not talking about high technology solutions. We are talking about the things that John was talking about, simple solutions that will get you there. Ultimately, Zale says, we have a living model of risk now with which we can measure how effective prevention strategies are. The idea is to have a living risk model.

[Slide]

How many have heard of 6 sigma? Well, it is three defects per million. Can we get there in medication delivery in a hospital? How many people think we can? What is the rate today? The rate is like one in ten if you consider wrong time. Now we want to go to three per million. This is a hospital that says we might be able to move towards 6 sigma, three defects per million. It is within our ability to grasp and understand how, at least in the delivery process of medication, we can get there.
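For a sense of the scale of that goal, here is a quick back-of-the-envelope comparison using the figures quoted in the talk; the arithmetic is mine, and the rates are the speaker's round numbers.

# Current medication delivery defect rate quoted in the talk (wrong time
# included) versus the six sigma target of about three defects per million.
current_rate = 1 / 10
target_rate = 3 / 1_000_000

current_dpmo = current_rate * 1_000_000
improvement_needed = current_rate / target_rate

print(current_dpmo)               # ~100,000 defects per million opportunities
print(round(improvement_needed))  # roughly a 33,000-fold improvement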

[Slide]

A final quote: every system is designed to achieve exactly the result it gets. If the 98,000 number is true, you know, if you buy into that, I think you have those 98,000 deaths because that is exactly what you have designed into your system. You do things in healthcare that are not allowed in aviation. In aviation we said you can't make mistakes. We are pretty punitive that way. But we did learn about human fallibility. The lawyers didn't let us change the rule but we did learn about human fallibility. In an aircraft maintenance hangar, if a mechanic can make a mistake and that mistake would endanger the aircraft, it is a required inspection item and two humans have to make the error; it cannot be one. Yet, in a hospital, when a nurse grabs the drug out of the system, they can make one mistake and kill a patient. Just by design, you have said in healthcare we will allow single failures to lead to death. I think that is bad design. A system designed so that one single human error can lead to the death of a patient is not a very robust design; it is a very thin design.

So, do not believe humans are infallible. To some extent, in some areas you have designed healthcare around the model that the healthcare provider is a superman who can leap tall buildings in a single bound and won't make mistakes. You have to design from the starting point that humans are fallible and put in a number. If you have to start somewhere, say one in a thousand.

Think about your everyday life. How often do you make mistakes? What do you do that is much more reliable than one in a thousand? Get to work on time? Sleep past your alarm? Start with one in a thousand in modeling your system and you will start to see where the risks are. Assume every potential error has a non-zero risk of occurrence. Assume that errors are going to occur and ask, if I have a single path failure, is that a good design, or should I find a way to catch it--and I don't mean a second person necessarily, because you can build in systems where a person can catch their own error right in the process.
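As a minimal sketch of that design argument, using the one-in-a-thousand rule of thumb suggested above (the independence of the two checks is an assumption; dependent checks buy less than this):

# Compare a single failure path against a design where two independent
# checks (or a designed-in self-check) both have to fail.
p_human_error = 1e-3   # the talk's suggested starting assumption

p_single_path = p_human_error
p_two_independent = p_human_error * p_human_error  # assumes true independence

print(p_single_path)      # 1 in 1,000
print(p_two_independent)  # 1 in 1,000,000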

Good risk modeling can identify opportunities for immediate, quantifiable, large reductions in error. There is a lot of discussion about how we learn from errors and how we build a learning system. The point I want to make in this presentation, and I think John does too, is that one of the issues beyond learning from your errors is to go back and look at how we designed our system. Did we use good design principles to design our system? That alone, I think, can get you a really long way, and there are a lot of analytical tools that work in the absence of data to identify where the weaknesses are in your system. I think Zale is one of the first on the road to proving that that is the case. With that, thank you.

DR. LINDEN: Are there any quick questions for any of the previous four speakers before we move on to the next session? Seeing none, the next session is on the role of error reporting. The first speaker is Dr. Jeannie Callum, who is Director of Transfusion Medicine at the Sunnybrook and Women's Hospital in Toronto, Canada. She is going to be speaking about her experience with the medical event reporting system for transfusion medicine.


Role of Error Reporting MERS-TM

DR. CALLUM: I am going to talk about the MERS-TM system that we implemented at our institution in February of '99, as it was designed by Harold Kaplan and James Battles. We implemented it at Sunnybrook and Women's Hospital, which is a three-site hospital consisting of a large trauma center with a huge oncology base; a medium sized hospital that specializes in women's health and has a 50-bed neonatal intensive care unit; and a small orthopedic hospital. We run the transfusion services for all three hospitals. This was funded by Health Canada.

[Slide]

We have two people who really help us run this system. One, shown here, is Lisa Merkley, who is our quality assurance officer; she tracks the errors and implements changes in the blood bank. The other is Anna Lima, who is our transfusion safety nurse. She does all of our education. She goes out to the wards and she develops system changes to try and curtail our error frequency.

[Slide]

I am going to talk about three things. The first thing is the MERS definitions, just what you need to know to understand our data. Then I will show you the descriptive data and talk about our major event types and the effect of our interventions.

[Slide]

These are the definitions. We define errors by severity, and we score the potential because 90 percent of our errors are near misses. A level one has the potential for a fatal outcome or serious injury. Level two, minor or transient injury. Level three, no realistic potential for patient harm.

[Slide]

We also define events by whether they are a near miss or an actual event, and this is how we code them. A one is no recovery, patient harm; two, no recovery, no patient harm; three, a near miss but with an unplanned recovery, somewhat unsafe compared to our number four, a near miss with a planned recovery, such as when, at the time of issuing blood, the system alarms because you failed to irradiate the product before issuing it.
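As a minimal sketch of those two codings, a small lookup like the following captures the scheme described above; the numeric codes follow the talk, while the function and field names are purely illustrative.

SEVERITY = {
    1: "potential for fatal outcome or serious injury",
    2: "minor or transient injury",
    3: "no realistic potential for patient harm",
}

RECOVERY = {
    1: "actual event, no recovery, patient harm",
    2: "actual event, no recovery, no patient harm",
    3: "near miss, unplanned recovery",
    4: "near miss, planned recovery (e.g. system alarm at issue)",
}

def describe_event(severity_code, recovery_code):
    return f"Level {severity_code}: {SEVERITY[severity_code]}; {RECOVERY[recovery_code]}"

# Example: an un-irradiated unit caught by the alarm at the time of issue.
print(describe_event(3, 4))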

[Slide]

We classify them by event categories. There is one set of categories that really applies to the hospital wards: sample collection, sample handling, requesting products and transfusion. Then, there is a set of categories that relates to processes that happen within the blood bank: product checking, sample testing, unit selection, unit manipulation, unit storage and unit issue.

[Slide]

Those are the definitions you need to know to understand our data. This is what the data looks like at our center. We tracked errors for about 36 months. We captured 2,300 events and 98 percent were detected by the blood bank, showing that we have implemented it very well within the blood bank but really have been unable to concentrate our efforts on getting increased capture of events coming from the hospital wards. Ninety percent were classified as near misses; 91 percent were detected before the time of transfusion. Fortunately, no patient harm resulted from the events over the transfusion of close to 35,000 units of packed cells, and no packed cells were transfused to the incorrect patient.

[Slide]

I say that no red cells were transfused to the incorrect patient but I can't be 100 percent sure that that is true. I thought we had our first one about six months ago. I got a page in the middle of the night from a resident. She said, thank goodness you called me back; we have a problem. We just transfused the wrong blood to the patient. I knew it was going to happen, sooner or later it was going to happen. I said, okay, just tell me where are you. She tells me her location. I said, oh, thank God because it wasn't my hospital. It was another hospital in the city.

[Slide]

This is what the data looks like in just gross numbers, the totals for 1999, 2000 and 2001. In red are shown the level ones; in yellow, the level twos; and in blue, the level threes, showing that as the years go on we are getting increasing detection with increasing efforts to capture more and more events.

[Slide]

Looking at the classification of near miss versus actual events, near misses are nine times more common than actual events. In red is shown category one, the events with patient harm; then category two, the actual events with no recovery but no harm; and then the vast majority of them are in blue, the more benign near-miss events.

[Slide]

A lot of people have asked me, well, what kind of frequency do you see, and if I implement the system how many errors are we talking about and how many of them are we going to see? Well, it depends on how hard you want to look. These are the events shown on either side of that line in the middle, events in 1999 and events in 2000 for a large trauma center. You can see that for the period of July to December there was a 50 percent reduction in the number of events reported. You could say we had improved but, actually, the black bar shows where Lisa Merkley, our quality assurance officer, and I both delivered a child within about four days of each other and left for six months. With us not being there, no one was there to reinforce the reporting. The technologists say, of course, that we caused 50 percent of the errors.

[Slide]

The alternative is what happens when you really look hard, which is what we did in June of 2001. On this graph, in yellow is shown our small hospital; in green, our medium sized hospital; and in blue, our large hospital. Every month seems to be stable except for June, where we had very high rates of errors. In June we did an audit. We went out and we started looking for errors.

[Slide]

What we did was what we call our 7/24 audit. We had 163 transfusions happen during that seven-day period. For that period we tracked every other transfusion episode that went out of the blood bank. We couldn't track every single one because we couldn't run fast enough. So, we tracked 98 of those transfusions for compliance with the ten steps that have to happen between when the blood leaves the blood bank and when the transfusion is completed. There was 21 percent compliance with all the steps. So, two in ten transfusions completed all ten steps. That is with us standing at the bedside, watching. Fortunately, 94 percent of those were level threes but four of them were what we call level ones. In three cases there was either an incorrect bedside check or no bedside check. In two of them, they checked it with the chart at the desk and then just walked in and hung the units. In one case they walked right into the room, popped it up and just started it. In one case we had a patient who became hypoxic and hypotensive on our oncology ward. The resident was called and the resident said give them Tylenol, Benadryl, Demerol, Solu-Cortef and call me if it doesn't go away. The patient needed oxygen.

[Slide]

This is what the data looks like for that audit period without those audit cases. As you can see, June was just an average month. So, I know in all the rest of those months there are several major errors that are missing.

[Slide]

This is what the level one events looked like over time from 1999 to 2001. You can see it is a very erratic pattern. Despite major aggressive attempts, we haven't really changed them. Shown in blue is our large hospital; in green, our medium size hospital; and in yellow, our very small hospital.

[Slide]

Over the time period at our trauma center site where we have been working with this for three years, we have been able to show that with multiple interventions over time the percent of events that are level one is decreasing, nine percent in '99, six percent in 2000, and four percent in 2001. So, we are driving that level down but at the same time driving the detection of events up.

[Slide]

This is some of the demographic data you can get out of the MERS system. It tells you when most of your events are happening. Most of our events happen during the daytime shifts, between 8:00 a.m. and 12:00 noon and between 12:00 noon and 4:00 p.m., which are the peak hours of operation.

[Slide]

But when you express that as a rate--red is level one, yellow is level two and blue is level three--the time periods don't matter; it is all the same, with the exception that on weekends we see significantly more level one events, six percent on weekends compared to four percent during the week. Otherwise, all the time periods are exactly the same.

[Slide]

This shows the point in the transfusion process at which the event is discovered. You want all your events, if possible, picked up at unit check-in and before testing. That is a safe process in your hospital. As shown here, we have a good majority of our events picked up early, but we still have some that are detected after transfusion or, worse, at the next test for that patient. So, you really want to drive detection back towards before testing.

[Slide]

Here is shown the job of the person involved. We don't record any names, just the job description. Definitely, the nurses are leading in the number of errors. I think that is because they multi-task and they are expected to do multiple different things. Nurses and physicians in the wards account for about 60 percent of all events; the blood bank, 35 percent; and everybody else, about 5 percent. Those are essentially the same numbers that were reported out of the SHOT data and elsewhere. Ours are near misses and those others are actual events, so the near-miss numbers are very similar to the actual-event numbers.

[Slide]

The system tells you what happened because of the events. It tracks the number of times you had to recollect a sample, 450 times in our database; record correction; phone calls to the wards; products destroyed, etc.

[Slide]

We have attempted to calculate what error recovery cost us in the year 2001: about $127,000. The majority of that cost is destroyed products and dedicated staff time to intervene on the errors and investigate them.

[Slide]

That is what the basic data looks like and what we tried to do. The first thing we tried to do was prevent errors in product check-in.

[Slide]

In Canada, this is what our blood label looks like. Up until October 15 of this year we had no expiry date on the bag. So, we had a collection date, which is shown just below here. Here is the collection date. This is new, just added. So the technologist, for each unit, would have to look at the date and then look on a chart with a ruler, and they would go across and say, okay, 42 days later would be X date. They did that over and over again. So, you can see why the error rate would be so high. We brought this to Canadian Blood Services' attention and, lo and behold, we now have an expiry date and, hopefully, soon it will be barcode readable.
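The calculation the technologists were doing by hand with a chart and ruler is, in effect, just a date offset. A minimal sketch, assuming the 42-day red cell shelf life mentioned above (purely illustrative, not any blood bank's actual software):

from datetime import date, timedelta

SHELF_LIFE_DAYS = 42  # red cell shelf life quoted in the talk

def expiry_date(collection_date):
    # Expiry is simply the collection date plus the shelf life.
    return collection_date + timedelta(days=SHELF_LIFE_DAYS)

print(expiry_date(date(2002, 1, 3)))  # 2002-02-14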

[Slide]

These are the numbers of events shown over time. The marked point is where we had implementation on October 15, and I am happy to say that for January and February we had no errors of this type.

[Slide]

For sample collection--this is a big issue--the first problem is that the system is very manual. These are the hospital cards we use for patients. You can see that they all look the same, so it is very easy to pick up the wrong patient's card.

[Slide]

When we look at these events shown over the three-year time period, in blue is shown level three and in red, level one. You can see that the frequency hasn't dramatically changed over that time for our entire hospital, despite a blitz of education in 2000 and hiring a transfusion safety nurse who does continuous education and transfusion rounds--really no change despite education.

[Slide]

We are aggressively looking at mobile, hand-held computers and I am very optimistic that this will help. We did a trial of this for two weeks in August of 2000. Our baseline error rate in our emergency department was three percent. We implemented it there because that is where we have the highest frequency of events, hoping that we could see some improvement.

During that trial we processed 67 groups and screens and had a seven percent error rate, but all of those errors related to improper implementation. The system was not interfaced with our normal computer system so everything was manual. But all of those problems were fixable, so I am very optimistic about it.

[Slide]

In the interim, until we have the funding--probably about a million dollars to implement a hospital-wide wireless network across the three sites--we are using something in our emergency room that is very simple and cheap. Actually it cost us nothing. Every time a patient comes in, stapled to the admission record that the nurse takes to the bedside is a sheet of pre-printed labels with the patient's name. She has to take it to the bedside because she has to write the whole history on it. When she takes her blood samples she labels them up so that everything is labeled right at the bedside.

[Slide]

We implemented this in March of 2001. Shown here you can see we haven't had a complete reduction in errors, and the two blips of continuing errors of this type that we have had relate to failure to use the new system and falling back on the old Bradmus system. So we are continuing to use this because we think it is better than the old system, but it shows that it is just not enough. You are going to need a much more controlled system to effect this 100 percent of the time.

[Slide]

What we think is very useful is a dedicated staff for phlebotomy. Shown here are three different departments at our hospital. One is our outpatient, same-day surgery pre-admission program where there is a dedicated staff to do phlebotomy and that is what they do and they really don't have any other task that they have to do. They are shown in yellow. They have a very low event rate. Shown in blue is our emergency department and, in red, our labor and delivery with much higher total numbers of errors. When you express it as a rate, the error rate is much, much higher in emergency and in labor and delivery. When you have a dedicated team who are concentrating on a specific task they do a much better job.

[Slide]

Here, shown for sample collection errors, is the same thing. Our dedicated staff is shown in yellow, and hardly any sample collection events.

[Slide]

The third thing we tried: by American Association of Blood Banks standards you have to have a signature, some way to identify the phlebotomist. If you don't have it you reject the sample.

[Slide]

On our old form or old requisition, it was not surprising that half the time they forgot to sign. We changed it to make it really obvious--please read and sign. Unsigned requisitions will not be accepted. And, we thought this was going to fix it.

[Slide]

This is where we implemented it, in July of 2000, and really no change. I am still very shocked that we didn't even get a 15 percent reduction.

[Slide]

It is sort of like this, you know, first your pants, then your shoes--some things you think must surely work, but nothing is that easy.

[Slide]

The next thing I think is probably the most important thing, product request. We had a number of cases of ordering blood for the wrong patient, because they pick up the wrong card and get two patients mixed up, or of ordering the wrong product. The physician writes five units of platelets and it gets transcribed onto the blood requisition for ordering as five units of plasma. If I am going to have a major error, this is where it is going to happen, at least in my institution. These are more frequent than anything else.

[Slide]

We tried education. It had really no impact on the number of events in 2000. So we chose an area in the hospital to try a new thing. In March of 2001, we sat down with our anesthesia colleagues and said, look, in our cardiovascular ICU patients it is really frequent. We are getting a lot of orders, either from the OR or from the CCU, for a product for the wrong patient or for the wrong product. The anesthetists thought part of the problem is that no patient coming out of cardiovascular surgery has an armband, because they have a heart line in one arm, an IV in the other and both legs have been prepped to get vein grafts. So, they have nowhere to put an armband. So we came up with the idea that when the armbands were cut off in cardiovascular surgery they would be taped on the forehead of the patient, so the patient was always labeled. Within one week every patient in the cardiovascular ICU had their name on their forehead. I think most of the nurses thought I was completely crazy.

The second thing that we did was that the blood request forms and the pick-up slips for blood were stamped preoperatively, in an unstressful time, and hole-punched into the binder for the patient so that, at a stressful time when a patient was massively bleeding, someone wasn't running over to find the card, stamp up four sheets and get everything ready. Sure enough, we have almost completely obliterated that type of error in our cardiovascular ICU, shown in blue. The control is our critical care unit, just on the other side of a door, with the same types of nurses and really the same types of patients, showing that that event keeps going on in the critical care unit but we have obliterated it in our cardiovascular ICU.

[Slide]

We are going even a step further, now that we have seen that that has worked, and we are introducing pre-printed doctors' orders for transfusions with a number of checks on the form so that the blood bank technologist can check and make sure they have the right patient. If they are ordering red cells, we request them to write down the hemoglobin. If the hemoglobin doesn't match what is in the computer system we question the order. We already do that informally; frequently we will get an order for blood where they write down on the form hemoglobin 78, we check it in the system and it is 100, and we call up and say we think you have the wrong person.
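That cross-check is easy to picture as a simple rule. A minimal sketch in Python, where the 10 g/L tolerance and the function name are illustrative assumptions rather than the hospital's actual criterion:

def order_needs_review(ordered_hgb, system_hgb, tolerance=10.0):
    # Flag the order if the hemoglobin written on the requisition is
    # implausibly far from the most recent result on file, which may
    # indicate a wrong-patient order.
    return abs(ordered_hgb - system_hgb) > tolerance

# The example from the talk: the form says 78 g/L, the system says 100 g/L.
print(order_needs_review(78, 100))  # True -> call the ward and question the order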

[Slide]

Lastly, when we amalgamated with our women's hospital lab, which is the medium sized hospital, we put in a whole bunch of different interventions. This is what our Sunnybrook blood bank pattern looks like. Shown in blue are the more benign errors; the serious ones, in red, you hardly see. It is a very safe lab. We worked very hard to automate the lab and put in very good process control, and it works very well.

[Slide]

In contrast, prior to the red arrow--that is, before we had done any intervention at this site--we had no reporting. They weren't bringing any events forward. You should have lots of threes, a few twos and the rare one. They weren't bringing any of the threes forward. We had terrible reporting. All that was coming out were the really nasty ones that forced their way to detection, and people were trying to hide errors.

So, in May we went down and we tried to change the culture, and we tried to implement the MERS system and say, you know, this is what we would like to do. In May, for the first month, we got great reporting. But then there were a couple of really nasty errors, the technologists felt very bad, and they reverted back to the old culture.

We went in, we changed the computer system; we changed a lot of the standard operating procedures. We standardized some of the testing for antibody investigations and we put in a new site supervisor, probably one of our best technologists at the other site, and we moved her down there. Within two months we had the identical error frequency pattern that was happening at our Sunnybrook site just with improvement in policies, a good computerized system and everything is nice and smooth, and we have the same pattern for January.

[Slide]

In conclusion, the MERS-TM system allows us to recognize and analyze events, determine patterns and monitor events after corrective action. Errors are usually the result of poorly designed systems. Error correction is difficult and I think it is going to be quite expensive, but likely cost effective.

[Slide]

This is my favorite quote: a pessimist sees the difficulty in every opportunity and an optimist sees the opportunity in every difficulty. That is it.

DR. LINDEN: The last speaker of this session before the break is Dr. Lee Hillborn, who is professor of pathology and laboratory medicine, and director of the UCLA Center for Patient Safety and Quality at UCLA Healthcare, in Los Angeles. The topic is, is error reporting in clinical settings worth the effort?


Error Reporting in Clinical Settings:
Worth the Effort?

DR. HILLBORN: Thanks.

[Slide]

I will try to talk a little bit quickly in the interest of trying to catch up some time from the additional interesting information that we had this afternoon.

I am from UCLA Healthcare. I am a pathologist but I am very much involved in our safety and quality program, both at the UCLA Center for Patient Safety and, as Dr. Battles talked about earlier, in some of the activities of AHRQ. We are one of the developmental centers sponsored by AHRQ for California, known as the Strategic Alliance for Error Reduction in California Healthcare, for safer California healthcare. I will tell you a little bit about what we are doing.

[Slide]

When it comes to error reporting, I would like to talk basically in the next couple of minutes about whether we think it is worth it. First of all, how do we do it? Well, our center for patient safety really doesn't own anything. It is sort of diagrammatically shown here at the center of this propeller but, in fact, we involve a number of different organizations within our hierarchy to work on those areas.

So, four Center for Patient Safety and Quality stakeholders are actually involved in areas of reporting: certainly our risk management department, particularly the activities they do that relate to risk reduction; our information technology group, which is working to develop better systems for error reporting; our performance improvement group, which is particularly related to quality; and our human resources folks, as we are working and trying to create that culture that will encourage reporting. I think we are sort of between the just culture and the learning culture. I think that we are still on our way to adopting a responsive culture at this time.

[Slide]

There are lots of reasons why not to report. Our systems really discourage reporting. There is guilt, blame and loss of respect. People fear that they are going to be disciplined despite the efforts that we have started so far to try and minimize that. In terms of events, many of my clinical colleagues are subjective about how they interpret what is an error or not, and I think that that minimizes some of the reporting that we get. It is time consuming. We are working on that as well, as I will discuss, and certainly the risk of legal discovery may, in fact, impact that.

[Slide]

So it is really not surprising that there is rampant under-reporting. First of all, we don't know very much about captured events, and not much is known about the factors that influence reporting. We know something about the numerator, but we don't have good data at the practice level on the denominator, though we do know that only a fraction of the things that happen are reported.

The primary purpose of reporting in general, up until recently, has focused on risk and claims management types of issues: early notification of potentially compensable events so people can respond. The roles for performance improvement related to quality and risk reduction have, up until recently at least, been secondary at best.

[Slide]

Just to give you some perception of where we fit in, our hats are off to our nursing department, as others have described, as champions for reporting. What you see here is that of all the groups collectively that report--this is over a two-year period at our organization--the vast majority of the reports we get are from nursing.

I would add that actually our pharmacy, and I will come back to that in a minute, intercepts an additional 11,000 errors annually. Those are near misses because everything was intercepted, but the number of errors is about a thousand a month in terms of the way that orders are written for pharmacy alone. So, certainly, there are issues with laboratory and other areas that go undetected until they either get into a reporting system or are simply not reported at all.

[Slide]

Of course, because most of us don't report errors, the reported errors that we do get are naturally biased. So they tend to reflect nursing issues. There are a couple of them here that represent laboratory issues, a small fraction of them being transfusion medicine, but for the most part the reporting that we get is related to unit issues.

[Slide]

Commonly reported are medication problems, IV problems, patient falls, transportation issues and some laboratory issues, misdraws for example. Rarely reported are things like misdiagnoses, surgical complications, inappropriate treatments, the kinds of diagnostic errors that might point us in the direction of a system problem, or issues of communication or junior staff supervision in our teaching hospital, where we think that turns out to be a pretty big issue. Yet, those are common themes of what we see. When we examine liability claims, they point to the under-reported errors: poor communication, provider to patient and provider to provider; patient identification issues, which we heard about from Dr. Sazama this morning, a very big issue over and over again; inadequate documentation; lack of supervision; and some apparently careless or random mistakes. But if one really goes to the root cause, one identifies an inability to perceive potential risks as a contributing factor.

We are working on being more sophisticated about the data we use in terms of event reporting. We are now starting to look at secondary analysis of data that are collected for other purposes, and to seek new data sources to identify unsafe situations. As I will share with you in a minute, we are undertaking ways to improve event reporting and provide feedback regarding the findings to encourage change.

One of the biggest problems we had is that all of our event reporting went into what was really a black hole. The people who reported had no clue as to what was happening with the reports that they filed.

[Slide]

If one looks at it as we are talking about patient safety, we collect all kinds of data here. I originally discussed this at the data conference, about a year ago, that was sponsored by AHRQ. There are a lot of different data sources that we have. Certainly, event reporting, which goes into our risk management system, is one area that we need to look at. But, in fact, clinical diagnoses and procedures, our pharmacy data and so on have all now been used to contribute to identifying opportunities for performance improvement and leading to patient safety.

I think the point here is where risk management sits in terms of safety and reporting. Traditionally, many of our risk management departments have focused on claims management, where, in fact, the two main foci of risk management are claims management and risk reduction. If we are doing a really good job on risk reduction, which I perceive to be the patient safety that we hopefully will have in the future, there will be fewer claims to manage.

[Slide]

At UCLA we are actually now currently looking at pulling in our risk data and data from other sources, from laboratory issues, including transfusion medicine, from our coded data, from our event reporting and others into a system that we call quality tracking for performance improvement. People call it "cutie-pie." The programmers aren't too fond of that name actually because they are guys but, in fact, nobody forgets the name.

Basically, we have piloted it now in our big services. What it does is standardize the way that we report events so that we can track and trend them and analyze what we find. The information can then be pulled out for the purposes of benchmarking, as well as peer review, and then reporting to some of the reporting agencies that we talked about this morning. This has actually been fairly successful. We didn't know that we could get three major services to work together on that but, in fact, they have, and we are going to implement it in the next couple of months organization-wide.

[Slide]

We don't yet have computerized physician order entry, and most places, as you know, don't, but the laboratory has been working very closely with the pharmacy to identify unsafe situations and start to report those back in terms of the patient profiles so that, in fact, the information is available. Of course, if you can't read the orders, neither can the pharmacy. We saw some of this earlier; the faxing issue as well as sloppy handwriting are really key issues in terms of safety and reporting, irrespective of the type of activity that is specifically being reported.

[Slide]

I talked about the pharmacy intervention system. Here are the data, slightly modified because my attorneys told me that I couldn't present our actual data because of the issue of peer protection, although this is pretty close. Basically, there are a number of errors that the pharmacy intercepts, a huge number that, in fact, would have been potentially dangerous had they not been caught. Despite all of our efforts, education and feedback regarding all of these events, we continue to have about the same number of errors, and as a result of looking at our data and the reporting of these data we have actually spawned a new group to look at how we can provide better information.

[Slide]

So medication error data have really served as the basis for a new committee. I hope that when I talk to you or when I see you within the next year I will be able to tell you that we have, in fact, seen improvement in that area. Our transfusion service audits blood administration and reports the findings. I was actually very pleased to hear that in other institutions the process is actually only done completely accurately twenty percent of the time. We have found the same data. I was pretty much appalled at that, as was the rest of our organization. So we are focusing on that with hopes that, in fact, we can improve that.

But when it comes to both medication administration as well as transfusion services, we know that our staff know what to do; the problem is that, in fact, they often don't do what they know how to do. So, basically, we have a high rate of at-risk behavior, as was identified just an hour ago, and it is something that we need to focus on as we move forward.

[Slide]

We are educating our staff about some of our ongoing processes, and we have a web presence. Many of the components of this are actually available to you if you want to go and see some of the things that we are doing as an organization. We put a lot of the safety issues right at the front. When people have to go to our web site for other purposes, like to page a doctor, or get their clinical privileges, or find the Joint Commission manual, in addition to all of that they see safety information.

[Slide]

We also empower our patients, and this is actually something new as well, to report possible errors. The University of California is sharing strategies and concepts. This is a patient education brochure we are introducing now at our medical center. I went to our CEO about two years ago and I said that we ought to tell patients that we make mistakes. Our attorneys--I am not blaming the attorneys but they said to me that will happen when hell freezes over.

We modeled what we did after work at UC Irvine, although they took a slightly different approach. When they did it, I went back to our CEO and said I have to tell you something. He said, what's that? I said that hell is freezing in Irvine.

[Laughter]

So we actually have now a patient safety brochure. I brought about ten of them. I would be happy to either have copies available for you or it is available to look at on the web site if you want to see a copy of this brochure. We are happy to tell patients mistakes happen. We ask them to be partners with us in terms of improving safety.

[Slide]

I mentioned our developmental center. The University of California has a process now to collaborate on safety issues. If you want to get to our web site you can go here and click on our UC campuses and find out what is going on at the various campuses. We are putting up links actually as we speak. The link that is there right now is the UCLA because we are sort of driving this process.

[Slide]

We are learning to share information technology when it comes to error reporting. University of California Davis has an electronic intranet based system for incident reporting. Their evidence, when they first rolled out the process, showed that event reporting increased three- to four-fold once they put it in. Not only did it increase, they actually started to get reporting from areas that hadn't traditionally reported, that is to say, other than nursing.

We have agreed to implement this on our five campuses. Our attorneys tell us that because we have one governing body we can share information. San Francisco has actually implemented it about a week and a half ago. The other campuses are to follow. We are happy about that because they worked out all the bugs dealing with sharing. Together we are also going to modify what to do based on input from the University Healthcare System Consortium safety net where they are working on it as well.

[Slide]

The reality is that by sharing event-reporting information we have already identified some risky situations that can be resolved. One of them was removal of central lines. Many people don't realize that it requires special care to avoid air embolism. Those who commonly do it believe that the steps that need to be taken are obvious, but trainees and other physicians were less aware of the risks.

Together we identified, by sharing the data, that several campuses had experienced problems and now we are working together on solutions that I think are very innovative and fall into about the five to ten dollar level, as was discussed just an hour ago.

[Slide]

This is an interesting program. This is something I wanted to do last year and we pulled this off too. We learned that feedback is critical. For one thing, we reward reporting. I went to our director of nursing, the lady shown on the left, and I said I would like to do something during nurse appreciation week. She said, what is that? I said I would like to give an award to the unit that reported the most mistakes. She said, well, that is ridiculous. I said, well, not really, what I wanted to do was to reward the honesty of reporting, not the fact that mistakes were made. I am very comfortable that there are enough mistakes out there that somebody who wants to win next year can simply report the ones that are there rather than create new ones. But we rewarded them with pizza because of the first two letters of pizza, "PI." What you see here is Pat Byrne, who is an absolute nurse manager, coming up sheepishly to accept the award for having the unit that reported the most errors. Again, we identified that we were awarding honesty, not errors.

Also, in the last six months we started sending a thank-you letter to anybody who identified themselves in reporting an incident. At first people were really confused--what did I do? Did I do something wrong? But, in fact, after that they began to realize that we were being very positive about it. What we told them was that they were making a contribution to improving patient safety, and we asked them to share that with their colleagues.

We share recommendations and findings at committee presentations, on a web site and in a newsletter that we just launched last month, called SAFER, for the Strategic Alliance for Error Reduction. We feed back the information that we find. So we set a priority on reducing medication errors and we said what they are, some of the key issues that we found. And we talk about our new, upcoming electronic incident reporting system, because we are the last place to do it.

[Slide]

What I can tell you is that one of the safer California healthcare campuses, and it is not us, has taken the lead. What has been done? They have encouraged reporting. They have shown that they appreciate the staff and they appreciate it when they report. I guess I have already told you who it is because they are the ones where hell was freezing. But they have informed patients about mistakes. What has happened? Their malpractice is the lowest it has ever been. They have the highest employee satisfaction among all of our campuses. They have increased patient satisfaction, and they are getting community recognition. So is error reporting a good thing? I think the answer is unequivocally yes.

[Slide]

I think we believe that encouraging our reporting has been a very positive process. When an event occurs we have encouraged reporting. We perform root cause analyses. We get people together. We have really encouraged that blame-free environment. We have instituted real system changes, many of them related to laboratory medicine and a couple of them related specifically to transfusion medicine, resulting in risk reduction.

We have taken victims, people who have made mistakes, and turned those into champions for change. I think that is a real positive message that, in fact, yes, we all make mistakes but having those nurses or other staff involved be champions for change rather than the second victims is a key issue because when that happens what we are hoping to see, and what we are starting to see now, is increased reporting. So I am very much an advocate of reporting. I think we are starting to see it in our system and I frankly hope that you are too. Thank you.

DR. LINDEN: Does anyone have any quick questions of the two previous speakers? If not, we will take a break and we will reconvene at 3:30.

[Brief recess]

DR. LINDEN: We are going to continue the topic of the role of error reporting, and the next speaker is Sharon O'Callaghan, who is a consumer safety officer with the Office of Compliance and Biologics Quality in CBER, at FDA. She is going to be discussing biological product deviation reporting, what have we learned?


Biological Product Deviation Reporting:
What Have We Learned?

MS. O'CALLAGHAN: Thank you, Jeanne.

[Slide]

It is a pleasure to be here, as always, to talk about BPD reporting. This is going to be a little bit different for those of you who have heard me talk about this topic before: a lot of similar information, but focused a little bit differently.

To answer the question of what we have learned, well, nothing that is surprising yet; that is, mistakes are being made in the blood banks. We are now getting the reports from the transfusion services and the unlicensed blood banks, which has increased our reporting significantly, but it is focused a little bit differently now than what we typically had received from the blood centers, and that is really what I want to focus on.

We have had some very interesting approaches to identifying root cause and contributing factors, and also the follow-up actions and I will highlight some of the examples in the reports that we have received.

[Slide]

To start off, the comparison of the reports that we have received over the last couple of years, between FY2000 and FY2001, shows the increase. It is not a significant increase yet as far as the total numbers, but what I want to highlight is the unlicensed blood banks. In FY2000 there were 52 facilities submitting 125 reports. In 2001 there were 238 facilities submitting 1,015 reports. This really represents an increase as of May 7 of last year, when the final rule was implemented.

Also, with the transfusion services, 19 facilities reported 53 events in FY2000, and in 2001 there were 2078 transfusion services reporting 536 events. So we are starting to see that increase in those types of facilities reporting.

[Slide]

What I want to do is really focus on the events that have been reported by the transfusion services and the unlicensed blood banks. We are going to break it down into these four major categories of events. We will go through each one of these categories in a little bit more detail to give you specifics of what types of events have been reported within those categories.

The largest percentage of reports falls under QC, the quality control and distribution category, followed by labeling, routine testing and component preparation. The specifics that I will be providing capture the total number of reports for the two facility types combined.

[Slide]

We start off with QC and distribution, the 564 reports submitted by these two types of facilities. Most of these fall under improper blood bank practices. Within that category there were 298 reports, most of them within the patient classification not met criteria. These are events where a particular patient requires some particular type of unit, a leukoreduced, irradiated or CMV-negative unit, and that unit is not provided. This is information that the blood bank has either from a specific order from the physician, or from the history of that patient within the blood bank, or it is their procedure to always issue irradiated units for a particular type of patient. So irradiation and leukoreduction make up the greatest percentage of those.

[Slide]

Also included under improper blood bank practices is improper ABO/Rh selected. This would be things like where a patient may have had a bone marrow transplant and, instead of the blood type that he actually is, requires an O-positive unit or something like that, and that particular blood type was not provided, or any other situation where a particular ABO/Rh should have been selected because of the patient's circumstances and the incorrect one was selected.

Improper product selected includes things like platelets being issued instead of fresh-frozen plasma. Unit issued to the wrong patient is from the blood bank standpoint, not at the nursing level, not at the time of transfusion; it would be reportable if the blood bank issues the unit for the wrong patient. But if the correct unit is issued and the nurse goes into the wrong room and transfuses that unit, that would not be reportable under the BPD reporting system. There is still a significant number of those events happening.

Also, release procedures not followed, and this ranges anywhere from a final visual check not being performed to documentation missing that identifies that everything was checked and everything is okay.

[Slide]

Also under QC and distribution we have inappropriate release. These are basically products that were released that shouldn't have been, where the testing was not performed. I will have a little bit more detail on that in a minute.

Incorrect, incomplete or positive testing, there were 23 of those. Most of these are incorrect or incomplete; very few involved any kind of positive testing. This really refers mostly to antibody and antigen testing, and also the crossmatch. If the crossmatch wasn't completed before the unit was issued it would fall into this category. For medical history there were only two, and that is if for some reason the unit was identified as being unsuitable because of the medical history that the donor had provided or, for example, an unlicensed blood bank that does its own collection may have a unit that failed to be quarantined appropriately.

Unsuitable product released, there were 94 reports within that group. Most of those were clotted units or segments. I want to clarify that for the transfusion services a clotted unit or a clotted segment would not be reportable if that clot was identified after the product had been transfused or during the transfusion process; it would be reportable by the blood center that collected the unit. So most of what is captured here is in the unlicensed blood bank arena, where they are actually the ones collecting the unit. A transfusion service would be responsible for reporting a clotted unit if the clot was identified either upon receipt from the blood supplier and the unit was issued anyway, or if a clot was identified before issue and the unit was released anyway.

Outdated product released, we had 25 of those. I think a number of these occurred because the night shift techs were supposed to pull the units off the shelf and do that daily inventory of taking all the expired units off and, for whatever reason, it didn't happen. So units were released the next day without checking; people figured that the night shift tech had taken care of it, nobody looked at the label, and they went ahead and issued the unit. Shipped or stored at improper temperature represented 39 reports.

Under inappropriate release where the testing was not performed, most of these were antigen screens or antibody screens where the patient was identified as having an antibody but the units weren't screened for the corresponding antigen and the units were released. For the antibody screen, the patient had a history of an antibody and the screen was not done to make sure that that was the only antibody the patient still had, or it was a new patient for whom they didn't complete the whole antibody screen.

The recheck of the units for ABO wasn't done. Crossmatch testing not performed involved things like immediate spin crossmatches that weren't taken to the Coombs phase when they should have been.

ABO and/or Rh accounted for five reports where neither test was done. Then there were some others for hemoglobin S, and a couple of reports where multiple tests were not performed.

[Slide]

Under labeling--and this is something that we have heard quite a bit about--is recipient identification. The recipient identification was incorrect in 103 of the reports that were submitted by these types of facilities. This includes the identification on the crossmatch tag as well as the transfusion record. Most of these involved things like switched numbers, where the patient's identification number was off by one number or missing a digit, or the name had W as the middle initial instead of M. Most of these were clerical errors of copying down the wrong information, not events with a true potential of the unit going to a different patient. But given what we have learned today, in some of these cases, where you have a junior and a senior, putting Jr. instead of Sr. could make a difference for that particular patient. A lot of people have asked why we need to send these in, since they are really not that significant. Well, in most cases they are not causing harm to patients, but there is the potential to cause harm if the wrong patient gets the unit.

Extended expiration dates are involved any time a product has been modified, such as thawing FFP, where the expiration date has to be changed and it wasn't; therefore, the unit carries an extended expiration date. When pooling platelets, the expiration date needs to be modified and it wasn't. Donor or unit number incorrect, again, is where it was incorrect on the crossmatch tag or the crossmatch slip. ABO and/or Rh incorrect is where either the donor ABO and/or Rh on the crossmatch tag or the transfusion record, or the recipient's ABO and/or Rh, is misidentified.

[Slide]

Under routine testing we have incorrectly tested for compatibility. Many of those have to do with the incorrect sample being used for the compatibility testing, an old sample being used; they were captured under that code last year, and we actually have a separate code for that particular event for 2002. For the antibody screen, the reagents were not used properly or were not added in the right order. Rh seems to be another area where it was actually just tested incorrectly; the technique was not performed properly. Incorrect interpretation is also included under incorrectly tested, so they may have done the test correctly but wrote down the wrong result or interpreted the results incorrectly.

Under sample identification--this is another one that we have heard quite a bit about--sample incorrectly or incompletely labeled, there are 43 reports relating that type of event, and 14 reports relating incorrect sample testing. Now with sample incorrectly or incompletely labeled, again, most of those are the switching of letters or the switching of numbers, not a completely different name on the sample. Most of these were actually caught before the unit was transfused. Some of them probably should have been caught before the units were crossmatched, where the blood bank had some other information so that they could have verified the information on the sample against the labeling in the blood bank computer, or the crossmatch slips were sometimes different so that it could have been caught at that point, but sometimes they were not caught until the next time. The ones that are not caught until after the unit is transfused are caught the second time a unit is set up. Sometimes when the crossmatch slip is brought back and another unit needs to be set up, they pull that sample tube and somebody looks and says, wait a minute, this isn't the right sample or this isn't labeled right; the initials are missing. It takes somebody else to see that.

[Slide]

One of the things that I wanted to mention here is that even though we have seen a large number of reports coming in from these facilities, most of these reports have not indicated that there has been any harm to the patient. There have been very few where there was actual harm to the patient, where there was an actual reaction. There have been a few but not very many. Most of them are causing no harm. So these are really in that near miss, no harm category that was discussed earlier, but these are certainly situations that can be learned from so that you don't have that hit, that actual event that could cause a fatality.

Under component preparation we have procedure not followed for leukoreduction, irradiation, pooling, thawing or washing. Most of the time the leukoreduction and the irradiation would be something that an unlicensed blood bank would be doing. There is not very much under component preparation; more falls under labeling, routine testing and failure to issue the appropriate units.

[Slide]

What I want to do now is go through some examples of root cause and contributing factors that have been listed on the reports, just as examples of things that have been reported. This information comes only from the electronic biological product deviation reports we have received, because we were able to do queries based on the information in that electronic database, as opposed to doing queries from the hard copy reports, where we don't enter this information into our database. Even though it is on the reports, it is difficult to go back to those hard copy reports. So that is one pitch for using the electronic form to submit your reports, because we will be able to get a lot more information out of those reports.

This seems to be a really popular contributing factor, and I hope people are thinking of it as a contributing factor and not as a root cause--busy and short-staffed. That really is a contributing factor; it is not a root cause. As we have mentioned several times today, being short-staffed is the life of a blood banker. You are always going to be short-staffed. You can't use that as an excuse for why things happen and move on to the next thing until the next thing happens and you say, "well, see, I told you it's because we're too busy." You need to look at what you are doing and how your process is working so that you can try to overcome that, at least in some cases. You are not going to be able to overcome it in all cases, but there are things that you will be able to do and that you can take a look at. All right, we know that there is always going to be one person on night shift. If that person gets overwhelmed, how are we going to process this work? How are we going to get these units out the door safely?

Clerical error, handwritten, manual data entry is another one that seems to be very popular, especially with any of the labeling things. Again, that is the human element involved, but is there a way that you can try to eliminate or at least minimize some of these deviations that occur because there was a clerical error?

One of the things in trying to identify root cause is that there are a number of ways that you can identify it. We have heard about the causal tree earlier. Another method is asking why five times. You made a clerical error; why did you make this clerical error? Find out what was going on to cause this. That is really what we want to get you to focus on, the root cause and not just the contributing factors.

Inattention to details is another one, and the fix to inattention to details is to remind the employee to follow the SOP. That works maybe for that moment, and that is about it.

Special orders overlooked, this is very common for the events that I just stated under the inappropriate unit being released for special products like leukoreduction or irradiation, for those types of products. The information is in the blood bank; it is just missed. Maybe it is not in the right place. Maybe it is not in the place that is easily accessible to the tech.

Additional special needs in the comments section is another one. If the patient requires an irradiated unit, a leukoreduced unit and a hemoglobin S negative unit, and the hemoglobin S negative information is at the bottom of the page or buried some place in a comments section that doesn't prompt the tech to look at it and say, "wait a minute, there are multiple requirements here", then they are going to miss it. That is going to be a very easy thing to miss. We have seen this with the donor deferral area as well, where there are multiple reasons for the donor being deferred. Sometimes they are not all captured appropriately in the computer system.

Another one is an emergency situation where the nurse wants the unit now. Is that really an emergency, or are you trying to get her off your back and say, here, take the unit and just go; I don't have time to deal with you now? That is something that is going to have to be taken up at a higher level outside of the blood bank. When you get the nursing staff involved in something like that you need to identify what your true emergency situations are, what your procedures for those are, and how you can handle these situations.

[Slide]

Other root cause and contributing factors that have to do with computers are that the computer warning was overridden. I have seen this on a number of reports, and it just looks to me that there are so many warnings that pop up throughout the process that after a while the techs probably don't pay attention to those so they just ignore them. If they can override them, they are going to because they want to get this done. They are ready to go to lunch; they are ready to get off; they want to get this done. They don't want to have to find a supervisor, come back and override this or approve it if it needs to be overridden. Take a look at who is allowed to override your warning systems.

I was surprised to see that the computer warning remains on the screen only for a short time. It was something like 30 seconds. The tech had entered this information in the system and it flagged up that the patient required an irradiated unit. She went to answer the phone, came back and the message was gone so she went ahead and the computer system allowed her to process that unit through without it being irradiated.

Computer presents one warning regardless of the number of special needs or requirements. This is another one that is big with donor suitability: with a temporary deferral and a permanent deferral on a donor, when the temporary one is deleted because the time frame has passed, the permanent one goes away too.

The same type of thing here where there is only one warning that the unit needs to be leukoreduced but it doesn't identify that it also needs to be irradiated as well. They say, okay, this is leukoreduced and this is okay to issue, and they override the computer warning.
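As a hedged sketch only, and not any particular vendor's blood bank software, the idea behind fixing this is to keep each special transfusion requirement as its own structured flag and to refuse issue until every active requirement has been satisfied, rather than relying on a single warning that can be overridden once. The data and names below are invented for illustration.

    # Minimal sketch: list every unmet requirement instead of one generic warning.
    PATIENT_REQUIREMENTS = {
        "MRN12345": {"irradiated", "leukoreduced", "hemoglobin_S_negative"},  # example data
    }

    def can_issue(unit_attributes, patient_id):
        """Return (ok, unmet) where unmet lists every requirement the unit misses."""
        required = PATIENT_REQUIREMENTS.get(patient_id, set())
        unmet = sorted(required - unit_attributes)
        return (len(unmet) == 0, unmet)

    ok, unmet = can_issue({"leukoreduced"}, "MRN12345")
    if not ok:
        # Each unmet requirement is displayed separately so none can be missed.
        print("Do not issue. Unmet requirements:", ", ".join(unmet))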

Computer presents more warnings than required. Take a look at what warnings are being presented. I know there are some computer systems out there that you are not going to be able to change, but you need to take a look at that and see what your computer is really telling you and how many times it is giving you a warning throughout the course of issuing blood and crossmatching units.

If the computer is down backup procedures are not followed. This is another area where you need to make sure that all of your procedures are in place for when the computer system is down and that everybody knows how to do it. It is only going to happen once in a while, we hope, but everybody needs to know what they need to do if the computer is down.

[Slide]

Some of the follow-up actions that we have seen have been varied. Actually, I have to comment on the previous lecture about the pizza being given to the nursing staff. There was actually one report, which may have been from UCLA, I don't know, that said in the follow-up section that they commended their nursing staff for reporting this to us because it was something that was detected prior to the unit being transfused. I have only seen that in one report so far, but I thought that was at least a nice way to try to get the information from the nursing staff.

Follow-up actions we have seen reported include reminding the tech to pay attention or counseling the employee. Again, you are focusing on the human element: pay attention to what you are doing; I don't care how busy you are, just be a little more careful. Well, after telling somebody that ten times, how effective is that?

This is another one that I was a little surprised to see: email sent to remind staff to follow the SOP. When I read that in this report I immediately envisioned a tech coming in to work in the lab, opening up their email (and I don't know when a tech in a lab would have time to read email) and finding ten emails in there, each one saying please remember to follow SOP number ten, please remember to follow SOP number two, please remember to follow SOP number 15, please remember to follow SOP 12. How effective is that? There was no indication on the report that there was any follow-up action after that. It looked like that was all they did to fix this problem. It might be great to send an email to people and say, this is what just happened; we just learned of this event; please be aware the next time you issue blood to check the labeling, or whatever, to get a quick message out to everybody easily. But you need to follow that up with some kind of in-service or some other kind of training, looking at the system to see if there was another way this could be fixed.

A new log created--this one was as a result of problems with documenting results and having too many different places to write things down or to check things off. They created a new log to track everything, to check off that the visual inspection was done, to check everything. So I thought that one was pretty good.

Require second review--this one bothers me sometimes because if it is a labeling problem and the person doing the labeling didn't catch it the second person coming along, most of the time, will see what they expect to see, especially if they are forced to do this because somebody else screwed up. In some cases a second review may be necessary. It might be a good idea to have somebody else standing there while the nurse and the tech are reading off the information to verify that everybody is saying the right thing. But I am seeing that a lot, that any time there is a labeling problem we will institute a second review. Just be careful of when you do that. Also take a look at when you are instituting that type of fix. If you are going to institute that and you have one person working night shift, who is that person that is going to review for the second time for the night tech? You are going to have to pull somebody from another part of the lab, or how is that going to work? Is that really feasible on all of your shifts?

Implement computer-generated labels and tags. That is probably one of the best ones that we have seen, and I know that tomorrow we are going to hear a little bit more about labeling. But anything that takes the handwriting out of the system is probably a good way to eliminate some of these clerical problems.

Update software to prevent the inappropriate release of a unit is another one that seems to be an appropriate fix, if it is possible. We know that you can't update the software all the time. Sometimes you might have to put in a little fix until you can update the software fully, but that is something that you want to take a look at.

[Slide]

Using a barcode reader is another one that would prevent the sample labeling errors. Revise the request form to help employees focus attention on the four major quadrants of the form. If things are being missed on your transfusion record, or your crossmatch slip or your crossmatch tag, take a look at where they are being missed. Is it things that have to be handwritten in, as opposed to having the information already there and just checking off a box? Is it formatted in a way that you can actually look at it, do a quick scan and see that there is something missing, something not documented properly?
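As an illustrative sketch only, the barcode idea amounts to comparing the identifiers scanned from the patient's wristband with those on the sample label before the sample is accepted for crossmatching. The field names and example identifiers here are assumptions, not a real interface.

    # Minimal sketch of a wristband-versus-sample-label barcode check.
    def sample_label_matches(wristband_scan, sample_label_scan):
        """Compare patient ID and name captured from two barcode scans."""
        return (wristband_scan["patient_id"] == sample_label_scan["patient_id"]
                and wristband_scan["name"] == sample_label_scan["name"])

    wristband = {"patient_id": "0012345", "name": "DOE, JOHN M"}
    label = {"patient_id": "0012346", "name": "DOE, JOHN M"}   # one digit off

    if not sample_label_matches(wristband, label):
        print("Reject sample: label does not match wristband. Redraw and relabel.")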

Require second check when computer is down. Again, with your backup system you need to make sure that you have the right procedures in place to function when the computer system does go down.

Another one that I am seeing a lot more of, which we hadn't seen even from the blood center side in previous years, is reports identifying that they are going to evaluate for trends and track errors of the same nature. I think that is one of the most positive things that we are seeing out of this, that people are starting to look at the events not as isolated incidents but as a conglomerate of information that they can gain from.
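As a hedged sketch of the trending idea, all events, reportable or not, can be tallied by month and category so recurring error types stand out over time. The event records and category names below are invented for illustration.

    from collections import Counter

    # Hypothetical event records; in practice these would come from the lab's
    # event reporting database.
    events = [
        {"month": "2001-11", "category": "sample mislabeled", "reportable": False},
        {"month": "2001-12", "category": "sample mislabeled", "reportable": True},
        {"month": "2001-12", "category": "special order overlooked", "reportable": True},
        {"month": "2002-01", "category": "sample mislabeled", "reportable": False},
    ]

    # Count every event, reportable or not, by month and category.
    trend = Counter((e["month"], e["category"]) for e in events)

    for (month, category), count in sorted(trend.items()):
        print(f"{month}  {category}: {count}")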

[Slide]

Several facilities have told me that they have renamed the form that they use for their incident or event reporting to an opportunities for improvement form. I think that is a really good way to characterize these events, because these truly are opportunities to improve the system. If you take that approach and pass it on to your employees and to the nursing staff, that you are trying to improve the system and it doesn't really matter who specifically made this deviation or error, whether it is reportable or not, then you are getting information about a failure in the system. That is one of the things that you need to take a look at too, identifying not only the contributing factors but the root cause. You need to find out what made the person make this mistake.

You need to, again, evaluate the system. We have heard that all morning, evaluating the system and not just the employee. We know that there are certain things that employees will do, and it will be a consistent pattern with a particular employee and the system may not need a fix, but you can identify that as well. Based on the root causes and the contributing factors that we are seeing in the follow-up, things like inattention to detail and employee retrained really indicate that you are focusing on the employee more than the system.

Long-term correction as well as short-term correction--a lot of the reports will have the short-term correction. We got the unit back, we have retested it and we have reissued it. That is kind of the short-term correction, sort of counsel the employee type of thing. Not very many of them will identify a long-term correction plan, other than, you know, saying that we are going to evaluate the trends and track this information. So you need to take a look at that.

With that trending you need to include not only the reportable events but also the non-reportable events. There is a whole lot more information out there that you may be catching but that I don't see, because it is not required to be reported unless the product has been distributed. Sometimes the events that are not reportable, the ones that are caught prior to issuing a unit, are more valuable because you can identify what actually did work. Where was this unit interdicted? Where was this deviation caught that prevented the unit from being released? As opposed to this other one, you know, we had five of them that were caught, so how did this other one get through? That will give you a lot more information.

I also wanted to remind you again to report to FDA using the electronic format. The electronic format was the way I was able to get a lot of this information about root cause and follow-up, where I could do searches and queries based on key words and things like that from the electronic reports. I think we are going to be in a better place to really do some more trending, not only on the events themselves but actually on root cause and follow-up. Hopefully, that is what we will be doing in the next year. We are still having the problem of getting everybody to report, but once we get that going and everybody is on board with the reporting and what is required, and all that, then we will be in a better place to get some more information out of the system. Thank you.

DR. LINDEN: Our last speaker for this afternoon is Dr. Susan Wilkinson, who is associate director of the Hoxworth Blood Center, and associate professor at the University of Cincinnati Medical Center. Dr. Wilkinson will be speaking on an objective structured clinical examination as a mechanism to evaluate health historians.


An Objective Structured Clinical Examination as a Mechanism to Evaluate Health Historians

DR. WILKINSON: Thank you very much, Jeanne. Good afternoon. I am the last speaker for the day.

[Slide]

Before I begin, I want to make a couple of comments about the authors on this first slide. I want to make note that actually the idea for doing this study came out of a discussion at an FDA workshop probably--what?--four or five years ago between Jim Battles and myself. So beware of conversations you have with Jim today! This was a very positive thing actually.

I also want to make note that Stacy Lee was a graduate student at the Hoxworth Blood Center and this was actually her master of science thesis project. Stacy is currently at Mississippi Blood Services in Jackson, Mississippi. I played the role of Stacy's advisor through the University of Cincinnati Hoxworth Blood Center for her thesis project. Linda Hynan did a lot of the statistical analysis for us at the University of Texas Southwestern Medical Center. I should also recognize partial support from R01 funding from NHLBI. Thank you.

[Slide]

As we all know, the goals of pre-donation screening are really two-fold. The first is to correctly defer potential donors who should not donate. The second is to minimize unnecessary deferrals which may compromise the blood supply. However, I think most of us in this room know that this is really not a perfect system or perfect process, and we know that because of the issue of post-donation information related to donor suitability that is reported to the FDA. When I say that I am talking about the volume of information that is reported to the FDA.

[Slide]

Mike Busch's slide demonstrated this very nicely, but showing this in another fashion, and again these are historical data from '91, '93 and '99, the total errors and accidents are in green. The donor suitability issues that are directly related to post-donation information are shown in red. Sharon O'Callaghan provided some information for me just a few weeks ago looking at 2001. As you can see, the number of post-donation information blood product deviations continues to be quite high in relation to the total numbers that were reported.

I might just want to note that these two numbers were reversed from the data that Sharon provided, if you want to correct that on your handout.

Again, this whole issue of post-donation information related to BPDs takes up a lot of resources. It takes up a lot of resources for the blood collectors and it also takes up a lot of resources for those in CBER as well.

[Slide]

BPD reports to CBER in FY2001 totaled 25,360. Of those, total reportable were 20,013; total PDI reports, 14,767; total other, 5,246. These included history of cancer, post-donation illness, history of disease and then tattoos, which is something we have seen frequently in our organization but have actually come a long way in reducing in terms of post-donation information.

[Slide]

In this guidance to industry, CBER talks about blood establishments evaluating the trends to determine the types of post-donation information events that are seen. Again, they suggest that we perform audit processes that not only review the donor records but also look at the health history questions, evaluate our donor screeners and, lastly, interview donors about these discrepancies.

In my conversations with Jim several years ago, one of the things we really wanted to focus on was evaluation of donor screeners. We wanted to do this in a statistically sound method and, in our discussions, came up with utilizing what are called standardized patients. In our case we called these standardized donors, utilizing them in an objective structured clinical examination, or OSCE, format.

[Slide]

I first want to comment on standardized patients. Again, these are individuals who aren't ill but are trained to portray medical cases in a consistent manner. They can evaluate skills in interviewing, interpersonal relationships that one might have with the physician for example, and the physician communication skills. They are used in training and assessment for medical students extensively in this country, but are also used to evaluate the skills of residents, practicing physicians and other healthcare professionals.

[Slide]

The beauty of using a standardized patient is multi-fold. First of all, these individuals are available at any given time. Again, you are exposing students to the same history repeatedly so there aren't variations that one might get if, in fact, you were trying to use patients that actually had the disease.

[Slide]

An objective structured clinical exam was first described by Harden in 1975, and this is a reliable and flexible approach where a variety of methods can be used to obtain an assessment of clinical skills. A typical medical school OSCE is a series of standardized patient encounters, and it may include history taking and physical exam, and there might be a writing station that follows all of that where the physician may be asked to complete certain questions that go along with the case that they have just reviewed.

[Slide]

An OSCE is psychometrically sound. Reliability of standardized patient-based assessments looks at a couple of issues. First is a pass/fail assessment related to the reliability or reproducibility of the instrument. Again, this is usually expressed as a dependability index with a cut score, and the cut score represents the pass/fail point. Here we are using dependability, which is embedded in the theory of generalizability of results.

[Slide]

At Hoxworth Blood Center we adapted our health historian OSCE from the University of Texas Southwestern Medical Center OSCE. This is an assessment that is given to all of their second year medical students. The OSCE that we performed with our health historians looked at two individual skill components. The first skill component was the HXE component. This included the history taking technique of the health historians themselves, what were their communication skills like with the supposed donor? The second component was the HXI skill. This was really their ability to interpret a health history and make a determination of whether this standardized donor was suitable to give a unit of blood.

[Slide]

This OSCE exam served as the annual competency evaluation for these staff members. We are reporting this because this type of OSCE has not been used previously in this type of setting. We were very focused on looking at the pre-donation screening process from a post-donation information perspective.

[Slide]

During our study, we developed eight donor cases. Three of these had donors that were acceptable, cases one, two and seven, which included travel to the U.K. but of less than six months' duration. We had two malarial scenarios. Malaria as a post-donation information event is very problematic, and I think continues to be. We had a case that represented unprofessional ear piercing; one sexually transmitted disease case; and then, lastly, a high risk behavior case, which I believe involved an IV drug abuser.

These cases were developed from the actual post-donation information events that we had reported to the FDA over an eight-month period. These cases were not necessarily straightforward. There were other medical issues that were reported from the standardized donors and the health historians were challenged to make a determination as to whether these individuals were actually suitable as donors or not. I won't say we tried to trick the health historians in any way but, again, many of the standardized donors were taking certain medications, had certain immunizations and were under doctor's care for a number of issues that still were acceptable for blood donation.

[Slide]

We actually recruited 12 standardized donors to portray the eight cases that I just reviewed with you to evaluate the HXI score for this OSCE. Eight of these standardized donors had previous experience at the University of Cincinnati Medical School in an OSCE assessment. The standardized donors were trained to present each case, answer probable questions and evaluate health historians' HXE and the communication component that we wanted to find out about.

One of the things that is important to note here is that the standardized donors were specifically told not to volunteer information because, again, I think sometimes it is those follow-up questions, those probing questions that lead to these post-donation information events and we wanted to understand exactly what our staff was doing in terms of the evaluation process.

[Slide]

For the HXE, or the communication technique evaluation piece, each of the standardized donors completed one of these checklists on each one of our staff members following the assessment. While it says a 15-item checklist, I am really only going to describe for you the 13 common items that closely looked at this. The items were selected from the literature on medical interviewing and address history taking and communication skills, again the HXE component of our health historians.

[Slide]

For the testing process we actually evaluated 56 individuals of our donor collection staff. Our staff members do all kinds of activities related to blood collection, so everybody went through this testing process. The testing occurred over a one-month period and there were nine separate OSCE sessions. Up to eight historians, obviously the maximum number for the cases, were tested at one time. The standardized donors were the ones that actually rotated through the screening booths that we set up around the blood center. At each station the health historians had the appropriate SOPs that we use. They had copies of all the high risk questions and donor forms, and they also had a copy of our medical criteria book. There were no time limits placed on any of the encounters, and the three SDs that had traveled outside the United States were given copies of atlases and the CDC "yellow book" to carry along with them from station to station.

[Slide]

During each session the health historian documented the pre-donation screening responses on the donor form, applied deferrals and determined donor suitability. These donor forms were then left for the investigator, in this case Stacy Lee, to grade those forms. After each session the SD left the historian and completed that checklist, again, looking at their communication skills and this process continued until all historians had met with all SDs.

[Slide]

In terms of grading, the donor forms were evaluated for completeness and correct determinations of donor suitability. Again, point values were assigned to each task that was supposed to be performed correctly. Here we are looking at the HXI component, the ability of these historians to make the appropriate interpretation for donor suitability. For the checklist, points were assigned based on performance; there we were looking at the communication skills and history-taking skills of the historians.
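[A minimal illustrative sketch of the two-part scoring just described, tabulating an HXI percentage from the graded donor forms and an HXE percentage from the 13-item SD checklists. The point values, per-case item counts and example numbers below are assumptions for illustration only; the study's actual scoring key is not given in the talk.]

```python
# Hypothetical sketch of the two-part OSCE scoring described above.
# Point values and example numbers are illustrative, not the study's actual key.

from dataclasses import dataclass

@dataclass
class CaseResult:
    points_earned: int       # HXI: credit for correct suitability decision, deferral dates, documentation
    points_possible: int

@dataclass
class ChecklistResult:
    items_met: int           # HXE: communication/history-taking items the SD marked as met
    items_total: int = 13    # the 13 common checklist items described above

def hxi_score(cases: list[CaseResult]) -> float:
    """Percent of interpretation (donor-suitability) points earned across all cases."""
    earned = sum(c.points_earned for c in cases)
    possible = sum(c.points_possible for c in cases)
    return 100.0 * earned / possible

def hxe_score(checklists: list[ChecklistResult]) -> float:
    """Percent of communication checklist items met across all SD encounters."""
    met = sum(c.items_met for c in checklists)
    total = sum(c.items_total for c in checklists)
    return 100.0 * met / total

# Example for one historian who saw all eight cases (numbers invented):
cases = [CaseResult(points_earned=9, points_possible=10) for _ in range(8)]
checklists = [ChecklistResult(items_met=12) for _ in range(8)]
print(f"HXI {hxi_score(cases):.2f}%, HXE {hxe_score(checklists):.2f}%")
```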

[Slide]

For the results of the evaluation, for cases one, two and seven--and these were the SDs that were acceptable donors--all historians correctly accepted these donors.

For the next case, and this comes back to my comments about malaria, the results were much more problematic. We actually had three historians who incorrectly accepted the SDs. Two of the historians thought the SDs did not visit malarial areas in South Africa. It is interesting because the CDC book is fairly clear on what is and what is not a malaria area. One of the health historians did not think that South Africa had any malaria areas, which is obviously disconcerting.

Again, these three scenarios are exactly what set one up for post-donation information; that is, these donors return, the next historian asks a question and gets a very different answer. Then you need to go through the process of consignee notification, reporting to CBER and all that goes along with that.

I might also add that at least for the case on the South Africa deferral there were ten historians that correctly deferred the donor, but they didn't document the travel to malaria areas on the donor form.

[Slide]

For case four, and again this points up the issue with malaria and the fact that I think our health historians and the people who work in our blood collection organizations need more geography courses: while all the historians deferred this donor, one of our health historians thought that Belize was actually in Haiti, and that is the reason she deferred the donor. She believed the donor had gone to Haiti, not Belize.

Then, one of our health historians recorded an incorrect eligibility date. Interestingly, ten historians failed to document travel to a malaria area.

For unprofessional ear piercing, all historians correctly deferred this donor, although one of our historians recorded an incorrect eligibility date.

[Slide]

For the chlamydia case, one historian actually incorrectly accepted this donor. There was a very interesting dialogue. Our donor form question reads something to the effect of, "Have you had syphilis, gonorrhea or a sexually transmitted disease in the last twelve months?" Instead of reading the question with all three items, the health historian asked, "Have you had syphilis or gonorrhea in the last twelve months?" Of course, the standardized donor answered no. That is, again, a situation that sets one up for a post-donation information event and all the consequences that go along with it. Four of the historians recorded incorrect eligibility dates.

The final case is also interesting. As I mentioned earlier, the standardized donor was portraying an individual who had had, I believe, sex with an IV drug abuser. The health historians actually deferred this donor, but some deferred the donor for the wrong reasons. Embedded in the case history was a fish hook injury, and one of our historians deferred the donor for that reason and never even got to the high risk question. Another deferred this donor because, supposedly, they had been to a malaria area, but they had been to, I think, St. Thomas, which is really a non-malarial area. So, again, there were problems with this case as well. Three historians did not identify the high risk behavior even though, again, the donor was deferred.

[Slide]

If you look at the statistical summary of the events by case, what this shows is just the minimum and maximum scores for each of the cases. As you can see, cases four, six and eight were most problematic in terms of losing points.

[Slide]

Looking at the HXE scores, the communication skills of our health historians, overall our staff did very well, but one of the things that was noted was that they very frequently failed to introduce themselves to the standardized donor. There were some scores related to vocal quality that were a little bit lower, and some issues related to confidence in their approach to the standardized donor. But overall our health historian staff did reasonably well in terms of communication skills with the donor.

[Slide]

These are the OSCE summary statistics for all of the health historians who participated. I am going to take this slide in reverse order. For HXE the mean score for all was 93.56. For the interpretation component, HXI, it was 97.43, giving us an overall total score of 95.49.
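[A quick arithmetic check: the total on the slide is consistent with an unweighted average of the two component means. Equal weighting is an assumption here; the slide does not state how the total was computed.]

```python
# Assuming, for illustration, that the total is a simple average of the two component means
hxe_mean, hxi_mean = 93.56, 97.43
print((hxe_mean + hxi_mean) / 2)  # 95.495, consistent with the reported total of 95.49
```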

[Slide]

Looking at the reliability of our instrument, we looked at a dependability index for the cut score and the number of cases needed to achieve a dependability factor of 0.9. For the total OSCE we clearly achieved that, and for the HXI, or interpretive, component we clearly achieved that as well. We would have needed three more cases to get to a 0.8 level or above for dependability on the HXE component, but we were really very close and felt that the instrument did demonstrate reliability and reproducibility.
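[A rough, generic sketch of the kind of cut-score dependability index referred to here, a Brennan-Kane style Phi(lambda) computed for a complete persons-by-cases score matrix. The speakers' actual generalizability-study design, cut score and variance components are not given in the talk; the function, data and cut score below are illustrative assumptions only.]

```python
# Generic cut-score dependability index Phi(lambda) for a crossed persons x cases design.
# This is NOT the speakers' actual analysis; it only illustrates the type of index mentioned.

import numpy as np

def phi_lambda(scores: np.ndarray, cut: float) -> float:
    """scores: n_persons x n_cases matrix of case scores on a common scale."""
    n_p, n_c = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    case_means = scores.mean(axis=0)

    # ANOVA mean squares for a crossed persons (p) x cases (c) design
    ms_p = n_c * ((person_means - grand) ** 2).sum() / (n_p - 1)
    ms_c = n_p * ((case_means - grand) ** 2).sum() / (n_c - 1)
    resid = scores - person_means[:, None] - case_means[None, :] + grand
    ms_pc = (resid ** 2).sum() / ((n_p - 1) * (n_c - 1))

    # Variance component estimates (truncated at zero)
    var_p = max((ms_p - ms_pc) / n_c, 0.0)
    var_c = max((ms_c - ms_pc) / n_p, 0.0)
    var_pc = ms_pc

    abs_error = (var_c + var_pc) / n_c  # absolute error variance for n_c cases
    return (var_p + (grand - cut) ** 2) / (var_p + (grand - cut) ** 2 + abs_error)

# Illustrative use with made-up percentage scores (56 historians x 8 cases) and a made-up cut score of 80:
rng = np.random.default_rng(0)
fake_scores = np.clip(rng.normal(95, 4, size=(56, 8)), 0, 100)
print(round(phi_lambda(fake_scores, cut=80.0), 2))
```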

[Slide]

Our results were shared with our donor operations management staff and also our QA staff. At least from the perspective of our organization, the failure to defer the donor who visited South Africa and the donor with chlamydia were viewed as the most severe. The inaccurate eligibility dates were also cause for concern, although there was information on the donor health history form that provided an opportunity during donor form review to catch those incorrect eligibility dates.

[Slide]

The three historians involved in the four malaria incidents were retrained and reevaluated. I should also comment that although this wasn't part of Stacy's thesis project, there were actually two out of these four individuals who had had repeated issues and ultimately were terminated from the organization. So, again, this instrument just verified sort of what we had seen in other venues all along. All historians recording inaccurate eligibility dates and omissions on the donor forms were also retrained.

I should call your attention to the fact that there is actually a typo on this slide. This word "malaria" should not be here and I didn't notice that until last night. So, my apologies. It should just be four incidents. There were actually three related to malaria and then the one that was related to the sexually transmitted disease.

[Slide]

What are some of the conclusions from this? First of all, those of us who are in blood collection facilities really need to take a look at the current belief regarding post-donation information, that the donor withheld information. It may be just as likely that the health historian did not ask the necessary question or follow-up questions in their health history evaluation of the potential donor.

I think training, evaluation of staff performance and retraining as appropriate must be ongoing activities in a donor center. We have looked at the number of post-donation events that we have reported to the FDA, and they have stayed about the same but one of the things that came into play during this transition was the reporting of variant CJD which skewed the data to some extent.

I must say, we have made remarkable strides in post-donation information related to tattoos. They have virtually disappeared from what we do report to the FDA relative to post-donation information. Again, the thing that continues to be problematic is malaria. As someone said earlier this afternoon, we really need a new script for how we evaluate travel and where people have been relative to malaria areas.

[Slide]

In addition to determining donor suitability, we need to train and reinforce good interviewing skills with our health historians. One of the things we have tried to work on very diligently in our organization is customer service: how you interact with the donor, what impressions they have of you and your organization, and whether they come back again are very, very important.

This OSCE was an appropriate and successful competency assessment that provided valuable information on the history taking and communication skills of our staff.

[Slide]

I think it is important to say that the majority of the staff performed very, very well. Again, the cases were complicated. There were medical issues that each and every standardized donor brought to the screening process.

Implementation of these kinds of ongoing evaluation techniques and any subsequent retraining initiatives or process changes have the potential to reduce post-donation information events and reduce the number of BPDs reported to CBER. Thank you very much.

DR. LINDEN: We now have an opportunity for questions for any of today's speakers, at least those who are still here. Any comments, suggestions, observations people want to make? I have one to start us off, for Dr. Grout. We were talking at the break and a lot of us are really curious what Ortho did with their Post-it notes.

DR. GROUT: Apparently there is a part of the machine--it is kind of like a copy machine and there is a small opening where the part that comes down has to line up just right. In the past they were using a little mirror on the end of a long telescoping rod and a flashlight to look into the machine and up to see if the alignment was correct. Of course, they had some problems there. What they found was most beneficial was to take a Post-it note, put it over the opening, close it and look at the wrinkle to see if it was, in fact, aligned correctly or not. So, that was the Post-it note.

DR. LINDEN: Thank you very much. Any other questions or comments? If not, I guess we will adjourn and we will start again tomorrow morning at 8:30.

[Whereupon, at 4:30 p.m. the proceedings were recessed, to resume on February 15, 2002, at 8:30 a.m.]

Transcript of Day 2 of Workshop

 
Updated: March 8, 2002