TRANSCRIPT

Conference on Human Subject Protection
and Financial Conflicts of Interest

August 15 & 16, 2000

National Institutes of Health

Bethesda, Maryland

August 16 Plenary Presentation

Moderated by Robert J. Temple, M.D.

Raising Ethical Questions Concerning Conflict of Interest Throughout the Process of Clinical Research  --  Jeremy Sugarman, M.D.

Plenary: Reports from the Breakout Sessions
Moderator: Dixie E. Snider, Jr., M.D.

Presenters:
David A. Lepay, M.D., Ph.D., Director of the Division of Scientific Investigations at FDA
John Livengood, M.D., Associate Deputy Director for Science at CDC
P. Pearl O'Rourke, M.D., Deputy Director, Office of Science Policy
Michele Russell-Einhorn, J.D., Director of Regulatory Affairs, Office for Human Research Protections
Lana R. Skirboll, Ph.D., Associate Director for Science Policy at NIH
Robert J. Temple, M.D., Director of Medical Policy and Acting Director, Office of Drug Evaluation, FDA

Plenary: Reaction to Conference Proceedings
Moderator: David Blumenthal, M.D.

Panelists:
Marcia Angell, M.D., former editor of The New England Journal of Medicine
James S. Benson, Executive Vice President for Technology and Regulatory Affairs at the Advanced Medical Technology Association, formerly known as the Health Industry Manufacturers Association
Dennis DeRosia, North American Regional Council President of the Association of Clinical Research Professionals
Abbey S. Meyers, founder of the National Organization for Rare Disorders
Sidney Wolfe, M.D., Director of the Health Research Group

Plenary: Concluding Remarks --  Greg Koski, Ph.D., M.D.

Agenda Item: Plenary Presentation

DR. TEMPLE: Good morning, everyone. I think it is time to begin.

I am Bob Temple and I am supposed to moderate the first speaker. So, if he says anything extreme, I am going to just jump up. Other than that, it is not easy to figure out what moderating a single speaker means.

I have one announcement. On the back table is an index to the materials in the book you got. Dr. Nightingale wants you to know it is here because it helps you find them. So, they are on the back table and you can insert them into the book.

Dr. Sugarman is the director of the Center for the Study of Medical Ethics and Humanities at the Duke University School of Medicine and is also an associate professor of medicine and philosophy there.

Dr. Sugarman spent a great deal of time at Duke, undergraduate and all the way through medical residency in internal medicine, escaped briefly to get an MPH at Hopkins and a master's of philosophy at Georgetown and then returned to where he had come from.

He writes broadly on subjects related to ethics and teaches the same subject to new medical students and I guess more mature medical students as well.

So, Dr. Sugarman is going to talk about raising ethical questions concerning conflict of interest throughout the process of clinical research.

Jeremy.

DR. SUGARMAN: Thanks, Bob, and thanks to the conference organizers for having the opportunity to talk to you about an area that obviously creates a great deal of interest.

Much of what was said yesterday was quite important, showing us that conflicts of interest are imbedded throughout the process of research. We also heard that conflicts of interest do not equal misconduct, that some of these conflicts are inherent in the process of research.

We are trying to grapple with ways of dealing with these conflicts, which are very much a part of the process of the activity in which many of us engage. Nevertheless, conflicts of interest have created quite a storm. It is no surprise that in the headlines we see "Drug Trials Hide Conflicts for Doctors." I have got to tell you, that was the only headline about research ethics that appeared during our week of great unpleasantness at Duke; conflicts of interest were another research ethics scandal that surfaced at the same time that Duke's MPA was suspended.

In the Chicago Tribune, "Safeguards Get Trampled in Rush for Research Cash." Later, in The New York Times, "Teenager's Death is Shaking Up the Field of Human Gene Therapy Experiments." The journals are filled with discussions about various types of conflicts, too: "Evaluation of Conflict of Interest in Economic Analysis of New Drugs Used in Oncology" and "Is Academic Medicine for Sale?"

In the Beltway, two OIG reports: "Institutional Review Boards: A Time for Reform" and "Recruiting Human Subjects: Pressures in Industry-Sponsored Clinical Research." It is no surprise we are here to talk about these issues today.

Now, what I am going to do is try very quickly to enlarge the conversation a bit from yesterday, touching briefly on the ethical foundations of clinical research. I know in this audience I don't need to give you IRB 101 and an introduction to all the principles and the regulations from 45 CFR 46 and all the subparts and the 21 CFR regulations as they have come out and FDA regs.

However, I just want to touch on that to talk about giving a definition of "conflicts of interest," show where the spectrum of conflicts lie, talk about managing these conflicts and some next steps based on what is available. Hopefully, the conversations you had in the breakout sessions yesterday will add data to my assumptions here.

Now, the ethical foundations, again, no surprise, it is not surprising that a series of scandals led to a conference. We have been doing this. This is the way research ethics gets done. As Carol Levine has said in the Hastings Center report, "Research ethics was born in scandal and reared in protectionism." We start with the tragedies of the Nazi doctors and the Nuremberg Trials and we follow by U.S. scandals, U.S. scandals first brought forth by Henry Beecher in his famous New England Journal of Medicine article and then followed by revelation of the Tuskegee syphilis study.

These kinds of scandals led justifiably to a response and the response of the Federal Government in that case was one that relied on trust. James Shannon introduced in the sixties the notion of an assurance in which institutions and investigators would be trusted to carry out the federal rules, to review research prospectively and to follow the rules that have been put in place.

The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research issued its Belmont Report, in which it provided a philosophical argument to substantiate the regulatory approach. Now, one of the central features of the Nuremberg Code, of the Declaration of Helsinki, of the federal regulations and of the Belmont Report -- not found in the same way, not mentioned, not held up as a Belmont principle -- is the notion of a fiduciary. A fiduciary, at least by the legal definition, is a person holding the character of a trustee in respect of the trust and confidence involved in it and the scrupulous good faith and candor which it requires.

A person having a duty, created by his undertaking, to act primarily for another's benefit in matters connected with such undertaking. What does that mean for us? What does that mean for us ethically? What does it mean for us legally?

Fiduciary obligations in a context of research go to investigators, to research staff, to institutions, to sponsors, including federal sponsors, to look out for the rights and interests of the human subjects that are involved and for the future patients and other individuals who use that knowledge.

You have to put aside self interest. You have to focus primarily on the interest of the person for whom he or she or it serves as a fiduciary. You have to act to promote that individual's interest and so earn the trust of that individual.

The fiduciary concept underlies most of these from the perspectives of patient subjects. This is no surprise. This is from Hagarth, famous medical ethicist. It says in the first frame, "You should trust doctors more. Our first rule is do no harm." Second slide, "It worries me they needed a rule to figure that out."

When folks see us with white coats of any variety, when they see an institution -- and I guess that slide was better, "In God We Trust," the notion of trust is central. We have fiduciary obligations and the problem is conflicts of interest can threaten that.

Now, fortunately, there is a huge reservoir of trust. I had the privilege of working as staff to the White House Advisory Committee on Human Radiation Experiments. We looked at our government doing research on human subjects in 4,000 experiments without the consent of human subjects. There were national security concerns, yes.

We wanted to understand what was going on now that that was the current scandal in the mid-1990s. So, one of the charges President Clinton gave the Advisory Committee was to look and see whether the protections in place were adequate to the tasks at hand.

One of the ways we did that was to conduct an empirical study known as the Subject Interview Study and the beauty of having federally funded research is you don't have to have a good title. So, in that Subject Interview Study, we interviewed 1,882 patients at 16 medical centers across the country at five geographic sites, governmental and non-governmental.

Of those 1,882, over 500 reported that they had experienced being research participants. We talked in depth to 103 of those people, and one of the most powerful messages that came through was the notion of trust. Trust was embedded in patient subjects' ideas about what research meant.

They trusted individual physicians and investigators. They trusted in specific institutions in which they received care and in which they are research subjects and they trusted the research enterprise as a whole. Their words are better than mine.

On individuals: "There is not a lot that you can control when you are sick, so you have to rely on your doctors. If he suggests that you should go into a research project, I think you should really take his advice because if you take the time to find yourself a good doctor and they are involved in research, they would never steer you wrong."

On institutions: "I have got the best treatment down there at a hospital. I don't think I could get any better." On the research enterprise as a whole: "They know what they are doing. They wouldn't have you do this if they didn't know what they were doing."

Annette Baier talks about trustworthiness: not all things that thrive when there is trust between people are things that should be encouraged to thrive. There are immoral as well as moral trust relationships. If we are going to rely on the trust of patient subjects, of human volunteers, of taxpayers, to participate in research, we have to not only rely and build on their trust but build a system that is trustworthy.

Conflicts of interest threaten the trustworthiness of a system that relies upon trust. I am going to choose a definition -- it is a philosopher's trick called stipulation. You can substitute another one if you prefer. A set of circumstances when a clinician's, investigator's or sponsor's own interests conflict with those of the people with whom he, she or it has a fiduciary relationship. Simple.

What is the spectrum? We talked yesterday -- we heard some great ideas and procedural mechanisms from universities that are taking a lead in trying to evaluate some of the kinds of conflicts of interest, which are present. One issue that I think we need to face now when we are looking broadly at the notion of conflict of interest is how imbedded it is in the spectrum of clinical research.

It begins when the research is the gleam in somebody's eye, whoever that may be, and runs through research design, the process of prospective review, recruitment, informed consent, integrity of the data and reporting. Let's go through these in turn.

Initial considerations, from the outset as a fiduciary obligation, is this scientific question worth pursuing? Is it another "me too" drug? Should we expose people to this research? Is the research question salient to the population proposed for the study? Is it to bring a drug to market in the states and is it done in another nation?

Is this an appropriate use of resources? Are we using resources, whether those resources are federal resources, whether those are industry resources, are these appropriate uses of resources? In research design here, too, are incredibly imbedded questions of conflicts of interest. There is a desire to bring a drug to market earlier, to get a publication out earlier, to get your next federal grant with its high indirect cost rates.

How the project is designed is critical to the protection of subjects. We cannot talk about the ethics of research by ignoring the questions of scientific design. Placebos are a great example. We know that we can eliminate a lot of the problems in clinical research in terms of bias by using placebos.

We also know by using a placebo that we can get an answer quicker. Quicker means faster. Smaller sample size means less cost. It is not always okay to use a placebo, even though the science might drive that question.

Ken Rothman did an interesting piece looking at the continuing unethical use of placebos in clinical research, and in it he points out cases where there is a cure. It doesn't work a lot of the time, but it works some of the time for something like river blindness.

Drugs being brought to market, how do you conduct that trial? Do you do that trial with a placebo or do you do it with an active control? Faster to market. Smaller sample size. No questions about efficacy. All questions that Bob Temple has thought an awful lot about, but how do you design that trial? By using a placebo group, which these investigators did. People on the trial were blinded -- the people in the placebo group. That doesn't live up to fiduciary obligation.

Randomization, the same issue. We can eliminate bias. When is it appropriate to use randomization? What are the implications for sample size, interpretation and results? Selecting outcome measures, ideas to bring, again, things to market quicker, publications sooner. But if the outcomes aren't clinically significant, ought they be used? Does that live up to our fiduciary obligation? It is bad science. It is inappropriate science.

Prospective review. Okay? You have got a research design, salient to the population. You have got a sound design and you bring it for review. From the very beginning where we have this funny term, institutional review board, the only nation in the world that uses that term, as if they need to be institutionalized or something.

But the institutional review board exists within an institution. It is a way of understanding the local culture, provided it really is an institutional review board, but as a byproduct of being within an institution, there are inherent conflicts of interest.

An institutional review board has to approve research. If it doesn't, no grants, no contracts come in. With no grants and no contracts, there are no direct costs. And if there are no indirect costs, there are no doughnuts for the IRB meeting and everything closes down. This is an inherent conflict of interest that was recognized from the very beginning. It was a way of responding to the scandals we saw in the States by providing prospective review. There are problems with having to work with superiors and inferiors and colleagues and friends and enemies.

The stakes are always so small in academia that, you know, the competitions are so strong. We have recognized this conflict from the beginning, not to say it is unmanageable, but the conflict needs to be there. For independent IRBs, they get paid. Well, everybody is getting paid in this operation. They get paid and so there is that direct handling of money. Does that change the notion of a conflict? No, it just moves it. It is fungible.

There are going to be all sorts of questions about conflicts, as long as we have a system of prospective review in which people's other welfare depends on that income in the process of research.

Let's talk briefly about recruitment, informed consent and data integrity. Recruitment issues have received a lot of attention. Why? Because we see money. We understand it. We know that that causes a conflict. We talked in great detail yesterday and I won't go into detail now about ownership, the unethical nature of finder's fees and bounty payments and what individual institutions are doing.

Here they are obvious. The obvious doesn't worry me quite so much in ethics. We can deal with them. As Aristotle pointed out more than a week ago, the problem is recognizing that something raises an ethical question. Once we know that, we can review it and we can talk about and we can make it explicit.

Compensation for subjects, another side of the issue. How much is enough? What is an incentive? What is a reimbursement? On an IRB on which I sit, we recently closed a study because there was such a high degree of fraud on the part of the volunteers coming into the study that we were unsure we were going to get good data, so we stopped the study.

What is enough? Consent, the ethicist's favorite. Investigators, institutions, regulators -- we all recognize that informed consent is critical. Much as we can't put everything into IRB review, we can't hope to put everything into the informed consent process. It is trouble, but there are multiple folks working around the country trying to improve that process.

But in that consent process is the notion of the physician investigator and ought the physician investigator play a role in obtaining meaningful or valid informed consent.

At times the physician investigator is the only one who really understands the protocol. At times, the only individual who really understands the disease. A white coat can sometimes throw the process off a bit. Think of the work from the advisory committee. Think of Paul Appelbaum's good work on the therapeutic misconception. Think of the words we use to describe the research enterprise.

In other work we did for the advisory committee, the term "experiment" was used almost never. Patient subjects thought it was scary. It was risky business. In contrast, the term "study" was viewed as very benign. In in-depth interviews, patient subjects said, oh, that is when the doctors and nurses study up on you. Phase 1 study, doctors and nurses studying up on you. Who wouldn't say "yes"? I want my doctors and nurses to study up on me. I would hope they always would. Sure, I will enroll.

It may be an intractable problem to have physician investigators in very risky research provide consent. It is something we need to think about. Therapeutic misconception, even after a thoroughgoing informed consent process, patient subjects believe that the procedures that are in place are part of their care. It is related, I think, to the phenomenology of illness. We can't believe that we are being objectified as science is objective.

Integrity of the data, another point in the process: bias, reliability and validity. Fortunately, we are getting methods of site monitoring to squash out the egregious stuff, but the interpretation of the data still poses other conflicts when you are interested in those data coming out in a certain way.

In the outcome of studies, what about an interpretation? Rapid reporting and proprietary interest. We heard a great conversation yesterday about these topics. Authorship issues again were raised yesterday, and the question that wasn't raised or brought to the surface was about peer review. What gets published? The work of the Cochrane Collaboration shows that it is the positive P value. It is that good, positive science.

What that means is there is an enormous denominator of studies, of information, in which reasonable people, reasonable scientists thought this is a great question to pursue. But how many subjects are unnecessarily exposed to the same great idea if we don't get reports that that idea just doesn't work?

Well, managing conflicts, they are there. They are not misconduct. They are everywhere. Ubiquitous in the process of research. We could divest. We heard a very strong statement, an important statement yesterday, about divesting completely. In some cases it may be important to divest completely. But what that leaves out is the possibility of people who know the science best, of people using entrepreneurialism, taking a good idea and taking it forward, when no one else believes them.

It may not be an appropriate way to do some of the best science. Disclosure. Okay. Let's talk about it. Let's make it explicit. I am a fan. But to whom? What? When? Where? How? We are just starting to learn that. These are empirical questions.

The only literature that I could find in preparing for today's talk was in the managed care literature. They have asked some of the same questions. I talked with Mark Hall and Susan Goold about their experiences doing empirical studies related to trust and the doctor/patient relationship under managed care.

Patients in managed care settings don't always want to know this information, and there are conflicting data about the effect of disclosure on trust. We need to learn how to deal with these data. If you think you can piss off an investigator by showing them the federal rules, show them a contract with finance language in it. Scary.

We need to learn how to use these data. How will these data be used? What are the effects going to be on trust? What are the effects on the research enterprise as a whole of disclosing these conflicts, and are there any alternatives?

Well, this is from The New Yorker. John Murs(?) gave me this one. And here they are, the two wolves. There, now it is all on paper. Feel better? If the system is one of wolves and we disclose, that somehow is insufficient. We need to do more. We need to reflect on the entire process.

What are some of the next steps? We need to consider ethical issues related to conflict of interest at each stage of the research process. We don't need to do it all at once. But we need to recognize that they are there and that similar models will work. We need to incorporate these issues into educational programs for investigators, sponsors and the public. It shouldn't be a surprise that there is a lot of money funding research.

Maybe we need to change the message slightly so that these revelations are okay. We need to gather data on the effects of conflicts of interest and their disclosure on the process of research, and develop means of minimizing them. We have different models that might be used -- RFAs, demonstration projects and the like, the experiences of institutions. We need some mechanism of sharing things that people have kept secret for a while.

We need to find a way of sharing those experiences in a safe fashion. We need to encourage rapid publication results, including negative outcomes because there, too, is one of the larger problems of conflicts, of not knowing that this question is already being answered.

Thank you very much for your attention.

[Applause.]

DR. TEMPLE: We actually have about five minutes for questions and Dr. Sugarman has said he would be glad to hear some. Anybody want to go to the microphone?

DR. SUGARMAN: I can't believe I wasn't provocative.

DR. TEMPLE: Well, let me ask one if no one else does.

You did touch on a matter of some interest to me, a couple of matters. One is you raised the question of when randomization is okay. There is a lot of discussion about placebos, but usually randomization comes out of these discussions untouched. Can you elaborate on that?

DR. SUGARMAN: Sure. I am referring to some of the experiences that took place with the early NSABP trials as one example where there is a question about the use of randomization, the use of pre-randomization procedures and the desire to enhance enrollment of subjects. As you may remember, there was a critical question about whether modified or radical mastectomy was equivalent to a lumpectomy with adjuvant therapy for the treatment of early stage breast cancer.

When the study was initially designed, the first question was, okay, we had this -- we were in clinical equipoise. The community was uncertain about the best way to go. One surgeon would say that she always believes in a mastectomy and another surgeon might believe that a lumpectomy was appropriate. Clearly, the community felt strongly. They were polarized. It was time for science.

So, the question becomes one of, first, is it appropriate to randomize to answer that question? Well, from a scientific view, yes. That would be ideal because there will be all sorts of bias in the people who receive surgery or not. The question would be, though, can you say to a woman who has breast cancer I would like to randomly assign you to one of these treatment arms.

Now, from my experience in clinical medicine, if I diagnose someone with breast cancer and I say early stage breast cancer, the responses can be very dramatically different and I can't predict them. In one case a woman may say find me a surgeon to operate on me tonight. I want the mastectomy done now and I want that surgeon to take off my other breast as well. I never want to face this again.

Woman 2, same diagnosis. Do I have to have surgery? I can't imagine having somebody mangle me.

Woman 3, breast cancer. What should I do? How can I live the longest? After educating, after saying we don't know, and if women remain in Group 1 and Group 2, is it okay to randomize to bring your science forward? I don't think so.

I think in a case like that, the personal preferences are always so strong that it is probably not okay. For the women in Group 3, it is probably okay.

Now, what happened in NSABP, when they initially did it is they proposed the study as the standard randomized design. Couldn't get anybody enrolled. So, they moved to a design, a pre-randomization design in which the investigator, the physician investigator would walk in the room and say if you enroll in this study, this is what is going to happen.

Something happened. They got enrollment. We have an answer to the question now. What happened? Was there a conflict? Did it change the informed consent process? What was the change?

These are really tough questions, and I am sure we can all draw on other examples, less discussed in the literature, where there is concern about individual preferences which are so strong that, in meeting our fiduciary obligation, we ought not randomize, even though the science might be a little crisper. Those are the kinds of cases --

DR. TEMPLE: Is your question, though, about the consent process or about whether patients should be randomized? You might have thought that a patient who wanted a procedure would say "no" to being randomized, but you are wondering what happened to change enrollment. Is that where the question --

DR. SUGARMAN: It is in part related to the consent process and in part related to even if the consent process -- like the two wolves sitting at the table. Even if the consent process is there and we stand to learn, I am not so sure in every case -- and being a big fan of good science, okay, I think that there are cases like that and we might -- you are probably in a better position, too, to look in cases and saying when might it not be appropriate to randomize.

I don't want to make a huge point of the randomization, but what I want to say is that each consideration, each design consideration, deserves some reflection about whose agenda is being met first.

MS. MCAFEE: My name is Lynn McAfee. I am with the Council on Size and Weight Discrimination. I am an actual consumer with a high school degree. So, I hope this doesn't come off sounding wrong, but in the advocacy we do, which is in weight, one of the problems that we see generally, and particularly in weight, is that we have had a situation where for 40 years the only treatment that was around was absolutely, completely ineffective, and clearly people were going down the wrong path.

There was not, and still hasn't been, any real self-criticism about why it took so long for people to be self-critical of studies with 90 percent and 95 percent failure rates. Some other consumer groups see some of these problems in their issues also: there is not a lot of self-criticism as a body about where things are going, and studies are repeated and repeated and repeated and then nothing happens from that.

I am wondering how one would go about that, particularly in obesity, where there are enormous ties to industry, enormous ties to industry.

DR. SUGARMAN: It raises a great question. One mechanism that I can think of is the one I alluded to, which would be if we had some mechanism reporting negative information from trials so that the same good hypothesis isn't retested might be one means of doing that. I think we -- you know, this is not a new question either about the publication bias that has come out, but that might be one mechanism because it is one mechanism by which scientists can share.

There is an awful lot of proprietary information. For good reason there is proprietary information. I mean, companies are putting a lot of money into drug development and they want to get their money back. So, they don't want to share their secrets with other folks. So, I guess we need to figure out a mechanism of reporting that and we could dream about how to do that.

I would just be making it up right now, but we should get, you know, some folks with experience to think through that question because I think it is a very good one. And in terms of being critical about it, I think, I have the same mission. I want us to be critical about it and that is why I said these things, that I am trying to get people up to talk.

DR. TEMPLE: Last question and then we will have to move on.

DR. KORN: I am David Korn.

Jeremy, there are trials and there are trials. I happen to know a little bit about cancer because I chaired the National Cancer Advisory Board for seven years and lived with these issues very intimately.

In general, I would argue that cancer trials are generally designed because there is no standard of best treatment. There may be prevalent approaches to a disease, but there isn't one that is definitive. Whenever there isn't something definitive in an ideal society, we have the best possible randomized double blinded, whatever it may be, clinical trial. If people have strong preferences, you have got to respect them. There is no question about that.

But the biggest problem we had in the cancer institute during those years -- and we were not a pharmaceutical company trying to maximize profits from green pills versus pink pills -- was the difficulty in accruing patients, the difficulty in getting the damn trial to the point of statistical significance so that you could tell the public what worked and what didn't work.

Think about the bone marrow transplantation issue with breast cancer. This was imposed on our society through a lawsuit in California when an insurance company said, by whatever its process, we do not believe this is accepted, proven treatment and we are not going to pay for it, and the survivor of a woman who died went to court and got an enormous punitive judgment.

Everybody decided we are going to pay for those now because we can't afford punitive judgments. It took what, a decade before the evidence could be obtained that this has no definitive value? I mean, we -- ethicists sit, Jeremy, and hypothesize and live in a world that is really an ivory castle and they like to speculate and draw up scenarios, but there is some real pain out there and real people who are scared to death when they are told they have cancer.

We have got to figure out a way that isn't theoretical and abstract of getting people into clinical trials. Seffrin mentioned yesterday that when ASCO(?) did a survey a year or so ago, 3 percent of eligible -- eligible -- cancer patients ever get near a clinical trial. That is an ethical disgrace.

DR. SUGARMAN: If the message that you received was that I think all of this should stop clinical research, then I sent the wrong message. I am an enormous fan of good clinical research and share the notion that we need to do research and we need to do it well.

I do believe that we want to get more patient subjects on really well-designed clinical trials, and my goal in raising these questions is to put them on the table so that we are not surprised down the road -- so that the system is trustworthy enough when people do try to make decisions about enrollment when they are sick. And the best research we have on this -- it is not pie in the sky, it is not ivory tower -- is good, clear empirical research about participation in early phase trials in oncology.

We have data to suggest how people make decisions. We know that they are desperate. We know that they trust. Our goal is to bring science forward as rapidly as possible, provided the system is trustworthy. And it is with that in mind that I posed these comments and would hope that all of you who do clinical research, or are engaged in its oversight, its design, or its publication, work hard to provide the empirical data we need to create such a trustworthy system.

Thanks.

DR. TEMPLE: I am sure we could go on all morning. Thank you very much.

[Applause.]

Agenda Item: Plenary: Reports from the Breakout Sessions

DR. SNIDER: Let me invite all the moderators from yesterday's breakout sessions to come on up to the table.

Good morning. My name is Dixie Snider. I am the associate director for science at the Centers for Disease Control and Prevention. It is my pleasure to moderate this panel session.

Yesterday, as you know, we had six breakout sessions with these distinguished moderators, who with their recorders have compiled the information. The way we have decided to do this is to have me pose a series of questions to the group and ask the moderators to address them in order as I go down the table, avoiding duplication and, hopefully, reflecting many of the opinions -- obviously, we don't have time to reflect all the opinions -- about the questions as we go along.

The moderators are Bob Temple, who you have already met this morning; Lana Skirboll, who is the associate director for science policy at NIH. Next to her is Michele Russell-Einhorn, who is director of regulatory affairs, Office of Human Research Protection; Pearl O'Rourke, deputy director, Office of Science Policy at NIH; Dr. John Livengood, associate deputy director for science at CDC and David Lepay, who is the director of the Division of Scientific Investigations at FDA.

So, let me start off by asking you, Bob, to tell us what your group thought we were talking about. What did we mean by financial interest, and how do we determine whether those interests may adversely affect the objectivity of science?

DR. TEMPLE: I would say there was a fair consensus in our group that essentially everything that involved money or who was supporting a trial was of potential interest. Later on, I guess, we get to what to tell people and how to do it, but all of these matters were thought to be of interest.

Some are more obvious. Having a stake in the product, you know, a patent or something like that, payment in the form of royalties from later sales, those are sort of obvious.

I think there was agreement that holdings in a small, non-publicly traded company, whose entire existence might depend on the study, were also very important, and that even large equity interests would be considered important.

Less obvious than that was the idea that the whole circumstances of the trial -- this was a trial by Drug Company X, they have employed blank to do it, they have sent money to my institution -- that these kinds of things were also relevant, although I would say there was not uniformity on how much detail to provide later.

Then part of that was the idea that there is at least some interest -- much discussion is probably needed, particularly involving patients -- in knowing about the payments for the trial itself. Again, there was considerable debate about how much detail was necessary or important, but even the fact that the physicians are being reimbursed could be important.

One area where I would say there was uncertainty was what FDA regulations call significant payments of other kinds. These range from consultancies -- to the point where a particular investigator is on the stump for a particular product, which I would say would make most people somewhat nervous -- to the fact that a well-known person is helping the company design an appropriate trial, which I would say most people were less worried about.

So, I believe there was a broad view of what should be covered, and the interesting questions come up later: what to do about it and what to say.

DR. SNIDER: Thanks.

Lana.

DR. SKIRBOLL: We had a lot of discussion and not too much resolution about what we mean by financial interest or conflict of interest. We did agree that the definition might be different for an investigator, for an IRB member and for an institution. Recognizing those differences, we weren't very helpful in coming together on how you might define it.

There was some concern about what I would call other interests that may not appear immediately to be a financial conflict, where there was payment in kind in one form or another: a trip somewhere, help with tuition for a child in school, the kinds of things that might not show up on your standard conflict of interest form or your standard financial compensation form but nevertheless would serve, or could potentially serve, as a conflict.

That included even future income. If there was a dialogue going on about being hired by a particular company in the long run, or there was a desire by the investigator to be hired by that company, that held a potential conflict that might not show up in the standard way we think about it.

There certainly was agreement that there needed to be more harmonization between what FDA and NIH consider a financial interest, and any other definitions that come up in federal regulations. It was sometimes hard for universities and investigators who hold an IND to determine whether they should be looking at FDA's guidance on this or NIH's guidance. So, a need for harmonization.

Take those definitions: NIH, for example, talked about 5 percent equity. But the unfairness is, 5 percent equity in what? A start-up company that has no value, or 5 percent equity in Merck? There are big differences in what that actually means in terms of dollars in your pocket.

DR. SNIDER: Thanks.

I would remind each of the panelists since I didn't tell you earlier, there is that white button for you to push when you are ready to talk and when you have finished.

I would ask you, Michele, and others, if you could tease out in any more detail whether there are differences between investigators, IRB members and institutions with regard to this question. If that came out of your discussion, can you share it with us?

Michele.

MS. RUSSELL-EINHORN: Our breakout session didn't spend a lot of time on this particular issue. We did discuss the concept of some financial interests being problematic and others not. Specifically, there was discussion about money that goes to the institution, as an institutional entity, as opposed to money that goes to the investigator. We discussed issues of conflict of commitment and conflict of mission -- financial conflict of interest is not necessarily the only conflict to be concerned about -- and we discussed in particular the additional financial conflicts of interest raised by research broker agencies and pharmaceutical companies that people needed to be concerned about.

We also discussed issues that Bob and Lana talked about.

DR. SNIDER: Pearl.

DR. O'ROURKE: Our group was spectacular. We covered everything everyone just talked about, plus more. Our group could not come to any decisions on anything.

[Laughter.]

I threatened that they would all have to stand up, but I won't do that. We actually had to go back to the very basic definition of a conflict of interest as perceived personal gain that may affect objectivity. We were unable to get our hands around what gain was, and we certainly reiterated the same concerns that were brought up here in terms of nervousness about what is financial. I think, as Lana pointed out, a lot of things that don't seem financial today can be parlayed into financial interest in the future.

When we couldn't really come to a decision on that, we decided, well, the other important noun in this definition is objectivity. Could we get at it by trying to define the objectivity that is of concern here?

I hate to tell you, we were unable to do that either. So, we became very confused. We did talk about the interest of trying to define what this meant in an academic way, but we are obviously all here trying to find a process that would be implementable and, therefore, we appreciated that some concept of a real threshold would be helpful. But we all decided we aren't the ones to set it.

We hope there are some smarter people out there who could do that. In trying to come up with something, we bandied around absolutely everything, from my, you know, Merck pen, to anything above a regular salary, to --

[Tape flip from tape 1, side A, to tape 1, side B -- text lost.]

-- was who is the smart person who came up with the $10,000 or greater than 5 percent. We all realized that was probably Mr. Arbitrary from some agency. So, we think that a threshold would be helpful.

DR. SNIDER: Why was the group so concerned about the threshold?

MS. RUSSELL-EINHORN: Concerned about putting a specific number to it or --

DR. SNIDER: Right.

MS. RUSSELL-EINHORN: Well, we were concerned that any number is relatively meaningless. What does 5 percent mean? I think, as Lana said, if you are Merck or if you are a start-up company, 5 percent is quite different. What is $10,000 to me versus Lana, who, you know, makes five times my salary? You know, so there is an issue there, as well as if you are the --

PARTICIPANT: She is an M.D.

MS. RUSSELL-EINHORN: So, I think that was the concern, as well as, I think, what has already been stated: there are financial things that on the surface don't seem financial, and if you put a number on it, we may miss those. So, the best we could come up with is, if we had a specific threshold, it should read something like, you know, if you make X amount of dollars or hold X percent of interest, and then maybe have a laundry list of situations that may, in fact, identify potential conflicts.

DR. SNIDER: Thank you.

John, when Bob started talking, I thought we knew what we were talking about. It was pretty clear. It has gotten a little less clear as we have gone down the table.

Can you guys help us out, you and David, help us clarify this?

DR. LIVENGOOD: Well, in our group we tended to say more clarity was needed to help people [inaudible] whether they had conflicts themselves, or to have somebody else be able to assess that with them. We really didn't spend as much time grappling with whether it was $10,000 or $25,000, or with differences in agency guidance as to what that should be.

We actually spent more time on things like reimbursement, or payment per patient enrolled in the trial, and how that possibly could influence an investigator to want more people to be in, since they would be paid more for that. Our group was actually very worried that that is a financial interest -- not usually thought of as a conflict of interest because of how much the person is being paid themselves -- but that it was a concern for people.

I think some of our people hadn't previously heard of some of the potential abuses that were reviewed yesterday, in terms of bonuses or extra money as you approach the end of the trial, and some of those types of things.

I think our people really distrusted that as a possible motive for, you know, looking the other way, perhaps about an eligibility criterion, or accepting a person's answer as being close enough -- I am not going to exclude you from the trial. It also provides other motivations: not only to complete enrollment for this trial, but to remain eligible to compete for other clinical trials as they come along. Those were motivations for the investigator that they were concerned about, which are in the range of financial interest, but not normally what we would call a financial conflict because they are usually inherent in the relationship between the sponsor and the individual.

Another point that was brought up sort of on that same issue was that the sponsors have responsibilities to look into this, too, and there is a penalty for them if it should come to light later that, in fact, somebody had accepted more money either from the company or for other things and their data were excluded at the FDA level.

That is a high penalty for a sponsor to pay to be at that stage of thinking you have got a product ready for review and, hopefully, licensure and to find out that 5 percent or 10 percent of your patients get excluded because, you know, somebody -- maybe an investigator didn't tell you all the money that they received from other parts of some of these large corporations.

One other thing that we did that was a little different is we tried to occasionally look beyond just the relationship with academia. Now, part of that, since I work for the Centers for Disease Control and Prevention, our usual partners are not always universities. We talked a little bit about some of the ideas around health departments, down to community-based organizations and we were actually a little worried about how conflicts could enter what a community-based organization decides to do with some of their research, which ones they are collaborating in.

A classic example: if they get money from Philip Morris to help support some of their activities, would they be willing to look at smoking in some studies, or would that just not be a priority? You get some degree of influence there. That is not necessarily the sponsor, but it is an institutional type of conflict, which is more than just a university holding a patent or large amounts of stock.

So, it was expanding the range of things. But other than expressing concerns about that and trying to address, as we will discuss later, how to manage those types of things, we weren't as concerned about whether the number was $10,000, or whether that was a lot of money for Person X and not Person Y.

DR. SNIDER: Thanks, John.

David.

DR. LEPAY: We started off basically looking at how FDA is defining financial interest, and I think Bob has mentioned several of these criteria: the concept of compensation related to study outcome, significant payments of other sorts, and what that all implied in terms of the cost of conducting research, consultations, honoraria.

I think the general consensus was, in fact, that was a pretty good microcosm of what we are concerned about in the realm of financial interest. A number of additional issues came up, such as bonus schemes, recruitment bonuses, stock options based on subject enrollment and there was a great deal of discussion about these and how they interplay.

We talked also, or at least the issue was raised, about whether financial interest in a competing company or in a competing situation is important and how that might be something to bear in mind. In fact, you could have a positive conflict of interest that could bias you toward the project you are working on, but you can also have a negative one that can bias you against it.

We spent a little time talking about the hierarchy of importance of some of these considerations and I think just about universally, the group agreed that ownership or proprietary interest was probably the most important in this schema because it had the greatest potential not only for financial conflict of interest but very much for non-financial conflict of interest.

One member of our group put it with a quote: "There is a huge pride in inventorship and that needs to be recognized." We also discussed a bit how financial interest and conflict of interest are dynamic and how that needs to be taken into account: the changing role over time and the possibility of this becoming a moving target that may not even be appreciated by the investigator or the interested party.

We talked also about financial interest of IRB staff and there seemed to be much less concern about this, again, because the IRB staff don't necessarily receive any payments from sponsors. But that evolved into a discussion as well of the issue of commercial IRBs and, in fact, when we are talking about conflicts of interest for IRB staff and IRB members, do we also need to take into account potential conflicts of interest of an IRB as an organization and what the possibilities or concerns are there.

This even led into some issues of whether the reviewing IRB, the nature of it, whether it is local or non-local, commercial or non-commercial, is something that should potentially be disclosed and there were pros and cons on that issue.

We talked about institutions as well, and for much the same reason, ownership interest was very much focused on as a primary concern for institutions and something that should be considered for disclosure. There was concern very much about university-held patents. Going back to FDA's system of financial disclosure for clinical investigators, the issue was raised whether university-held patents might potentially be a way for an investigator to evade reporting of patent interest, if you will, because they are not directly attributed to the investigator and would not necessarily have to be reported.

Threshold -- we will probably come back to that; it was an issue we spent a fair amount of time on as well. Most members of the group, I believe, felt there was an importance for a threshold for reporting a clinical investigator's interests to an IRB or to a regulatory agency.

Perhaps that is less important in the realm of what you would report to subjects in an informed consent. As far as the nature of that threshold, a lot of attention was put on the need for consistency across government agencies, and, in fact, FDA's figure of $25,000 of payment was considered not a bad figure.

In fact, we had one quote, of course: "$25,000 these days won't buy a lot of bias anyway." In terms of the possibilities of adversely affecting outcome, we had some discussion of the nature of monitoring and how monitoring might be conducted, and also of the importance of keeping some surveillance on whether a particular site or particular operation might be an outlier, and of ensuring, from FDA's perspective, that all important studies are conducted as multi-center studies.

This also brought up the question of how much FDA might at this point understand about physician networks and conflicts that may exist there. Even in a setting of multi-center trials, when these are run by a physician network where the network has potential financial interests, are we fully appreciating what those interests are and the conflicts that can come from that direction?

So, that is pretty much where our conversation went.

DR. SNIDER: Okay. Thank you.

Panel, assuming we were able to come up with a definition of a financial interest that might adversely impact objectivity -- of course, that may, as you have indicated, need to be somewhat arbitrary -- if you were able to do that, Lana, what did your group feel we should do about disclosure of those interests?

DR. SKIRBOLL: We also, again, tried to separate things out. We started off with disclosure of what, but then to whom and how, separating that out in terms of disclosure to the patient, disclosure to the IRB, or disclosure to the university by the investigator and by others. Some of that discussion got convoluted because, in fact, those disclosures should in some cases be quite different, and what should happen with those disclosures once the various entities get them might also be different.

But starting with the patient, the overwhelming agreement in our group was that disclosure to the patient needs to be simple. A lot of detail about financial interests and conflicts and what they might mean might exacerbate an already complex and sometimes overwrought informed consent process. The patients in the group spoke up, and investigators and administrators spoke up, saying that what really needs to happen is a lot of distilling of what those conflicts might be, so that when you actually get to a disclosure with patients, it is very straightforward and the disclosure of conflict is also explained in terms of what it means.

You can't just say, well, somebody has a conflict of interest, when the average patient may not know what that might mean for either his or her protection or the interpretation of the outcome of the study. So that issue of patient education came out.

There was a strong sense that it should be the university doing this sort of prescreening filter prior to disclosure to the patient. There was a strong feeling that IRBs are already very overloaded, and to start disclosing all the information to them, which they would have to sift through to determine what has to go in the informed consent or what should be said to the patient, is probably not appropriate; universities themselves should perhaps be putting together separate committees to determine what should be disclosed to the patient.

All that said, when we raised the final nub -- what happens if the university itself is conflicted -- there was a general throwing up of hands in dismay, because if the university can't do this kind of filtering, then who can? There was concern about a different class of disclosure than had been proposed. I think we referred back to Savio Woo's description of what ASGT was interested in, which is the individual disclosure of the conflicts of individuals who have direct contact with the patient.

There seemed to be equal concern about the statisticians and the people who do data analysis, who have a role in what the interpretation of the outcome might be. We had a brief discussion of what I would call the disclosure oxymoron: the patient is told that your doc has invested in this, and the patient's interpretation is, ain't that a great thing. Gee, they must really think this is a good treatment. It is positive that my physician has actually put his own money into this initiative.

That was a revelation to some people. With regard to disclosure to IRBs and from IRBs -- and I think the same was true of universities -- I will sum up by saying this appeared to be something new that universities themselves are struggling with: what an IRB member discloses, and what the university does with that information. It appears that right now there is a sort of voluntary walk out of the room when you think you might be conflicted in a situation, but no real clear guidance and rules.

I must say that I think with regard to institutional conflicts, this is really new on everybody's radar screen, and there were no grand solutions, other than recognizing there was a conundrum. There was one suggestion that NIH discourage -- I use that word -- equity investment by institutions, which I thought was a rather broad-reaching recommendation.

DR. SNIDER: Thank you.

Michele.

MS. RUSSELL-EINHORN: We had a lot of discussion about disclosure of financial conflict of interest, but let me preface it by saying that for all the issues we discussed, there was a concern that in this dialogue there should be representatives from other federal agencies, that this should not be viewed as something that just affects PHS and FDA and that people should not be confronted with inconsistent guidelines and regulations across the Federal Government.

So, there should be more inclusion of people from other agencies. As a general matter, I think there was pretty clear consensus that institutions wanted flexibility: one size doesn't fit all, and studies are very different. Disclosure will vary from study to study and from financial interest to financial interest, and it is dangerous and not necessarily productive to create categories or specific guidelines that are going to hinder institutions in adequately dealing with the issues.

There was agreement, I think, that institutions need to have detailed disclosure of financial conflicts of interest, but that the subject didn't necessarily have to get such detailed disclosure -- that the informed consent form shouldn't be used as a vehicle to give the subject so much information that it would overwhelm them. Subjects should be given some information, and then the opportunity and the information to ask for more if they felt they wanted to pursue it in more detail.

There was also discussion that the IRBs are not necessarily the place for the detailed disclosure of financial conflict of interest, that in the first instance, institutions might as a matter of management and policy determine that they don't want to pursue a particular research protocol because the financial conflicts of interest put the institution as a whole in a difficult position and that decision is an institutional decision and maybe also plays into the issue of managing conflict of interest.

So, the institution needs to have detailed disclosure of financial conflict of interest and then once the decision has been made that it is worthwhile pursuing, you know, it goes to the institutional review board. In addition, there was discussion of institutional ethics committees that might be more appropriate entities to deal in more detail with these kinds of issues.

I think that pretty much sums up our discussion.

DR. SNIDER: Thank you.

Pearl.

DR. O'ROURKE: Our group came up with a system that I am going to call the uncoupled system. We are proposing the creation of, yes, another committee, called the Conflict of Interest Committee, and this committee would independently require financial information from all investigators and potential investigators. Now, clearly, it would have to be worked out how often and how timely, realizing, you know, that this is a fluid situation.

This committee would look at all the financial information, and when a research protocol was proposed and sent to the IRB, the IRB and the Conflict of Interest Committee would each be aware that this protocol was coming to the fore. The Conflict of Interest Committee would look at the protocol, look at this person's financial statement and say, ha, here is where we see potential conflicts. If we need more information, we call the potential investigator in -- realizing that when I say potential investigator, it could be one person or ten people. It would be the whole cadre.

Then once this committee has made, you know, some judgment as to relative conflict of interest risk, it would call the IRB and say: this is what we see as potential conflicts of interest, and you now need, as an IRB, to figure out the appropriate management. The feeling was that the management issue should really be at the IRB level, but they really wanted to get away from the IRB being the body that got all this information.

The concern was that the IRB is too large, has the wrong expertise, and would have difficulty protecting people's confidentiality; whereas you could have a separate committee that just looked at the financial issues -- and we got some very specific suggestions even for this committee. It should be relatively small. You should be at least 45 years old to be on it.

[Laughter.]

I am just the reporter. Okay?

And above an assistant professor. But basically a group that is a little bit more controlled by the institution. And, again, it would just raise the flag and say, hey, we think there might be a problem.

One other thing this Conflict of Interest Committee would address is the concern that, if the IRB were the place and you just said, you know, I am doing a study on an acne drug and all I am going to do is tell you my financial stuff that might have something to do with acne, well, lo and behold, there might be something else in my financial repertoire that I didn't think was related which, in fact, is related.

If this Conflict of Interest Committee had all of my information, there might be the ability to not just look at a protocol-driven request but, hey, it is broader -- we need to identify those things, too.

I guess, since your question is really about disclosure, we should say that word here as part of the answer. When it went to the IRB, this Conflict of Interest Committee would certainly say to the IRB: you have multiple ways of managing this. Disclosure is but one, and it is important that disclosure alone is not adequate unless it is looked at in context. If the IRB said, okay, given this level of potential conflict of interest and the risk to the patient or the subject, we think that disclosure is adequate, then in that case it is okay -- it has been looked at against this sort of smorgasbord of other options.

DR. SNIDER: Thank you.

John.

DR. LIVENGOOD: Our group was small, so we had, perhaps, less trouble coming to more conclusive decisions about things. When we talked about conflicts of interest and I asked the direct question -- should conflicts of interest be disclosed to the participant? -- the answer was just flat "yes." No further discussion. People thought that was extremely important.

But with all due respect to the people and institutions who reviewed their policies with us yesterday, we thought both of the consent documents we saw were excessively high in reading level. I mean, to hear that somebody had put their equity interest into escrow is not very helpful to the average research subject.

I work in public health. We aim for an eighth grade reading level. Neither of those, you know, got down even to high school level, it looked to me, on a brief read yesterday. We thought it had to be very, very simple. Additional information could be available, or you could ask the person additional questions, but you couldn't burden a subject with so much information they wouldn't necessarily understand. How would they weigh the risks to them, their potential benefits, what was important in the consent process? How much detail had to be about financial interest? That would be very, very difficult.

One reason we were happy with a very simple disclosure was that we thought other levels in the institution had to have much more information available to them -- both at the institutional level and, to an intermediate degree, at the IRB level -- to decide how much additional management needed to be involved in that issue. But as long as those responsibilities were met in good faith and actively, the group thought that patients probably needed a relatively simple notification that conflicts of interest were present, and they also felt there should be a separate paragraph if the institution itself had a conflict of interest.

But, again, more simple, with more information available. We also talked, not surprisingly for those of us who have been in this for a while, about what happens if you are the investigator and the consent has a statement saying the investigator has a financial interest: how much time would he or she have, or how comfortable would they be, discussing it?

So, we actually brought up whole issues of third party consent, monitors, processes: should some disinterested person be available to discuss financial conflicts of interest? We couldn't come to grips with that in terms of practicality, but we realized that a lot of what is going on in informed consent is very difficult.

We did explore, in some places and for certain reasons, options around giving people a test to see if they have understood enough information to subsequently be in the study, which we have done in some situations. But we were very clear that some disclosure needs to be made and that it needs to be very simple, we thought, at the participant level.

One thing other people had said was that one size doesn't fit all, and that was true here. While I was very impressed with what some of the elite institutions we heard from yesterday do to manage conflicts of interest internally, I was much more worried about other institutions, particularly smaller institutions that may not be doing large volumes of research, and how they could develop a model that is perhaps less labor intensive or resource intensive but still arrives at appropriate measures for their research locale.

DR. SNIDER: Thank you.

David.

DR. LEPAY: We spent a lot of time discussing the issue of disclosing financial interest in the informed consent and I think, again, there was almost universal agreement, over 80 percent of our group feeling very strongly, in fact, that this should be part of the informed consent.

We had a small group, but interestingly there was a lot of discussion about whether this should, in fact, be required by regulation as an element of informed consent. It was put in the context that it would perhaps be not a critical element or one of the basic elements of informed consent, but something that could be written into regulation as an "as appropriate" provision: that financial interests should be disclosed or discussed in the informed consent as appropriate. Then there was some consideration of looking at the IRB regulations or IRB guidance and focusing on the IRB as the point at which decisions about appropriateness should come into play; that should be a decision of the IRB, either by regulation or guidance. That seemed to be the majority opinion in our group.

When it comes to what should be included about financial interest, though, simple was again very much the focal point. There would have to be some discussion and agreement on what the really essential elements are, what the standard would be: the amount of information that a reasonable person would want to know and find useful. Even some of the examples that were provided yesterday were a kind of starting point for thinking.

Many people found the simplest of those to be on the order of what we should be looking at and considering. As far as dollar amounts, we spent a little bit of time talking about that, and the general feeling was that disclosing dollar amounts is not particularly useful, because it is very hard to put them in context, even for an IRB, let alone for an individual, and it depends on the circumstances of enrollment and the nature of the trial.

As well, there was some concern that if you start to get into dollar amounts or put too much information in informed consent, this may actually impede the informed consent process, that it may create some kind of -- some level of discomfort between the investigator and the subject if, indeed, those are the two individuals who are involved in the consent process.

So, that was something we had considered. There was a feeling, though, that subjects should be able to get more than this minimal amount of information if they were so inclined; that there should be some reference, much as we reference that you can get additional information about the study. Perhaps there could be a reference to a phone number accompanying the very minimal disclosure that the investigator is receiving compensation for participation in that trial. If the subject or participant wants more information, there is then a means to get it through additional contact.

The group didn't see any benefit at all to disclosing information in an informed consent about the IRB, the IRB members or staff, but also said that there may be value, as we saw in some examples yesterday, in talking about the institution's interests.

There was also conversation that knowing how an institution applies conflict of interest rules would not necessarily be helpful to subjects. There was even concern that this might give them a false sense of security if boilerplate language implying that conflicts of interest are, in fact, being handled is written into an informed consent.

There was a virtually unanimous view, as well, that the clinical investigator's financial interest should be disclosed in writing to the IRB and a feeling that IRBs are very much the pivot point, but that they need certainly greater support to be able to accomplish this.

One of the bases for our discussion about IRBs is perhaps more relevant to FDA regulated research, and that is the trend toward clinical investigators working outside of academic centers. Certainly the percentage of clinical investigators who are involved in clinical investigation solely from private practice is increasing, and putting the burden of looking at their conflicts of interest on an institution or an institutional structure would be more difficult than making this a provision of the IRB, which is the final common pathway, if you will, for both types of investigators.

Also, again, I think there was very much a feeling that IRBs are in the best position to decide what should be disclosed to subjects and the variation that may occur depending on the study and the study population.

The issue of whether the IRB could assume this burden, as has been raised before, was brought up very clearly; in fact, a member of our group said this may be the last straw for the IRB.

But there was a general view that IRBs are perhaps the least biased group, and a feeling that IRBs are not necessarily, or at all, biased by the institution's interests. Another quote from our group: "Certainly the high powered docs are not the ones who have time to serve on the IRBs." They are the ones who are most likely to be involved in institutional financial practice.

Those were the comments from our group.

DR. SNIDER: Thank you, David.

Bob.

DR. TEMPLE: I guess in the interest of full disclosure, I am not sure whether the first point I want to make is me talking or the group talking. So, I will do it anyway.

It seems very important to distinguish between interests and situations that should be disclosed and things that we think are necessarily conflicts. They might not be the same thing. This, in part, relates to what Jeremy was talking about this morning. What you want, I think, is for people who are entering a trial to feel that nothing is unavailable to them. There is no mystery. Nothing is hidden.

Now, that might require disclosure of things that we don't necessarily believe are a conflict of interest, like the fact that the investigator is being paid for his services. I mean, you don't usually think of that as a conflict, though I suppose you could imagine circumstances in which it might be. But it might be appropriate to disclose a lot of these things, again with mindfulness of all the concern about overdoing it and making it more complex than it needs to be, even if you didn't particularly think those things were a conflict.

I think it is very important to separate those things. A conflict of interest by an investigator is something that has to be managed in a variety of ways, one of which is disclosure, but another of which might be that you have to sell the interest. But you might want to disclose very broadly, with access to more detailed information if one wanted it, even where you didn't particularly believe something was a conflict.

Obviously, as many people have said, whether something is a conflict depends a lot on the situation, the stage of research, the nature of the study. Is it blinded? And are you sure it is blinded? A whole bunch of things can protect you against possible conflict. But you might want to disclose those things anyway, even if you don't particularly think they are conflicts.

I know everybody worries about overcomplicating the explanation, but I guess I feel too much worry about that is a little paternalistic, and what you want is perhaps several levels of explanation that people could choose among, depending on how interested they are. I am uncomfortable with the idea that they can't handle a dollar figure. Everybody who buys a car can handle dollar figures.

What it means is difficult to explain because it is kind of complicated. It means someone is going to have to spend some time saying, well, we always pay doctors to -- investigators to do a study. That is not unusual. Here is what happens to the money and just all of those kinds of things, but if you don't introduce the idea, no one can even ask the questions.

Our group also had great concern about whether IRBs were the best people to know the nuances of financial arrangements, and someone -- sorry, I forget her name -- from Emory described a conflict of interest committee that works closely with their several IRBs, and they found that extremely helpful. It makes recommendations; the IRB has final say on all these things, but they found that very helpful. And everyone was quite frightened about having IRBs do more than they already do.

I would say in general our group thought that disclosure ought to be pretty broad and might include things that people didn't worry about as a conflict of interest.

I guess I should mention one thing that came up. The differences between FDA's requirements for financial disclosure and other people's are not necessarily, although possibly, irrational. A university might want to have a lower level of disclosure than we might because they are interested in the imagery and how they manage everything and what standards they want to impose; whereas, we are predominantly worried about something that might make the data corrupt.

So, you might have different standards. They are not necessarily the same. We originally, for what it is worth, proposed a 5 percent equity interest in companies, until we realized that only Warren Buffett would have such an interest, and it went back down to $50,000, which ordinary humans can have.

DR. SNIDER: Thank you.

Well, my next question was going to be how your groups recommended managing, reducing and/or eliminating financial conflicts of interest. I am going to ask that question, but I am not going to go down the line, because I think many of you have already addressed it. So, let me ask if there are panel members who have things to add, particularly around the issues of divestment of holdings, or of just not participating at all -- not conducting a trial given certain kinds of holdings.

Also, on managing the conflicts that IRB members have, is there anything else to be said around those issues, or any other issues around management, reduction or elimination that you haven't spoken about yet?

Pearl.

DR. O'ROURKE: Our group really came up with an idea very similar to what Jeremy spoke about this morning; he called it the spectrum of conflict, and I will say the process of research. We came up with what we called critical control points. In managing potential conflicts of interest and decreasing the potential risks, what the group wanted was for the IRB to have this list of potential points during the research, be it analyzing results, be it obtaining informed consent, be it administering the product or whatever. And, again, since one size doesn't fit all, each trial would be looked at separately: here is a conflict of interest risk.

In this situation, we think all that is needed is disclosure; or we think all that is needed is an ombudsman to help with the informed consent; or we think that you shouldn't be anywhere near the statisticians -- in fact, you need an outsider.

So, the management would really be like this laundry list and, you know, again, if there were no conflicts, then none of these would have to be put in place.

DR. SNIDER: Anyone else?

Michele.

MS. RUSSELL-EINHORN: I just wanted to emphasize that our group was very, very concerned about putting too much additional work on the institutional review boards. They are overburdened, they are under resourced, and this involves a lot of information. There was great concern that without adequate resources, this might, like John said, be the straw that breaks the camel's back.

In terms of managing the conflict of interest, there was concern again that there needs to be flexibility, that institutions are different, studies are different, different structures work in different places and there needs to be the flexibility to adapt a particular system that works to the right institution.

DR. SNIDER: Lana.

DR. SKIRBOLL: Since a lot of the responsibility for managing, reducing and eliminating sits with the universities, there was a short but I think interesting conversation about what somebody referred to as a "giddyap, whoa" university culture today: caught between Bayh-Dole and the need for universities to get vested in companies and bring income into the university, and then this issue of human subjects protection, and how or whether we need to change that university culture.

There was a short discussion about the role of Bayh-Dole: that Bayh-Dole itself, although it encourages technology transfer, which can lend itself toward royalty payments for the university, did not necessarily mean equity investment on the part of the university, but it has been interpreted as, you know, let's get it out there and get it in the market. So, this issue of the main responsibility to manage, reduce and eliminate sitting with the [inaudible] university, and this conflicting, contradictory university culture being developed right now, was raised with no solution.

DR. SNIDER: Was there any comment about the proposed solutions from yesterday about trust, for example?

PARTICIPANT: I think the solution that actually came up was that the tech transfer parts of the university need to work more closely with the human subjects protection part of the university, so that they are not functioning in two different offices at two different ends of the university administration, and so that they realize they are having an effect on each other.

Where universities are setting up those structures, at least the consensus was that they are considering the equity investment alongside these issues of human subjects protection.

DR. SNIDER: Bob.

DR. TEMPLE: Well, it raises a tough issue. People have expressed concern about commercial IRBs on the grounds that if they don't say "yes," maybe no one would ever come to them again. But university IRBs have similar incentives to have a study placed in their institutions. Although, I guess, historically and philosophically the idea is that the IRB should be local to reflect local needs, it seems to me that one could make a case for not having all of the people on the IRB connected with the university -- I know they are not all; some of them have to be unconnected -- but maybe there should be even more review than we currently have by people who are not going to be recipients of the grants, because it seems very hard to separate the need for placement of studies in institutions from the review, which seems part of a general problem. I know there is more talk about central IRBs, but most talk of those refers to sites in a multi-center study joining together, and they, of course, all have that same interest in having the site placed.

So, maybe they should include people who are not a part of the place where the site would be located. Everyone realizes, though, that all of these situations are complicated. I think some of the discussion yesterday looked at the inventor of a device or something like that, and your first thought is, well, that person shouldn't be studying it; how can that be okay? But sometimes they are the only ones who know how to place the device.

There may be some stages of the investigation where, perhaps with someone watching closely, that is the ideal person to do it early. Now, if all the data submitted to the FDA depended on that one person, that is a different question. So, exactly what to protect against and what to recommend is complicated, and the main thing seems to me to be to have everybody discussing this a lot -- what somebody yesterday called the culture of compliance.

What Jeremy referred to is the whole idea that things should be open and aboveboard; then at least you get a lot of views and you can think about it all the time. But simple answers are obviously not going to come easily. That said, it doesn't seem crazy for an institution to say you can't own shares in something you are investigating. I mean, that wouldn't be everybody's choice, but it doesn't seem ridiculous.

DR. SNIDER: Any other comments?

John.

DR. LIVENGOOD: Yes, just briefly, we specifically asked about the IRB, thinking about the individual members of the IRB. We were comfortable that the appropriate management -- with some oversight, so that people really knew what these conflicts were in advance -- was for the IRB member to leave during the entire discussion and the vote on the proposal; that was sufficient management for individuals within the IRB.

Our group, also similar to yours, I think, felt that IRBs are to a large extent independent of the university's interests, and so felt fairly comfortable not going much further than dealing with the potential conflicts of individuals.

Just commenting on the growing importance to the institutions of revenues from this source, people were aware of that. They felt it was a real issue for the universities: as payments under Medicare and HMOs get cut, the institutions actually have a need to try to generate income through some of these technology transfers and other avenues, and we need to be cognizant of that.

We didn't specifically comment on whether it could take the form of equity or royalties, but we need to work within the situation as it is to deal with those issues.

DR. SNIDER: Any other -- Pearl.

DR. O'ROURKE: Our group, in terms of the discussion of IRBs -- and I agree with the increased awareness, the culture of compliance, et cetera -- made the point that IRBs were, in fact, part of the institution, with many members being members of the institution and, oftentimes, a member of the board of trustees. So, if you start thinking more and more about conflicts of interest, there was the concern that when you ask everyone who may have a conflict to recuse themselves, you may end up with the chairman of the IRB and the doughnuts.

So, what we need to do is truly look at the potential implications for how an IRB can work, depending on how we define or begin to define how conflicts should be addressed. I was initially interested in making this comment after Bob suggested perhaps making it a little bit less local.

DR. SNIDER: Are there any other comments about this issue? As I understand it, there probably are some functions within institutions that have to be carried out that are relevant to this. One, of course -- assuming we have defined what we want disclosed and there is a forum for doing that -- is the function of reviewing that disclosure to make some determination as to whether it represents a hazard to potential participants, to the institution and to others.

So, whether you have a conflict of interest committee or whatever you call it, some person or group of persons has to fulfill that function. There is the human subjects protection function we are all familiar with, and there is also the technology transfer function. Lana, I think you mentioned earlier that there is a need for these offices to talk to one another and communicate.

Was there any further discussion on how that might be done or how these functions could most efficiently be carried out within an institution and should they be organizationally affiliated with one another or --

[Tape flip from tape 5, side B, to tape 6, side A -- text lost.]

-- issues, if any -- well, first, let me ask what your groups thought about the Federal Government's role. I mean, obviously, there is some guidance out there. We are having this conference. But what role, if any, did your group feel the Federal Government should play in addressing the issues we have been talking about?

Let me start with you, Michele.

MS. RUSSELL-EINHORN: I think there was consensus that there should not be regulations, but that the Federal Government should take a role in terms of putting out guidelines to help institutions; that the guidelines should provide flexibility; and that there should be consistency within the Federal Government. I know Bob just described reasons why FDA regulations and PHS guidelines might be different, but there were concerns about that, and maybe if there are inconsistencies, there is a need to explain them more thoroughly.

There was some sense that the federal guidelines should be viewed as a minimum, not a maximum; that the message should be given to institutions that when the Federal Government issues its guidelines, these are just the minimum standards, and institutions should go further than that and decide what is most appropriate for them.

So, there was clearly a sense that there needs to be a federal role and that it shouldn't come in the form of regulations.

DR. SNIDER: Thanks.

Pearl.

DR. O'ROURKE: I would just make two basic points. One, we kind of nibbled around the edges of this; we didn't really address it head on. One concern is that while we do have the eyes and ears of the Federal Government here, there is a lot of research that goes on, and potentially will go on, that we can't reach from here, and the whole environment of research is changing so much right now. Even under the federal umbrella, we are very focused on academic health centers as the model; how would a system that works for that setting translate to an industry model that is using a CRO and 58 private practices, where the last time the investigators went to an academic medical center was to get their degrees?

So, we need something that is going to fit all these different settings in the evolving face of research. And then, you know, we talk about FDA; obviously, if there is a product, we have FDA rules in there, but consider the behavioral research that is done with private money. We have no way to reach through to that. So, there is that concern about what we are not getting.

Second is the desire to reevaluate the existing federal regulations we have; a fear that, in fact, when we talk about FDA disclosure rules, they do less than what we think they do. I think some of that came out yesterday. I am on thin ice here talking about FDA stuff with two FDA people, but my impression is that the disclosure rules really apply at the point of application, far down the line. You know, you have missed the boat; in terms of human subjects protections, we should have been looking way earlier.

So, can we look at the FDA regs and system and ask whether there is a way to capture that, if our goal is patient protection? Secondly, for NIH, are there ways we can look at different mechanisms of how we fund studies, looking at problematic areas such as multi-center clinical trials? I mean, we already hear huge complaints from people saying, well, when I went through the University of Tulsa, this was my informed consent, but then when I went to the University of Baton Rouge, I had to have a totally different consent. Same thing.

Now, if you add conflict of interest on top of that, we may be talking very different requirements in different places. So, one question is whether NIH should look more at both contracts and cooperative agreements for multi-center studies, where there can be more homogenization of some of the protections and some of the guidelines. So, I guess we are asking the federal agencies -- and I don't want to leave out CDC; obviously, there is CDC -- each one to look at what it has, how that can be strengthened, and what gaps it already has, before and while we also look at whether there should be more on top of this.

DR. SNIDER: John, do you have something to add?

DR. LIVENGOOD: Yes. I specifically asked our group about regulations, guidelines and guidance and what they felt about that. Some of their concerns -- nobody prays for guidance at any point -- for regulations, excuse me -- but everybody said they were concerned about uniformity, how this could be looked at across institutions, and whether there should be some type of monitoring function where you move beyond just guidelines. It wasn't clear how people wanted to actually achieve some degree of guarantee that all institutions were doing something likely to be effective in trying to deal with this.

So, I thought there was some ambivalence about what people really felt would be needed to have an impact in improving protection around this issue.

DR. SNIDER: David.

DR. LEPAY: I am going to follow up on the comment Pearl made, because we heard the same in our group with respect to FDA's financial disclosure: that prevention is not really the focus of FDA's financial disclosure rule, and that the regulatory framework should be examined with an eye toward preventing problems.

So, that did come up in the context of regulation and its appropriateness. We had kind of a mixed group, with a number of government representatives, so of course we also heard some comments to the effect that regulation puts everyone on notice, that regulation can be very important in these areas. On the other hand -- the other side of the coin -- it is really the IRBs and the institutions that ultimately need this information and that infrastructure put into place; giving it to the regulators and putting more regulation in place cannot really prevent the problem.

I mentioned earlier that we did have some discussion about regulation in the context of the current informed consent regulations and the possibility of addressing financial interest there. Just as a general point, which I am sure everyone heard here, there was a belief within our group that there are a tremendous number of gray areas here and potential inconsistencies, and there certainly is a need for much more guidance in this area, as well as for harmonizing as much as possible between agencies.

DR. SNIDER: Bob and then Lana.

DR. TEMPLE: Well, if it became an established principle or consensus that it was appropriate to include some information about the finances of a trial in consent, then my presumption is that would go into whatever guidance or regulations govern informed consent. I mean, there would have to be a consensus on that and it would apply to the things that are covered by that consensus. I doubt if anyone would like it to spell out exactly what needs to go in there, but that doesn't seem entirely out of the question.

It is very true that FDA's disclosure occurs after the fact, but to begin an IND, one also has to find out from investigators engaged in covered trials -- covered trials really means efficacy trials, Phase 2 and 3 -- about these conflicts. So, we think there is some prophylactic benefit, but we should be very clear it has very little to do with the kind of Phase 1 trials that made everybody nervous about gene therapy.

It absolutely does not cover those. So, if there is a remedy, it is certainly not going to come from the disclosure requirements we have imposed. I know people are actively thinking about that; I see Jay Siegal in the room, and there is a lot of attention to that question. I don't think it is out of the question that regulations could be written to relate to that.

DR. SNIDER: Thank you.

Lana.

DR. SKIRBOLL: I would reiterate that our group went through the same thing. To my surprise, they were interested in federal guidance, even in terms of these issues of how the university should look at this; definitions and issues needed to be harmonized; and whatever the feds said needed to be a floor and not a ceiling.

But the other thing that got raised but not answered is the extent to which the feds will literally, and also substantively, invest in compliance. You know, once you put stuff out there, how do we know that universities are complying with it? For organizations like NIH that are not regulatory agencies and don't normally think of themselves as heavily vested in compliance, how do we ensure that once we put guidances out there, people are actually adhering to those guidances, or at least using them as a floor in the development of university policy?

DR. SNIDER: Okay. So, it sounds like we need to put out some guidance that is consistent and harmonized and helps people a great deal, but at the same time is not too prescriptive and allows for individual judgment, which is an easy, easy task.

PARTICIPANT: We will be happy to review your first draft.

[Laughter.]

DR. SNIDER: Thank you.

What other issues -- since we are moving toward the end of our time, what other issues, if any, came up in your particular group that panel members might wish to comment on?

Bob.

DR. TEMPLE: Well, there were passionate denunciations and great consensus that ghost written articles are a horror and that somebody -- it is not clear who -- should do something about that. I think if you asked, many people would say it is the responsibility of the medical journals, but I think there is less clarity on that. At the level of undermining academia and just feeling very bad about it, though, I think there were quite uniform views that people should take responsibility for the things they write and shouldn't have words put into their mouths.

That was certainly one issue. The other question that came up, though the remedies are less clear, was what to do about all the unpublished results that people would like to know about. That is something of a problem because, you know, a negative study of another antidepressant is not something that makes its way into a whole lot of journals. Why would anybody care particularly -- or maybe they wouldn't, maybe they would, I don't know. But how to do that is not clear. Whether there is some way to have a national database that essentially includes the results of every study that was done on humans is something that probably needs further debate.

There was one question as to whether data in FDA files could be made available to people for doing further research, and at least so far, that has usually been considered privileged. For what it is worth, though -- and these matters are under discussion -- we are entering an age where electronic databases will be the norm, the rule, in fact. So, actually making things accessible will not be physically impossible anymore; to xerox the entire database in a new drug application is a horrifying thought.

But it is much -- it will become much, much easier to do that if it were thought valuable and if appropriate attention were paid to proprietary rights and things like that. For what it is worth, there are things you can do with FDA databases that are interesting. We have seen one report recently in which the investigator used only FDA reviews, which is not the same thing as having access to the data, to look at well over 20,000 patients randomized into depression trials and to look at suicide rates. I should tell you they found that they were not increased in the placebo groups.

That was something of a relief, but the potential for doing those kinds of things is very great if these databases were available.

DR. SNIDER: Lana.

DR. SKIRBOLL: There were a lot of things discussed, but I think one of the things that I am sure others have heard is that in this dialogue, we have not heard from enough voices. In particular there has still been a sort of deafening silence on the part of patients. We need to hear more from patients about this issue, and it may reflect how much they know or understand about this issue, but if we are going to come out with guidance, we need to do more about seeking the opinions of patients and investigators, who are actually doing the work.

There has been a tendency to talk to university administrators or people who are IRB members, people who are overseeing this, and perhaps we need to hear more from investigators as well. I think what we heard from others is that the activities of the American Society for Gene Therapy -- I think that was raised yesterday -- getting more professional groups to internally really debate these issues and come out with a position will help raise the consciousness of everybody on these issues.

DR. SNIDER: Thank you.

PARTICIPANT: I would echo what Lana just said. We had a discussion that there weren't enough voices in this dialogue, that there are a lot of researchers -- there are a lot of physicians who are not covered by a government assurance who are doing research with significant financial interests and those people aren't represented here, the principal investigators or other investigators.

There was some sense that there should be some more discussion about the difference between institutional conflicts of interest and investigator conflicts of interest and discussion about independent institutional review boards and their conflicts of interest and that should be addressed as well.

PARTICIPANT: Three points. One, the whole issue of international. I think the point was well made yesterday in the public comment period that research is globalized to this point; yet, it is also local. That was one issue.

A second, which has been referred to numerous times here is whatever comes down the pike, how are we going to pay for it.

And the third, which was not discussed, and I guess I was a moderator but I was also a member of our committee, I would just like to put out there, the dueling interests of transparency of this process with privacy. I think there are some head-on collisions there.

DR. SNIDER: John.

DR. LIVENGOOD: We spent some time actually talking about the data from the presentation in the morning, that it shows an association between sponsorship and the results of the study. We were quite troubled about that. We felt we couldn't really evaluate, after this presentation or on having read the article, to what extent that is affected by the publication bias toward positive studies. The six journals that they reviewed may be top-line journals, and if you have ever tried to get a negative study into a top-line journal, you know that that is just very, very difficult. There is also the possibility that when you have a negative study, you can find a hundred thousand reasons why maybe you didn't do just the right comparison somewhere or other, so you get frustrated trying to constantly respond to comments, and sooner or later it may not be the journal that is turning you down. You just don't feel like revising it anymore and it just disappears.

But we were troubled by the appearance of that association. We didn't feel we could ascribe any type of cause and effect to it, but we thought that empiric data and more study on that type of issue was just really needed and that we run a big risk here of ignoring it when we can document that there is such an association, at least to the degree that it was documented here.

That troubled us on behalf of the whole research enterprise. We did talk some about what you just raised, privacy, and, in particular, the role of state laws while the federal privacy regs are still in process -- what state laws are now doing in terms of privacy and what that means for research. As you know, that is a concern of mine at CDC as well, but mostly it was people who felt they had no answers and no guidance about what they were supposed to do and how it was going to impact research, and that somebody should be thinking about that, whatever that may be.

The last point was very similar to what has been echoed here by the others. My group asked where is the public viewpoint. We need to hear more about what the public thinks about this and less, perhaps, about what we among ourselves do. They did ask the same question that other people have raised, too. They asked why didn't we talk about, you know, additional support for IRBs. They felt we should have had that as part of the plenary information. They were very willing to talk about it and it sounds like all the groups talked about it, but it wasn't explicitly addressed as part of that.

Even though I felt that they -- from other meetings that I have been to, I can tell them that, you know, the department is spending an enormous amount of time thinking about this and we just don't have any proposals out there right now that really will satisfy that issue, but they felt that they wanted to hear some more information. If there are going to be new duties, there needs to be some consideration of what can be done to better support the process.

DR. SNIDER: Were there any suggestions about public input, more public input?

DR. LIVENGOOD: That aren't different from the ones you hear all the time, change the overhead, make it a direct cost. I mean, nobody --

DR. SNIDER: No, I am talking about public input into this process.

DR. LIVENGOOD: There was some possibility there. We actually had some people from non-academic settings who could play a role in trying to get some more public input, either from a state health department or from other places as well. But with many of these issues, once you get sufficiently committed to this, you sort of cross over the line from a true man-on-the-street view of it into more of an advocacy role. As for how to get the opinion of people other than by trying to do some research on it, they didn't say.

DR. SNIDER: Okay.

David.

DR. LEPAY: I am probably echoing many of the same thoughts because a lot of our group's conversation focused on the fact that the group there, as well as the group more broadly, was not necessarily representative of a number of interests. We didn't have clinical investigator representation in our discussion, and clearly there was not a lot of representation of subjects' interests overall. Comments were raised that maybe the government needs to convene some kind of community or patient focus group, and when the question came up about what empiric evidence existed, well, maybe the government can sponsor some such studies. Granted, they are difficult to design, but that may be the way to acquire this empiric information. So, we had a lot of focus on the issue of who was not represented here and perhaps ways of better engaging them.

I think the only other area that we touched on a little bit is, as we are starting to talk about potentially new responsibilities for some of our characteristic players and possibly new infrastructures being developed as well, there was concern about who ultimately is going to be monitoring each of these structures and what kind of auditing will be expected. A sponsor is going to be put into a position where they are going to have to assure themselves, through auditing or monitoring, that their own processes are not being compromised by these new organizations.

DR. SNIDER: Okay. Before we adjourn, are there any other issues or concerns that were brought up or weren't brought up, perhaps should have been brought up?

If not, I want to thank the panel for a wonderful job of summarizing a lot of material.

Thank you all.

[Applause.]

I think we reconvene at 11:00.

[Brief recess.]

Agenda Item: Plenary: Reaction to Conference Proceedings

DR. BLUMENTHAL: Well, in the interest of keeping us on time, I am going to convene our reactor panel.

My name is David Blumenthal. I am director of the Institute for Health Policy at Massachusetts General Hospital and Partners Health System. I am delighted to be here.

Many panels late in a discussion, such as this, are convened in the spirit that everything has been said, but not everybody has said it. But I am confident that this closing panel is not convened in that spirit. It is composed of individuals who have thought deeply and wisely about these issues and have divergent and strong views, and I know it is a panel that will send us off with a lot of ideas and perspectives that will need to be accommodated as we go forward trying to manage the issues that have been raised here concerning conflict of interest and human subjects research.

All our speakers deserve long introductions, but they have bios in your book. So, I will only give brief introductions as each speaker rises to address you and I would appreciate it if we would hold questions until the end. There should be plenty of time for questions, but I would like to make sure that every speaker has a chance to get their points across before we embark on questioning.

Before turning to the first speaker, I want to take the moderator's prerogative to reflect a little bit on the last two decades or so during which this set of issues, the issues of conflict of interest in research, has been in one way or another on the public agenda, especially with respect to the life sciences.

When we did in 1983 a first national study of the relationships between academic institutions and industry, in the life sciences and in other areas, we were concentrating our work on what was then the hottest field of the life sciences, the most commercially appealing, and that was biotechnology.

We were somewhat surprised to find that the prevalence of relationships with industry among chemists and engineers in universities was double the rate that was detectable among biotechnology practitioners in universities. So, you may be asking, as we have over time, why it is that we have heard so much about relationships between universities and industry in the life sciences and comparatively so little about engineering and chemistry or computer science, in which these relationships have never been studied for prevalence but probably are much more numerous now, as they were more numerous then.

One hypothesis you may have is that though they weren't as prevalent in the 1980s as they were in the chemistry and engineering area, relationships in the life sciences have probably grown dramatically. We have surveyed medical school faculty three times over the last two decades, between 1983 and 1996, and the prevalence of relationships between universities and industry has not changed dramatically among faculty over that time. About 23 to 25 percent of faculty have research support from industry and about 7 to 8 percent have equity in companies that are related to their own research. Those numbers, at least as of the mid-1990s, haven't changed much.

I think that there are two other possible explanations, many possible explanations, but two, perhaps, most salient for why we continue to be concerned about conflicts of interest in life science research. One is the involvement of identifiable human subjects in the research with all the ethical and public relations issues surrounding that involvement.

The second is the massive amount of federal support and the consequent interest of taxpayers and their representatives in whether the government and taxpayers are getting their full return on their research investment. I think the fact that we are gathered here under federal auspices to discuss specifically human subjects protection in research is at least symbolically confirmation of the importance of those two aspects and I think one of the implications of that is that we won't put to rest these controversies until we have accomplished at least two things.

First, we have demonstrated that human subjects of research are well and clearly protected against real or apparent harm that may befall them as a result of conflicts of interest in research. And, secondly, that the Federal Government's investment in research, the public sector's investment in research is not compromised by the prevalence of academic industry relationships.

With that introduction, I will turn over the podium first to Marcia Angell. Marcia Angell is I am sure known to many of you. She was until very recently editor of The New England Journal of Medicine and has been a friend and colleague of mine for a number of years.

Marcia.

DR. ANGELL: Thanks, David. I really loved that introduction.

I am pleased to have been invited to participate in this important conference. There is so much to say this morning and so little time in which to say it. What I am going to do is to make some general comments on the topic and weave in my responses to the conference. I apologize for reading most of my comments, but it is the only way that I can be sure of touching on all of my major points in the short time allotted to me.

As we know, this conference was a response to Secretary Shalala's charge to "Identify new or improved means to manage financial conflicts of interest that could threaten the safety of research subjects or the objectivity of the research itself."

That language is important and I will come back to it later. I am going to begin this morning by defining a financial conflict of interest because there is often controversy on that score. A financial conflict of interest, I believe, is any financial association that would cause an investigator to prefer one outcome of his research to another.

Let me give you an example. If an investigator is comparing Drug A with Drug B and also owns a large amount of stock in the company that makes Drug A, he will prefer to find that Drug A is better than Drug B. That is the conflict of interest. Note that it is a function of the situation, not the investigator's response to that situation.

If the investigator then finds that Drug B is better, he may swallow his disappointment and report the facts objectively or, as Tom Bodenheimer pointed out, he may in many ways not report his findings objectively. According to this definition then, there is no such thing as a potential conflict of interest. The only thing potential about it is whether the conflict leads to bad research.

Note also that financial conflicts of interest are not inherent to the research enterprise. They are entirely optional, unlike the intellectual or personal conflicts of interest to which they are often compared, such as the desire for Nobel worthy results.

At one time financial associations with private industry were largely confined to drug companies awarding grants to academic institutions for research in areas of interest to both of them. In the best institutions, this was done at arm's length. The companies had no part in designing or analyzing the studies. They did not own the data and they certainly did not write the papers and control publication. The academic institutions, the best of them insisted on this, and they were the watchdogs.

Things have changed dramatically in the past few years. Arm's-length relationships are a thing of the past and financial arrangements are hardly limited to grant support. Companies now design studies to be carried out by investigators at academic medical centers who are little more than hired hands supplying the patients and collecting data.

The companies own the data. They analyze it and control publications. Such a study is not even necessarily of any real scientific importance or interest to the investigators. It may -- particularly with Phase 4 studies -- instead be almost entirely of marketing importance to the sponsor.

For their part, academic institutions are increasingly involved in deals with the same companies whose products their faculty members are studying. Some institutions are, for a price, allowing companies to set up research outposts in their hospitals and giving them access to students and house officers, as well as large numbers of patients. They are thus aligning their interests with those of industry and allowing the boundaries between them to become ever more blurred.

Consider the story in last Thursday's Wall Street Journal. It reported that Targeted Genetics Corporation will acquire Genovo, Incorporated. As part of the deal, James Wilson will receive $13.5 million worth of stock in Targeted in exchange for his 30 percent equity interest in Genovo.

Recall that the University of Pennsylvania permitted Wilson to own a piece of Genovo, even while he was doing research on its products. Now, that is hardly surprising, given that Penn itself, according to the story, will receive $1.4 million worth of stock for its 3.2 percent stake in Genovo. Some watchdog.

This story is news because of the death of Jessie Gelsinger, but it is hardly unique in its outline. Incidentally, I sometimes think that there has been no piece of legislation quite like Bayh-Dole with respect to the enthusiasm and the expansiveness with which it is embraced. I believe it is often used as an excuse for making as much money as possible in as many possible ways.

Now, what about the integrity of the scientific literature? As Tom Bodenheimer summarized beautifully for us yesterday, there is plenty of strong evidence that investigators with financial ties to companies whose products they are studying are, indeed, more likely to publish studies favorable to those products. In my two decades at The New England Journal of Medicine, it was my clear impression that papers submitted by authors with financial conflicts of interest were far more likely to be biased in both design and interpretation.

In my view, the pervasive and manifold financial conflicts of interest that now exist have a number of bad effects, in addition to the threat to the integrity of the scientific literature and the risk to human subjects that primarily concern us here.

I don't have time to discuss those other negative effects. Suffice it to say, though, that we need to remember that the mission of investor-owned companies is quite different from the mission of academic medical centers. The primary purpose of the former, of the companies, is to increase the value of the shareholders' stock, which they do by securing patents and marketing their products.

Their purpose is not to educate nor even to carry out research, except secondarily as a means to their primary end. I believe that academics often forget this and they allow themselves to believe that marketing is really education. They allow themselves to believe that. With all this as background, what I would like to do in the time remaining to me here is to say a few words about the remedies that have been proposed at this conference and why I find most of them unsatisfactory.

The emphasis here has been on managing conflicts of interest. Managing is, in fact, the word used in Secretary Shalala's charge. In particular, the focus has been on disclosure to human subjects and on institutional oversight. In my view, disclosure simply passes the buck to the patient subject, who is left to wonder how the investigator will balance his competing interests. That is neither fair nor helpful to the subject, who may be both sick and desperate, and it certainly does nothing to remove the conflict of interest.

Caveat emptor is simply inappropriate in this setting. On the other hand, not disclosing a conflict of interest is even worse because it is fundamentally deceptive. Patients naturally assume investigators are primarily interested in patient welfare and they have a right to know of anything that may shake that assumption.

To disclose or not to disclose then is a Hobson's Choice and the very difficulty of making that choice points to the underlying problem, the very existence of financial conflicts of interest.

Institutional oversight usually boils down to defining conflicts of interest that must be disclosed and documenting them on various pieces of paper. Some of the guidelines are truly mind-boggling in their complexity and detail. For example, Harvard Medical School's guidelines, widely acknowledged to be among the most stringent and the best, fill eight closely worded and nearly impenetrable pages. They deal with such matters as the dollar amount of equity interest investigators and specified family members may own in companies sponsoring their research. The answer seems to be $20,000 for spouse and dependent children.

In short all of this is about management, not about the wisdom of permitting the conflicts of interest in the first place. It is as though financial conflicts of interest were inherent to the enterprise, were a given of nature or a constitutional right, neither of which they are. I believe we need to stop dancing on the margins of this issue and deal with it head on.

The important question is whether financial conflicts of interest should exist, not how to work around them. Mind you, I am not opposed to cooperation with industry. Academic-industrial collaborations have led to some important advances, but the conflicts of interest that now abound in academic medicine go way beyond cooperation and many of them have no conceivably useful social purpose.

I would like to suggest the following guidelines. One, investigators who receive grant support from industry should have no other financial ties to those companies. Consider an analogy. Suppose a judge had before him a case of two contending companies and he had equity interests in one of those companies and he said not to worry. I am a good judge. This won't bother me. I will hear the case.

He would not be permitted to do that. Why should we?

Two, institutions should not accept grants with strings attached. Investigators should design and analyze their own studies, write their own papers and decide about publication.

Three, consultancy arrangements need to be carefully limited. I believe the argument that they bring new technologies to the bedside is greatly overblown, particularly in clinical research where the technology is usually developed.

Consultancies in academic medicine are virtually ubiquitous and they are more often about income supplementation and the good will thereby generated than about technology transfer. Incidentally, the issue of good will, I think, is considerably underestimated.

Furthermore, technology transfer does not require that fees be paid directly to investigators. Income from limited consulting might instead go to a pool earmarked to support the missions of the institution. After all, investigators at academic medical centers have reasonably well-paying jobs. They do not need to jeopardize their objectivity, the very core and essence of what they do, to increase their income.

Four, institutions should not become outposts for industry by allowing investor-owned companies to set up teaching or research centers in their hospitals and giving them access to the students, house officers and patients. I am aware that academic medical centers are now in difficult times financially and I sympathize with their plight. I really do. But the answer cannot be to sell themselves and their patients to industry.

Five, institutions and their senior officials should not have investments in any health care industry. The editors of The New England Journal of Medicine have long had such a rule and it has not been a hardship to us. Rubies, race horses, real estate, yes, we can invest in all of those, but not biotechnology or managed care companies.

Finally, and perhaps most important at this stage, institutions need to get together on this issue and develop a common policy. As it now stands, investigators may threaten to leave institutions with stringent policies and go to more lenient ones. That race to the bottom can be stopped only by the major academic medical centers joining together to do the right thing. I am well aware that my proposals might seem radical. That is because our society is now so drenched in market ideology that any resistance to it is considered quixotic.

But medicine and clinical research are special and I believe we have to protect their timeless values of service and disinterestedness. Patients should not have to wonder whether an investigator is motivated by financial gain and the public should not have to wonder whether medical research can be believed.

The only way to deal with the problem is to eliminate it as much as possible.

Thank you.

[Applause.]

DR. BLUMENTHAL: Thank you very much, Marcia.

Our next speaker is James Benson. He is executive vice president for technology and regulatory affairs at the Advanced Medical Technology Association, formerly known as the Health Industry Manufacturers Association.

MR. BENSON: Thank you.

You know, what David didn't say is that I spent, I think, well over -- close to 30 years with FDA and over 30 years with the Public Health Service. Part of what I learned at FDA was you have to follow instructions. You have got the regs and the guidances and so on. I was told to take five minutes. So, I am going to help make up some time here for you. It is all part of that background.

This session and being here caused me to look back and I can remember soon after I left FDA in 1992, conflict of interest surfaced and like Dr. Angell, I think my first reaction was, well, there just shouldn't be any. That is the simplest possible solution.

But as I looked at the issue from the industry perspective, which was my new job to do, especially from the medical device industry perspective, what I realized is that it is a lot more complex than that and we have got to really think deeply about that issue. You know, purity on the one hand in terms of the scientific efforts --

[Tape flip from tape 6, side A, to tape 6, side B -- text lost.]

-- adequate competent support for research on the other as Dr. Kirschstein said yesterday, dollars versus objectivity. I hope I am doing that justice, Ruth.

Now, I guess the question, especially after listening to Dr. Angell, whom I almost always agree with in many of her past positions, I would just ask is this. If we were to eliminate all conflicts, if we could wave the magic wand and do that, would that ultimately benefit patients? In fact, would it ultimately benefit the scientific process? I can see a model, once we got there, where that could be a very comfortable way of doing business, but I just wonder what the model is. I hope that what we are all about here is really bringing the right products to patients.

I don't know the answer to that but I just wanted to pose it at this point. My job is, I guess, to focus on the medical device industry and the device issue, which hasn't been mentioned much in the last couple of days. I have heard a couple of references. I guess the thing that I have joked with some of you about over the years is that we have something called the drug model that always applies to devices.

It is sort of like, well, if it works for drugs, I guess it should work for devices. So, I thought when I was contacted about this conference that maybe I had better get in sort of at the beginning of the process and remind people that that isn't necessarily the way to go, that devices and device research and that world needs to be looked at somewhat independently.

I don't have exact numbers on this, but more than half and probably more like two-thirds of breakthrough devices, new devices, that come on the market come as a result not of large companies investing large sums in research but rather of small start-up companies that, you know, may be made up of a physician and an engineer getting together and starting a process. This is why I think, you know, you have to really think hard about what benefits patients most, because many of the products that we depend on today for quality of life, and in some cases life itself, came as a result of that process.

Those companies, those start-ups, often don't have any way of financing research. So, they can offer equity. Now, is that a good thing? There is no question that that kind of arrangement needs to be disclosed. I just hope that we can find ways to keep research unbiased and objective while at the same time encouraging innovation, especially in the device arena.

Now, yesterday, Dr. Henney pointed out that disclosure doesn't surface under FDA regs until the end of the investigative process. I think Bob Temple made reference to that this morning, if I remember correctly.

If we end up changing that timing, that is, informing patients through consent, I hope we do it in a careful and deliberate way. There has been much discussion over this. I certainly don't want to emphasize it. The core question is whether conflict of interest information, assuming that it doesn't get totally eliminated, will, in fact, be helpful to the patient or will cause more confusion. That question has been asked and covered. I certainly won't dwell on it.

Now, having said that, let me also say that it is critical that our investigation process, clinical research, in fact all kinds of research, continue to be credible. That is true not only for patients but for the public as a whole.

I think, standing here at NIH, one of the things that certainly is key to NIH and many of our scientific institutions, government or not, is scientific credibility. I will just mention one of my pet themes: our country, or large portions of the population, seems to be getting less and less scientifically literate, and I worry more and more about that scientific credibility issue.

That certainly is key. So, in closing I would just urge that we continue the kind of process that has been initiated by this conference. I want to just take a moment and congratulate all of those folks. Stu Nightingale, I know, had a lot to do with it, Bill Raub.

I don't want to start mentioning or continue mentioning names because I will leave folks out, but I think this has been a wonderful opportunity to bring people together at a very rapid pace and get information and data out on the table.

So, I would hope that -- I am not sure what the end product is. I know there have been a lot of things discussed and I don't want to get into that, but I think making sure that all the stakeholders are involved in the process is absolutely critical and you certainly have done that in a great way.

I just want to close by trying to paraphrase something that Mrs. Dr. Nightingale mentioned to me a few minutes ago. If I screw this up, you can straighten it out, but she basically said, you know, whatever we do here, I hope that we look at the outcomes of it, whether those outcomes were intended or not, remembering that the ultimate goal, at least in my mind, is really benefiting patients. I hope I didn't mess that up too much.

Thank you all.

[Applause.]

DR. BLUMENTHAL: Thank you very much, Mr. Benson.

Our next speaker is Dennis DeRosia. Dennis DeRosia is the North American Regional Council President of the Association of Clinical Research Professionals, an international organization of professionals involved in clinical research and development.

MR. DE ROSIA: Thank you.

About a month ago, in preparation for this, we were so excited at the association to have the invitation to participate that I called Dr. Nightingale and asked if we could come in and talk with him for a little while to find out the perspective from which he wanted us to speak and what he wanted us to present. And he said, well, you don't have to present. You just get to react. I walked away delighted, and then it dawned on me that if I were presenting, I would have a month to prepare overheads, line everything out, draft my speech, and go over it several times. And then I said, react? I have to listen to everything said. I have got to take notes.

I then have to weave it together and make a comment. So, what a punishment I walked away with but I didn't realize it. He sold me on the idea and I didn't even know what happened.

To give you a little example and perspective on who we are, the Association of Clinical Research Professionals, an international organization, is now 13,500 members strong, and a couple of key points need to come up here. One is that in the past four years we have undergone dramatic growth; in fact, we have doubled in size. The reason for our growth really came out yesterday in a comment by Dr. Henney, when she said clinical research is becoming increasingly complicated at a time when it is becoming increasingly visible. There are really two reasons why our association has grown so dramatically.

One is a fallout of all the press. Good, bad or indifferent, it has made people aware of clinical research and led them to seek out others involved with it. The second is the realization among all those in clinical research that their own institutions and companies couldn't provide them with the spectrum of education and perspective they needed to function in the field. They needed to interact with others involved in the process, other companies, other institutions.

A case in point is this particular conference, the fact that we all needed to get together to hear each other's perspectives, share the stories, learn from that and go forward with a new educational perspective. Our members, if you look at job function, break out roughly as follows: approximately one third bear the title of clinical research coordinator, at the sites, hands on, with the patients involved in clinical research.

Another third are the monitors, the clinical research associates from all the different pharma companies, biotech companies and contract organizations, who monitor those activities, and the final third come from IRBs, project management and a miscellaneous array of functions in the clinical research realm.

Approximately 46 percent of our membership are at the clinical research sites themselves, whether academic, private, NIH or NCI, whatever the institution may be, and approximately the other half are from industry, again, the biotech companies, the pharmaceutical companies and so on.

Therefore, our members are truly on the front line of the clinical research process. With all this in mind, you might say that I am here today to represent the devil. That is, if you believe the devil is in the details, like we do, that is who we represent. Okay?

Our organization serves three primary functions: education, professional development and information exchange, like this medium. Under education, there are two particular things that we deal with. One, obviously, is the education of our own membership, all those involved with clinical research, whether through workshops, seminars or annual meetings. This year we had over 3,000 clinical research professionals at our meeting.

But we also have certification for those coordinators and those monitors, for which they take several years to prepare, a minimum of two years before they can even sit for the exam. I should also point out that our association has earmarked half a million dollars for the development of a training and, ultimately, a certification program for investigators. It is a five-year, half-million-dollar project.

We are looking for collaborators on that. The second part of the education really has to do with the public, the patients, or rather the public the day before they become patients. We are really looking forward to working with any association or advocacy group that wants to educate its constituents about clinical research, but to do it today, before they sit in front of the investigator or the nurse coordinator who is trying to explain the research process. Let's give them all a basic awareness of what clinical research is, positively and proactively, and not wait until they are there, under all the other duress of what they are being told, to try to explain the research process and the informed consent process.

Yesterday, this was pointed out and kind of borne out by John Seffrin, who, wearing his American Cancer Society hat, made the comment that clinical trials are essential for science, but they are critical for hope. Only 3 percent of cancer patients, as he pointed out, participate in clinical trials. If you read some of the backup literature that has been published about that, it is because the patients just weren't aware of them, and if they weren't aware of them, sometimes the treating physicians in other facilities weren't aware of other available clinical trials to which they could refer patients.

So, we applaud the efforts of that particular group in educating their constituency. Every year we conduct an annual membership survey, and for the last four years we have issued a white paper based on it, Future Trends in Clinical Research. We were very honored lately when we saw the Department of Health and Human Services quote that white paper in several of its own documents.

But a key point in the last survey was that 61 percent of the members ranked training in medical ethics as the most important training issue. They did that not because they felt the field was fraught with ill works or anything like that, but because they realized that research ethics is the cornerstone of clinical research and must always be focused on in everything we do.

Also, yesterday, Steve Peckin(?) from the UCLA IRB referenced the Belmont Report and the Belmont Circle of Trust. Again, our members are well aware of that Circle of Trust. They interact with the study subject from the moment of consent through the duration of the trial. So, we are very well aware of it and applaud any group that constantly focuses on it, as we are today. But it must always be part of the basic training and education as we go through.

We will carry out the process. That is why we are so excited to be here today: whatever this group ultimately decides should be done, we are the ones who carry it out from Point A to Point Z. So, we are very eager to be here today.

We ask also that as you develop a process at some point for all of this, you look at the common denominator. Don't come up with a solution that fits only one environment. For instance, in our discussion yesterday in Dr. Lepay's breakout group, one possible solution that was posed was that the institution would create a separate department or reviewing body for reviewing financial conflict of interest. That is great within institutions, and I think all institutions should do that, but we also heard today of the constantly rising amount of research done outside institutions. That is not an all-inclusive answer. So, we ask that any resolution that comes out of this include every research environment there is.

And also, as was pointed out yesterday, it will ultimately impact the world, not just the U.S. We believe that there is a need for basic education in clinical research. This seminar is great, but it is just one part of the process. We forget that most of us never trained in clinical research but somehow migrated to it, whether as IRB members, as actual investigators or in some government capacity. We deal with clinical research, but we were never trained in it as an entity.

Keep in mind that clinical research, or any kind of research, is not a one-shot thing but an incremental, building-block process. I have heard several of the institutions talk, very proudly, about how they have a whole half-day seminar in clinical research for all their investigators, and they boast of that. Actually, we challenge that. That is not an all-inclusive, all-you-need-to-know treatment of clinical research. It might be a nice orientation to getting into research, but keep in mind that it is a continual process that must always be built upon.

As part of that ongoing process, we must always be updated and also reminded. We need to think through the process and its intent rather than settle for rote compliance with whatever procedures we work out. We need to stand back and ask: why are we doing this procedure? Are we protecting the subject?

I will close on one final comment which my preceding speaker just mentioned. As great as this body is today and yesterday, the one group that seems to be woefully underrepresented is actually the study subjects. Where were they? Are they going to be invited to the next one? Will we be there with them to sit face to face to work out the solution as to, quote, how we protect them?

Thank you very much.

[Applause.]

DR. BLUMENTHAL: Our next speaker is Abbey S. Meyers. She is a founder of the National Organization for Rare Disorders, a coalition of national voluntary health agencies and a clearinghouse for information about little known illnesses.

MS. MEYERS: Thank you very much.

I am very pleased that the last speaker finished with that thought. The patient community is missing, and clinical research won't go anywhere without their cooperation and their willingness to trust in the system.

NORD, the National Organization for Rare Disorders, is a group of consumers probably best known to you as the orphan disease community. This is the group that suffered because of the lack of research on rare diseases, most of which are genetic. In recent years, because of the Orphan Drug Act, a large number of very important breakthrough treatments, both drugs and devices, have been developed to help this community.

There are about 25 million Americans with over 6,000 rare diseases. The Genome Project finds more every day.

In recent years, the research community has changed drastically, and we are still dealing with patient protection rules that were written 50 years ago. It is going to continue to change. First, so much of research is moving away from academic research institutions. So, whatever NIH decides to do about changing its rules is not going to affect the research in the storefronts and in the local doctors' offices that don't have to comply with the NIH rules. This is a very, very important factor that should not be forgotten.

The rise of the new medical technologies, specifically gene therapy, xenotransplantation and especially the patenting of human genes, all raise the question of financial conflict of interest. Very often, the gene for a genetic disease will be discovered in the laboratory of the person who sees the largest number of patients with that disease and he has a patent on it. So, the development of a genetic test, the development of any therapy all will come out of his laboratory.

Then, of course, there is the research that is not funded by the government and is not done in academic research institutions. There are holes in the safety net: specific types of research do not fall under the common rule if no federal funds are involved. That is a terrible problem that has to be fixed.

As someone mentioned yesterday, there is a federal law that governs the safety of animals in research but there is none that covers the safety of human beings.

As Dr. Kirschstein said, objectivity lies at the heart of science, and the question is whether financial conflict of interest compromises objectivity. Even if we don't want to answer that question or are unable to answer it, I want to tell you that financial conflict of interest compromises public trust.

As we heard yesterday from a scientist speaker, the science has been lost in the rush for money. Patients read the newspaper and watch television. Patients read The Wall Street Journal. We can find out about a conflict of interest involving our own doctor more quickly by reading The Wall Street Journal than by reading an informed consent document.

We all accept that the IRB system was created to protect patients, but yesterday we heard investigators say that it is a paperwork nightmare. The institutions say the system is burdened with red tape and, indeed, a new regulation about conflict of interest might be the last straw. Companies say that the system is too slow and the industry is overregulated.

FDA and NIH say that it really doesn't matter that their rules weren't designed to fit together and that they each have quite different rules. Many people are asking if conflict of interest is added to the IRB responsibilities, how will the IRBs get the extra resources that are needed.

Yesterday's speakers reaffirmed that local IRBs want to continue having total authority, even though they are overburdened and even though they say it might be the last straw. So, they don't want to give up their power of control. The members of the IRB are employees of the institution. So, there is a conflict of interest inherent in the entire system.

For several years I served first on the Gene Therapy Subcommittee here at NIH, and then, until 1996, I was a member of the Recombinant DNA Advisory Committee. I served on the RAC at the time that the protocol in which Jesse Gelsinger died was approved.

In many years of looking at those protocols coming through, I and the bioethicists and consumers on the committee were appalled at how bad the informed consent documents were. We would complain, and we would say we can't vote for this protocol unless the scientist changes that informed consent document. It is not truthful.

FDA would say, we don't have the authority to require the investigator to make that informed consent document understandable and to put in the truth. There were some informed consent documents that came before us that said, we hope this experiment will cure you. These were Phase 1 trials.

I saw through my experience on the Recombinant DNA Advisory Committee that there is something very, very wrong with the system, and I worry about the new technologies that are evolving: xenotransplantation, stem cell therapy. These protocols are being reviewed by IRBs without any members who understand the new technologies, let alone what the consequences could be.

If we have begun to see any effort to reform the IRB system and to bring institutions into compliance, we only saw it begin after OPRR started enforcing the rules. Now we know that institutions have not given the IRBs adequate resources, and they say that they will from now on. But all of this debate has omitted the problems associated with privately funded research and paid IRBs. It calls into question the pharmaceutical companies' interests and whether industry is truly concerned about the protection of patients.

If FDA's regulations say that you don't even have to report conflict of interest until Phase 3, what is happening in Phase 1 and Phase 2, which are really the most dangerous parts of clinical research? Patients want and deserve information. The only comment in this day-and-a-half conference that has really disturbed me is how many people said patients won't understand.

I don't think many of the people saying that are clinicians who have been exposed to patients and who would know. Patients are educating themselves intensely. If you don't give them an answer, they are going to go home and look it up on the Internet. They are going to understand their disease.

Keeping any information away from them is going to anger them. So, if you put a paragraph in the informed consent document saying that a scientist owns stock in a company, they will be able to come to their own conclusion about it, and they should be able to come to their own conclusion.

Indeed, I have seen conflict of interest in the patient community. Patients will call us to complain about the price of a drug and when you talk to them, they will admit that they own stock in the company. So, they are torn by their own conflict. Patients want the whole truth and if you don't tell the whole truth, there is going to be a heavy price to pay in terms of public trust.

If you want patients to continue to lay their lives on the line, you have to have public trust. You must require disclosure of conflict of interest because conflict of interest may compromise objectivity -- not necessarily, but just the fact that it may.

If it does, the research volunteer is going to be very disappointed. In this day and a half, no one has agreed on the definition of conflict of interest, but we do agree that the perception of a conflict of interest is the problem. I have a definition. It will never get into the Federal Register.

But if we could just ask, when a scientist has a financial interest in a product or a company or has a patent on a gene, would it pass the 60 Minutes test? If it won't, then that project should not be approved. Commercial and academic partnerships are growing. So, the problem will get worse unless we do something about it now.

I really want to end with this thought. I remember when Dr. Healy was here at the NIH. She went around the country and held a series of conferences, and she kept saying research is changing; it is not your father's Oldsmobile. I want to plant that thought in your mind: maybe the common rule, as it was written 50 years ago, is our father's Oldsmobile, and maybe we should throw it out and create a new system. Maybe local control is one of the big problems, and maybe there should be a central research office in every state that would be able to make these decisions without having a direct relationship to the institution.

The decisions would then be in neutral hands. I agree totally with Dr. Angell; it is a perfect, idealistic solution that you came up with, but it will never work politically.

Thank you.

[Applause.]

DR. BLUMENTHAL: Our last speaker is Sid Wolfe. Sid Wolfe has been the director of the Health Research Group for 29 years. The Health Research Group is an investigative, research-based group that follows pharmaceutical industry and FDA activities in this city and elsewhere.

DR. WOLFE: Thank you.

Not too many hundreds of yards from here, in Building 10, the clinical center, where I spent, I guess, about five years, very useful to me and hopefully useful to other people, as a clinical researcher, a tragedy happened about seven years ago. It has not been entirely forgotten, but I think it needs to be remembered, because it was not just one person dying during some human experiments but a whole host of people dying during experiments on a nucleoside called FIAU, an intended treatment for hepatitis B.

It might not have come to attention had it not been done at NIH. It was a clinical trial of a drug owned by Eli Lilly. We attempted, unsuccessfully, to get criminal prosecution brought against Eli Lilly because they withheld information that would have stopped these clinical trials much earlier than the ultimate last several deaths stopped them.

Eli Lilly had had to hospitalize a patient, a normal subject, in their own unit because he had extremely high liver enzyme elevations while being given the drug. This information was withheld until after the last group of patients died at NIH. I mention this because this is a conflict of interest of the first order.

Eli Lilly had previously pleaded guilty to a criminal prosecution brought against it, as have a number of other companies in this country, for withholding information from the Food and Drug Administration. Tom Bodenheimer's very thoughtful presentation on the fact that financial conflict of interest is a risk factor for scientific misconduct is absolutely correct. But it is also a risk factor for endangering human subjects.

A number of drug companies, including Eli Lilly, SmithKline and Herbst(?), and Wyeth Earst(?), which is under investigation now, have withheld information from the FDA that might have made a very big difference as to whether drugs were approved. This is a horrible example of financial conflict of interest harming people, not only the subjects of the experimental trials but people who got the drugs after they were approved.

The comments yesterday by Dr. Kirschstein, Dr. Henney, Dr. Raub had a common theme, which was, to quote one of them, "Profit is not the problem, rather the collision between profit and objectivity." Another, "Financial conflict of interest is now an inherent part of the process." Another, "Financial conflict of interest is an inescapable part of research now."

Those statements are certainly true and the first one by Dr. Kirschstein, "The collision between profit and objectivity," is really what we are talking about here and it takes many kinds of forms.

I am going to spend a few minutes on one that has really not been discussed very much because the focus of many of us comes from the government, from academic medical centers, but it is not where the increasing amount of action is. I want to talk about the rapidly growing, for-profit, human experimentation industry. I am not talking about the pharmaceutical industry itself. That is a different issue.

I believe the pharmaceutical industry has brought some important advances. It has always been primarily a business with a fiduciary duty to its stockholders to produce revenue, but despite that or along with it, there has been room and will continue to be room for some important advances.

In the past, as Dr. Angell alluded to, most, maybe 80, 90 percent of clinical research for the purposes of assessing efficacy and safety of drugs used to be done in academic medical centers, whose primary purposes were and still are medical education and providing health care services.

In its recent, very disturbing investigation, Recruiting Human Subjects: Pressures in Industry-Sponsored Clinical Research, the Inspector General's Office -- curiously not part of the program here today -- described "the transformation of clinical research into a traditional business model."

As part of the shift to this business mindset, with an estimated 62 percent of clinical studies being done by for-profit companies in 1999, the often highly unethical and possibly illegal recruitment practices documented by the Inspector General's report appear to be increasing rapidly. Hence the rise, separate from the drug companies themselves, of for-profit human experimentation corporations -- we call them HECs, human experimentation corporations. We think that is more accurate than calling them contract research organizations, because at least a number of contract research organizations are involved in animal studies and things like that.

There are, I think, 17 of these human experimentation corporations doing clinical trials in North Carolina alone, one state. The rapid emergence and growth of these has introduced new techniques for rapidly recruiting patients. That is one of their attractive features to the pharmaceutical industry: getting there a month or two earlier may mean millions of dollars of difference.

Without either teaching or routine medical care responsibilities, HECs, human experimentation corporations, can do clinical research much more quickly and efficiently than the academic medical centers. In addition, because they lack an institutional base of patients, they are more likely to recruit experimental subjects in private doctors' offices and foreign countries than are academic medical centers.

The number of private practice-based investigators grew from 3,153 in 1990 to 11,588 in 1995, an increase of almost four-fold, and I am sure it is more now, concomitant with the increasing domination of experimentation by human experimentation corporations. The vulnerability of a doctor's own patients to being persuaded to become experimental research subjects because of their trust in the doctor, combined with signing bonuses, which the doctor pockets for the referral, sets up a toxic situation in which some doctors are literally selling their own patients into human experiments.

The following statement was really from the Inspector General's report: In a gross commercialization of this practice, one large family practice group advertised its "computerized patient database of 40,000 patients" to HECs and others running clinical trials as one from which "we can actively recruit patients for any study."

According to Richard Friedman of Cornell University, writing in The Lancet several years ago, "In the U.S.A., monetary incentives have spawned a whole industry of private physicians who don't necessarily have any experience in research or with protocols in specialty areas in which they are testing."

These private entities "push patients through trial after trial" with little concern for what happens to them afterwards. The result is stopgap medicine for vulnerable patients who can't afford treatment any other way, he says.

Drug companies can increase the likelihood of a drug's success by using exclusion criteria to, as one investigator told the Inspector General, "enrich trials with patients who are most likely to benefit or respond." One way to accomplish this is to exclude patients who are currently on medication to treat their condition, or even those who have been on medication in the past.

What remains for inclusion are patients known in the industry by the double entendre "naive subjects." These prize subjects are hard to locate, but according to the Inspector General's report they can often be found among the uninsured or in foreign countries. Many researchers told the Inspector General's staff that drug companies are increasingly looking abroad for such subjects.

The report points out that the number of new foreign investigators in FDA's database grew from 988 in the 1990-to-1992 period to 5,380 in the 1996-to-1998 period. I have seen, in one industry magazine called Script(?), an advertisement directed toward possible drug industry customers. The ad was taken out by the world's largest HEC, North Carolina-based Quintiles(?), with offices in 31 countries all over the world.

In this ad, they boasted to the pharmaceutical industry of the ability to quickly recruit foreign experimental subjects. The ad promises that Quintiles "can even help you tap the vast drug naive patient populations of China, Korea and other emerging markets." There is little question that these issues fall squarely within the topic of this conference, human subject protection and financial conflict of interest.

The imposition of an increasing number of for-profit entities beyond the pharmaceutical industry itself between human experimental subjects and the ethical performance of appropriate clinical trials poses a serious threat to human subject protection. There is in a real sense a double conflict of interest. First, the primary allegiance of many of these entities is toward their owners or stockholders, financial conflict of interest, which expands on the more narrow definition focused on by most of the participants in this conference.

Second, the paying clients of these HECs, the pharmaceutical companies, present another conflict of interest. The more quickly the studies are performed, the more favorable the outcome in terms of ability for drug approval, the more the company will be pleased and will likely return for more business. Possibly lost in this duet of conflict of interest with human experimentation corporation stockholders and drug company clients are the patients.

A few recommendations. One, implement the majority of the recommendations in the Inspector General's IRB report, which is now over two years old. In a follow-up report a few months ago, which you are aware of, the Inspector General pointed out that most of those recommendations have not been implemented. It may be necessary, as has recently happened, to introduce a law just to implement them.

Such a law, with bipartisan support, was introduced recently, called the Human Research Subject Protection Act 2000. It contains many of the same recommendations as the Inspector General's. Second, the following are appropriate subjects for regulation: banning finder's fees; banning reimbursement to physicians beyond research-related expenses and time expended; mandatory disclosure to the participant of the sources and amounts of recruitment fees; restrictions on the ability of health care providers other than the patient's physician to gain access to the patient's medical records; and several others, which I don't have time to go into.

Third, by regulation, abolish for-profit human experimentation corporations and the other for-profit entities that partner with them, such as for-profit institutional review boards. They are, in my view, prima facie examples of irreparable conflict of interest at an institutional level, which can't ultimately be "managed," to use the vernacular of this conference, and they need to be declared off limits for the purpose of studying drugs or other products for FDA approval.

Ideally, as Tom Bodenheimer mentioned, there needs to be a separation between the funding of clinical trials and their design, implementation and data interpretation. Legislation to accomplish this was introduced in the late 1970s by former Senator Gaylord Nelson. It would have said that industry should fund clinical research, human experimentation, but that control should be in the hands of something like the NIH, meting the money out to academic medical centers on the basis of the merits of their work, not on the basis of their likelihood of saying "yes" when all is said and done.

In conclusion, since I am now at 12 or 13 minutes: in 1962, Nobel Prize-winning economist Milton Friedman wrote, "Few trends could so thoroughly undermine the very foundations of our free society as the acceptance by corporate officials of a social responsibility other than to make as much money for their stockholders as possible."

My colleagues, Drs. Steffie Woolhandler and David Himmelstein, writing a couple of years ago about the problem of for-profit health service delivery, that is, HMOs and hospitals, said in an editorial in The New England Journal, "It embodies a new value system that severs the communal roots and samaritan traditions of hospitals, makes doctors and nurses the instruments of investors and views patients as commodities. In our society some aspects of life are off limits to commerce. We prohibit the selling of children and the buying of wives, juries and kidneys. Health care is too precious, too intimate and too corruptible to entrust to the market."

I would add that the same applies to human experimentation. It is time to go beyond mere management, perhaps appropriate for certain kinds of conflict of interest, and declare that other forms, such as human experimentation corporations, for-profit IRBs and their associated businesses, are off limits to commerce.

I would add that even within the academic medical centers many of the attempts to manage may ultimately prove really impossible and those in some way or other need to be prohibited as well.

Thank you.

[Applause.]

DR. BLUMENTHAL: I want to thank all the panelists. They were not only informative but also very disciplined. The moderator notes that and is appreciative of it.

Are there any -- I think we can now open the floor to questions of the panel members. Anybody care to come up to a microphone? If you wouldn't mind identifying yourself as you pose your question, I think that would be helpful and also if there is a panelist to whom you would like to address your question.

DR. MANN: My name is Dr. Howard Mann, Salt Lake City. I have a question and I invite a response from any panel member.

We have certainly heard about the management of conflicts of interest and the notion has been advanced and I am sympathetic to that from the practical point of view that committees or institutional bodies, other than the IRB, should evaluate conflicts of interest. I understand that, but that notion is dissonant with the concept that an IRB, at least by virtue of the diversity of its membership, is in the best position to evaluate that kind of thing, particularly given the fact that IRBs are supposed to have at least one member not affiliated with the institution.

Certainly, I am aware that others have advanced the notion that IRBs should increase their membership of community members, unaffiliated members to say around 50 percent of the membership. That, I think, is a strong argument for the continued involvement of IRBs in the assessment of conflicts of interest.

I would appreciate your thoughts on that.

DR. BLUMENTHAL: Anyone care to respond to that?

DR. WOLFE: I don't think that the proposals that were made, apparently in some of the breakout groups yesterday, were either/or proposals. The suggestion of setting up a conflict of interest committee was not, I don't think, meant as a complete substitute for the IRB. I am very sympathetic with the funding and the timing problems of the IRBs.

One of the, quote, attractive features of for-profit IRBs is that they don't quite work around the clock, but they don't have these other responsibilities, so to speak and they are, quote, more efficient, just like the human experimentation corporations.

I think that somewhere in the institution, whether as a subcommittee or whatever of the IRB, there needs to be some serious attention to what everyone has agreed is a very rapidly increasing and a more recent phenomenon of ties between individuals in the academic medical center, and the institution itself, to industry. It is a new thing and it is very worrisome. I think we will see more and more examples, hopefully prevented before deaths occur.

I don't think the suggestion was just to exclude the IRB but rather supplement it with some people that might have more time.

DR. BLUMENTHAL: The next question.

MR. COLLINS: My name is Ron Collins. I am with the Center for Science in the Public Interest and I direct my comments to any or all members of the panel.

Much has been said in largely glorified ways about so-called informed consent forms. I think one of the people missing from this conference is the malpractice lawyer. I would very much like to see what those people representing patients in these contexts think of these so-called informed consent documents. In law they are referred to as contracts of adhesion or unconscionable contracts. After all, they are one-sided. They are drafted by the university.

There is no equal bargaining in them. I think Dr. Angell's comment was most apt when she referred to them as caveat emptor.

Given that, they are essentially -- they have been called, euphemistically, although I don't think that was the intent, informed consent documents -- they are essentially waiver of liability forms. I mean, that is really what they are about. When somebody dies or is injured, after all: you looked at the form. You signed it. You knew what you were signing. Ergo, we are not liable.

Given that, if we had real informed consent forms that were intelligible, that really represented the interest of patients, wouldn't that on the one hand or might it not discourage patient participation --

[Tape flip from tape 6, side B, to tape 7, side A -- text lost.]

-- but maybe NBAC(?), because NBAC is very much involved right now in this whole issue. Certainly, FDA, the new office, OPRR isn't anymore or whatever, but as Dr. Sugarman said this morning, we can't control how an individual subject is going to respond to a diagnosis or respond to information that they are given, but we certainly can give them the information and hopefully make it more understandable to them.

DR. BLUMENTHAL: Any thoughts about a sort of catch all statement?

PARTICIPANT: I have seen that kind of document from specific institutions. I believe, if I recall correctly, St. Jude's has a document like that that they give to every patient going into any clinical trial. So, it is really up to the institution.

Now, are you asking should this be put into regulations to ask every institution to do it? And that, you know, I don't know. Maybe somebody else can --

PARTICIPANT: It just seems like it would be -- it is kind of like the California Bill of Rights, building on that mandatory document and also mentioning conflicts of interest in there. I think it might be a good thing to have in the regulations, for every individual -- but anyway, that is for another time. Thank you.

DR. BLUMENTHAL: Thank you.

Next question.

MR. DUSELL: I am Fred Dusell(?), the oldest living law student, Vanderbilt University.

In the regulatory regime, very, very few regulations are explicit as to whether they represent a low limit, below which sanctions are imposed, or a high limit, above which you are free from any sort of liability. With respect to the regulations that are being considered here, is there any chance whatever that the NIH might be explicit as to whether these are minimum standards or what is called, in law and economics, national standards?

DR. BLUMENTHAL: I don't think we have any NIH representatives up here. So, I don't think -- nor do I think does NIH have the authority to do this by itself. So, perhaps we could -- I might just change that around to ask a somewhat different question and that is are there certain types of relationships that are simply not acceptable on the part of -- between industry and clinical researchers.

I know that Sid and Marcia have commented on that. So, I guess I would invite other members of our panel to deal with the question of whether there are regulatory applications that are in order in this subject? Is there one thing or one type of relationship which simply is -- should be off the table?

PARTICIPANT: I will state the obvious and that is that no secret relationship -- there ought to be, I think, stringent penalties for that in the present system and, in fact, there are. The question is, you know, can they be discovered and gotten through and so on. There has been a lot of discussion here on -- just picking up as we were supposed to do, reflecting on what went on, I think the ghost writing issue is one that perhaps could be nominated.

DR. BLUMENTHAL: Mr. DeRosia, any thoughts on this topic from the standpoint of your membership?

MR. DE ROSIA: Well, that is very broad. I can't think of any particular thing that wasn't already discussed, in particular with Dr. Wolfe's point on that. Keeping in mind who is paying for research and the fact that it is both government and private, I would just simply again say everything has to be open, as far as it can be, on the table at all times. I think that is kind of a mom and apple pie statement, but the bottom line is it has to be all open.

DR. BLUMENTHAL: Over here.

DR. LIVENGOOD: John Livengood, CDC.

I wanted to follow up a bit on the discussion of equity ownership and, for that reason, perhaps preferring an outcome in a trial, which Dr. Angell had mentioned. With the exception, really, of a situation where a start-up may not have cash and is offering equity instead, what other possible justification is there for an institution to allow a researcher to conduct research on something in which he has an equity interest? I can't -- I haven't heard anything else offered as a possibility.

As a federal employee, I, too, am subject to a lot of disclosure and a prohibition against owning anything in which I potentially have an interest in the outcome. It has not really caused me any problems whatsoever.

What resistance is there on the part of universities and other institutions to institute at least in the major pharmaceutical area some sort of similar prohibition? I haven't heard really any justification for why they wouldn't do that.

DR. BLUMENTHAL: Mr. Benson.

MR. BENSON: I guess that is why I am still working. I am sort of in your boat about those conflict of interest investments that we couldn't make as federal employees. I can think of one example that I think was touched on in a number of presentations. That is, when a product is invented -- I am thinking more devices than pharmaceuticals. It probably doesn't apply to pharmaceuticals -- by a person working at the university, at what stage in the development of that product does it -- you know, should it be transferred out under your regime?

I think, you know, this is again the complex question of how do you deal with conflict of interest, but what I would hate to see is that person being turned off and people like that being turned off. So, you know, how that gets handled is problematic. I know a lot of discoveries -- not discoveries, but a lot of products have come that way, not only at large institutions, but also a lot of smaller ones.

DR. BLUMENTHAL: Over here.

MS. RUSSELL-EINHORN: Michele Russell-Einhorn. I am with the Office for Human Research Protection.

I just wanted to provide some clarifying information in response to one of the comments that was raised when you opened up the question and answer period.

The federal policy for the protection of human subjects does not create a right of action for people to sue when there are violations in connection with an informed consent document. The Office for Human Research Protection views the informed consent document not as a contract of adhesion. I think we would be very disturbed if we heard someone call it that. We view it as part of a process by which a subject is informed about the research protocol in which that person is agreeing to participate.

The third thing is that the federal policy prohibits informed consent documents in federally funded research from containing exculpatory language in which the subject would waive, or appear to waive, any legal rights that that person would have.

I just wanted to clarify these things.

DR. BLUMENTHAL: Thank you.

Over here.

DR. KORNFELD: I am Don Kornfeld(?). I have had a number of years of experience chairing an IRB and I am a psychiatrist by training. I mention that because it is pertinent.

I noticed certainly in the last day and a half we have spent most of our energy dealing with efforts at primary prevention. Yesterday, one of the speakers commented on what do we do about the sociopath. I mean, there is nothing -- it is wonderful if you tell everybody that you have got stock in X, Y, Z, but we all know that the sociopath won't tell you that his brother-in-law has stock in X, Y, Z.

It is very nice to write a very detailed simple English consent form, but once that patient walks into that room with Dr. X, we don't know what strong arm tactics he might use. Now, it has been suggested that IRBs go to the -- observe the process. Now, I don't think anyone is naive enough to think that the process that goes on with me at his elbow is going to be the same as the process when I am downstairs in my office on the third floor.

So, it seems to me that the answer lies with Mr. DeRosia. He thinks he is part of the problem. I don't think so. I think he is part of the solution. The research coordinator is a potential whistle blower. We need whistle blowers. We need people, usually nurses, who have high ethical standards and who will know what is wrong when they see it.

The problem is what risks do they take in calling me or calling the chairman of the department or whoever. It seems to me that institutions have an obligation to establish protections of one sort or another for someone who chooses to take that very courageous role. There may even be a role for the Federal Government in providing such protection. But I think we have to look at what we do after we have put all these other strictures in place as to how to deal with the individual, who we all know, but will find a way to get around it.

Thank you.

DR. BLUMENTHAL: Any thoughts on that?

PARTICIPANT: A couple of issues on that one. Very good. The whistle blower issue, I mentioned our annual meeting of 3,000 clinical professionals. Even though it was just in May, we are already reviewing the abstracts for next year's meeting and, in fact, that is my light reading on the way back. It is two volumes of about four inches each of abstracts presented.

One of them caught my attention immediately. It was what to do when you blow the whistle. It is something that as an organization we have never addressed, but if you look at -- not that I am encouraging everyone to be whistle blowers, but, in fact, when it comes to clinical research, we are our brother's keeper and we do have to look out for each other and keep each other on the same path. That is how we protect the patient, but really looking -- is that the right thing to do for the patient. So, we always have to challenge ourselves on that.

So, I was glad to see that come up as a topic for our meeting. It will be interesting to see the debate that goes on with that at our meeting next year.

The other thing I would say about that is, if you look at some of the instances that have occurred where OPRR previously went in and so on, at many institutions or with investigators that have problems, it was, in fact, as you pointed out, the coordinator or somebody on staff who just said one day: this is not right. I have voiced it within the institution, or if it is a small clinic, I voiced it here. It is falling on deaf ears. I am now going to step out of the ranks and, in fact, blow the whistle. But it is a rank and file person who one day said, that is not right.

DR. BLUMENTHAL: Ruth.

MS. FISCHBACH: I am Ruth Fischbach.

Over the past day and a half, we have heard over and over calls for more empirical research, more data needed and also some educational programs for the research community. I am happy to say I am from the government and we are here to help. We have, in fact, two mechanisms that I think would be extremely useful and very timely.

The first is a program announcement with three receipt dates per year. It is called Research on Ethical Issues in Human Studies and this could really fill the gaps in our knowledge, where we just do not know the impact of what patients would feel if they hear very explicit information versus very general information. Would this affect recruitment at all?

We talk about it and we theorize but we, in fact, could benefit greatly from research that is done, research on IRBs, as well as investigators and the conflict of interest that we are beginning to think about with institutions as well.

The second is to support the development of short term courses. We have heard a lot about education and there is a T15 mechanism available so that investigators and other members of the research community could develop courses that would be very beneficial and particularly in the area that we have been discussing on conflict of interest.

Thank you.

DR. BLUMENTHAL: Thank you.

This side.

MS. ATKINSON: I am Claudia Atkinson from Emory University School of Medicine. I would like to make a point that I haven't heard addressed at this meeting, but I think it is really important for academic institutions if we are to be asked to implement what I hope will be guidelines and not regulations.

In 1995, the government required academic institutions to put in place policies and committees that would manage conflicts of interest or eliminate conflicts of interest for all research supported by federal funds, not just human subject research.

We have done that, some better than others and we are still learning because it is a very complex matter. I am here to tell you that basic scientists have conflicts of interest that are just as complex and just as important as clinical researchers and they are based on exactly the same concepts and principles.

Those of us who are down in the trenches with conflict of interest committees, seeing what they have to do on a case by case basis, learned very quickly that a university can't be seen as applying a high standard to all research supported by federal funds and some different standard to research supported by industry or not funded at all -- that is, research supported mainly by the university.

We have to apply the same standard to all. So, the conflict of interest committee has to review all of these faculty who have conflicts of interest, whether they are involved in human subject research or not.

And I would like to make the point that, again, if you are down in the trenches and you are dealing with the conflict of interest committee, looking at cases involving faculty on a case by case basis, you quickly learn that you can't just stop when you look at a conflict of interest involving research. You have to look at about ten other areas, conflict because they are an administrator, conflict because they are using the university name or university resources, conflict because -- and I could go on with about 12 university policies that apply.

So, the conflict of interest committee, which started out to be conflict of interest in research quickly became conflict of interest in research and commitment and, finally, now has become just conflict of interest because there are so many areas that impact upon each other and the conflict of interest committee has to look at all of those in order to reach a decision about management or elimination.

The point I want to make is that the proposal that the IRB be the conflict of interest committee is not feasible and is not practical at the university level. The IRB, by definition, in federal mandate deals with human subject research. So, if you come forward and say the IRB is going to be the conflict of interest committee making all the final decisions, then the university has to put in place a separate conflict of interest committee dealing with the same concepts and policies but possibly coming out with totally different inconsistent results. It is not a good and practical way to work.

The other point I want to make about this proposal is that a conflict of interest committee requires very special knowledge and expertise. Pretty quickly, back in 1995 and 1996, it became clear to us that these individuals, who include clinical researchers and basic science researchers and others who are not researchers at all, had to master tax and securities regulations. They had to master concepts of equity and escrow. They had to master concepts of intellectual property law, proprietary information, confidentiality.

They have to read and review consulting agreements, licensing agreements --

DR. BLUMENTHAL: I am sorry, but our time is just about up.

MS. ATKINSON: Well, my point is you can't ask the IRB to do all these things. They should work closely, IRB and conflict of interest committee. They need to share information and develop mechanisms to do that, but you can't put that on the IRB.

[Applause.]

DR. ANGELL: I think this speaker just underscored what to me is a certain unreality about our proceedings here. We are spending an enormous amount of time and energy and brain power trying to do something that really can't be done, trying to go through all of the possible financial connections, the dollar amounts, what they are and so forth, to accommodate something that shouldn't exist in the first place.

These IRBs in particular are working in a vacuum, in the absence of any community-wide standard about what is acceptable and what isn't acceptable. So they are punching a tar baby there, and it is an extraordinary thing to me that we spend so much effort dealing with a phenomenon that is not inherent to the enterprise, that is on the margins of the enterprise and that risks what is central to the enterprise.

Abbey Meyers, with whom I agree on almost everything she said, said that I was unrealistic. I find, as this questioner just said, that in a sense dealing with it, managing it, rather than extricating it, is what is unrealistic. We were told yesterday the world is changing; we have to adjust. The world is changing.

Well, of course, the world is changing, but I think every one of us should do what we can to try to shape those changes. We can do that.

[Applause.]

DR. BLUMENTHAL: We have about a minute and a half.

PARTICIPANT: I just want to associate myself with Dr. Angell's comments. I think they are very realistic.

Now, back to my mundane question. Secrecy breeds mistrust, and accountability requires openness in the system. How does the panel react to the requirement that IRB membership should be open and that their minutes, minus the proprietary information, should also be open for public scrutiny? At present, neither is open for public scrutiny.

DR. BLUMENTHAL: Any brief comments on that?

PARTICIPANT: I think it would probably be a very good idea because the patients that we deal with are on the Internet constantly and if they were to ask their local hospital to go into a clinical trial, they would be able to get on the Internet and look up the minutes of the IRB meeting and see what everybody said about it.

PARTICIPANT: They are not on the Internet. You can't even get them at most universities. In my own university there are two IRBs. I don't know the membership. I cannot get the names of the members. I called Johns Hopkins. They won't give me the membership or the minutes. So, there is no way you are going to get the minutes. You don't even get the IRB membership, and it varies from place to place.

PARTICIPANT: What you are proposing is a very good idea to make it public. So, I think you would get a lot of public support to make it public. Now, should that be required in the regulations? Everybody that has come up to speak has said don't make a regulation. I disagree. If you make it a rule that some investigators can ignore, you are not going to get compliance. I think it should be in regulation. It should be required.

Congresswoman DeGette's(?) law that will codify the Common Rule and cover all clinical research -- that will be a law, and there should be penalties if you don't obey it.

DR. BLUMENTHAL: Thank you. I want to thank our audience for a wonderful set of questions. I want to thank our panelists and it is time now for us to make way for our next speaker.

[Applause.]

Agenda Item: Plenary: Concluding Remarks

DR. RAUB: Thank you very much, David and panel.

As they say on Capitol Hill, I have the high honor and great privilege to introduce our final speaker for this program. Dr. Greg Koski is the director designate for the Office of Human Research Protection within the Office of the Secretary, Department of Health and Human Services.

During the course of 30 years in academic medicine, Greg has been involved in basic research, clinical investigation, teaching, administration and patient care. His most recent position was as associate professor of anesthesia and critical care medicine at the Massachusetts General Hospital and director of human research affairs for Partners Health Care System, Incorporated.

In that latter capacity, he found himself up close and personal and as extensively involved as one can be in the issues associated with the ethics and the regulatory oversight of human investigation. In spite of all that, when Secretary Shalala called, he said "yes."

Please welcome Greg Koski.

[Applause.]

DR. KOSKI: Well, Dr. Raub, thank you very much for the introduction. I guess I will simply say at the outset that during those 30 years in academic medicine, with all the various things that I have been involved in, I will tell you very honestly that I have never felt a sense of responsibility greater than that which I feel, both at this very moment in talking to this group and as I begin to look toward the day after Labor Day, when I begin my official role as the director of the new Office for Human Research Protection.

Like Dennis DeRosia, I have not really had the luxury of preparing remarks in advance for this because I was asked to essentially react as the last panel did to many of the remarks here. So, I have basically prepared some thoughts that I would like to share with you to say that certainly my own thinking has been modified as we have moved through these very important discussions during the last day and a half.

So, I am making these as personal observations, comments, reflections on what I have heard. They are unavoidably entwined with my own expectations for what I see ahead. This is not the time or place to give a detailed description of, you know, specific initiatives that are likely to emerge from OHRP. Certainly, once I assume my official capacity it would be more appropriate to do that.

But with that as an introduction and also a warning that this may take a bit longer than the 15 minutes that I have been allotted, I would ask that you bear with me and we will try to cover a few important points.

I will start out by thanking Stuart and the members of the organizing committee for inviting me to come, be a fly on the wall basically and listen to what everyone had to say because I have found that listening is one of the most important steps toward finding solutions.

Clearly, the timeliness and importance of this meeting are apparent to all of us. They are underscored not only by the presence of the most senior officials from the government's research agencies, as well as from the Office of the Secretary, but, I think more importantly, by the incredible response: over 700 people from across the country during one of the slow times, vacation time. The fact that people have come this distance and have been willing to make this commitment is very moving to me. I think that it demonstrates both the level of concern and the level of commitment that people have toward finding the solutions to the many problems and challenges that have been mentioned in our discussions.

Like all of you, I have listened carefully, intently to what has been said during the excellent presentations that we have had and I guess to sort of organize things a little bit in terms of my own thinking, I wanted to start with the conference itself, sort of what I would call the scope creep of the conference.

The conference was entitled "Human Subjects Protection and Financial Conflicts of Interest." We are not there any longer. With respect to conflicts of interest, it is very clear, I think to everyone in this room that we must look well beyond financial conflicts of interest as we begin to look toward effectively protecting not only the integrity of the research, biomedical research, enterprise, but also the public confidence and trust upon which the continued success of that venture is dependent.

I think David Korn in his remarks made it very clear that certainly within the academic setting, yes, money talks. We know that money talks. The drug companies will tell us that money talks because it is the single most effective way to encourage human research subjects to participate in research. It is the single most effective way to encourage individual investigators to complete enrollment goals for studies.

So, money clearly talks. That is a problem in my mind because the money can drown out other concerns. But money is not the sole source of the conflicts that arise in research. You know, certainly in academia, publications, professional advancement become very important parts of the currency of the realm.

We started out talking about human subjects protection, as I said, but to a much larger extent, and I think it has been underscored by our final panel and the comments of Marcia Angell and Sid Wolfe and Abbey Meyers, that we are really looking beyond just the protection of human subjects in research, that we are really looking at protection, as I said, of the entire research enterprise. That clearly is I would at least say a very daunting task.

I guess I have to say, again, reflecting on the conference, that I am somewhat dismayed by the fact that at least two important viewpoints have not been adequately represented here -- and the malpractice lawyer was not one that I actually had in mind -- but I think that we have failed in our discussions to capture in a very effective manner the actual concerns and positions of the individual research subjects and of the true public at large, society.

I think it is important to note that, again, while it is society that truly benefits from the productivity of our biomedical research enterprise, it is still the individual subjects that bear the risk for society so that our obligation to protect them is something that we simply cannot ignore.

It saddens me that we didn't have more people who have actually served in that capacity come forward and talk to us about their experiences, their concerns. I think that we have gotten some sense of it from listening to Abbey Meyers, but I would have liked to have heard more of that. I can tell you that in my own professional activities, serving not only on IRBs but actually participating as an advocate, a patient advocate, during the consent process for Phase 1 gene therapy trials, I at least, and many of you certainly do, have a sense of what it is like on that side.

But I think we would have benefited from having that view represented here. I am further struck by the fact that we have not heard input from many investigators, at least people who would, you know, step up to the plate and say: I am a clinical investigator. I have had consulting relationships. I have stock in a company for which I am developing a device. We could then listen intently to their side as well, so that we can understand that perspective and perhaps learn from them where there are critical issues -- in other words, try to incorporate some of that thinking as well into our policy development as we go forward.

Failure to do that, I think, establishes, or further intensifies, if you will, a confrontational sort of relationship that can truly prevent us from achieving success in some of these very difficult areas. So, I would have liked to hear more. Certainly, during the commentary period, which will be open at least until, I believe, September 30, if I am correct, I hope that we can get the message out and ask people who have been in those particular roles to come forward and share their thoughts and views with us. Then we can truly say that we have looked at these issues from both sides in a very comprehensive and complete manner, and know that as we step down a new course, which we must do, we will be doing it with complete confidence that we are taking steps in the right direction.

Many of the presenters in our panels have spoken about the ubiquitousness of conflicts of interest in biomedical research. I guess I would have to go a bit further on that. Ubiquitous is one thing. I would call the conflicts of interest pervasive. Indeed, in an enterprise where we have embodied in single individuals, dual conflicting roles, the physician investigator, the patient subject, there are going to exist conflicts of interest that simply are not something that we can eliminate.

They are inherent. They are intrinsic to the research process and unavoidable. Now, Marcia has indicated that there are certain financial conflicts of interest that can be avoided. I believe that is true, and I believe they probably should be avoided in most instances. But one of our challenges continues to be how to manage those conflicts that we cannot eliminate. As in so many other complex environments, where it would be nice to have an ideal, clean situation, I don't believe we are truly going to see that in this domain.

So the continued emphasis on managing conflicts of interest in an effective manner, when elimination of the conflict, which would be our first goal, is simply not possible, is going to be one of our big challenges. How are we going to be able to do that?

I think it was perhaps Pearl O'Rourke who first mentioned in her commentary that, increasingly, research is not being done in the setting that is largely represented here; that is, academic centers doing research funded by NIH. The truth of the matter -- and I think Dennis DeRosia got closer to it -- is that most clinical research, and indeed some of the research that in my mind poses some of our greatest challenges and risks, is being done outside of the academic setting right now.

It is being done in private physicians' practices. It is being done in the private research centers that Dr. Wolfe referred to, which I think admittedly may not always fall under the kind of administrative oversight and public scrutiny that is essential. This, to me, underscores the need, again, as Abbey Meyers has pointed out, to develop guidance that covers all research -- guidance that will make sure there is a level playing field regardless of where the research is being done, who is sponsoring it, or the professional stature or status of the individual investigator, whether a private physician or an academician.

Many have pointed out that we currently have multiple sets of regulations regarding conflicts of interest, promulgated by different agencies under different regulatory codes. I don't believe that is a situation we can continue to allow to prevent us from achieving the goal of consistency of process across the board.

So, I think the creation of the new Office for Human Research Protection, which, of course, has been charged with providing leadership not for NIH or any other single agency, but for all of the agencies within HHS and, indeed, for all of the other federal agencies that sponsor research under the Common Rule, presents an incredible opportunity -- an opportunity that has not existed before.

If we can take advantage of this opportunity to catalyze the kinds of interactions among these agencies that have been very challenging to achieve in the past, I think we will be able to do things that many of us have never imagined being possible.

This, to me, is one of the most promising aspects of the landscape as I look forward to assuming my official responsibilities -- a promising aspect that gives some hope, and even confidence, that we are going to be able to address some of the difficult issues that have come up at this conference, including issues that go well beyond those of conflicts of interest alone.

If I could digress for one moment, I simply have to say that as the faculty and administrators at Harvard Medical School considered the possibility of revising their conflicts of interest policies, an ongoing process for the past year, I read accounts indicating that one of the concerns was that stringent policies on conflicts of interest would make it difficult to recruit and retain faculty.

It is almost hard for me to say this, but I think it is a sad commentary on the status of science and academia that stringent policies on conflicts of interest, intended to protect the integrity of science and the well-being of research subjects, would be seen as an impediment to recruiting and retaining faculty.

Thank you for allowing me that one digression.

We have often heard mention of trust during our discussions. Trust has been cited as essential to the process -- the public belief in the good of science and in the good of our academic institutions. I think that is right. I think trust is essential. The sad fact is that trust has been eroded to such an extent that we can no longer simply accept it or cite it as something that legitimizes our activities. We have to proactively take the steps that will reestablish the public trust in the goodness of our endeavors.

Now, we have heard the Jesse Gelsinger affair mentioned many times. This particular tragedy has perhaps done more to galvanize not only public response and concern but also institutional concern than any other single incident during the past two years. In response to that incident, I noted when I first met Savio Woo that the American Society of Gene Therapy had taken the bold stance, without being forced into it, of simply saying "no." All right. Just "no." Our society simply will not allow its members to have conflicts of interest that would undermine the integrity of the research or the protection of human research subjects.

I would like to recognize the step that that organization has taken and, indeed, I would like to challenge other organizations that are represented at this meeting to take that message back to their organizations and take the bold step of also voluntarily enacting specific policies within their organizations that just say "no." We are going to do this right because it is the right thing to do.

I hope that message does go back. Angus Grant, if you are still here, I hope you will take it back to BIO. I want to see the responses. We all want to see the responses, because the responsiveness of these organizations and of industry to this challenge is going to give us a signpost. It is going to tell us what lies ahead. If organizations in leadership positions are not willing to do those things on their own, then there will undoubtedly be increasing calls not just for guidance but for additional regulations and legislation that will do it for them.

So, I look forward anxiously to seeing how these organizations respond. It is impossible in the limited time I have here to address each and every point that has come up in the conference, but it seems to me that certain areas of consensus have begun to emerge from our discussion. I am going to run through those quickly.

First of all, it is quite clear that conflicts of interest are very real. They are very serious, and they are a threat to our entire endeavor. These conflicts have certainly intensified over the last two decades, and during the last five years the system may have gotten entirely out of control. There is a need to begin immediately, at least, to get the system back under some kind of control. But I have to say I am very much moved by the comments I have heard during this discussion that it may not be possible to simply drive the Edsel much farther, and so the first point in the OIG's IRB report, the need to redesign the system, is very, very much on my mind.

We need a system that is not purely based on regulations and compliance, but one that truly focuses on what we really want to do: to allow society to reap the benefits of our biomedical research enterprise without ever, to the extent possible, allowing a single individual's well-being or interests to be put in jeopardy.

More on that will be coming after Labor Day.

As I said, I think there is a consensus that conflicts of interest cannot be eliminated completely in most instances. So, we do need to have effective mechanisms for management. We have seen in the presentations here what many institutions are currently doing to try to manage conflicts of interest, recognizing the additional burden that imposes upon an already strained and underfunded mechanism for meeting various compliance and oversight obligations.

Nevertheless, we must continue to do that. The fact that not all institutions are doing it, and the fact, as I mentioned earlier, that many research organizations and investigators function outside of academic institutions, again underscores the need for uniform guidance --

[Tape flip from tape 7, side A, to tape 7, side B -- text lost.]

-- national level and if guidance itself is not effective, then it seems to me that rules, regulations and legislation must follow.

I think there is also a strong sense that institutional review boards -- which I would like to consider in a broader context, that is, not as institutional review boards but as human research review boards, because many of them are not based in institutions -- simply cannot be the sole implementers of the protections against conflicts of interest. They clearly play an important role. Again, I think Dr. O'Rourke mentioned, from her panel discussion, that there are certain points in the research process where an unavoidable conflict of interest is most likely to produce a negative impact.

In many instances, it is at the level of the direct interaction between the investigator and the research subject that the greatest potential for doing harm exists. A second point is during the analysis and interpretation of data. These hot spots, cited by that very insightful group, I think can help us identify specific targets for some of our guidance and policies, so that we truly do what an IRB is supposed to do: make sure that we have in place appropriate provisions to minimize risks and optimize benefits.

Clearly, the consent process is another area where we could do this. It goes without saying that everything I have just mentioned in the usual research context is amplified by orders of magnitude when we are dealing with research that involves vulnerable populations of subjects, something that has hardly even come up in the discussion over the last day and a half.

So, we need to be sure that our policies, and the protections that result from them, are up to the task of protecting individual research subjects. Our hope, of course, is to do that in such a way that we do not at the same time completely halt the successful process of biomedical research.

For the last sort of philosophical comment that I would make -- actually, I missed two little points. Let me mention these. I am sorry.

I think it is clear that disclosure is not enough in most instances and yet, at the same time, openness is essential. So, again, we need to make sure these concepts are incorporated into what we do. Clearly, education is also essential, but the education needs to go well beyond the investigator. We need to make sure that education on these important issues is occurring at every level: at the institutions, the corporate sponsors, the drug companies, and among the research coordinators, Dennis, as well as individual research subjects and the public at large. This is going to be essential to rebuilding the trust that is absolutely critical.

We have heard about the culture of compliance as being something that we would hope to achieve. It has come up in several presentations. I think I heard Dr. Kirschstein mention it and others. I have to say that from my perspective, that is not our goal. Yes, compliance is important, but a culture of compliance is not what we want to achieve.

Why do I say that? Well, to my way of thinking, our research activities have to be based on the highest standards of responsible conduct, grounded in ethical principles, by each and every individual taking part in the process. Compliance is one element of that. Compliance is one of the ways we demonstrate our respect for the subjects who participate in research, taking risks so that society may benefit. Failure to comply is a symptom. It is symptomatic of either an individual's or an institution's unwillingness to accept their responsibility.

What we want is not so much a culture of compliance, although we certainly want compliance. We want to establish a culture of conscience and responsibility, so that each of us engaged in the research enterprise is, again, doing things for the right reason. If we establish that culture, where we all truly embrace doing things because it is the right thing to do, compliance will not be an issue.

But let me make it unmistakably clear, in case anyone has any doubts: institutions and individuals who fail to truly accept their responsibilities and work in good faith to fulfill them simply should not be permitted to engage in this endeavor. As David Korn pointed out early on, when science goes wrong in one instance -- when a conflict of interest, noncompliance, or something else undermines the integrity of the work or confidence in the process -- it is not the one study that suffers. It is all of science. It is what we do.

So, again, noncompliance is a symptom. As a physician, I believe in looking for symptoms as indications of disease, and when disease is found, it certainly needs to be treated in an effective manner.

With that, I guess I will stop the philosophical ramblings and recognize that all of you probably want to get out of here at the conclusion of this. But I would like to just comment on the next steps that are already planned.

I have been informed by my colleagues in ASPE that all of the public comments received up to the end of the September 30th comment period will actually be put up on their web site, so that they will be available for public scrutiny, and to the extent that they may elicit further comment, that is wonderful.

The proceedings from this meeting -- the transcript, as soon as it is available -- will also be put up there, along with slides and other materials from our panelists and other participants.

My hope is that this body of information will provide a foundation upon which OHRP will be able to lead an interagency initiative to begin formulating a broad Department of Health and Human Services policy regarding conflicts of interest. We hope that will then be something we can take back to the public for further comment and, again, set the stage for approaching these kinds of issues in a much more collaborative fashion across all the federal agencies in the future.

I have been assured by Stuart Nightingale and Bill Raub that they will try to get all of this up there on the web as soon as possible. So, I think that is more or less what lies in store.

On behalf of the entire organizing committee of the conference and all of the panelists and participants, I would like to thank each and every one of you for all that you have contributed to this meeting. In departing, as I started out, I am very much aware of the incredible burden and responsibility that certainly lies ahead, not only for me but for all of you, because we are all in this together. But apart from this burden, which I feel undeniably, I will also leave this conference today with a sense of confidence that we are certainly not facing something that is impossible.

I know that when I was a young kid growing up, my father always used to tell me that if you have a job you know is impossible, give it to someone who doesn't know it is impossible and it is more likely to get done. So, I don't think this is impossible, and I look forward to working with all of you to try and do it.

Thank you very much.

[Applause.]

[Whereupon, the meeting was concluded.]