TRANSCRIPT

Conference on Human Subject Protection
and Financial Conflicts of Interest

August 15 & 16, 2000

National Institutes of Health

Bethesda, Maryland

August 15 Opening Session

Table of Contents

Welcome and Introduction of Panel: Dr. Stuart Nightingale, Senior Medical Advisor to the Assistant Secretary for Planning and Evaluation

Perspective from the NIH: Ruth L. Kirschstein
Perspective from the FDA: Jane E. Henney
Perspective from the DHHS: William F. Raub
A Non-Federal Perspective: Thomas S. Bodenheimer

Panel: Perspectives of National Organizations

Moderator: Dr. Wendy Baldwin

Mark L. Brenner
Angus Grant
David Korn
John R. Seffrin
Savio L.C. Woo

Panel: Approaches Taken by Institutions and Institutional Review Boards (IRBs)

Moderator: Sanford Chodosh

Panelists Representing IRBs:
Susan Kornetsky
Steven Peckman

Panelists Representing Institutions:
Theodore J. Cicero
Julie Gottlieb

Public Comment on Questions in Federal Register Notice of Conference

Moderator: Yvonne T. Maddox

Mr. Kenneth Trevett, Chief Operating Officer and General Counsel of the Schepens Eye Research Institute in Boston, Massachusetts
Mr. Ronald Collins, Director of Integrity and Science Project, Center for Science in the Public Interest, Washington, D.C.
Mr. Alan Schipp, Assistant Vice President for Biomedical and Health Sciences Research, the Association of American Medical Colleges, here in Washington, D.C.
Dr. Howard Mann, Chairman of the IRB of the Intermountain Health Care, Salt Lake City, Utah.
Dr. Holger Baumgartner, representing the Research Ethics Committee of Innsbruck University, Austria, the World Federation of Neurology, and the European Federation of Neurological Societies.
Dr. Marie Cassidy, Professor of Physiology and Experimental Medicine at George Washington University, representing the Citizens for Care and Research of New York, New York.

PROCEEDINGS

DR. NIGHTINGALE: I am Dr. Stuart Nightingale, and I am the Senior Medical Advisor to the Assistant Secretary for Planning and Evaluation. I have the pleasure of serving as the chair of the planning committee for the conference.

I'd like to welcome you to the conference on behalf of Secretary Shalala, as well as the senior officials in the Department who are responsible for this area. And of course, I want to welcome you on behalf of our planning committee.

At this point, I'd like to recognize a senior HHS official who we are pleased to have with us at this time, Harriet Rabb, who is the General Counsel of the Department.

Our planning committee wanted to hold this meeting at Natcher Auditorium when we met in late May. However, we were told that only two days were available, back to back, during the rest of the year.

We took these dates because of the urgency of beginning a public discourse in this area. Some in the group were afraid that if we held it at the time of summer vacations, there would only be a small group that would actually attend the conference. Most of us, however, thought that the topic was so important and so timely that we would have a good turnout anyway.

Well, we are very gratified by this overwhelming interest that we have in our conference. We are very grateful to all of you for coming, and to our expert speakers and moderators, who have joined what we believe is a landmark conference, bringing together for the first time, I believe, representatives of institutions, members of IRBs, clinical investigators and others to review human subject protection and financial conflict of interest, within the framework of PHS, NIH and FDA requirements, regulations and guidelines.

As a consequence of participation that has greatly exceeded our expectations (the last I heard, we had about 762 registrants), Natcher's physical facilities are being stretched quite thin to accommodate our needs. We are sorry that this afternoon's concurrent breakout sessions mean that the balcony had to be enclosed for the entire day, so those seated there will need to view the proceedings on closed-circuit video. And because of the overflow, it may very well be that others are viewing this as well in some of the other overflow rooms. We are very sorry that this is the situation; however, this is the only way that we could really accommodate this group.

For those who are in the balcony or in the other areas that are video only, staff will actually pick up questions from you and take them to the session moderators during the plenary sessions, where there will be questions and answers. There are papers in the back of your notebooks that you can use for questions.

We ask that those in the auditorium come to the microphones and ask their questions directly from the floor microphones, since all the proceedings in the plenary session are going to be recorded and transcribed. It would be helpful if you would identify yourself when you ask questions.

By the way, we will make the transcript publicly available for the plenary session.

Finally, please remember that the six questions that you have really form the framework for this conference. The planning committee has assembled pertinent federal requirements, regulations, guidelines and guidance as background information in the notebook to assist you in addressing the six questions that we have before us, as well as to assist you in getting the most out of the conference. We hope that this notebook will also be an important resource for you in the future as well.

Again, thank you all for being here.

I'd like to now turn to this first panel, that will set the stage for the entire conference. I'd like to note that our federal presenters have all had long careers in the various combinations of government service and in academia, facing the issues that we are grappling with today from multiple perspectives. The full bio sketches are in your notebooks for all the speakers.

With that, I'd like to turn now to Dr. Kirschstein. Dr. Kirschstein has had a long and distinguished career, primarily in government. Early in her federal career she served as an FDA official. For many years, she served as director of NIH's National Institute of General Medical Sciences, and more recently served as NIH's Deputy Director and now as its Acting Director, a position she also held in 1993.

Dr. Kirschstein.

DR. KIRSCHSTEIN: Thank you, Dr. Nightingale, and good morning. Today I want to talk to you about the mission of NIH, which is to advance science so as to improve human health. But NIH, as all federal agencies truly are, is an institution and an instrument, a public entity endeavoring to serve the public good.

It is the public, the citizens and the taxpayers of this country, who support our goals and who with their money support science.

NIH is the steward of this public money and has therefore a deep obligation to protect the public investment in science. This means that we must work with many partners to develop policies that assure ethical conduct of research and to remain vigilant so that those policies are followed.

There will be frequent references today to conflict of interest. This may cause us to forget that the NIH policy as published in the Code of Federal Regulations is actually about objectivity in research.

There is a very real distinction. Objectivity is a broad term, and by a standard dictionary definition, what is objective is based on observable phenomena presented factually and without bias. Objectivity lies at the heart of science, and it must not be compromised in any way, by financial considerations, nor in the pursuit of fame, nor for the desire to produce an important insight into the processes of life.

These last two, the pursuit of fame and the desire to produce insights, are familiar to those of us in science. If anything historically has been a threat to objectivity, it has been these items. Why not win a Nobel Prize? Why not astound the world with a fresh and freshly published insight? In some measure, such pursuits and desires are always present in scientists, perhaps in all of us, and in moderation are absolutely essential to inspire us. But good researchers know how to control the desire, how to mitigate the passion, and they have always been able to do so.

Financial considerations are a newer phenomenon, particularly for biological scientists. It has only been in about the last 20 years or so that substantial consulting fees or equity interests in a new company have become widely available to scientists in the biological field. It has usually been the case that researchers have hoped for a favorable outcome from their work, followed by wide recognition. But it is only recently that immediate and sometimes quite substantial financial gain has also become a possibility. The world is changing, and we are not going to be able to stop it.

But we can pose questions. First, how can we distinguish the financial interests that compromise a scientist's objectivity? It is possible that some of these interests will not do so, but some may.

Second, how do we make sure that all scientists understand the risks that financial consideration can raise? Some transactions may appear innocent, and in fact some may be innocent. Some may be one, and some may be neither.

Finally, how do we make scientists understand the importance of declaring those risks? This last question is at the heart of the matter, namely, disclosure. It takes more than one person, especially one interested person, to decide on the innocence of a transaction. That is, disclosure is a part of a process. While protecting an individual's privacy is an absolute requirement to make the process of disclosure work, the core of disclosure is that it is public, in a limited way to be sure, but nevertheless disclosure cannot be carried on in the privacy of one's own conscience, but only in a discreet and confidential public forum. To be precise, it may be carried on in many forums.

The NIH has many of its own responsibilities, but so, equally, do academic institutions, professional associations, industrial concerns and other institutions, because all of us must assure objectivity.

So how do we go about this presently? Peer review subjects research proposals to the scrutiny of scientists with no tie to that research. Research oversight requires a government project officer, knowledgeable about the science, to review progress reports, study analyses of data, and confer with the researcher, and ideally includes on-site assessments of the research and the team.

Publication of research results after peer review makes more and more research analysis available for interpretation and assessment by other scientists. But we must concede that this is not always enough.

Therefore, we also rely on institutional review boards and we require data and safety monitoring boards, DSMBs, for studies in which risks to objectivity are high.

To complement these institutional protections, we also have educational requirements for investigators, and we require them to send us monitoring plans for the clinical studies for which DSMBs are not needed. So this is what we are doing. How can we do it better?

This year, NIH began proactive compliance site visits. These are different from the site visits we conduct when we have a reason to suspect that there is a problem. These proactive site visits are not for cause and are not punitive. The reason for doing them is to acquire a better understanding and appreciation of how the NIH policies are working, and what difficulties there are or may be for researchers and their institutions to implement the policies and to fulfill their requirements.

Proactive site visits make us ask significant questions. For example, have we focused on the most important issues? Have we missed something? Have our policies enabled institutions to effectively monitor financial activity that could interfere with objectivity? And are the policies working?

These site visits give us knowledge of the daily experiences and the business of both the individuals and the institutions that implement and observe our policies. Without that direct knowledge, the odds that our policies will be reasonable, that is, will cover the areas that need to be covered without creating needless red tape, diminish greatly. And since NIH is the steward, a good steward of public funds must know when something is amiss and how to fix it. Particularly valuable therefore are the site visits.

But a truly good steward must know when things are working as they should, or better yet, how to make them work as they should. If we visit when there are no problems, we will be able to be a partner as well as a steward, and develop the best policies and the best strategies for making our policies work. Such visits also allow us to learn by seeing how others grapple with the same issues that engage us here at home.

Now, we have only completed a few of such site visits, so it is premature to draw hard and fast conclusions. But already, we have some insights.

First, some institutions have developed a culture of compliance which I believe is absolutely necessary for the effective day to day working of policies related to objectivity. Compliance is not a single event. It is a way of life or more accurately, a way of doing business. Thus, it must become both a habit of mind and a habit of work, since it is continuous. Everyone must be aware of what is required, must understand the importance and seriousness of individual requirements and institutional requirements, must understand how in the practical sense a requirement is implemented, and must understand that awareness of and compliance with the requirement must be habitual.

The second insight is that some institutions promote a culture of compliance. This also implies that some do not, or have not yet developed and promoted such a culture. NIH requires that each institution have a policy and requires that it make the policy available to everyone. It is impossible to promote a culture of compliance if requirements are not widely understood, and if the practical demands of these requirements are not clear to everyone.

A third insight is that sharing the lessons learned from compliance site visits will be both essential and valuable. Everyone has a place at the table. We need to work together to communicate our experiences, especially the good ones, or those which show the way to improve our policies and their implementation, and we must be able to show them to each other. If there are gaps in our methods, it is better to have more than fewer eyes looking for them. If one institution implements policies more effectively than others, then we should all learn from that institution.

Are our policies capturing real risks, or are they squandering time that could be better spent on research? If this is truly a partnership and the proactive compliance site visits are one part of it, then they must continue, and they will.

Now, this is not to imply that we should be casual in our observations and turn a blind eye. Our requirements assume the possibility that an investigator can have financial activities that could compromise objectivity, and in such cases the institution must eliminate, reduce or manage that conflict.

Some financial activities may be managed merely by making them and especially the extent of them public. Others may be managed in other ways, and it might be useful if you considered this in your deliberations today. But there are other things that need consideration.

We require through our regulations under 45 CFR 46 that no IRB have a member participate in its initial or continuing review of any project in which the member has a conflicting interest, except to provide information requested by the IRB.

While we might all agree that having a financial interest in a study could compromise an IRB member, we do not have a specific regulation at present relating to the financial interests of IRB members. This is I believe also worthy of your consideration.

It is this intersection between the regulation of objectivity of the researchers and the role of the IRB that brings us here today and tomorrow. I challenge you to help us articulate how these different protections work together to support the strongest and best science and the most ethical research. Institutions no less than individual researchers must accept responsibilities when they support and conduct research. Like individuals, their reputations are on the line, and they risk liabilities if they do not comply with federal regulations and policies. Also like individuals, institutions have financial interests in the outcomes of research, and they have the same problems. They too have had more time to deal with the problems of pursuing fame and producing grant results than with financial ties.

We know what this means today. Biology is fueling the biotech industry. The Bayh-Dole Act has given incentives to those best able to commercialize biotechnology, and the likelihood that a scientific finding can quickly produce a usable and salable product has increased amazingly.

As I said, the world is changing. We are not going to be able to stop it. Nor do we want to. But we must have the means and the will to see that the benefits to all of us of bringing scientific advances to market do not jeopardize objectivity, which is the way that scientific discovery has been fueled.

So what can institutions do to avoid that jeopardy, to preserve the public trust in their objectivity? Even if no institution today is violating that trust or putting profit above discovery, we know the risk is real. The risk moreover could affect individual researchers, should they lose confidence in the institution in which they work. Indeed, it could jeopardize the confidence of the public.

We are after all talking about human motives, and some motives oftentimes are unspoken. We do not really like to speak about some motives. To some scientists and others interested in science, profit is a tainted word. Yet the truth of the matter is that profit is what operates a market economy like ours and produces the wealth that supports the nation, that supports the taxpayers, that support NIH. It is also the prime motivation for producing the drugs that improve human health and reduce human suffering. Profit, or making money generally, is not the problem, nor will it ever be. The problem is the collision of financial considerations, including the desire for personal or institutional profit, with the essential objectivity of science.

That collision, I believe, is avoidable. Financial interests and objectivity in science can be managed. So over the next day and a half, I hope we can get some sense of how we can go about such management and avoid the collision. We all have different understandings of risks and many of us have specific knowledge of risks and problems not known to others. This is an opportunity to share them.

You can help state the questions. Finding and asking the right questions is never easy, but always important. You can help us find the answers to those questions, and you can share some best practices that you know have been successful. We need to listen and to make sure that our ethics and our science are equally strong. Equally is the operative word. For otherwise, we may erect protections that hamper the scientific enterprise and its discovery, or worse, we may weaken or neglect protections in the name of letting science go where it must and turn a magnificent enterprise into a suspect operation. The possibility that either could happen is very real.

Now, I haven't given you a technical address with long quotations from the regulations and policy papers. Nonetheless, what we are dealing with is not abstract. The problems of objectivity and profit, of the equal strengths of science and protection, are daily realities in research. We need realistic and practicable methods to promote science in an atmosphere of objectivity and trust. And I mean we. We are all in this together. It is a grand partnership on behalf of the future of science.

So we will be acting on behalf of our own futures, and the future trust of the public which has supported our science so magnificently. The effort is critical, and I am glad to be in such good company as we get started on it.

Thank you very much.

DR. NIGHTINGALE: Thank you, Dr. Kirschstein. I will now turn to Dr. Jane Henney. Dr. Henney held federal positions at both NIH and FDA before becoming the FDA Commissioner. She was Deputy Director of NIH's National Cancer Institute and later FDA's Deputy Commissioner for Operations. Dr. Henney has also held a number of senior positions in academia, most recently as Vice President of the University of New Mexico Health Sciences Center, making her especially qualified to understand and to deal with the multiple dimensions of the issues we are dealing with at this conference.

Dr. Henney.

DR. HENNEY: Thank you, Stuart, and good morning to all of you. Financial conflicts of interest on the part of sponsors and investigators and their potential impact on human subjects is an area that is becoming increasingly complicated, at the same time that it is becoming a more visible and carefully scrutinized part of the process.

The purpose of this conference is not only to consider approaches that are being used at present to manage conflict of interest at the level of the institution, the IRB and the clinical investigator, but also to see if there are additional ideas and actions we need to consider. These deliberations will be helpful to any refinements our guidance requires to insure that existing disclosure and conflict of interest regulations are effective, patients are protected, and the interpretation of research data is as bias free as possible.

Clinical trials, as we all know, play a very vital role in new product development, for it is through this mechanism that data are generated to support the safety and efficacy of products and technologies that will improve patients' well-being, ease suffering, extend lives or improve quality of life.

In making many of its review decisions on new products, particularly pharmaceuticals and biologics, FDA relies on data regarding safety and effectiveness that have been generated by clinical trials. It should be obvious that the quality of decisions FDA makes relies heavily on the integrity of the data submitted.

That means that every part of the clinical trial process has to be above reproach, from trial design to recruiting the appropriate population of subjects, to obtaining adequate informed consent of human subjects, to unbiased interpretation of the resulting data.

The protection of human subjects who participate in research is fundamental. Research study volunteers who may or may not even benefit from participating in a clinical trial, and who accept some degree of risk in doing so, deserve the assurance that their protection is top priority. If a crisis in confidence in how clinical trials are being conducted develops, particularly if research subjects no longer feel safe or protected, then new product development will grind to a halt.

While the financial conflict of interest of clinical investigators is but one of several aspects of human subjects protection, it is the sole focus of this meeting. This isn't the first time that we collectively have considered the issue of financial conflicts of interest and their impact on human subjects protection. It has been a discussion for many years. However, when Jesse Gelsinger died last fall during a gene therapy clinical trial, questions were raised about the financial interests of investigators, and whether these interests clouded judgment or influenced decisions that were made.

The nature of the environment in which clinical trials are conducted is also changing. Dr. Kirschstein alluded to this, but the Bayh-Dole Act of 1980 gives grantee and contractor organizations title to inventions resulting from federal research funding. Thus, the act encouraged more cooperation between government, academia and the private sector, and it provides incentives that stimulate the transfer of knowledge and technology that would lead to new product development. With academic health centers more involved in drug development, at the interface now between NIH and FDA, there is a need for heightened awareness, and perhaps for more surveillance, to insure that financial conflicts of interest are not resulting in bias or even the perception of bias in obtaining and interpreting study data.

Relationships between industry and academia are also becoming more complex. Academic researchers now serve not only as clinical investigators, but also in the role of sponsors of IND investigations, inventors named on patents, and product manufacturers.

Industry's mechanisms for paying clinical investigators to conduct research vary widely from one organization to the next, and can be quite complicated. Often, they are murky at best. Patients become increasingly vulnerable as individuals assume the multiple roles of physician, investigator and sometimes sponsor, and when research institutions stand to benefit financially as well. Legitimate questions arise about an investigator's objectivity and concern for patients, versus his or her concern about the bottom line.

Let's be realistic. Profits do drive this business. Financial incentives have long been an important and necessary motivating force behind medical advances. As a result, financial conflicts of interest, whether real or perceived, are now an inherent part of the process, and we must deal with them.

It has never been more important to insure that adequate controls are in place to guard against improper behavior or bias, conscious or not, caused by conflicting loyalties on the part of clinical researchers.

FDA recently issued a regulation regarding financial disclosure by clinical investigators. The requirements for these regulations are relatively straightforward. Sponsors who are submitting applications to FDA for review -- and I would underscore that, it is at the time of review, not at the time of initial investigations -- are to certify that the investigators have no financial interest in the product or the sponsor, or if they do, to disclose those interests.

But beyond simply adhering to these requirements, there is a larger question for us as a government regulatory body, and for the academic community that desires to participate in new product development.

How do we foster and maintain a culture of compliance, where our commitment to human subject protection and our respect for patients' well-being are first and foremost? The public's trust in new technology and in those who develop it erodes when research institutions fail to adhere to the standards of good clinical practice, do not adequately protect human volunteers and do not adequately inform them about the risks of participating.

While it is critical that our agencies, FDA and the NIH, earn and keep the trust of the public, so too it is vital that the scientific and clinical research communities have the same vote of confidence. Discovery will only result in products if patients are willing to participate in research studies.

Some questions I hope you will thoughtfully address over the next day and a half are the following. Is it enough to inform human subjects of the investigator's financial interest in the outcome of a trial or the success of a particular product? Will disclosing that information to patient volunteers affect their decision to participate? Will it make the process any safer for them? Can financial conflicts be managed in a way that doesn't adversely affect patient safety or influence the objectivity of the research conclusions?

They are big, they are difficult questions, but the answers are critical to the success of the clinical research process. I would encourage you to share your ideas and experiences over the course of this conference, and tell us how you or the institution you are representing are addressing these issues. We need to find answers that are mutually satisfactory and effective -- answers that will result in human subjects protection and safe and efficacious medical products for the American patient and consumer.

On behalf of the Food and Drug Administration, I thank you for your interest, your attendance and your participation.

DR. NIGHTINGALE: Thank you, Dr. Henney. Now we turn to Dr. William Raub. Dr. Raub has spent most of his career in the Department of Health and Human Services. For an extensive period of time, he was at NIH, serving as Deputy Director and then Acting Director. While at NIH and in his current position as Deputy Assistant Secretary for Science Policy, he has dealt extensively with various aspects of human subject protection.

During the current transition of OPRR from NIH to the Office of the Secretary, Dr. Raub has served as the Department's principal spokesperson on the Secretary's new initiatives to strengthen human subject protection.

Dr. Raub.

DR. RAUB: Thank you, Stuart. Good morning, ladies and gentlemen. I echo the earlier welcomes from my colleagues, and I thank you for foregoing a couple of the lazy days of summer to join us in this important conference.

We meet today and tomorrow at the initiative of Secretary Shalala, and in part as a consequence of several trains of events set into motion in Philadelphia last fall, with the death of a patient in a gene transfer clinical trial. The immediate aftermath of that tragedy featured intensified debate about the readiness of gene transfer for prime time, and stepped up efforts by the National Institutes of Health and the Food and Drug Administration to oversee this emergent medical technology.

Both the debate and the evolution of gene transfer oversight continue. But from our focus on gene transfer, a larger truth emerged, namely, that many of the issues raised by the events in Philadelphia were not unique to gene transfer research, but rather were applicable to essentially all forms of human clinical investigation.

This caused us to broaden our scope. In particular, we took account of the fact that clinical research in recent decades has grown ever more complex in terms of its goals, its substance, its financial underpinnings, and its associated ethical considerations.

The overarching issue therefore became the adequacy of our rules and processes for the protection of human research subjects in the face of these ever more formidable challenges. That phase of our efforts culminated in May of this year when Secretary Shalala announced the following five initiatives designed to enhance protection for human research subjects.

First, an intent to promulgate new requirements for clinical investigators and IRB members and staff to undergo continuing education in clinical research methods and bioethics.

Second, an intent to issue new guidelines, making clear that research institutions and clinical trial sponsors are expected to take stringent continuing review actions such as audits of research records to promote compliance with current informed consent requirements.

Third, an intent to expand requirements for study monitoring, while streamlining procedural requirements, thereby increasing the quality of information available to IRBs for use in their mandatory continuing review function, while reducing IRB work loads.

Fourth, an intent to seek new legislative authority for the Food and Drug Administration, to level civil monetary penalties against those who fail to obey federal regulations for protection of human research subjects, thereby filling a significant gap in the current spectrum of sanctions.

Fifth, an intent to undertake an extensive public consultation to identify new and improved means to manage financial conflicts of interest that could threaten the safety of research subjects or the objectivity of research itself.

The last on this list obviously is the reason we gather today. As indicated in the advance materials for this conference, we are hopeful that the insights gained here will help us formulate new guidance for the research community and ourselves regarding what information about the financial interests of investigators and research institutions should be disclosed to prospective and enrolled research subjects and others.

The regulations to which I refer are three, and my colleagues already have mentioned them. First, the conflict of interest regulations applicable to research funded by agencies of the United States Public Health Service. Second, the conflict of interest regulations promulgated by the Food and Drug Administration for sponsors and investigators engaged in the development of new clinical products. Third, the regulations for the protection of human research subjects, commonly known as the Common Rule as a result of their adoption by 17 different departments and independent agencies of the federal government.

As thoroughgoing as these rules are in many respects, they are not explicit as to what information, if any, about financial conflicts of interest of investigators and research institutions ought to be shared with prospective and enrolled human research subjects.

Through this conference, we are seeking insights about what interpretative guidance might be offered for the research community to make these current regulations more effective.

In the main, this conference marks the collision of tradition and change. For more than a half century, counting from the Nuremberg Code, the people of this nation and others have been almost unanimous regarding the importance of protecting human research subjects from undue and undisclosed risks.

Informed consent has been and surely will continue to be the bedrock upon which our infrastructure of protections rests. Anything that threatens to erode or compromise informed consent is a threat to the entire system of human clinical investigation. Financial conflicts of interest on the part of investigators or research institutions are cause for such concern.

Yet now more than ever before, financial conflicts of interest, at least in appearance if not in fact, seem to be an inescapable part of the modern clinical research scene. Legislation passed in the early 1980s to foster commercialization of research results emanating from universities and other traditionally noncommercial entities has spawned a host of new financial relationships that continue to evolve in scope and complexity with no end in sight.

On the whole, these new financial relationships are making possible the kinds of knowledge transfer and technology transfer that the legislative architects envisioned. In view of the stream of impressive new products entering the health care marketplace, the public clearly is better off as a result of these new relationships.

Our challenge therefore in my judgment is not to arrest or reverse these trends that attend the commercialization of research results. Rather, our challenge is to understand and modulate these trends, such that our traditional commitment, the protection of human research subjects, is continually reaffirmed in word and continually exercised in practice.

In taking up this challenge, we dare not forget that human clinical investigation almost always is the final common path for knowledge and technology on the way from the laboratory to the health care milieu. Without strong protections for human research subjects, enrollment in clinical studies inevitably will decline. What is now a wide and heavily traversed passage will become a bottleneck instead.

Moreover, the losses resulting from a diminished clinical research stream will be to the detriment of current patients as well as future ones. For in many areas of medicine today, the interventions offered in the context of randomized controlled clinical trials are the best health care available.

Thus, my colleagues and I accept the challenge of harmonizing the new financial realities with our longstanding commitment to effective informed consent. The fact that you by your attendance today apparently stand ready to contribute your experience and insight is highly gratifying.

We are grateful to those that signed up for the public comments section at 2:30 p.m. today. We urge everyone to take full advantage of the breakout sessions to make your views known. Between now and September 30, we invite you to share in writing any further thoughts you have about the six questions that drive this conference. For what you are about to do, we thank you.

DR. NIGHTINGALE: Thank you, Dr. Raub. I'm going to turn now to Dr. Thomas Bodenheimer. Dr. Bodenheimer practices general internal medicine and is a clinical professor of family and community medicine at the University of California-San Francisco. He has written widely on health policy and is a national correspondent for the New England Journal of Medicine. He is probably best known to most of you at this conference for the New England Journal of Medicine article that was published May 18, 2000, entitled, "Uneasy Alliance: Clinical Investigators and the Pharmaceutical Industry."

Dr. Bodenheimer.

DR. BODENHEIMER: Good morning. I am a clinician in private practice. I have never conducted a clinical drug trial. I have never served on a single NIH panel or FDA advisory committee. But I do see a lot of patients and I make a lot of clinical decisions based on clinical drug trials.

Now, some patients, when they found out I was coming here, said to me, tell those people in Washington that our medicines are too expensive. We are suffering. In fact, the drug price crisis has changed how I practice medicine. I used to write prescriptions saying, take one pill twice a day. Now I write, take one pill as often as you can afford it.

But really, I am here to talk about conflicts of interest and clinical drug trials. Before I start, it is only fair that I disclose my own conflicts of interest. I write with a pen from the makers of Biox. I receive weekly free lunches from pharmaceutical sales representatives featuring pizza and roast beef sandwiches with mayo. I thought they were trying to kill me with all that cholesterol, but now they provide Lipitor for dessert.

Merck was actually nice enough to give me a free Merck manual, which is a terrific clinical reference book. They were persuading me to stop --

(End of Tape 1, Side 1.)

-- things about them in the New England Journal. I think after my talk today, Merck will probably come and take my Merck manual away.

So what is conflict of interest? It can be simply defined as a conflict between the private interests and official responsibilities of a person in a position of trust. In clinical drug trials, conflict of interest really takes place when an investigator has a financial relationship, and it is usually research funding, with a company whose product the investigator is studying.

Now, conflict of interest is virtually ubiquitous, as we have heard, in clinical drug trials, because so many trials are funded by the manufacturer of the product being studied. So what is the real problem?

The real problem in my opinion is that conflict of interest may be a risk factor for scientific misconduct. What do I mean by that?

Let's consider the analogy of smoking, a risk factor for coronary artery disease. Not everyone who smokes has coronary artery disease. Not everyone with coronary artery disease smokes, but people who smoke have a greater chance of having coronary artery disease. Similarly, not all conflict of interest situations create scientific misconduct, not all misconduct is associated with a conflict of interest, as Dr. Kirschstein wisely said, but conflict of interest situations may increase -- I hypothesize -- the chance of scientific misconduct.

Now, scientific misconduct is a very serious term, and I really do not use it lightly. But if there are problems with the clinical drug trial process, these problems can impact the practice of tens of thousands of physicians and the lives of millions of patients, and that to me is serious.

Now, this slide illustrates the spectrum of scientific conduct. Investigators who perform research free from bias and error are contributing to objective science. Investigators whose work is marred by unintentional bias or error practice imperfect science, and this is very common, because doing clinical drug trials is an extremely difficult undertaking. But investigators who intentionally allow bias or error to infect their work are in my opinion practicing scientific misconduct.

This includes a number of things: designing studies to insure a desired result, making statements that are not justified by the evidence, publishing only part of the evidence, suppression of research, and of course the most important is outright fraud or fabrication of evidence.

The bulk of this talk will attempt to show that conflict of interest in clinical drug trials is indeed a risk factor for scientific misconduct. In other words, the conflict of interest is associated with intentional bias in the conduct and publication of drug trials. It doesn't have to be, but it can be.

Now, before I proceed, I want to say something very clearly so not too many people are squirming in their seats. Many clinical trials are well designed, well executed, correctly analyzed and appropriately written for publication. But we don't need to talk about those; we need to talk about the problems.

One thing about the evidence I am going to present -- there are two kinds of evidence I am going to try to show. One is quantitative studies of the drug trial process. The second is citations from respected authors in quality journals who have studied the drug trial process. I also want to say, it is hard to determine whether some of the examples I am going to be giving you this morning concern phases two, three or four of the drug trial process. I suspect that more of the problems I discuss are related to phase four.

Now I am going to present some general evidence showing a bias in industry funded clinical drug trials.

Davidson did an analysis of 107 drug trials in five leading medical journals. He found that 89 percent of company funded trials favored the new drug compared with traditional therapies, whereas only 61 percent of trials not funded by industry favored the new drug. He concluded that there is a significant association between industry funding and the outcome of the study.

Stelfox analyzed 70 articles concerned with the safety of calcium channel blockers, and reported in the New England Journal that 96 percent of authors who were supportive of these drugs had financial ties to calcium channel blocker manufacturers, whereas only 37 percent of authors who were critical of the safety of calcium channel blockers had such financial ties. Again, the results demonstrate a strong association between authors' opinions about the safety of calcium channel blockers and financial relationships with the drugs' manufacturers.

Friedberg looked at 44 articles performing an economic analysis of oncology drugs. Only five percent of those articles funded by pharmaceutical companies had conclusions unfavorable to the investigational drug, the company's products, whereas 38 percent of those without industry funding had conclusions unfavorable to the drug under investigation.

Rochon looked at randomized controlled trials of NSAIDs between 1987 and 1990. She found that virtually all of them were funded by the drug's manufacturer, and that the efficacy of the manufacturer associated drug was comparable or superior to that of the comparison drug in 100 percent of the trials. The manufacturer associated drug was safer than the comparison drug in 86 percent of the trials.

Cho and Bero reported in the Annals of Internal Medicine that 98 percent of company sponsored drug trials published in peer reviewed journals or symposium proceedings between 1980 and 1989 favored the company's drug. Can you imagine an election in which someone gets 98 percent of the vote? I guess if you pay for the votes, it is possible. Maybe the drug trial world needs some kind of campaign finance reform.

So the take-home message of these studies is that company funded trials have a high likelihood of favoring the company's products. The editor of the BMJ, the British Medical Journal, said that these studies begin to build a solid case that conflict of interest has an impact on the conclusions reached by papers published in medical journals. Is this scientific misconduct? Well, it is something we need to think about, for sure.

Now I want to move on to some more specific examples. What I want to do is, I want to break down the clinical drug trial process into its component parts. Which trials get funded? Who designs the trials? How are patients enrolled? Who analyzes the data? Are the results published, and who writes the articles? And what is their content?

So let's start with which trials to fund. Alan Hillman, John Eisenberg and others looked at pharmacoeconomic studies and concluded that pharmaceutical companies fund projects with a high likelihood of producing favorable results. Former FDA Commissioner Kessler wrote about phase four trials: Some company sponsored trials of approved drugs appear to serve little or no scientific purpose. They are in fact thinly veiled attempts to entice doctors to prescribe a new drug being marketed by the company.

Let's move on to the design of clinical trials. Levy has written, almost all clinical investigations of the comparative efficacy and safety of medicinal agents require financial sponsorship, mainly by the pharmaceutical industry. Many, perhaps the majority, of such investigations are actually designed and initiated by medical or clinical pharmacology departments of pharmaceutical companies. The studies tend to be designed, with outside investigators' help sometimes, by the companies themselves.

I want you to think about this particular example, because this is a very important example about trial design. Johansen and Gotzsche performed a meta-analysis of the effect of fluconazole versus amphotericin B on total mortality in patients with cancer complicated by neutropenia.

Now, both of these drugs can prevent and treat systemic fungal infections that occur in neutropenic patients. Ninety-two percent of the patients were enrolled in trials funded by Pfizer, which makes fluconazole. Important. Seventy-nine percent of the patients received the amphotericin B orally, even though the drug is poorly absorbed from the GI tract and is not used for systemic infections.

I have been in practice for 30 years, and I have never seen amphotericin B used orally for systemic infections. It is an intravenous drug. So this design virtually guarantees that fluconazole, the funding company's drug, will produce better results. Unless there is something I do not understand about this, this design qualifies as scientific misconduct.

Let's take another example of trial design. Rochon found that in 54 percent of company sponsored NSAID trials, the doses of the funding company's drug were higher than those of the comparison drug. Well, if a dose is higher, the drug is probably going to look more effective. So this design creates a bias in favor of the sponsor's drug.

Let's move on from design to the question of how patients are enrolled in clinical trials. As we know, the pharmaceutical industry wants trials done as quickly as possible, and many private physicians are now involved in doing clinical trials. They may receive $1,000 or $5,000 per patient enrolled in a trial. Physicians may stretch the inclusion and exclusion criteria to enroll as many patients as they can, thereby compromising the trial's validity. Physicians have been reported to enroll patients who do not even have the disease being studied, and physicians with no knowledge of a disease being studied are participating in trials (for example, psychiatrists studying cardiac drugs), resulting in data not always being adequately collected. That is not to say there are not many physicians and other researchers who are extremely competent at conducting trials, but this kind of stuff does go on, and it is a problem.

Hutchins and associates reported in the New England Journal an issue about the under representation of the elderly in clinical trials. What they found was that 63 percent of people in the United States with cancer are over 65 years old. But only 25 percent of people in cancer therapy trials studied by these authors were over 65.

Now, elderly people have less response to chemotherapy and may tolerate chemotherapy poorly. So in the population most likely to use these cancer drugs for the elderly, the drugs may be less effective and more toxic than the clinical trials indicate.

Let's move on to the issue of analysis of data. When I started looking at how the clinical trial process works, I was really surprised to learn that trial data usually belongs to the sponsoring company, and the company decides who sees how much of the data. Even investigators conducting a trial may not see all the data from the multi-centered trial. The question is, does that matter?

Let's look at this example. Lauritsen et al. were investigating a new drug compared with ranitidine for use with gastric ulcers. They write in The Lancet that the new drug was inferior to ranitidine at one trial site, that healing rates between the two drugs were similar at most sites, and that the new drug was superior at only one site.

Now, the site with the results favoring the manufacturer's drug submitted its results for publication separately, which made the sponsor's drug look very good. Had the results of all the different sites been pooled, the sponsor's drug would not have done as well. Moreover, two years after the trial concluded, unfavorable data from the site where things looked bad for the sponsor's drug were still being held at the company's headquarters. The take-home message: To get the results you want, get control of the data.

Another method that industry uses is to design a trial with multiple end points. By controlling data analysis, the companies can choose to publish those end points favorable to their products and bury data on those end points that are less favorable.

I want to move on to a final phase in the drug trial process, which is publishing; are the results published.

In 1996, a pharmaceutical firm threatened Canadian investigator Nancy Olivieri with legal action if she published research results unfavorable to the company's products, which Dr. Olivieri was studying.

In 1990, a company funding University of California researcher Betty Dong refused to allow Dr. Dong to publish her findings. It is unknown how often outright suppression takes place.

Chalmers says substantial numbers of clinical trials are never reported in print. Failure to publish an adequate account of a well designed clinical trial is a form of scientific misconduct that can lead those caring for patients to make inappropriate treatment decisions.

Does it matter if trials go unpublished? Here is an example of how it does. Simes published in the Journal of Clinical Oncology in 1986 that if you looked at published trials about treatment of advanced ovarian cancer, you would find that combination chemotherapy is superior to a simple alkylating agent. But if you looked at all registered trials, published and unpublished, then combination therapy, lo and behold, is not superior to an alkylating agent. If you were a physician treating ovarian cancer in 1986 and you only had the published trials to look at, you would probably give your patients highly toxic chemotherapy with little or no benefit.

Now, I want to turn now to a fascinating and almost ridiculous aspect of clinical drug trials, which is who writes the articles. Here we are entering the world of guests and ghosts. The Lancet commented on this phenomenon in 1993. A typical sequence of events begins with a publisher -- by that, they mean a medical communications company -- agreeing to prepare a review article for a drug company. A staff writer prepares the review to the sponsor's satisfaction, whereupon the publishing house contacts a doctor with a special interest in the relevant topic, to inquire whether he or she would like to be the guest author, subject to approval of the content for an honorarium.

The pinnacle of success is to sign up a prominent academic. The final version of the article may contain no clues about its origins.

Rennie has written, ghost authorship occurs when those who write the article contribute in important ways to its publication but are not named as co-authors. Known instances are becoming common, as is the practice of paying big names to appear on the byline in place of the ghosts, though they contributed nothing except their prestige.

There is a survey done by Flanagin and others looking at the corresponding authors of 809 articles published in six major journals in 1996. Nineteen percent had guest authors. Eleven percent had ghost authors, and 29 percent had either guests, ghosts, or both. Not all of these guests and ghosts were recruited by the pharmaceutical industry. But these figures really show that guests and ghosts are not endangered species in the publishing world.

I'd like to read the second half of this slide from two JAMA editors. Editors like us have had the disheartening experience of telephoning the senior author, only to be switched back and forth from the scientist, who had no idea what had been written, to the writer who did not understand the science.

Brennan in a commentary in the New England called "Buying Editorials" says, I was recently surprised to receive a call from a representative of Edelman Medical Communications, a public relations firm in New York, asking whether I would be interested in writing an editorial for a medical journal. The caller said I would not really have to do much work on the project. They would have a professional writer compose the editorial, which I could modify as I see fit. I would earn $2500, and the entire project would be funded by a pharmaceutical manufacturer.

Now, the question is, I have spent a lot of time on this, does it matter who writes the articles? Well, yes it does. This was a letter published in the JAMA. Several authors refused to place their name on a study whose results were unfavorable to the sponsoring company, because the sponsor was quote, attempting to wield undue influence on the nature of the final paper. This effort was so oppressive that we felt that it inhibited academic freedom.

In her study of NSAIDs, Rochon found that manufacturer sponsored trials claimed fewer side effects for the manufacturer's drug, and 45 percent of those claims of fewer side effects were not supported by the trial data. She says, these data raise concerns about selective publication or biased interpretation of results in manufacturer associated trials.

In another study of 196 NSAID trials, Gotzsche found that doubtful or invalid statements were made in the conclusion or abstract in 76 percent of the articles, with the statements overwhelmingly favoring the new drug rather than the control drug. Clearly, articles have been written with a spin not justified by the data.

Basically, we are coming to the end. But this is a statement made by an academic clinician recently. The science has been lost in the rush for money. We have lost our way. We have terribly, terribly lost our way.

So to summarize, in many aspects of drug trials, biases can be and have been intentionally introduced that favor the company funding the study. How often this happens, we need research to find out. But I think the evidence that I have shown makes a reasonable case that scientific misconduct does take place in clinical drug trials, that conflict of interest is a risk factor for scientific misconduct, and that something has to be done about it.

Very briefly, what can be done about it? Now, pharmaceutical companies with a little help from their friends in academia and NIH, have created products of great benefit to the world. That activity absolutely must continue. But to reduce the risk of scientific misconduct, investigators and authors need to gain more independence from their funders.

Trials should be designed by academic investigators outside the walls of the sponsoring companies. Academic investigators should control the data, and it should be publicly accessible. Publications should be prepared by academic investigators rather than by the sponsoring company, and without pre-publication review by the sponsor, because a lot of times in pre-publication review, a lot of changes get made by the sponsoring company.

Now, other mechanisms are needed to protect reasonable patent and property rights. The use of industry-paid ghost writers and guest authors should be prohibited. And more trials should be NIH funded. There really should be a better balance between company funded and NIH funded trials, so that clinical investigators are less dependent on industry funds to pursue their careers.

Ideally, a pharmaceutical firm with a drug ready for trials should transfer to the NIH a budgeted sum of money for the trials, and the NIH should sponsor the trials without industry influence.

Einstein has said, the right to search for truth implies also a duty. One must not conceal any part of what one has recognized to be true.

DR. NIGHTINGALE: Thank you, Dr. Bodenheimer. I want to thank the entire panel for their presentations and for setting the stage for our important deliberations. Certainly many important issues have been raised, and we are ready to go. Thank you also for staying on time.

We are going to reconvene at 10 minutes to 11. Thank you.

(Brief recess.)

DR. BALDWIN: I'd like to begin the morning panel. We are closing the doors. I'd like you to take your seats, please. This is the first of the panel discussions for today's and tomorrow's meeting.

Now, I am very pleased to see the level of discussion and the fact that it is hard to get you back in the room, because one of the purposes of a meeting like this is to give you a chance to talk about the issues that you hear here, to talk amongst yourselves and then to prepare yourselves for the discussion in the breakout groups.

We have amassed I think a wonderful series of speakers for you, but this meeting would not achieve its purposes if you just came, sat in the audience and listened. This really is a very interactive meeting, and that is both in the breakout groups and in the fact that we have built in enough time for people to visit and talk about these issues. We are raising very difficult issues, and I think they do take the kind of deliberation and talk that we can afford at this meeting.

This first panel represents the view of national organizations. A theme of this morning's speakers was that the kinds of issues we are dealing with here are the ones that require partnerships. They are not going to be solved by the federal government alone. They are not the responsibility solely of individual investigators. They really are a partnership from us at the federal side to the academic institutions, the commercial organizations, the individual researchers, the IRBs and of course the participants or the subjects in research. We are trying to find ways to negotiate that partnership.

National organizations are especially important, because they are a framework, if you will, for many of these different actors. They provide organizations of the medical schools, for example, or of pharmaceutical companies. They provide that kind of organization that is a comfortable home for educational activities. They are a wonderful venue for developing shared values, shared beliefs, shared practices and best practices and ways of doing things. They are a way to go to your peers and say, I am trying to do it right. Do you have any clues, can you help me talk with others who have dealt with these same problems? So I feel that whether it is the very specific things, like how we share educational programs, or the more amorphous things, like how we develop shared values and norms and practices, national organizations are very important to us.

This meeting really is going to focus on that intersection of financial conflict of interest issues and how IRBs function. That is our overarching theme here. So our speakers are going to talk about those issues from their different perspectives.

I think there are three ways that we can branch out from where our existing policies are. We have policies regarding objectivity in research or financial conflict of interest for individual investigators, and we have certainly set the stage for dealing with that for IRB members; is there something more that needs to be done?

We have also looked at whether IRBs -- whether it is enough that IRBs are familiar with their organization's policies when they exercise their due diligence; is there something more that they need to do relative to conflict of interest? I guess the final area that I think is a stretching of our boundaries right now is that our current policies primarily deal with conflict of interest for individual investigators as opposed to institutions.

Now, of course that is already a situation for commercial organizations, but for our academic partners that may be a bit of a new issue. I think those of us who were involved in the development of the PHS regulations for individuals understand what a difficult process that was, and how much discussion that takes with a community in order to find a way to develop a policy and practices that are understood by all and can be adhered to. I think that would be the same if we proposed to stretch any of these boundaries, too. This is really the first step in that discussion.

We have a very nice varied panel today. For those of you who are very good in math, you may have noticed we were supposed to have five speakers; we only appear to have four. That is okay. Dr. Seffrin has called from the cab, so he will join our program in progress, not a problem.

Now I would like to introduce our first speaker. Dr. Mark Brenner is at Indiana University, and he is representing the Association of American Universities. His academic career began at the University of Minnesota, where he rose up out of a science background into administration and became vice president for research, before moving on to Indiana, where he has had even larger experiences and responsibilities with the oversight of the research at a major university, and is also chair of the executive committee for the IRB.

So I think Mark brings a really good, broad perspective for this, and I am happy to have him come and share an AAU perspective with us.

DR. BRENNER: My friends in the room know I can hardly operate and speak to a room without a Power Point visual to help me.

I am really honored and feel very privileged to speak on behalf of the Association of American Universities. I think it is probably appropriate to celebrate our 100th anniversary this year. The Association of American Universities is 61 leading research universities. The relevance to this meeting is, we comprise 65 percent of NIH's academic research grants. So the impact of the AAU on NIH and its research -- we think we have a close partnership.

This past spring, the AAU formed a task force. The task force was on research accountability. It was formed in March. It has had several foci, the first of which is the protection of human subjects. I will briefly summarize the actions of that first product.

We are now beginning to position ourselves to address conflict of interest. The report that is the product of our efforts on human subjects was the report on university protection of human beings who are the subjects of research. It was issued this past June.

The task force has 15 members, two staff and one ex officio, including six presidents and other individuals, myself among them. Essentially, the overarching statement is, the Association of American Universities believes it is vital for leaders in the academic community to ensure that research conducted on our campuses meets the highest ethical standards and promotes public health.

Our report -- and I am not going through all of it -- has a background statement, guiding principles, recommendations and conclusions, a call to action and promising practices. I think it is probably unnecessary to go through all of this. Clearly we have long been committed to the statement that we wish to have our administrators, faculty, researchers and staff all participate in maintaining protection of human subjects.

Specific recommendations that have come out of our report: to increase vigilance by the senior university management; to promote training and examination of all staff and researchers involved in human subject research; to strengthen the IRB training itself and provide appropriate support for its activities and operations, thereby appropriately increasing resources as necessary; and finally, to ensure public accountability by increasing information to the public about the universities' systems for protecting human subjects.

Now, included in our report is a section entitled promising practices. This really addresses managing individual potential conflicts of interest. This is just one of a number of promising practices that is relevant to the discussion today. We note that on some of our campuses, an increasing number of our campuses, our IRBs include, for researchers submitting protocols for review and consideration by the IRB, questions about conflicts of interest.

So we are saying that when a scientist submits a protocol, they also need to disclose potential conflicts of interest. We already do that as mandated for our conflict review committee, but it now should also be going to an IRB. The IRB and conflict review committee should be reviewing these potential conflicts of interest.

The IRB then should determine if the consent form should disclose a potential conflict of interest and what other appropriate actions should be taken; should the management practice be further refined to assure the maximum protection of human subjects. That is essentially what we are speaking of.

Dr. Baldwin asked me earlier today, what about disclosing conflict of interest for our individual members of the IRB? We didn't address that explicitly. The reason is, that is standard practice now. So it is normative for all of our institutions to expect our IRB members to recuse themselves from decisions in a panel discussion if they have a conflict of interest, whether it is technical or financial.

What are the next steps then for the task force to be addressing conflict of interest? The task force will more formally convene on this subject in October. So we have just had a framing discussion by some of the members on where we are going. It is likely though that we will address the following points.

We more than likely will articulate general principles regarding conflict of interest. We more than likely will review the existing definition of conflicts of interest and see whether, for our institutions, we wish to modify it, and whether what we are currently using needs to be updated.

We wish to review existing management practices related to community conflicts of interest. Specifically on this point, we have been living with policies and procedures that have been required of us for the past five years. We implemented those policies and procedures based on well reasoned principles. But at the time when we implemented those policies, we did not really have a lot of experience behind us.

We have now had five years to reflect upon our experience with those procedures, though we have done so largely on our own, because many of these issues have been handled confidentially, case by case, at our own respective campuses. It is time now to see if we can share with each other, recognizing the confidentiality of the issues, but there are probably some principles beyond that, to determine: are there some better management practices that some institutions have found, and are there some management practices that have turned out to be problematic -- perfectly logical when we conceived of them, but we didn't have the experience behind us.

Secondly then, we need to be thinking about principles related to institutional conflict of interest. This is a different perspective on the institution's role: as institutions have become more promotive of entrepreneurial activities, in some cases in our technology transfer activities, institutions have taken equity positions. Does that then affect our objectivity in the review process? It is an important question that needs to be addressed.

There are guiding principles that we clearly should be thinking about. We are committed to ensuring integrity and objectivity from many perspectives. Clearly we wish to do this for our research activities and our reports. We wish to ensure the integrity of our researchers. Equally, we are committed to the highest standards for the educational experience of our students, and conflict of interest should not be allowed to affect their educational outcomes. We clearly must ensure the integrity of our institutions. Without that credibility in the public sector, we would be lost.

We clearly need to assure that the integrity and objectivity of our work does not affect our research subjects in a negative way. We must assure that our collaboration with our research partners, whether they be other institutions or the private sector, is done at the highest level of integrity and objectivity.

We also must assure that our sponsors, including NIH, will continue to be confident in our work product.

So there are important considerations as we move forward. Specifically then, universities are committed to advancing the frontiers of knowledge within a framework of the highest ethical standards. Research universities, consistent with the Bayh-Dole Act, are committed to the transfer of technology to the commercial marketplace. Finally, new regulations and policies need to recognize that there is a vast array of different organizational, cultural and operating practices among our research universities.

In a survey that I have initiated, and I have some partial results, I have been looking at management practices on conflict of interest at our various research universities. I am learning that in fact, the management practices represent a large array of different approaches. It is not in my mind a problem. In fact, it reflects the astute management of the respective institutions to fit important principles to the local culture and practice of the institution, to make sure that it works in their respective setting.

The point then is, as we move forward, we must be careful not to over-regulate on the specifics of how institutions manage conflict of interest, but continue to articulate the overarching principles that we should be guided by.

What are possible products of our task force activity? Clearly, I think one can assume that we will have a formal report similar to the one that we issued on human subjects. The survey that I mentioned we hope to have as a product that will guide us to more fully understand how we are currently operating, and are there some practices that we should promote and share with each other.

One way of doing that will be to organize workshops and conferences on managing conflict of interest. We do this now informally. I think it is probably appropriate that the AAU institutions come together on this very important subject.

One of the reasons, as I have already been implying, but it is my mantra, is to identify promising practices for managing conflict of interest. We need to promote the creation of educational materials on conflict of interest and share them. We are all challenged with coming forth in a timely manner with educational materials. I strongly promote that we do this in a collaborative way, to use the best talents of our institutions, and not simply duplicate efforts independently of each other.

Finally, to create principles regarding institutional conflicts of interest. I think as I stated before, I want to reinforce, this is an important issue. At this point, I don't think we can come up with specifics, but it is probably well to consider what are overarching principles that we need to put in front of us to guide us through this important issue.

Thank you very much for the opportunity to share comments.

DR. BALDWIN: Thank you, Mark. Now we are going to move from an academic perspective to an industry perspective. Angus Grant is our next speaker, from Aventis Pharmaceuticals, where he heads gene therapy trials. He has past experience at NIH and at FDA, and is here representing the Biotechnology Industry Organization.

DR. GRANT: I was asked to represent the Biotechnology Industry Organization, and accepted. I wasn't sure if they chose me because everybody was on holiday, but seeing the size of this crowd, not everybody was on holiday. So a bit more people than I expected.

After I agreed to do this, BIO told me that they don't have a position. They are working on one. So I thought, this is great, I'm representing no position. So I talked to them and I said, what should I do? Then we thought, there weren't a lot of industry representatives, so I thought I would give you a little bit of insight as an industry representative. And being that I am in the field of gene therapy, which has lately come under pretty intense scrutiny and been a catalyst for certain meetings, I thought I would give you a little bit of insight into what we do.

We are here today talking about financial conflicts of interest. So from the industry perspective, a couple of things need to be kept in mind. This was pointed out a few minutes ago.

Industry is in the business of developing, licensing and selling drugs. To date, I have been involved in some projects in which the drugs turned out not to work. The company has decided not to develop them. I hate to break that to you. That may seem like a surprise.

We tend to pick drugs that we think might work and might be marketable. Otherwise the investment is not there. My job as a regulatory affairs individual is to try to ensure that we comply with laws and guidelines. In drug development, I am most often involved in late phase development. While in your booklets you have a number of documents that deal with the investigator as an academic investigator --

(End of Tape 1, Side 2.)

While you are looking at these various guidelines, again focusing on the FDA guidelines, because most of the clinical trials in big pharma when you are heading towards licensing, are funded by the company and not by HHS. So we have to fall under the FDA rules.

What I have found is, most of our investigators are generally free from FDA-defined conflicts of interest, because we are paying them to run the studies, but we are not paying them for other things -- not in all cases -- but what we are seeing now is that the financial conflict rules that the FDA has put into place are a big aid in helping us as regulatory affairs staff convince our marketing teams what they can and cannot do with their funds.

So we go through a regulatory compliance process. It contains three elements, basically: to educate the investigators, to obtain documentation, and then to evaluate the utility of these investigators.

To educate the investigators, we start off in the beginning -- as was mentioned earlier, the FDA rules require that you submit documentation just before you file your license application. That is what the FDA gets, but as an industry development team, we have to try to assure that by the time we get to that submission stage, we don't have investigators with financial conflicts of interest that will compromise our data and compromise our ability to get a drug licensed.

So we discuss this up front with investigators, and in fact ask them to sign a contract agreeing to disclose financial conflicts of interest and agreeing to provide the paperwork for the FDA.

We then hold investigators' meetings and hold training sessions. The NIH has recently put out some guidelines indicating that it would like to see more formalized and established training for investigators, and possibly certification, in order for them to receive HHS grants and funds to do clinical trials. We also provide similar kinds of training, and we will probably take some tips from this new NIH initiative to do what we can to make sure our investigators understand where the boundaries are in financial conflicts of interest.

We also then go to the individual sites and do what we call site initiation. Our clinical monitors, our field monitors, actually see the site, sit down with the investigators and review this information yet again.

We also then have to check with our own internal records, because as you know, drug development is a huge and complex process, and large companies have many different arms of the company, dispensing funds for different activities, whether it be teaching symposia or the operation of clinical trials.

So we go to our records to also check and make sure whether or not we have sent funds to an investigator which might compromise the data set at the end of the day. Then we collect the same documentation, and our field clinical monitors review that documentation against the documentation that has been generated, and make sure that it all matches what the investigators have provided. At the end of the day, we hope we have got a high quality data set that we can submit to the FDA, and the FDA won't take action, which is fully within their bounds, based on the financial conflict of interest rules that they have recently passed.

The documentation process, just a quick review. It is the signed contract in the beginning, and then at the end is the signed FDA form 3455, if necessary.

Then the education of the investigators, the signed agreement to comply, the evaluation of the information provided. When we evaluate an investigator, we want to compare what they have told us to our records, and if we find any discrepancies, then that may be an opportunity for us to choose otherwise.

We then evaluate the relationship between the investigator and the clinical trial subjects. We evaluate the impact on the study, even if there is no apparent direct conflict. One of the issues is the size of the data set that the individual investigator may be responsible for. For those of you who have read the FDA guidelines or rules, you can understand why there might be sensitivity if you have one investigator who is responsible for a large set of subjects. Then we make the decisions whether to include or exclude investigators.

The new FDA rules have really increased the attention to potential financial conflicts. Decisions about including or excluding investigators are now being made that might not have been made previously. The rules apply only to the investigator or the sub-investigators; they do not cover the institution itself.

From the standpoint of regulatory oversight of compliance with financial conflict of interest rules at large companies running large multi-site trials, it provides some protection for research subjects. The FDA financial conflict rules will have an effect, but at this stage, my position is that it is predominantly in late-stage development.

Thank you very much.

DR. BALDWIN: Thank you. We have gone from an academic perspective to an industry perspective, and now our next speaker, David Korn, will provide somewhat of a tie-together of some of these themes.

I think David probably needs no introduction to you. He is currently senior vice president for biomedical and health sciences at the AAMC and a past professor and dean of medicine at Stanford. I know these are issues that David has been giving considerable thought to, and I look forward to hearing his views.

DR. KORN: Well, good morning, everybody. I want to say that what I am going to talk about today has really been very much influenced by my years of experience in a biomedical institution renowned for its research and faculty entrepreneurship, situated in the very heartland of biotechnology and venture capitalism, and known for its aggressive technology transfer.

The conference has been organized by the Department around six major questions, to which the AAMC has submitted a formal letter of response, which is available on our website. I believe there are or will be copies available to you here. In the afternoon session, my colleague Alan Schipp will actually present briefly some of the highlights of that letter, so I am going to use my time not to deal with the letter, but to offer some personal observations that I hope will be helpful.

First, I want to remind you all of the existence of three very thoughtful and helpful monographs on the topic of conflict of interest and commitment in academic medicine, which were published in the early '90s by the AAMC and by the AHC respectively. In preparing for this meeting, I reviewed those documents and found them as insightful and compelling now as they were when first disseminated.

So I would argue that a sound framework of recommended institutional policies and procedures for managing and mitigating these kinds of conflicts exists, and I would commend them to you. I agree that the framework may well need some fine-tuning to bring it into conformity with the remarkable changes that have occurred in biomedical science and technology in the last decade, but I do suggest that we do not have to start from scratch, and I find that comforting.

Among the important points emphasized in those documents were several that bear repetition and reflection. First, conflicts of interest and commitment are ubiquitous in academic life and indeed in all professional life, and conflicting pressures inherent in the academic milieu -- for example, for faculty advancement, obtaining sponsored research funding, winning the acclaim of one's professional peers, competing for prestigious research prizes and yes, even the desire to alleviate human pain and suffering -- all may be more powerful in influencing faculty behavior than the prospect of material enrichment.

These intellectual conflicts tend to be amorphous and are not of much concern to the public. But they are widely recognized within the Academy, and institutional policies and procedures, as well as scientific review procedures, have long been in place to try to manage them.

In contrast, financial conflicts tend to be very discrete and quantifiable, but they are often unrecognized unless specifically disclosed. Moreover, financial conflicts are well recognized by the public, well understood by the public, and for this reason I think pose a special vulnerability and risk to our enterprise that demand our attention.

Second, it follows that although these conflicting pressures are diverse in origin and may be subtle, their oversight and management, at least for academically conducted research, are primarily the responsibility of universities and academic medical centers. The federal government naturally has a legitimate interest in this issue, but I would suggest that interest is circumscribed and should be limited to, first, ensuring itself and the American public that research involving human subjects is performed in compliance with the requirements of the regulations, 45 CFR 46 and its homolog in the FDA regulations.

Second, that federally supported biomedical research is conducted and reported with integrity, and third, that data that buttress decisions that affect the health of the public are sound and trustworthy.

Accordingly, the federal involvement with conflict should continue to be proportionate and carefully tailored.

Third, it also follows that since these conflicts can never be eradicated from professional life, their existence has to be accepted and not equated with scientific misconduct. This last is a very important point. The presence of conflict of interest or potential conflict of interest does not equal scientific misconduct. Unfortunately, the history of Congressional and media interest in this issue arose from several well-publicized medical research scandals in the early and mid-1980s, in some of which scientific misconduct and blatant financial conflicts of interest seemed to be inextricably intermingled.

Ever since, whenever the public has become aware of an allegation of scientific fraud or the occurrence of a major adverse event in clinical research, the search immediately begins to ferret out even the whiff of financial conflict of interest on which to lay the blame. Indeed, it is the publicity about recent tragic deaths in gene transfer trials that have re-aroused the Congress, and is probably responsible for this meeting.

Fourth, I think a remarkable feature of U.S. science policy during most of the post World War II era has been the relatively light hand of federal oversight of the processes of scientific research, the deference shown to scientific and academic self governance and implicit trust in the integrity of scientists.

It has helped that the vast majority of federal funding for basic research has flowed through universities, which have benefited enormously from their public image as independent and disinterested producers and arbiters of knowledge.

It is instructive to recall a principle that was embodied in a 1915 declaration of the American Association of University Professors, quote, all true universities whether public or private are public trusts, designed to advance knowledge by safeguarding the free inquiry of impartial teachers and scholars. Their independence is essential, because the university provides knowledge not only to its students, but also to the public agency in need of expert guidance, and to the general society in need of greater knowledge. These latter clients have a stake in disinterested professional opinion, stated without fear or favor, which the institution is morally required to respect. End quote. I would only add, which both the institution and the faculty are morally required to respect.

Today, I fear that the image of academic virtue and the public's readiness to trust may both be shaky, as major research universities and academic medical centers struggle to meet the ineluctably contradictory demands from government and industry that they become engines responsible for regional economic development, while at the same time assiduously avoiding even the slightest suspicion that their growing financial entanglements with industry might distort their conduct and reporting of research.

Nowhere is this dilemma and exposure greater than in academic medicine, in which the breadth, depth and intensity of commercial engagement have grown especially rapidly and very visibly.

I do not suggest that the commercial exploitation of faculty discovery is limited to biomedicine -- far from it. But when faculty or institutional conflicts of interest occur in computer science or microelectronics or the law school or business school, they don't make front page stories in the national media and in headlines on the 6 o'clock news.

I suggest that neither our biomedical science professions nor our academic medical centers have yet stepped up to the challenges posed by this profoundly changed relationship between the Academy and the world of commerce, nor have they devised mechanisms that would successfully enable them to meet these conflicting public demands, while remaining free from suspicion and protecting their image of virtue from blemish.

This failure is particularly dangerous, when one considers that biomedical research has flourished because of public trust in the good of the enterprise. That trust is nowhere more fragile than in medical research involving the participation of human subjects, where even the perception of faculty or institutional conflict of interest cannot be tolerated.

Now, I acknowledge that this sets a very high standard, more stringent than that faced by any other faculty, or indeed by almost anyone else in society. But the future of the enterprise demands that we meet it.

In the time that is left, let me just make a couple of comments about the substance of today's agenda. The oversight of financial conflicts of interest in federally sponsored research has been accomplished by federal guidance rather than prescription through the mechanism of institutional assurance.

Some of us remember back in the late '80s or 1990 when the first proposed rules were published for comment by the Department in response to their new Congressional mandate. That proposed rule was roundly denounced by the scientific community, academic medical centers and universities for being overreaching, unacceptably prescriptive and intolerably intrusive into matters that traditionally were reserved for academic self governance.

In the face of that outburst, then-Secretary Louis Sullivan withdrew the proposed rule and subsequently issued the more general guidance that is now in place.

All responsibility for oversight and management under this guidance is deferred to the institutions. I think it is legitimate today to ask whether, given the growing entanglement of academic medicine with industry and the sometimes very large financial interests held by investigator faculty and their institutions in startup ventures founded on those very same investigators' inventions, disclosure alone is still sufficient to protect the public trust.

There are two things we have to think about here. First, if disclosure is necessary, to whom, and in what detail? To IRBs only? To prospective research subjects in the informed consent process? To yet others?

We have to keep in mind in deliberating these questions that there is a research literature on the informed consent process itself that calls attention to some of its limitations. One of these is the amount of information that a prospective subject can really integrate and understand, especially when that subject is also a scared patient.

We have got to take care that we not unduly overburden the process with too much information, which, as Yale professor Robert Levine once so aptly put it, could subvert a process designed to elicit informed consent into one that elicits uninformed denial.

With respect to this point, I will just tell you that back in 1982-83, a select committee of Stanford medical school faculty made recommendations that the university adopted that changed in an important way the routing sheets, the application for IRB approval and the informed consent protocol used in the medical school by requiring disclosure of consulting arrangements, line management responsibilities, substantial equity holdings or other significant financial interests in proposed research sponsors, vendors or subcontractors. The detail to be submitted to the IRB was extensive and full disclosure of consultative or financial relationships was required in the informed consent process as well.

To the best of my knowledge, these requirements have not impeded the growth and vigor of biomedical research at Stanford over the last 17 years.

Second, I would suggest that disclosure alone may no longer be sufficient, and argue that some forms of financial conflict that have come to public attention may go beyond the pale of acceptability.

To this point, I commend the recent decision by Harvard Medical School dean Joe Martin to reaffirm his institution's policy on financial conflicts of interest, which is arguably one of the most stringent today in academic medicine. Under Harvard's policy, financial conflicts are stratified, and one group, defined by the magnitude of the financial interests, is unacceptable.

We have got to recognize that the stringency of financial conflict of interest policies varies substantially across academic medical centers, and to an extent that I suggest is worrisome and may be dangerous. Aside from this variation in the policies themselves, it is also fair for this conference to ask how diligently our institutions are enforcing their own policies on the management of these financial interests.

I don't have any data that bear on this issue, but I would suggest once again from the public record that there may be some dangerous inconsistency here as well, and I also feel personally that some of the reported lapses in institutional oversight, or better, in institutional judgment, are difficult to understand.

Finally, I note that following the loud protestations in 1990 about Secretary Sullivan's proposed rule, our professional societies have become curiously silent on this issue, which I think is regrettable.

Here, I wish to recognize the position statement recently issued by the American Society of Gene Therapy. I won't go into it because I know Dr. Woo is here, but let me say that whether one agrees or not with the details of their proscription is not the point. The point is that that society has taken an unambiguous and courageous stand. Would that others would follow their example.

I conclude by suggesting that if our leading academic medical centers and professional biomedical societies cannot come to reasonable agreement in defining the kinds and limits of financial interests that are prima facie unacceptable in research involving human subjects, and if we cannot convince the government and our public that we are being diligent in meeting these responsibilities, we are inviting the cognizant federal agencies as well as the Congress to come in and prescribe them for us.

I hope these comments will be helpful in further discussions today and tomorrow, and I thank you for the opportunity.

DR. BALDWIN: Thank you, David. I think that was an excellent statement of how national organizations actually are really significant players in this activity.

Our next speaker, I am happy to say, is John Seffrin. I'm going to stay in the order that is in your books so that you can follow along here. We are happy to see that you are here with us. We understand that you had travel difficulties this morning. Dr. Seffrin is representing the National Health Council and he is CEO of the American Cancer Society. His background in American health education I think also makes him ideally suited for this discussion, because much of what we are doing hinges on the effective education, not just of our research participants, but of the investigators and the institutions as well.

DR. SEFFRIN: Good morning, and thank you for your flexibility. Getting up at 4:00 in the morning and getting to the airport and finding out that your 7:30 flight has been cancelled isn't my idea of a good way to start the day. But I am pleased that I was able to make it.

Let me make an important point. I am John Seffrin, and I am the immediate past chairman of the board of directors of the National Health Council, which has over 100 members, 43 of which are voluntary health organizations that in the aggregate represent tens of millions of patients. I also serve as the CEO of the American Cancer Society, which is not only the largest single voluntary health organization in the world, but is the only major voluntary health organization that sees as its job looking out for patients and that is in virtually every community in America, nationwide.

The society itself has set some very ambitious goals, as an example, in the area of cancer to reduce mortality by 50 percent in the year 2015, incidence by 25 percent, and concomitantly, to improve the quality of life of everyone facing cancer.

I share that with you, because we recognize that those goals, ambitious as they are, are achievable, but not without well controlled clinical trials.

I will make only three points, and these points are on behalf of patients and the organizations I represent here today. Indeed, I think they are straightforward and relatively simple, although I appreciate that they represent very complex issues that will be dealt with in this conference and many others in the foreseeable future.

The first point is simply that we believe, speaking here again on behalf of eight and a half million cancer survivors, we believe that well conducted clinical trials are essential to further improvement in positive health outcomes for patients. It is not an option; it is certainly highly desirable; but the key point is that they are essential.

Two quick examples, so that we don't lose sight of the fact that not only are we looking for major improvements in therapy, major steps forward, but even incremental changes are critically important to the public's health. Look at childhood cancer, perhaps the best example one could point to: over a period of three decades, perhaps no major step forward, but incremental changes have taken a disease that was almost always fatal to one in which you now get 80-plus percent long term remission.

In addition, from the patient's perspective, well controlled monitored clinical trials are critical for hope. Clearly in the area of cancer, there is probably no better example, but it is also true for other serious and life threatening diseases. It is extremely important for the quality of life of patients to be able to know that there is real hope, and clinical trials offer that.

Indeed, the American Cancer Society is on record as wanting to expand from only three percent of cancer patients involved in clinical trials to 10 percent. We believe, and it is one of our four top advocacy priorities, that those providing care should cover the costs of well-designed clinical trials, be it managed care, Medicare or whatever.

The second point, that clinical trials must be designed and conducted in a way that puts the best interest of the patient first and foremost. In that statement is a concept of transparency, that it is clear to all significant parties in the process that patients have been put first.

The National Health Council actually has a trademark, because its major extension for the patient community is a program called Putting Patients First. What does that mean in practice? It means something importantly different from some other considerations: science and discovery are of great importance, but they are subordinate to what is best for the patient now facing a serious or life-threatening disease.

This does not mean, of course, no risk, or that it is risk free or even risk averse. But it does mean at a minimum a number of considerations that you are all familiar with, such as informed consent that is clear and well understood by the patient; review and approval by a qualified IRB, institutional review board; an infrastructure of proper facilities and human resources, with personnel experienced and trained to deliver the kind of treatment envisioned by the clinical trial; assurance that no other, non-investigational treatment is clearly superior; and available clinical and pre-clinical data sufficient to provide a reasonable expectation that the protocol will be at least as efficacious as others available.

Now, the last two are terribly logical. But I would suggest that when you factor in the notion of the patient's perspective and quality of life, answering those questions is not always as easy as one would hope it would be.

A few suggestions, falling far short of a Miltonian modest proposal, but some thoughts to leave with you today. As we move forward, how can we improve a situation that is good, that we want to make even better, and make sure that we don't end up with a cynical public unwilling to participate in clinical trials? I suggest that we move quickly to take action on the 17 recommendations put forward by the Inspector General's office at DHHS -- as I understand it, perhaps only three have been acted on to date over the past couple of years -- to analyze those recommendations and, where appropriate, move with all due dispatch.

Second, consideration, serious study of the IRB system, loosely bound together as it might be, with consideration of consolidation and standardization, so that we truly can have an accountable system. I fear that in this area, we may have the problem of being measured by the exception rather than the rule, if we don't have a system that brings everybody up to high standards.

Significant increase in funding for IRBs. It has already been mentioned by others, but I will say that all investigators and institutions should assiduously avoid financial conflicts of interest. I was glad David mentioned it. Real or perceived, there needs to be sensitivity to how you can avoid the conflict, but also make it transparent so people can see that efforts have been made to avoid conflicts, if indeed it is possible to again eradicate all conflict.

Also, education of the public, something the American Cancer Society knows a little bit about, is extremely important in this issue if we are to get all that clinical trials promise for us in the future. It isn't just the difficulty of reimbursement and the complexity of clinical trials that explains why only three percent of patients facing cancer are on a clinical trial. Educating the public about how important clinical trials are, educating the public about benefits and risks, and educating the public about what a good clinical trial looks like when you see it, without having to make them investigate it, are all worthy issues.

We learned an important lesson in the American Cancer Society, 87 years old though we are. That is, when we opened a national free call center four years ago, we found that the interest of the public in reliable information is almost insatiable. We are now answering calls at almost the rate of a million calls a year, to give you some idea about the interest of the public in these issues. It turns out, flying here today, the woman sitting next to me saw over my shoulder what I was working on, and I realized that, as an official with the IRS, she knew an awful lot about clinical trials.

Number six, to develop a system for continuing oversight and evaluation of IRBs. I ask the investigators in the room to contemplate the following: why would we not have as rigorous a review and ongoing evaluation of the outcomes and results of IRBs as we would expect of individual investigators seeking grant funding? So in terms of accountability, a way in which we can monitor the effectiveness of IRBs shows the public at large that every reasonable effort is being made to protect their interests.

In conclusion, and my final point, we must put patients first in any discussion about or possible reform of the clinical trials system, first and foremost because it is the right, the ethical thing to do. But also, it is in the long term best interest of the viability of human clinical trials as a necessary tool in health science.

Thank you very much.

DR. BALDWIN: Thank you. Finally, our last speaker now is Dr. Savio Woo of Mt. Sinai School of Medicine, where he is a professor of medicine and the founding director of Mt. Sinai Institute of Gene Therapy and Molecular Medicine, and who is here to represent the American Society of Gene Therapy, whose policy you have already heard alluded to. Savio?

DR. WOO: Thank you very much, Wendy. I'd like to express our appreciation for being invited to come and address this very distinguished group, and talk about the issues of protecting patient safety in clinical studies, including gene transfer, insofar as they relate to financial conflict of interest.

First of all, let me introduce, for some of you that don't quite know us, what the American Society of Gene Therapy is. It is a very young society that was founded only in 1996. At that time, we had about a thousand or so members. It was founded on the principle of putting together a professional, nonprofit and voluntary organization to promote the science of using genes as medicines to treat disease, and to promote education in this new discipline for future investigators.

Today, we have over 2500 members and the society is doing very well, despite the tragic incident that occurred during the last year and the ensuing events, which were amply reported in the press.

Now, I would like to take this opportunity to try to reconstruct what happened last year in September, when a patient died at the University of Pennsylvania, through the subsequent events that led to the meeting today: what the public's responses have been, what the regulatory agencies' responses have been, and also what the position of our society is, in order to address some of these concerns and correct past deficiencies.

Back in September, when a patient undergoing a gene transfer study died as a result of the gene transfer vector, that was in September of 1999, at that time it was obviously reported by the press, and the scientific and the clinical community looked at that report as, my goodness, that is a very tragic event and it is very serious, because this is the first time that we have seen that the gene transfer vectors can cause such severe adverse events.

As it was reported in the public press, as we look back now, the response of the public has been -- I would characterize it as being shocked by such a happening, but then it is not so shocking, because the public does understand that this is medical research and it does carry some level of medical risk.

So the thing kind of simmered along for a few months until the RAC meeting that occurred in December last year, where the FDA revealed that there were protocol violations by the investigators.

At that point, the events took a drastic turn for the worse, because now it was obvious that there were, at least allegedly, rules that were broken. The public understood that a gene transfer vector may be complicated and difficult to understand, but breaking rules in conducting clinical studies is obviously a no-no, because that puts the patients at risk of suffering from these events.

Subsequent to that, there was a report that the serious adverse events that were supposed to be reported to the NIH were somehow not reported by the investigators, even though all of them were reported to the FDA as they were supposed to be. The tremendously high noncompliance rate in terms of reporting to the NIH was astounding. Only about six percent of the reports came in, versus the roughly 94 percent that did not.

So we as a professional society are obviously very concerned with these kinds of events, and we are asking the question, what happened here? Why is it? Is it so many of us who are intentional rule breakers or are there circumstances that led to this kind of noncompliance?

I would like to offer two explanations to you. They are meant as explanations, and they are not meant to be excuses, because rules are rules and guidelines are guidelines, and when they are out there, we need to follow them.

One of the reasons that all of these events were not reported was because when the RAC's authority was changed a few years ago by then-Director Dr. Varmus, there was a lot of confusion in the community: the investigators thought that because the RAC no longer approved the protocols, there was no need to report the adverse events to the RAC; we would only continue to report them to the FDA. That is a wrongful assumption on the part of the investigators, and that is something that we as a scientific community representing the investigators should take responsibility for.

Also, another reason is that people quite often say, the RAC is the Recombinant DNA Advisory Committee. If there are adverse events related to the gene transfer procedure itself or the gene transfer product, obviously we should report them, but if any of it is not related, then we will only report to the FDA. Again, that is a wrongful assumption that needs to be corrected. I must say that at this point, with the heightened sensitivity to this particular issue, I am glad to hear from the NIH these days that the reports have been coming in in a timely fashion and the compliance rate has really skyrocketed.

So during this time, the federal agencies and the Department of HHS have also issued new initiatives that have to do with how to better conduct clinical studies: monitoring plans for regulatory issues as well as for data and patient safety and so on.

We as a society have come to the position that we embrace these kinds of new initiatives wholeheartedly, because in our view, if we could improve the quality of the research conducted at a clinical level, ultimately it is going to produce better results that can be utilized to support future developments in new therapeutics for patients, and that of course benefits everyone.

So all of these events were happening, and I would say the straw that broke the camel's back was when the report came out that there was this financial conflict, potential financial conflict, in the trial that might have some causal relationship to the adverse event that took place at the University of Pennsylvania.

Whether that is accurate or not, I am not here to assess whether it is real or not real, but it is very important that we deal with the perception of financial conflict of interest.

The public may not understand what a gene vector is. This is high technology science. When we talk about genes, people are very concerned, not just because genes are powerful molecules. People appreciate the power of the gene, that it can do a lot of good for our society in the future, but it is also concerning to the public because of the power of the gene. So we need to deal with the perception side of it.

When the story broke that the investigators who were conducting the studies -- studies that may be sponsored by commercial enterprises -- may have equity positions in the commercial enterprise, it was very apparent to the public that this is a closed loop. If the results come out positive, then certainly the commercial enterprise would benefit, and so would the investigators.

That can potentially put the patients' interests and the investigators' interests on opposite sides of the fence. That of course is a big no-no. We have seen that happening with the HMOs, where the HMOs can dictate to the physician whether to treat patients or not to treat patients for this disease or that indication.

Here it is the same issue. If the investigator's interest and the patient's interest are not on the same side of the fence, there is just no way, in our opinion, that a clinical study can produce quality results that can support future quality development.

So with that in mind, the board of directors of the American Society of Gene Therapy decided to take action. After much deliberation, we came to the conclusion that we needed to issue a policy that deals with this most blatant form of financial conflict of interest. I'd like to read that for you.

Now, this particular policy has been adopted by the board of directors of the American Society of Gene Therapy, and it was then immediately e-mailed to all of our members, and then it was published in Molecular Therapy, which is the official journal of our society. The same policy was also sent to the leadership at the NIH, the FDA, as well as the HHS.

Our policy states, and I quote, in gene transfer trials, as in all other clinical trials, the best interests of patients must always be primary. International, national and institutional guidelines on standards of care must be vigorously followed, approved protocols strictly adhered to, serious adverse events promptly reported to all appropriate regulatory and review bodies, and relevant federally and institutionally established regulations on financial conflicts must also be abided by. In addition, all investigators and team members directly responsible for patient selection, the informed consent process, and clinical management in a trial must not have equity, stock options or comparable arrangements in companies sponsoring the trial.

The American Society of Gene Therapy requests its members to abstain from or to discontinue any arrangements that are not consonant with this policy, unquote.

So I am here to try to report to you that we are delighted that since our adoption of this policy published in May this year in Molecular Therapy, that this policy has been very well accepted and supported by members of our society, and the leadership in the society have not received a single complaint from our members about this policy.

So at this point, I thank you for your attention.

DR. BALDWIN: Since we are precisely on time, we will have an opportunity for some questions and hopefully answers as well.

I would first like to give the panel members an opportunity, if they would like to comment on any of the other presentations. Would anyone on the panel like to speak?

The ground rules are, we need to have you go to a microphone. There are microphones down here, and I believe we have arrangements for questions that come from the upstairs balconies as well. Just because we can't see you in the upstairs balconies of course does not mean there are not people there who are watching this on video relay.

(End of Tape 2, Side 1.)

Are there questions from the audience from any of our speakers?

Well, fortunately I have something else to say. I am the keeper of the knowledge as to how we get 700 people to have lunch and get back here and start on time. You may be wondering how we are going to do this. We are going to do this by providing boxed lunches. So when you leave here, the tricky part to this is -- it is pretty obvious that when you go out there, there are boxed lunches in the external area that you can just pick up. They are coded, so you can tell what kind of sandwich is in there. But they are also available upstairs. Just having done a meeting like this a couple of months ago, the tendency is to flock to the downstairs, and if people could distribute themselves, that would make this go more smoothly, and it will also make sure that we get back here in time to start the afternoon session at 1:15.

There is a Room C-1, if the speakers would like to get their boxed lunch and gather there, but there are plenty of boxed lunches. So I encourage you to go ahead, get some lunch, get the blood sugar level up, and come back for the afternoon session.

I want to thank all of our speakers.

(The meeting recessed for lunch.)

DR. CHODOSH: If everyone could take their seats, we could begin, and that will permit us to get everything in that we hope to get in this afternoon. Having committed yourselves to being in this lovely place now, air-conditioned, instead of being out on vacation, -- you can't hear? What do I have to do to be heard?

(Remarks off the record.)

I'm Sandy Chodosh. I am the president of PRIM&R, which many of you know about. If you don't, I welcome you to come talk to me about it.

I certainly appreciate the invitation to participate in this very important conference. It is one which I hope will result in some outcome that will be workable for the whole scientific community.

This particular panel is designed to examine steps taken by a few institutions and IRBs in respect to minimizing conflict of interest issues. The outcome of these four presentations should provide the audience with some generalizable methodology for dealing with these problems.

However, with due respect to our knowledgeable panelists, I suspect that you will leave here still having many questions unanswered. If they could give you all the answers, we wouldn't be having this conference.

How conflict of interest in clinical research can or does affect the validity of scientific presentations or investigations, or impose unnecessary risks on human subjects, has really become a pervasive concern for all of us, perhaps because of the attention paid to it by the press, Congress, and I think many of us ourselves. Such concerns prevail largely because there is little data to demonstrate adverse or positive influence, even when conflict of interest exists. Consequently, one can imagine far greater problems than may exist, or underestimate those problems.

Examples of areas of potential problems with conflict of interest are really present at all levels. This panel is only going to talk about institutions and IRBs. But let me tell you, having been in clinical research for about 40 years, it begins at the bottom or at the top, however you want to look at it, but it begins with the patients. Many will come into studies claiming diseases they do not have in order to claim compensation; many of them have become professional, quote, subjects. So even at that level, there can be conflict of interest.

Investigators whose salaries and/or academic positions may depend on the volume of their research -- we have already heard a great deal about that. Institutions have the paradox of needing research monies in order to support their mission, while maintaining a high level of academic integrity. We have heard from industry, who very clearly stated that their business is such that their stockholders expect a profit.

Now, in between all of this are the institutional review boards, the IRBs, who have the difficult job of assessing whether conflict of interest at any of these levels can influence the risks and the results of the investigation. And believe me, they already have more than their share of responsibility in making such decisions about all kinds of things, but to have this as part of it doesn't make their job any easier.

The real question, which I really did not hear answered this morning so far, was, who in all of this should be the judge of what is conflict of interest. This becomes a very difficult thing to sort out. One can say it is the institution's responsibility, but we have already heard that institutions may indeed be in a position of conflict of interest themselves. Well, who does it? I keep looking higher and higher, and I am thankful that most of us have a belief in a higher being, because perhaps that is where we need to go.

I have asked our panelists to describe three things: one, what steps they have taken to minimize this problem; two, how well they feel this is working; and three, their recommendations to improve the process. Not a small order.

Let me introduce the panelists in the order that they will be presenting, and then questions for all the panelists should be kept until they have all presented, and then hopefully you will have questions. In fact, we already have one sitting on my desk, so this is going to break the ice.

Representing the IRBs will be Susan Kornetsky, who is at Children's Hospital in Boston and is Director of Clinical Research Compliance, and Steven Peckman from UCLA, who is the Associate Director of Human Subjects Research. Then representing the institutions will be Ted Cicero, Washington University in St. Louis, who is the vice chancellor for research, and Julie Gottlieb at Johns Hopkins, up the street here, Executive Director, Office of Policy Coordination.

With that, I'll turn it over to them, and we'll see where we go.

DR. KORNETSKY: Good afternoon. I'd like to present to you an approach that the Children's Hospital IRB uses to assist the institution in disclosing and managing actual and potential conflicts of interest.

The approach that we have developed and are still developing presently consists of seven steps. I stress the term assist, because I feel strongly that conflicts of interest in clinical research cannot be the responsibility of the IRB alone. In fact, placing too much responsibility with the IRB will overly burden what may be currently viewed by some as a fragile system.

It is all too tempting to place additional responsibilities on the IRB. After all, many times the IRB is the only place within the institution that even has knowledge of all the clinical research that is being conducted. There is no question that conflicts of interest need to be managed, and as you will see from my presentation, the IRB at Children's and the administrative office play a role, but it is important to emphasize that the role it plays is part of an institutional system.

The approach I present also works because it takes into consideration our institutional culture. Anything you develop must make institutional culture a priority.

We have had certain problems with conflicts of interest for the past 10 to 12 years. The history was that about 10 years ago, one individual IRB member insisted that on a particular protocol there was a conflict of interest. He argued that the IRB could not even get to assess the risk-benefit if a conflict existed. He forced discussions at the level of the medical leadership, the administration leadership, general counsel, research administration and at the level of trustees.

The decision at the time was made that it was not totally the IRB's role to handle the conflict of interest, but the IRB needs to be assured that no actual or potential conflicts exist before the IRB can review the protocol. IRB members at the time commented that they didn't have the experience to determine what constitutes a conflict of interest, and they were also concerned about consistency, given frequent changes in IRB membership. They did feel that it was their decision, and under their purview, to inform subjects of things that were determined to be a conflict.

Step one that was taken was a disclosure of financial interest form that is completed with each and every individual protocol application. A copy of this form is included with your handouts. I think they were on the back table, and all the slides and the form are there.

The form must be completed with each protocol. It is a four page form and submitted to the IRB. The IRB protocol applications contain this form as an appendix. The form is reviewed and maintained by the departmental chairperson. The form is not actually returned to the IRB.

As a result of reviewing the form, any questions that the department chairman has about what might be construed as a conflict of interest are brought to the attention of the president, the vice president of research administration, the trustee conflict of interest committee, or general counsel.

When the chairman signs the protocol application, there is a statement that says, I have reviewed the research disclosure form and determined that no actual or potential conflict of interest exists.

Now, the reason that this particularly works at our institution is the second step -- I'm sorry, the actual content of the form. The form has the investigator list all the personnel on the protocol. The form asks whether the PI or any person affiliated with the project has any financial interest, financial relationship, governance or administrative affiliation with any entity that is providing funds for the research or which has rights to intellectual property resulting from it.

So to summarize, this is a form that gets distributed with the IRB materials. It doesn't get reviewed by the IRB, but the department chairman has the responsibility for reviewing the form and addressing any issues or concerns.
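To make the routing described above concrete, here is a minimal illustrative sketch in Python. It is not the hospital's actual system; all names, fields and escalation targets are hypothetical, and it only models the separation of duties the speaker describes: the disclosure form travels with each protocol, the department chair (not the IRB) reviews it and escalates any questions, and the IRB proceeds only on the chair's signed no-conflict statement.

```python
# Minimal illustrative sketch (hypothetical names, not the hospital's actual system)
# of the disclosure routing described above.

from dataclasses import dataclass, field
from typing import List

ESCALATION_TARGETS = ["VP, Research Administration",
                      "Trustee Conflict of Interest Committee",
                      "General Counsel"]

@dataclass
class DisclosureForm:
    protocol_id: str
    personnel: List[str]        # everyone listed on the protocol
    interests_disclosed: bool   # any financial interest, relationship, or affiliation

@dataclass
class ChairReview:
    escalated_to: List[str] = field(default_factory=list)
    signed_no_conflict: bool = False

def review_by_chair(form: DisclosureForm, questions_resolved: bool = True) -> ChairReview:
    review = ChairReview()
    if form.interests_disclosed:
        # Questions go up the institutional chain, not to the IRB.
        review.escalated_to = list(ESCALATION_TARGETS)
    # The chair signs the no-conflict statement only when nothing is outstanding.
    review.signed_no_conflict = (not form.interests_disclosed) or questions_resolved
    return review

def irb_may_review(review: ChairReview) -> bool:
    # The IRB never sees the form itself; it relies on the chair's signed statement.
    return review.signed_no_conflict

if __name__ == "__main__":
    form = DisclosureForm("CH-2000-001", ["PI", "study nurse"], interests_disclosed=True)
    review = review_by_chair(form, questions_resolved=False)
    print(review.escalated_to, irb_may_review(review))   # escalated; IRB waits
```

The point of the sketch is only the division of labor: the content of the disclosure stays with the chair and the institution, while the IRB acts on the attestation.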

Now, the reason that this works at our institution, why the IRB isn't actually reviewing the form, is that all faculty at our institution must comply with the Harvard Medical School policy on conflicts of interest and commitment that was established in 1996. Children's Hospital itself is an independent institution, but all of the staff members hold appointments at the medical school.

I will not describe the Harvard Medical School policy. It is on the Internet if people are interested in the actual specifics of it. But it is important to note, as was noted this morning, that it is a very stringent and conservative policy which actually defines certain conflicts that are just not permitted.

In accordance with this policy, all medical staff must complete an annual disclosure form, reviewed by an institutional official. So in addition to the form for each protocol, on an annual basis they have another form that they have to fill out under the medical school policy. Conflicts have been disclosed and managed during this process. The investigators are expected to abide by the policies which define the conflicts. So there is really a two-tiered system, one at the medical school and then one with each IRB protocol application.

There was also a determination made that the informed consent must disclose when the hospital participates in a trial in which the hospital -- this is the institution -- holds stock obtained as part of a licensing agreement between a company and the hospital for intellectual property invented at the hospital. This is a very specific situation involving licensing of a product that was invented by Children's Hospital staff.

The policies that have been established when the hospital, the institution, holds equity in a licensing agreement include the following: The hospital must take steps to insure the patients' safety, usually through the requirement of an independent data safety monitoring board; to take steps to insure the validity and integrity of the data with interim analysis; and also to insure that there is disclosure of the equity interest to the IRB. The trial may not be conducted or supervised by the inventor. The PI may not be directly or indirectly supervised by the inventor, and there is also the expectation that confirmatory clinical trials will be conducted as multi-center studies directed by investigators with no affiliation with the hospital.

So this allows some initial testing in new types of technology and development, but also requires that larger confirmatory trials should be multi-center with other PIs.

The statement, included in the informed consent when the hospital does own equity, is as follows. Children's Hospital licenses certain of its research discoveries for research and/or commercial development. From time to time, Children's Hospital receives equity, either as capital stock or as options to buy capital stock, as partial consideration for licensing. In keeping with Children's Hospital policy, you are advised that Children's Hospital has equity in a company that is sponsoring the research and may gain financial benefits if the drug, device or technology that is being studied in this trial proves to be a benefit.

As in all research studies, the hospital has taken all necessary steps to insure research subjects' safety and validity and integrity of the information learned by this research.

As the administrative director of the IRB, I am advised of companies in which Children's Hospital has this type of equity interest. When a protocol sponsored by one of these companies comes through, the PI and I inform the IRB. This policy requires interaction between the IRB office and the intellectual property office at the hospital. You will see that a lot of the things I am presenting require a lot of communication among many components of the institution.

Step four. The IRB has established a policy that prohibits what we consider finder's fees or other incentive payments for recruitment. This policy has been in effect for 12 years.

There is a written policy that prohibits investigators and study personnel from accepting finder's fees. Written reminders are sent to staff not to accept bonus and incentive payments. This is done every year. It includes what are known as bonus payments to finish up or recruit subjects at the end of the trial, when the companies are interested in trying to finish things up.

The IRB also requests to be notified by its investigators if they are approached about such payments, or there is a question as to whether something constitutes an incentive or bonus payment. We as an IRB want to monitor how frequently this is occurring, and the situations that provoke it. We do allow different things like book donations to the library, pizza for the residents, because these are not significant to any one individual and they usually come at the end of the trial after a trial is conducted.

Our policy statement regarding this is as follows. Clinical research is an important part of Children's Hospital's commitment to providing the best quality of care to its patients. As part of this commitment, house officers, staff and other personnel are expected to assist investigators in the performance of clinical research. Providing staff members or hospital personnel with a direct financial incentive for enrolling a research subject has the potential of adding a strong element of coercion to the recruitment and consent process.

For these reasons, under no circumstances may house officers, staff members and hospital personnel be offered or accept a monetary finder's fee or other incentive for recruiting or referring subjects for clinical investigation. Staff, house officers and hospital employees are expected to observe this policy as part of their everyday responsibilities at the hospital.

Step five. The IRB will not release final approval for any industry sponsored research until a clinical trial agreement is negotiated and signed. This agreement is negotiated in a separate office by a specialist in corporate sponsored research. Part of the agreement is the establishment and review of a budget. We want to make sure that the payments for study procedures such as physician and study nurse time are reasonable and are not excessive.

The IRB does not get involved in actually reviewing the budgets for industry sponsored research, but is advised when the process is complete. The IRB does review budgets of federally funded research. We are considering adding additional budget questions to the protocol application for several reasons, such as appropriate billing practices and recovery of research costs.

The clinical trial agreements that are signed between the company and the hospital require institutional endorsement by the Vice President of Research Administration. The budget must be reviewed and agreed upon as part of the process. All funds we receive must be justified. The agreement also addresses publication rights and ownership of the data, so that our staff aren't prohibited from publishing certain things. The agreement requires the establishment of an institutional fund when the money is awarded, and if funds remain at the completion, they are placed in a departmental fund and are not permitted to go directly to the principal investigator.

Step seven. An IRB member who has a conflict of interest, who is involved in a protocol, or who has some other potential or real conflict must leave the room during the final discussion and vote on that protocol. By regulation, IRBs are required to implement this. We are currently considering some type of disclosure form for our members and an agreement for them to sign that specifically addresses conflict of interest, in addition to just having them self-identify and leave the room.

So in summary, we have chosen a model in which the conflict of interest is not currently managed by the IRB, but the IRB distributes a tool to make sure the conflict of interest is disclosed and addressed within the institution. I want to emphasize that this works because of our relationship with the medical school and their stringent policies.

There is reliance on the department chairman, and that may be questioned. Each year we do talk with the chairperson and medical staff exec, and remind them of the significance and importance of this responsibility.

No actual conflicts have been disclosed through the special form that is submitted with the IRB application, but many questions and issues have been raised. It serves as a forum for disclosure and discussion. It promotes a continued awareness to think about what may be viewed as a potential conflict.

You could consider having this type of form submitted directly to the IRB, and there may be people here in the audience or other panel presenters whose IRBs do review the actual forms. For us, this model didn't work, but some institutions may consider it.

Conflict of interest is an institutional responsibility. It cannot be totally delegated to the IRB. The IRB does have a responsibility to determine whether and how to inform subjects, and it needs to be consistent about this. Our IRB's role is that our IRB administration office serves a coordinating function. I have provided some examples where the IRB office is used to assure that a process is ongoing, such as withholding certain types of approval until a process is complete. The IRB's role also includes informed consent.

I also suggest that IRBs require and question the establishment of independent data safety monitoring boards for adverse event reporting and interim analysis. If there is some actual or potential conflict of interest that slips through, this is certainly an important safety mechanism. We now often ask a lot of our industry sponsored trials, when they say they have a data safety monitoring board, who sits on the board and what their relationships are, and we have found some very interesting answers, and have actually asked and required that certain things be done or certain clarifications be made. So an important thing that IRBs can do to reduce potential conflicts is to ask more questions and to require more information about data safety.

It is important to prepare written policies and disseminate them widely. We can't keep the policies private. Investigators need reminding about them. We will also continue to evaluate the IRB's role in conflict of interest.

In closing, this caption reads, notice, the committee assigned to write the hospital's policies and procedures manual has been upgraded from critical to serious. What effect will conflict of interest responsibilities have on the IRB, and what should their role be? Many regard the condition of IRBs as serious and critical now, and we need to think about how to include the roles of conflict of interest in a meaningful and thoughtful way.

Thank you.

DR. PECKMAN: Good afternoon. My name is Steve Peckman. I am from UCLA. I am the Associate Director of Human Subjects Research. I would like to thank Stuart Nightingale and the planners of this meeting for inviting me.

I didn't realize when I responded to a posting from Tom Puglisi on MCWIRB many months ago that it would result in my invitation here to talk with you. I might have thought otherwise, had I known what the result might be.

I spoke to Sandy Chodosh last week about what the presentation should entail. He warned me that this meeting would be videotaped, audiotaped and transcribed. That was a frightening thought for me, but being from Los Angeles, I thought I needed to ask, so, are there residuals? He responded that yes, there probably would be, in the form of regulation we would be forced to implement for the rest of our lives. That is what happens when you ask questions.

I am going to briefly outline, in six points, one institution's -- that is, my institution's -- application of the concept of trust, which is an idea we have heard repeated several times this morning, in addressing informed consent and conflicts of interest. I'm not going to get into a lot of the details regarding conflict of interest policy as it is implemented by another committee outside the IRB at UCLA. If you have questions about that kind of policy, we are fortunate to have Ann Pollack here in the audience from UCLA, who is in charge of that committee, and she could answer many questions. Many of our policies are similar to what Susan described for the Harvard schools.

But the six points I want to discuss are -- one, and probably the most important point for us in creating our policies and procedures, a report that is oftentimes cited but very rarely read, and that is the Belmont report and its concepts of trust, respect for persons, and dignity.

A second report -- actually, it is not a report, but a second concept -- is the legal mandates in the state of California; a third is a legal mandate that came from the highest court in the state of California, which is the California Supreme Court; and finally, how conflicts of interest are managed on the IRB and the importance of a fair review process conducted by a disinterested committee. They are not disinterested in the review, but they are disinterested in the outcome.

In talking about the Belmont report, I think we need to consider and discuss throughout this conference the concept of informed consent and the information a reasonable human research subject may wish to know about an investigator's interest in the sponsor or product being tested, and that a health care provider may also be interested in the outcome of the research.

This is a very primitive diagram of a concept I call the Belmont circle. I think about the Belmont report as a document that embodies the concept of trust. To trust is to rely upon the character, ability, strength or truth of someone or something. Trust is also having confidence in the truthfulness and accuracy of the information given by an individual or entity. To take responsibility for something makes us accountable.

I think it was David Korn who said this morning that human subjects research flourishes due to the public trust. The Belmont circle in my mind is a visual representation of the public trust. It is a non-hierarchical concept, because you can shift the circle any way you want, and it is going to remain the same.

Now, I'll remind you of the three principles of the Belmont report. The first principle is respect for persons or autonomy, the second is beneficence and the third is justice. At the top of the circle, just for the sake of discussion today, we have our research subjects, who we may also call our public.

Coming down this side of the circle is the federal government. The public trusts the federal government. Whether we want to believe that or not, I believe it is a matter of fact, that we trust the government to do many things for us and ultimately to insure our safety in so many ways, that it is hard to comprehend.

The federal government, in the context of human subjects research, trusts the institution. I'm going to limit my comments today to academic institutions. It trusts the institution, through the process of an assurance, to carry out its role and responsibility in the safe conduct of human research and the ethical conduct of that research.

The institution impanels an IRB, hopefully of qualified individuals and interested community members, to look at the science and the ethics of proposed research projects. So the IRB and the institution have a very linked relationship. They must trust each other to perform their actions in concordance with the regulations and guidelines.

Now, the IRB of course is going to review research submitted by an investigator. The IRB and the investigator must work in a collegial and respectful manner, insuring the appropriate review of the research and then the appropriate conduct of that research. Then finally, the investigator has to trust the subject and the trust of the subject is paramount, because without it, as Dr. Korn said, clinical trials will not flourish.

This is a rather long paragraph from the National Commission. It happens to be one of my favorite paragraphs in the Belmont report, which I refer to constantly in my job. I have a feeling that someday I'll be able to recite it like the Gettysburg Address.

But this is an important paragraph to our topic today. To respect autonomy is to give weight to a person's considered opinions and choices, while refraining from obstructing their actions, unless they are clearly detrimental to others. To show lack of respect for an autonomous agent is to repudiate the person's considered judgments, to deny an individual freedom to act on those considered judgments, or to withhold information necessary to make a considered judgment, when there are no compelling reasons to do so. I'd say the National Commission was pretty smart.

P.K. Gonzales of the University of Illinois put it in a much shortened version, which is that the best practice is for there to be disclosure. If it didn't influence you, why is there a problem with disclosing it? That was essentially the position of our IRB.

Now, there are legal mandates for disclosure, at least to institutions and to federal agencies. We know the NSF and NIH have disclosure policies. The state of California actually has laws. We have a UC policy, a University of California policy, on disclosure of financial interests in private sponsors of research, and guidelines for disclosure and review of principal investigators' financial interests in private sponsors of research.

The state regulations by the Fair Political Practices Commission under the Political Reform Act require disclosure of privately sponsored research interests. Finally, UC campuses have a conflict of interest committee at each campus, and it is called the independent substantive review committee, a rather longwinded title for a conflict of interest committee, but we are also the system that has a committee called the Committee on Committees.

The relationship of the independent substantive review committee to the IRB is one of parallel review. Several years ago, one of our IRB members became concerned about conflict of interest, not just in one protocol, but in every protocol. She considered it her charge on the IRB to find out what that conflict of interest was, and how it might impact the decision making of the investigator and the subject.

So we resolved as an IRB -- actually, we have three at UCLA -- that we would look into the ability of the IRB to gather information on conflict of interest.

UCLA has struggled over many years to create a system by which the institution can discuss and have input into how we react to issues regarding human research. Several years ago, we created what is called the human research policy board. That board does not review individual protocols, nor does it become involved in IRB decisions. What it looks at is institutional policy in carrying out human subjects regulations and guidelines.

At the top is our institutional official, Albert Carnesale, who is a very busy guy and does a lot of travelling, so that responsibility has devolved to the executive vice chancellor, Wyatt R. Hume, also known as Rory. He is the official institutional designee. He is also the chair of the human research policy board, and he is the boss of my direct supervisor, Judith Brookshire, the director of the Office of Research Protection at UCLA.

As you see, the IRBs -- and we have a data safety monitoring board -- are off to the side there, and they have a reporting line to the institutional official. Then there is the campus at the bottom. The human research policy board involves about nine different members. Two are ex officio; one is a member of the Academic Senate, the other is Judith Brookshire of our office. It includes the three IRB chairs and several distinguished faculty. They decided to look at what adequate disclosure of conflict of interest might be.

So we went back to one of the things that UC is renowned for. UCLA in particular is very much renowned for basketball. We have gone to a few Rose Bowls, we have had a Heisman Trophy winner. We have a Nobel Prize winner, and we also have a Supreme Court decision in the state of California called Moore versus the UC Regents.

In the case of Moore versus the UC Regents, where part of an individual's spleen was used to develop a very lucrative cell line, the California Supreme Court ruled that a physician who treats a patient in whom he also has a research interest has potentially conflicting loyalties. This is because medical treatment decisions are made on the basis of proportionality, weighing the benefits to the patient against the risks to the patient. The possibility that an interest extraneous to the patient's health has affected the physician's judgment is something -- again we go back to the Belmont report -- that a reasonable patient would want to know in deciding whether to consent to a proposed course of treatment.

In response to this 1990 decision, the California Association of Hospitals and Health Systems said, prior to consenting to treatment, patients have the right to be informed of any potentially conflicting interests, medical interests or economic interests, that a physician may have related to such treatment.

So UC counsel and office of the president in Oakland responded, the failure of an investigator-physician to quote disclose research and economic interests is sufficient to state a cause of action for breach of a fiduciary duty to disclose matters which are material to patients' consent. Furthermore, we should be sensitive to the patient's personal autonomy in selecting medical treatment and/or potential for conflict of interest, where the treating physician is also a researcher.

So a few years ago, when we developed our investigator's manual -- and some of you may have heard this story in the past -- we engaged four lawyers to help us write an investigator's manual, the first document ever to give guidance to UCLA investigators on the IRB requirements, and it was finally published in 1997. It was distributed at a town hall meeting at UCLA, the first ever by a chancellor at UCLA to discuss human subjects research.

The night before the manual was to be distributed, I got a call at home. My colleague at the office who was waiting for delivery at 10 o'clock at night said, the manuals have arrived. We were very grateful, because the renewal of our MPA was predicated on the distribution of this manual. Our MPA had been restricted for almost five years; it was last approved in 1988.

So we received the manual, and this woman started to cry. I thought, this is someone who is really dedicated to the cause. But I got the feeling it was more than that, and I asked her what was wrong. She said, you know how the manual is called the investigator's manual for the protection of human subjects? I said, yes. She said, there is a problem. The title on the spine says The Investigator's Manual for the Protection of Human Objects. This was to be the crowning element of our MPA.

Fortunately, it was reprinted overnight and distributed the next day at our town hall meeting. Hearing you laugh brings joy to my heart, because it means you understand the irony of the situation.

Anyway, this is in our application form that was published in the manual. It says that according to UC policy, informed consent requires that, A, a physician must disclose personal interests unrelated to the patient's health, whether research or economic, that may affect the physician's personal judgment, and B, a physician's failure to disclose such interests may give rise to a cause of action, as I discussed earlier.

Now, this may promote some kind of cognitive dissonance for people. We'll go to the next slide, please. This is where the therapeutic misconception comes into play. That is where the clinical investigator may also be the health care provider for the subject, and we start to conflate treatment and research.

I draw your attention to the article by Franklin Miller et al. in JAMA from 1998, where they posit that it is ethically problematic if both investigators and patient volunteers see research from an exclusively therapeutic perspective. In the face of this potential divergence between pursuing patient-centered beneficence and scientific knowledge, the orientation of investigators as clinicians can promote a form of cognitive dissonance.

So our IRB took this to heart and tried to address it. We thought it was important to disclose to subjects in consent forms where the health care provider was also the clinical investigator, that there may be an issue here.

So we developed boilerplate language for our consent forms, where it says, your health care provider may be an investigator of this research protocol, and as an investigator is interested in both your clinical welfare and in the conduct of this study. Before entering this study or at any time during the research, you may ask for a second opinion for your care from another doctor who is in no way associated with this project. You are not under any obligation to participate in any research project offered by your doctor.

Just like so many language statements put together by a committee, it is the response of the human research policy board. It may not be everything some would expect from a disclosure statement about a researcher-health care provider conflict, but it is what we came up with, and it will be reviewed again in a year by the policy board.

As a part of our investigator's manual, we have this paragraph about conflict of interest. The UCLA IRB is concerned about the potential for abuse when investigators have a financial obligation or interest that may pose a conflict of interest. The IRB requires that investigators disclose within their application -- and we have forms for this -- all potential financial conflicts of interest and explain how the potential conflict of interest will be minimized or resolved. In these situations, the IRB may require disclosure of conflicts of interest in consent forms.

So we come back again to consent, and what Dr. Korn referred to earlier as the ever-expanding content of consent forms, which we are all concerned about. Except I have to note that many members of our IRBs have noted that there are at least two kinds of research subjects, and many members of our IRBs are clinical investigators. As one clinical oncology nurse noted, there are information gatherers and there are information deniers. There are those people who want to go on the Web and seek out every kernel of information about their disorder and come to you, the clinical investigator, more prepared than you have ever imagined a potential subject to be. There are those who say, I don't want to hear about it, I don't want to know about it, just do it.

But we should never forget that consent is more than a discrete moment in time, where we offer a person a piece of paper and ask them to sign away their next two months' salary in order to get the keys to the car. It is an ongoing process, and the information denier today may be the person who really reads the consent form tomorrow or next week or next month over a long term project. The information seeker may be the one who knows all the information before they come in, and may be very concerned about any conflicts of interest. Nevertheless, everyone as a matter of dignity is entitled to complete information about the project they are about to embark on.

The next thing that we did in looking at conflicts of interest is, we looked at when the financial interest is in a product being tested.

I am a little over, so I'm going to speed this up. This is a possible financial conflict of interest that we found. This was the language included in the consent form. You are asked to participate in a research study conducted by Jane Doe, M.D. from the Department of et cetera. Dr. Doe has formed a company to support this research. It is her product. If a reliable means of diagnosing breast cancer early is developed, she may benefit financially from this.

To take that a step further, I would like to give an example of an investigator who not only developed a device, but is a surgeon and may be the only person on the planet who knows how to use it.

We spent two and a half months discussing it in our IRB, about whether this investigator should actually see subjects. There was a question of the conflict of interest versus the welfare of the subject, because wasn't the subject entitled to the best possible medical care? And wasn't this person who developed the instrument the most qualified to use it, and wasn't he going to train three other investigators as a part of the project? That is a real conundrum.

Here is one that we really liked. Company X has designed this device to measure the cornea of the eye. Dr. Smith, a principal investigator of the study, is an officer, director, paid consultant and stockholder of Company X. I think he was also the janitor.

What the IRB does is rely on the independent review committee to determine whether someone's conflict cannot be managed and therefore the person should be removed from the protocol. If they have not determined that, the IRB considers it its obligation and responsibility to insure complete disclosure.

Finally, I'd like to add that IRB members who have conflicts of interest -- which may be an interest in the sponsor or a product being tested, or a role as a co-investigator or a faculty sponsor -- are recused from the discussion and the vote. They must leave the room.

To wrap it up, I'd like to say that trust is what it is all about. We have to be able to trust each other in order to be able to conduct review, conduct clinical research, and to insure the protection of the rights and welfare of the subjects.

The Belmont report is commonly invoked to guide IRBs and investigators in the ethical conduct of research. By creating the Belmont circle that links all the parties together, the federal government, the institution, the IRB, the investigator and the subject, we are in the process of upholding human dignity. We will create an environment of trust that insures the protection of human subjects, as well as the advancement of science.

Thank you.

DR. CICERO: Hi. I'm Ted Cicero, Vice Chancellor for Research at Washington University, a position I have held for three years. I want to state that as I go through these things today, all the bad things you may hear occurred before I took over, and all the good stuff occurred since I've become Vice Chancellor.

I too, when I first reacted to this conference, intended to come and talk about some of the good things Washington University is doing, as well as some of the problems I think we face -- not only us, but others in the country -- in the management of conflict of interest and how we go about handling those situations. I still intend to do that. I think there are some things that Washington University does very well, and some things that I think the university needs help on, and I think this is a great forum for us to bring up some of those issues that I find to be most troublesome as we go along in this process.

The first thing I want to point out, conflict of interest is not a four-letter word. You can substitute any four-letter word you want into that; I didn't take the liberty of putting one into it. I think we heard a great deal this morning, and I think we heard a carryover into the late morning session as well, that in some way or other, conflict of interest in and of itself is a thing to be avoided at all costs, and that it is in fact a bad term.

I think that we are being unrealistic to think that we don't have conflicts of interest in our everyday life. I think in fact that the acceptance of an R01 grant, for example, could create a conflict of interest for that faculty member, who is then trying for promotion, trying for publications, trying to enrich himself in the environment. So with conflicts of interest per se, I think it would be very dangerous for us to make a generalized statement that these are bad things and should in some way be avoided completely. They are there; I think we have to deal with them.

Bayh-Dole has certainly put requirements on universities to deal with these issues, and by their very nature, as we get into these relationships, conflicts of interest are generated.

With that stated, we must I think be very careful to make sure that all those conflicts are managed to the extent they can be. Management I think is not simply disclosure. I think we have adopted a very tough stance at Washington University that disclosure is not enough. In some cases, if equity is held for example, we may require that equity be put in escrow, it may be put into a third party's hands, or we may demand that the faculty member divest himself of any equity. The same thing would apply for a consulting role with the company, or a membership on a scientific board.

Our management has tended to be very aggressive in this respect. Very often, disclosure -- that is, disclosure in publication -- is allowed as the only mechanism only when there are limited sums of money available and the overlap of interest is rather insubstantial.

I think it is important that we also distinguish -- and this gets confused a lot in an academic setting -- a conflict of interest from a conflict of commitment. There is a strong concern on the part of faculty, and of department heads in particular, that as faculty are becoming more entrepreneurial, they are spending more and more time away from activities at the university, or they are making more money.

(End of Tape 2, Side 2.)

-- faculty members, and I think the second is institutional.

I think the faculty conflict of interest is handled at Washington University very much the way you just heard from the last couple of speakers. We have a disclosure review form. In fact, we have an annual review in our fiscal year. Every July 1, all our faculty members must submit a disclosure review form -- I've got an example of what that looks like on the next slide, which you may have trouble reading. You just see the background information at the top. Also, this is the most critical box here; can you see that at all?

What we are fundamentally asking at this point involves two columns: one for a company that supports your research activities, and one for a company in your research field from which you get no research support. So we are asking two critical questions there. Then it asks whether any one of those six or seven items there applies -- equity holdings, consulting activity, whatever. These boxes are checked off yes or no, and then there is a four-page addition to this document that the faculty member must fill out if he has answered yes to any of these questions.

The faculty member signs the form at the end of it, and then we send all these forms off to the department head, whose responsibility in our system is to review the accuracy of the data. Then, after he is satisfied that the faculty member has disclosed all the conflicts, he signs the document himself, stating that to the best of his knowledge no conflicts exist.

We don't stop there, however. We send those off to our disclosure review committee, which is a faculty based committee, which reviews all of the faculty conflict of interest disclosure forms. They in turn then again conduct a separate examination of the potential conflicts.

The committee, as I said, has been very aggressive and has frequently called faculty members to come forward and explain their positions, particularly the involvement of students and any positions the faculty may hold in the company which seem to be confusing to the committee.

Again, the range of management approaches applied in this situation has been very aggressive, to the chagrin of the faculty at times. Very often, as I mentioned in the beginning, disclosure is not sufficient, and we have actually demanded additional measures.

For example, if someone has formed a company, is a scientific advisor to that company, receives compensation as a consultant to that company, has research funded by the company, and has equity in the company, we may ask him to divest himself of all those positions. The only way I will sign any grant that that individual submits is if he goes ahead and divests himself of the equity or his position with the company before we proceed.

We will also put equity into a trust fund. It will stay there indefinitely until a conflict is resolved in some manner. If the company becomes publicly traded, at that point we turn all of the stock over to a money manager, if stock is involved. The money manager himself decides at what point the university should divest itself of our shares in the equity. So the decision making power is not left with the university at all.

I think the system has worked, but I think it has got some problems with it that are inherent in any model you have heard today, or any that exist in any other part of the country.

It really is a voluntary reporting system. Even though we have got all these mechanisms in place, where the faculty fill out all the yeses and noes, and we have a department head review and then a committee review, we are still relying on the basic integrity of the scientist. This is a system based upon trust, that he will actually disclose all of his conflicts in that document.

With the 3,000 faculty members that we are dealing with, we have no way to go back in any way and audit, with respect to the second point, whether all of the information presented was correct.

The third point is that in our system and others, it is not necessarily tied to a grant submission. Let me explain that for a second. I said that each fiscal year we require a disclosure form, in which the faculty member discloses all of his potential conflicts, and those are managed. There is a file then kept on that faculty member until the following July. Any grant during that 12-month period of time -- again, Washington University submitted 3600 grants last year -- any grant that is submitted, and this applies to human studies across the board, is checked to verify that there is an appropriate, duly executed disclosure form on record. If there is, the box is checked off.

There is no audit. I'm not suggesting we can do this. There is no one who can take the grant itself and compare that to what might be in the actual disclosure review form. There is a linkage that is missing there, and I'm not sure how we close that linkage. If anybody has any suggestions of how we do that, I'd like to hear them.
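As a rough illustration of the check described above -- and of the missing linkage the speaker points out -- here is a minimal sketch in Python, with hypothetical names and data; it is not Washington University's actual system. The grant sign-off only verifies that a duly executed disclosure form is on record for the PI for the current fiscal year; nothing in it compares the content of the grant with the content of the disclosure.

```python
# Minimal illustrative sketch (hypothetical names and data) of the per-grant
# check described above: a box can be checked only if a duly executed
# disclosure form is on file for the PI for the current fiscal year.
# Note what is NOT here: no comparison of the grant itself against the
# disclosure's content -- that audit linkage is missing in this sketch too.

from datetime import date

# Fiscal-year disclosure file, keyed by faculty member (hypothetical records).
disclosure_file = {
    "dr_smith": {"fiscal_year": 2000, "duly_executed": True},
}

def fiscal_year(d: date) -> int:
    # Fiscal year starts July 1.
    return d.year if d >= date(d.year, 7, 1) else d.year - 1

def grant_checkoff(pi: str, submitted: date) -> bool:
    """Return True if a duly executed disclosure form is on record
    for the PI for the fiscal year in which the grant is submitted."""
    record = disclosure_file.get(pi)
    return bool(record
                and record["duly_executed"]
                and record["fiscal_year"] == fiscal_year(submitted))

if __name__ == "__main__":
    print(grant_checkoff("dr_smith", date(2000, 8, 15)))   # True: form on file
    print(grant_checkoff("dr_jones", date(2000, 8, 15)))   # False: no form on record
```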

I think the coordination problems within the university are significant. I think we are in the process now of trying to get those collated and put together.

Specifically, the technology transfer office, our technology management office, is actively involved in negotiating licenses for our faculty, trying to help them with startup companies, and a lot of research agreements that come with it.

There is very little communication at this point. We are trying to encourage that between the two offices. What we are finding very often now is that our technology transfer office creates a conflict of interest that our disclosure review committee sometime down the road will find to be unacceptable, and will basically suggest a different strategy.

Clearly, we are developing a mechanism now to get a very upfront look at this, so that as our tech transfer office is negotiating an agreement with a company, the appropriate conflict of interest issues have been resolved. What that is going to require is a two-step process and a reconfiguration of our committee, the disclosure review committee, which will actually get a copy of the agreement that is going to be signed, so that they can determine whether they would view any conflict of interest to exist on that particular application.

Then there is the human studies committee and conflict of interest. We are struggling with how we relate the full disclosure. One slide I slipped over was that the IRB form does contain one statement about financial interest in companies.

The IRB relies to some extent upon the disclosure review committee to make sure that the full range of the faculty member's interests has been disclosed. But there is really no way for us to verify that as we go through the process. We need to tighten that up considerably, to make sure that what is signed off on by the human studies committee, the IRB, is consistent with the disclosure review committee.

Institutional conflicts of interest fall under many categories, and these are really troublesome to me, and I want to spend a bit of time on that.

Equity positions. Washington University has been a very conservative institution, and has only recently begun to take any equity position in a company.

What do we do with the equity once we get it? That is a big problem. In many cases, it is a startup company and it has zero dollar value. It has no value whatsoever. We hold those certificates until the company does issue an IPO. At that point, the stock is turned over immediately to a money manager again, whose instructions are, in the next two year period after the IPO, you make a determination with no consultation with the university to any extent about when you should sell that stock. But sell it as quickly as you deem reasonable, but certainly no later than two years after the IPO.

The faculty member's equity is also now tied up in that system, because we have a revenue sharing agreement in our university, as do others. That sometimes poses a problem for our faculty, but that is the way the system works.

For the sake of time, I want to move to one slide that illustrates a very complex, real life example of a very difficult conflict of interest situation, how I think we dealt with both personal conflicts of interest and institutional conflicts of interest.

First of all, the university was given a 10 percent equity stake in a startup company by one of our faculty members. He was one of our prominent faculty members, a department head. This was quite some time ago. He was the scientific director and CEO -- the head of the company -- our faculty member. The company funded some research at an early stage in the development of some material which I won't discuss. Obviously it is a material that could be used in humans.

The faculty member's commitment to the company began to grow and grow and grow over time. As the activity became a real one, the company looked quite promising. Now the company came to us, wishing to actually fund the clinical trials at Washington University. We had an extremely complex situation here, and we dealt with it in the following way.

We had both an institutional conflict of interest here, as well as an individual conflict of interest. We therefore appointed a special ad hoc committee made up not only of people within the university themselves, our typical faculty committees, but we called in two outside experts from other school systems around the nation to help us evaluate the institutional conflict of interest that we might have in that situation.

We eliminated step three as this research progressed. The company could not in fact fund research that was going on in that laboratory. The university had equity in the company, as did the principal investigator, and we would not accept any direct funding from that company to foster that research.

The answer to the last question, in response to that request, was unequivocally no; the university will not conduct a clinical trial which could provide the definitive piece of data for the FDA. The company had to seek some other sponsor to do that clinical trial. We would not deal with that clinical trial.

The last point I want to touch on in the 26 seconds that I've got ticking away here is, I think, a conflict that the NIH has generated for us -- and I think this is a difficult one that has to be resolved -- specifically, the Small Business Innovation Research program, the SBIRs and the STTRs. There is a PHS regulation on conflict of interest, 42 CFR 50.603, with this definition: significant financial interest does not include -- this is for these two types of awards -- any ownership interest in the institution or the company, if the company is the applicant under the SBIR.

Now, we have situations in which some of our faculty members own companies, and in fact are presidents of companies, and are getting these SBIRs and STTRs, and then are subcontracting back to the university to fund research in that laboratory. Our DRC, our disclosure review committee, is very unhappy with this mechanism, and finds that accepting these awards actually puts the university in a position of creating a conflict of interest that is probably a real conflict of interest -- and if it isn't a real conflict of interest, it certainly is a perceived conflict of interest.

Now we are in a difficult spot. The faculty member wishes to apply for these awards. We understand the purpose of them. They were devised about 20 years ago under Bayh-Dole to try to facilitate technology transfer, but they pose unique problems for universities. In fact, they seem counter to the entire philosophy that we are developing now with respect to conflict of interest. These grants, in my estimation, generate conflicts of interest that we need to find some mechanism to manage.

If we don't manage these conflicts appropriately, we then have other faculty members coming forward, because -- you recall, we have a policy now: if a faculty member has an equity position in a company, that company may not fund research in his own laboratory. This contradicts that completely and allows that mechanism to take place.

We are trying now to manage the conflicts as best we can, but to be honest with you, we have not yet come up with a mechanism with which we feel very comfortable that we can manage this conflict of interest. I would really like NIH to help us out in this respect, and help us with this seeming inconsistency in policy between these two types of awards.

I've gone a little bit over time. I think this gives some flavor for what Washington University is doing. We are grappling with these issues, and with the issues that we are facing, I think we have done an admirable job. I think our committee has worked hard, and I think we have straightened things up as much as we can. I think with respect to human studies and other activities that we are doing, we are managing conflicts, and I think our committee is probably one of the roughest committees that I am aware of, in terms of managing conflicts of interest.

But it is quite clear we just don't want to meet the letter of the law here. I think we personally are very convinced that the whole credibility of science is at stake here, and that we want to go as far as we can in meeting not only the letter of the law, but the spirit of the law. If we have time for questions, I'd love to have some discussion of the issues I've raised.

Thank you.

DR. GOTTLIEB: In the interests of time, I think I'll start without the PowerPoint slides, and maybe they will come up; if not, we'll do without.

I'm Julie Gottlieb, Executive Director of Policy Coordination at Johns Hopkins University School of Medicine. It is a pleasure to be here and participate in this important discussion. We are all abundantly aware that financial interests have become a major feature of the academic research landscape. Corporate sponsorship of research often involves as we know provisions for licensing, and opportunities for investigators to serve as consultants. Since the adoption of the Bayh-Dole Act, academic medical centers have stepped up their efforts to license inventions, which leads of course to royalty and often equity for the investigator/inventor as well as the institution.

Increasingly, conflicts of interest are even discussed during the faculty recruitment process. In response to these forces and related federal policies, many academic medical centers including Johns Hopkins have adopted policies that allow investigators and the institution to accept financial rewards for their creativity, while at the same time attempting to diminish and manage the conflicts of interest with research.

As other speakers have pointed out, we have no clear evidence that allowing financial interest in human subject research is inherently harmful to patients. Nevertheless, the possibility and even the appearance of that possibility of harm must be assessed and managed.

Institutions have adopted various approaches as we have heard this afternoon. I have been asked to share with you what we do at Johns Hopkins, where the policy on conflict of interest has been in effect for over eight years.

We have on our docket a history of over 250 cases or sets of arrangements which have been reviewed under our policy. However, to put these numbers in some perspective, let me add that only about four percent of the clinical research protocols submitted last year, for example, involved a financial interest. I should also add however, the denominator is very large. Our IRBs review some 2800 protocols every year.

Our focus here today is on the fundamental issue of protecting human subjects in the presence of conflicts of interest. We also need to acknowledge, however, that conflicts of interest carry potential and perceived risks to the research record and to interactions among colleagues, both of which ultimately may affect or be seen as affecting patient welfare.

The Hopkins approach is best summarized as a three-part process. I am going to give you a fairly nuts and bolts overview. The three parts are reporting, review and management. This process is labor intensive, and it requires an ongoing effort to educate investigators regarding the institution's policies and procedures, as well as their obligations under those policies.

I'll begin with reporting to the institution. This applies to all personnel who are participating in research. We do not specify a minimum threshold for reporting of financial interest. Doing so, we believe, would be problematic, because for example the value of stock is subject to change, and similarly income received under consulting arrangements can fluctuate. Hopkins has chosen not to leave the determination of these values up to the investigator.

The reporting requirement to the institution is very broad. It encompasses federally funded studies as well as corporate sponsored studies, and conflicts of interest and clinical research are subject to the same policy as those involving basic research.

Reporting to the institution is achieved in several ways. Investigators must respond to a question on financial interest when they submit a proposal to the IRB, and also when they apply for outside research funds. In addition, virtually all licensing arrangements include provisions for revenue, which is then distributed among the institution and the inventors. These are reported to us by the licensing office.

Outside consulting activity must be disclosed to our committee on conflict of interest, which then determines whether there is related research activity. In addition, on completing their annual appointment letters, all faculty members must agree to abide by the conflict of interest policies and to report their relevant outside activities.

The next step in this three-part process is review. Arrangements involving financial interests are reviewed at the time they are reported. The conflict of interest office works very closely with the IRB. In one concrete, important example, the IRB will not approve a consent form until the conflict of interest has been completely reviewed and managed, and I'll come back to that later. Again, there is additional review when a clinical protocol is up for annual renewal.

The heart of the review process occurs in the monthly deliberations of the committee on conflict of interest. This standing committee consists of 15 senior faculty members and administrators, and its membership changes infrequently, so there is a steep learning curve, but there is a lot of experience on that committee.

The committee which is advisory to the dean reviews the details of each arrangement, typically including the research protocol and the various relationships that create the financial interest. The committee tries to maintain consistency in its approach to various types of cases, but it is also careful to review each case individually on its merits.

A variety of factors are examined in assessing the role of the conflicted investigator and the potential risks involved in a project. Does the investigator have a substantial equity stake in a startup company, or does she have instead a small number of shares in a publicly traded company? Is the subject of the study the company's only product, or is it one of numerous therapies under development? Does the study involve a potentially toxic therapeutic agent, or is it instead a quality of life questionnaire? Is it a multi-centered trial with several levels of built-in oversight, or will it be done only at Hopkins?

The third step in this process is management. For each case, the committee may recommend one or more of the conflict of interest management tools at its disposal. These include public disclosure of the financial interest, placing stock in escrow for a specified period of time, limiting the role of the individual with the conflict, overseeing the research, selling the stock or of course, rejection of the arrangement. I am going to discuss each of these tools in turn.

Public disclosure of the conflict is required in every case. For human subject research, the disclosure must be included in the patient consent form. The actual disclosure statement must be approved by the conflict of interest staff before the form is released by the IRB.

In what type of situation would disclosure alone be considered sufficient management of a conflict of interest? A licensing arrangement with no equity or consulting activity is one example. In a hypothetical, let's say that Dr. Venture invents a new assay for a cytokine, which requires obtaining serum samples. The university is licensing the assay to Immune Biolabs in exchange for future royalty payments. Again, there is no equity, no consulting.

Dr. Venture may be allowed to participate in the study of the assay, provided that he discloses his financial interest as well as that of the university in the consent form, in presentations and in publications.

Methods for disclosing to the scientific community, to the public and to patients are fairly well established under our policy, but disclosing conflict to trainees and co-investigators is also important, and we are experimenting with various ways to insure that this occurs as well.

Tailored disclosure statements are provided to the conflicted investigator, and they must contain specific information. Our committee has worked hard to achieve a balance between over informing and under informing the patient, and has come up with a formula that is still negotiated from time to time with the faculty member. But basically, the consent disclosure statement must include the investigator's name, specify the type of financial interest that he has, as well as any interest the university has, and in some cases indicate how the conflict in question is being managed.

Next in the conflict of interest management arsenal is stock escrow. In another hypothetical case, Dr. Wise is going to serve as the local PI of a multi-center study of a new anti-nausea agent for patients receiving chemotherapy. ABC Biotech, which is developing the therapy, is asking Dr. Wise to consult for them, and is offering him 2,000 shares of company stock.

In this sort of case, it is likely that Johns Hopkins would require that Dr. Wise place his stock in escrow until two years after the first commercial sale of the drug. In this multi-center trial, Dr. Wise's control over the study is diluted by the involvement of many other centers and investigators, so he might be allowed to serve as the local PI in the study.

Naturally, the consent form and any papers that he co-authors would have to include a disclosure of his consultancy, his stock ownership, and the fact that his stock is subject to certain restrictions.

Another option for managing a conflict of interest is to limit the role of the investigator who has the conflict. While it may be important for the inventor of a technique or a device to participate in a study in order to give technical advice, removing him from direct involvement with patients and with control of the data may go far in protecting study subjects and data integrity.

Let's suppose for example that Dr. Derma develops a new wound repair technique. The university has licensed the technique to a startup company in exchange for both stock and royalty, and Dr. Derma gets both cash and equity for serving on the company's scientific advisory board.

He is proposing to conduct a small-scale trial of the technique, and the company will sponsor the study. If his participation is considered essential, it is likely that several restrictions will be imposed on it. These may include disclosure, stock escrow, as well as limiting Dr. Derma's role to that of co-investigator and not allowing him to enroll patients or to analyze the data. In fact, the committee may also require that the technology undergo a second study at another institution prior to its commercialization.

In a handful of particularly complex cases, Hopkins mandates oversight of the research project. This typically involves appointing a disinterested individual or a committee of disinterested individuals to review the research design and protocols, to meet periodically with the investigator, and to review manuscripts. This is of course in addition to disclosure, stock escrow, and possibly placing limits on the role of the conflicted individuals. It is our maximum management option, but oversight is very labor intensive.

Naturally, there are some arrangements that carry unmanageable conflicts, or too high a level of risk. These are rejected, forcing the investigator to choose between the financial interest and the research project. Many such cases are in fact weeded out in advance by staff and never reach committee review.

Violations of the conflict of interest policy are taken seriously, and they are subject to review under the school's misconduct policy and procedures. Violations might include, for example, conducting research without reporting a conflict to the institution or failing to comply with required conflict of interest management measures. However, we do not desire to police the faculty, nor in fact do we. Again, trust is considered basic here, trust that the faculty will participate in the policies.

With respect to the university's own financial interests, when they relate to potential licensing revenue, these are managed under the conflict of interest policy. This includes publicly disclosing the institution's stockholding or royalty interest, as we discussed, including on consent forms, and also placing the university stock in escrow under the same terms as the inventor stock is placed in escrow. Not included in this policy are the university's endowment holdings, which are managed professionally by separate outside money managers.

With respect to conflicts of interest among IRB members themselves, we are developing a policy to deal with this issue. The policy calls for IRB members with financial interests relevant to a specific protocol to recuse themselves from participating in decisions on that protocol.

In closing, I think some final observations are in order. The Hopkins policy involves close scrutiny of financial interests and results in concrete restrictions. It is a very labor intensive policy, involving substantial resources. It is considered essential to the research enterprise and is strongly supported by the institution's leadership, and in particular it benefits from very close working relationships with the Office of Technology Licensing, the IRB, the Grants and Contracts Office, and others as necessary.

While a history of scrutiny and fairness has led to general acceptance by the faculty, and we believe we are well on the way to fostering what was earlier described as a culture of compliance, we think our procedures are still a work in progress, and they benefit from continual examination by both the institution and our investigators.

If you would like to refer to the specific policy, it is on the Web. Thank you very much.

DR. CHODOSH: I have good news and I have bad news. The good news is that I think we heard four excellent presentations of what is going on out in the real world. The bad news is that we are over time, and therefore our question period is gone; we can't compromise the next session. There was one question that was submitted in writing; please attack one of us or all of us as we leave, and we will be glad to address your questions. I had a number of questions myself, so I'll now have to corner some of these folks.

I want to thank the panelists for their excellent presentations.

DR. MADDOX: I would like to remind people that there is one more session left before we begin the breakouts, so we would like to get started as soon as possible.

It is my view that protecting human research subjects and maintaining research integrity are two of the major cornerstones of the biomedical research enterprise. Hence, this conference is timely, it is important, and I don't believe that its importance can be overstated.

It is my view that a conference on human subject protection and financial conflict of interest would be incomplete if there was not an opportunity for public comment. The input received throughout this meeting will certainly provide information for the Department of Health and Human Services to develop additional guidance to implement current regulatory requirements as well as to establish where needed new guidance.

On the issue of appropriate disclosures to human research subjects, guidance is essential. Thus, we called for members of the public to respond to six questions that were posted in the Federal Register on July 3, 2000. These questions were posed in such a way as to give the public an opportunity to respond, and we are pleased to report that we had several responses, either as written comments or as notices of participation. These comments will be a part of the official records of this meeting, and will be a part of the proceedings.

I'd like to take a moment now to go over the format of this session, because it is going to take place as a formal town meeting. We'll ask six of the individuals who have agreed to speak to come forward as I introduce them. They will either be reading their statements or they will be giving us some highlights from their statements, all within a period of about five minutes.

We really are going to honor the clock here, because as you know, we are already off schedule. We wanted to ask the participants, and we did alert them ahead of time, to try in their remarks to address the six questions.

Following my introduction of each of the speakers, I will signal that it is time to go, and they will have five minutes. They should be watching this little clock here on the podium. When the light turns yellow, they should slow down and get ready to stop; then we've got a red light here, and I will be monitoring with our official timekeeper down below.

If you have particular comments or some discussion that you would like to bring to the table as part of this session, we ask, because of time, that you bring these issues up in the breakout sessions. At the end of this session I will tell you where you are to go for the breakouts. We will stay on schedule.

So now it gives me great pleasure to introduce our first speaker, Mr. Kenneth Trevett. Mr. Trevett is Chief Operating Officer and General Counsel of the Schepens Eye Research Institute in Boston, Massachusetts. I invite Mr. Trevett to the podium.

MR. TREVETT: Good afternoon, it is a pleasure to be here. In a sense, I am here representing no one officially, but I am wearing several hats, and I'd like to just briefly explain those.

As Dr. Maddox said, I am the General Counsel and Chief Operating Officer at Schepens, which is essentially a basic biomedical research organization in Boston. In my previous position I served as general counsel at the Dana-Farber Cancer Institute, where I was a nonvoting member of the IRB for six years.

I serve in a voluntary capacity as chair of the Commercial Relations Committee of the Association for Research in Vision and Ophthalmology. I am a lawyer amidst vision researchers and company representatives, a humbling experience, if you can imagine the concept of a humble attorney.

We have a recently clarified financial disclosure policy, which is required for all papers and presentations. Finally, I teach a course in technology transfer at Suffolk University in Boston.

Before summarizing my responses to the six questions, I want to share some general observations. First, I believe strongly in the Bayh-Dole regulatory framework and in effective company-academic relationships. I also believe in very, very open conflict of interest disclosure policies. I think these positions are in fact compatible. I would be looking for a regulatory framework, in addition to the regulations that are now in place, that balances these various interests.

It is important that we are not so burdensome in our regulatory development, either institutionally or federally, that we force conflicts underground. This is a real issue; believe me, if you get into regulations where people feel as though their privacy interests are unreasonably jeopardized, you are going to force some of these conflicts underground. Obviously, none of us wants to stagnate the research enterprise.

Secondly, we need to allow for institutional differences. We simply can't adopt a right-wrong approach. With this view in mind, I also want to emphasize that there are special considerations and concerns of smaller institutions versus larger ones. I'm not saying the overall issues are different, but remember that smaller institutions like ours, and many other quality institutions throughout the country, have very much smaller staffs, and the regulatory burden rests heavily on those kinds of organizations.

Regulations need to be flexible. I do believe, and I hope this isn't jeopardized, that grantee institutions should remain responsible for the enforcement of these policies, and I do believe they should be held responsible for creating an environment of objectivity and integrity.

I believe that we should require brief but accurate disclosures during the informed consent process with human subjects, both about the nature and the extent of the investigator's conflicts, as well as institutional conflicts.

Next, I think it is extremely important -- and there really hasn't been enough attention paid to this, certainly there hasn't been much discussion of it today -- that the systems we put in place protect the individuals who are responsible for enforcing these regulations. I can tell you from personal experience, and also from the experience of colleagues and friends, that the most vulnerable people are obviously the patients, but in addition they are the staff people who enforce conflict of interest policies. I think many of you would agree with that.

We need to create a regulatory environment, I agree, that balances these various issues. I don't think we can create something that is going to ferret out all reprehensible behavior. Simply putting a quasi-criminal, criminal-justice-code kind of system in place is going to so burden our institutions that we will end up with an unworkable system, and I don't think it will particularly add to the efficacy of the system.

That is my five minutes. As you can imagine, I have many more five minutes that I would like to share with you, but I appreciate the opportunity to talk.

Thank you.

DR. MADDOX: I'd like to thank Mr. Trevett. Also, I just want to bring to your attention that the six questions are in your materials, in your packages. If you haven't had a chance to look at them, please do so.

Our next speaker is Mr. Ronald Collins. Mr. Collins is Director of the Integrity in Science Project at the Center for Science in the Public Interest, Washington, D.C. Mr. Collins, five minutes.

MR. COLLINS: Thank you. My name is Ronald Collins. I am with the Center for Science in the Public Interest. The comments that I am about to make, abbreviated though they are, were part of a memorandum that I prepared with the co-authorship of Professor Sheldon Krimsky of Tufts University, so I speak as one of two voices.

In a recent article in Science magazine, Donald Kennedy wrote that scientific culture is by its nature oriented toward disclosure. The irony, of course, is how much of science, especially science conducted in universities, is secret and therefore contrary to the nature of science.

Although there is said to be disclosure of much of what goes on, one wonders how much actual disclosure there is. For example, of what is disclosed to public universities and their institutional review boards, how much is available to the public and press? This is an important question in terms of public oversight.

In all of this, there is the question of the contractual arrangements entered into by universities, quite often public universities, and industry. How much of those contractual provisions, how much of the information contained in those contracts, is or may be disclosed to the public and press? In all of this, disclosure is an important principle, but the question is, what is disclosure?

For example, when Boston University or other universities enter into partnerships with drug companies to create research centers in which industry directed research is conducted, what do we know about those arrangements? Or when Vanderbilt University enters into venture capital funds to create campus companies, how much do we, the public and the press, know about such arrangements? Or when there are Draconian confidentiality restrictions in contracts that prohibit for example, as the contract with Brown University prohibited certain disclosure of information or certain disclosure about methods and results, how much of those contractual provisions, with their Draconian secrecy provisions, can be made available to the public?

In all of these cases and others, something must be done, even with the baseline concept of disclosure. Because of this, I think there are a number of things that universities can do. First and foremost, they must develop rigorous rules concerning conflicts of interest, and those rules must have rigorous enforcement measures, which many universities' rules lack. In all of this, there must be public disclosure, and by public disclosure, I mean disclosure freely available to the press and public for public scrutiny.

Also, I think it is essential that universities require conflicts of interest to be disclosed in any publications, whether in journals, books, or otherwise.

Finally, I think the federal government should consider doing something in this area. It should develop ethical guidelines for institutional conflicts of interest at universities. It should be required, I should say, that all those who receive federal research funds disclose conflicts of interest. Also, I think that the responsibility for overseeing conflicts of interest should rest with the Health and Human Services Office of Research Integrity, an office better suited to deal with those problems. Anybody receiving federal funds should also be required to disclose conflicts of interest when they testify before government regulatory agencies, for example the Environmental Protection Agency, or before Congressional committees.

And finally, I think that there should be, consistent with what has been said, disclosure of conflicts of interest in so-called -- and I say so-called -- informed consent. The question of what constitutes meaningful, full and real informed consent is, I think, a big question, one that slips through many of the loopholes we have been discussing today.

Thank you for your consideration. My comments, my recommendations along with those of Professor Sheldon Krimsky, are contained in the recommendations we submitted in a memorandum. I urge the members of this panel and others to take them into consideration.

Thank you again for allowing me this opportunity.

DR. MADDOX: Our next speaker is Mr. Alan Schipp. Mr. Schipp is Assistant Vice President for Biomedical and Health Sciences Research, the Association of American Medical Colleges, here in Washington, D.C. Mr. Schipp.

MR. SCHIPP: I was asked to highlight some of the key points in our letter, which as David Korn noted was available in the lobby earlier and available on our website.

David Korn did a nice job of conceptualizing conflicts of interest. In particular I want to reinforce three points, that they are ubiquitous, they are not uniquely financial in nature, and they are not equatable with misconduct.

Dealing with conflicts of interest must begin at the level of the individual. In stating this, the AAMC believes that the vast majority of investigators are earnest, honest, ethical and seek to adhere to principles of professionalism as they understand them. A sociopath who is intent on deception and on defeating the system will probably succeed, at least to a limited degree, no matter what kinds of controls are in place, and such individuals need to be dealt with sternly and severely, through mechanisms designed to deal with deliberate misconduct, distinct from procedures for dealing with conflicts of interest.

Honest scientists, on the other hand, have to be sensitive to the potential for conflict of interest and must monitor their own behavior. Disclosure, and possibly self-elimination from a conflicting activity, is a course of action that every scientist, IRB member and other professional should consider.

That said, federal research dollars are awarded to institutions which of course bear ultimate responsibility for their stewardship. Therefore, institutional controls, policies and procedures are obligatory. Institutions however are highly complex and extremely diverse, and often conflict situations are equally diverse when all of the relevant factors are taken into account. So whatever systems are set up to control conflicts of interest, they must accommodate this variability, and it is for this reason that there is no single correct procedure for handling conflicts of interest, as has been amply demonstrated in the previous panel.

Nonetheless, based on a content analysis that the AAMC conducted some time ago, we did identify certain elements that do seem common to sound institutional policies. These include an explicit definition of conflicts of interest, often cited with examples, a clearly defined scope of policy detailing the affected individuals, institutions and activities, effective procedural elements including timely disclosure of relevant information, thorough review of disclosed information and a mechanism for management and/or resolution of the conflict situation as appropriate, and then finally, sanctions for policy violations.

Conflict management per se is almost always a case by case matter. This is due to the very complex matrix of interests that can be involved and the differential impact that a given interest can have on a given individual, depending again upon all the factors that may be in play.

As for disclosure to patients, which is another matter raised in the Federal Register notice, this is a matter that must be handled in concert with the IRB of course, which is best equipped to determine if such information is relevant to the research activity, and also in the patient's best interest to know.

The Federal Register notice asked specifically, for example, whether patients should be made aware of all investigator financial interests, and the answer to that question is clearly no. Apart from being an unwarranted invasion of privacy, patients often will have no way to deal with this information and will respond to it differently; thus conveyed, the information may even have an adverse impact on patient recruitment and selection, and thus on the quality of the research activity.

So to conclude, identifying and managing conflicts of interest are extremely complex matters that cannot be entirely handled through pat prohibitions or simple, universally applied thresholds. The federal agencies, in working on both the PHS and NSF policies, came to recognize this after several years of deliberation, and fortunately left much latitude to the institutions in how they would handle this matter on a case by case basis.

Second, having a conflict of interest is not an act of professional misconduct. The key to resolution of these conflicts is to exercise integrity, use good judgment, manage the conflicts effectively, and disclose.

Then finally, IRBs have a very specific mission, and their involvement in overseeing and handling conflicts of interest has to be considered accordingly. As entities designed to insure that research subjects are not exposed to undue risk, they must evaluate an investigator's financial interest in this narrow context.

In our current system of research oversight, we have trusted the IRB to make appropriate judgments about the kind of information potential research participants have a right to know or need to know when making decisions about becoming part of a clinical study. So the question of investigator financial interest should be handled in the same manner.

Thank you.

DR. MADDOX: Our next presenter is Dr. Howard Mann. Dr. Mann is chairman of the IRB of the Intermountain Health Care, Salt Lake City, Utah. Dr. Mann.

DR. MANN: Good afternoon. As mentioned, I am chairman of the Human Subjects Research Committee, an IRB with Intermountain Health Care in Salt Lake City. The IRB operates under a multiple project assurance and serves four community based hospitals and outpatient and walk-in clinics in which clinical trials are conducted.

I would like to comment on items within questions three and four. I'll leave printed copies of my comments, which include the references I mention.

If information about financial interest is disclosed to potential participants in clinical trials, what information should be disclosed and at what level of detail? First, an IRB should include sections in the application for research that are comprised of specific questions relating to the source, the mechanism and amount of funding associated with the trial, the contractual hierarchy associated with the trial, for example, sponsor, contract research organization, institution, investigator as applicable, and information concerning any ownership or other beneficial interests the investigators or the institution with which the investigators are affiliated have in such sponsoring organizations.

I shall reference a copy of my IRB's application for research containing the questions that we ask. Our guiding principle in this regard is well enunciated by Article 7.3 of the Canadian Tri-Council Policy Statement, Ethical Conduct for Research Involving Humans, which states in part: budgets for clinical trials usually are calculated by per capita costs; that is, the sponsor pays the researcher a fixed sum for each research subject recruited.

Per capita payments raise ethical concerns because of the potential to place the researcher in a conflict between maximizing economic remuneration and serving the best health interests of subject patients, especially if the researcher also holds a therapeutic or clinical or other fiduciary relationship with subjects.

This principle of disclosure should also apply to applications for other kinds of research, for example, research involving human biological material. It is increasingly common for researchers and their affiliated institutions to have direct ownership or other beneficial interests in for-profit biotechnology companies, in which the research is actually conducted in whole or in part.

Should disclosure information and institutional policy be provided in the informed consent document or in an entirely separate document? In formulating our policy, my committee adopted the approach explicated in an article by Dr. John La Puma, a medical ethicist, entitled How Much Do You Get Paid If I Volunteer?, which suggested institutional policy on reward, consent and research.

Our template for formulating the consent document contains a section entitled Who Is Sponsoring the Study, in which the source, the mechanism and the amount of funding must be disclosed. Suggested language includes the following: This study is sponsored by XYZ Incorporated, which produces the study drug. It has contracted with ABC, a contract research organization, to conduct and monitor the study.

ABC has contracted with the hospital and the study's clinical investigators to perform the study. ABC pays the hospital X dollars for each subject enrolled. The money is kept in a research fund and is used for the direct costs of conducting research, such as maintaining a research office and providing salary support for study coordinators and monitors. Either the investigator does not receive direct payment, or ABC pays the investigator X dollars for each subject he enrolls in the study. The investigator's own direct cost is X dollars for each subject enrolled, and he plans to use the funds for it.

Similarly, any ownership interests or beneficial interests that the investigator or the institution has in the sponsoring company should be explicated in similar language.

Thank you for affording me the opportunity to comment on these questions and share my committee's approach with you.

DR. MADDOX: Next we will hear a statement presented by Dr. Holder Baumgartner. Dr. Baumgartner is representing the Research Ethics Committee of the Innsbruck University, Austria.

DR. BAUMGARTNER: Dr. Maddox, Dr. Nightingale, thank you very much for allowing me to address this audience.

I have also been asked by the President of the World Federation of Neurology, Professor James Toole of Wake Forest University, North Carolina, to speak on behalf of that association, and also on behalf of the European Federation of Neurological Societies.

I would like to make two points. Please consider that whatever you decide in the United States will sooner or later have an impact on patients and research subjects worldwide. Why? Because the USA has entered into a treaty called ICH, the International Conference on Harmonization.

This is an agreement between the United States, the European Union and Japan, and it aims at the mutual acceptance of clinical trial data for regulatory purposes, for the registration of drugs by FDA, the European Agency and (word lost) Japanese.

This means that competition for industry-sponsored drug trials is on, worldwide, and it means competition for money, for dollars. So the potential for conflict of interest concerns not only the investigator, the IRBs and the institution; it now exists between countries and even regions.

Please consider the following for yourself. Let us assume an Eastern European country, an impoverished country, where we test an anti-epileptic drug. The trial data are accepted by the FDA or the EMEA. What does that mean? A, it means the patient, the trial subject in this country, would in all likelihood not get first-class diagnosis or treatment without the clinical trial.

B, the $2,000 fee for the investigator is more than one year's salary. As a consequence, everybody is extremely compliant, and the data will be excellent. By the way, the life expectancy there might be 58 years for males, whatever that means. When the drug is registered, this country will not be able to afford it because it is too expensive, at least in the foreseeable future. To me, this amounts to exploitation of risk. The quality is all right, but the risk is exploitative, contravening the principle of justice. That is how it looks in the international context.

The second point I would like to raise is the ICH-GCP guidelines. You have your guidelines, the Europeans have their guidelines, and now we all have together the ICH-GCP guidelines. In the process of producing the ICH-GCP guidelines, an interesting change happened to the European guidelines.

Chapter 1.6 of the European guidelines says the IRB or, as we call it, the research ethics committee should be asked to consider the following (words lost) the extent to which the investigators and subjects may be rewarded or compensated for participation. These guidelines have been in force in Europe since 1997; we are talking about quite recent things. This stipulation has been dropped and no longer appears in the ICH-GCP guidelines. This means that the parties responsible for these guidelines, the United States, the European Union and Japan, agreed not to make financial disclosure of this kind obligatory as part of the review process.

If you are interested in ICH, check on the Internet; search for ICH and you will find something. You will find out about its composition, and you would be surprised in whose hands the administration of this worldwide harmonization process rests. I wish you good luck.

Summing up, science is an international endeavor. It is a shared treasure of humanity. Companies act globally. Investigators, institutions, even countries are in competition, but usually they have a local viewpoint. So it is up to the political representatives of our democratic countries to make sure that the values we stand for, human rights, are observed properly in a globalized research environment. Active participation of patients, of patient advocacy groups and of investigators, which as far as I can see is largely absent from this whole regulatory process, is in my opinion essential.

Congratulations to the organizers for having brought together all the key players, and not just the quarterbacks, the regulators and industry.

Thank you.

DR. MADDOX: Our last speaker is Dr. Marie Cassidy. Dr. Cassidy is professor of physiology and experimental medicine at George Washington University. She will be representing today in her statement the Citizens for Care and Research of New York, New York. Dr. Cassidy.

DR. CASSIDY: Thank you. I should say, by way of perspective, that I have been a medical educator and research investigator for almost 40 years. I am speaking today for Citizens for Research and Care. That background encompasses almost every aspect of what is the subject of this conference.

We are a small unfunded group, currently pure as the driven snow, which tries to track and keep a database of matters affecting not only the recruitment of subjects into trials, but also their care, their monitoring, the interface of the physician-healer, investigator and pretty much all the aspects we have been asked to look at today.

With respect to the questions that were asked, we believe that disclosure is essential. Its form would depend on the level of the trial, the institution, and the resources of the institution. Disclosure certainly should be made to the institutional entities. Depending on the volume of trials, disclosure to prospective patients could be implemented in different ways.

One of the things that we do believe is that any new measures that are undertaken should be prophylactic and preventive. That is, they should be designed to prevent any abuse that might be likely to happen. Possibly because the regulations or oversight measures currently in force were designed for wiser, gentler, kinder physician-investigators, the approach that is now often taken is post hoc. It is investigative, with all of the legal implications; it obviously draws enormous media attention, and the design is usually limited to punitive sanctions of one kind or another. A preventive approach -- and we have heard some excellent examples this afternoon -- would be better.

Obviously, as people have mentioned, there is no longer a firewall between traditional academia and industry; universities have become co-capitalists. As a member of my own institution, I know this is a matter of survival for academic medical institutions, and if they don't survive, there won't be clinical trials.

I think there are some very innovative approaches and solutions out there -- much as one might in the computer world, having firewalls between certain segments and having overall oversight at some point, and not necessarily a federal bureaucracy to do that.

Now, there is one thing that nobody has mentioned today. We have mentioned the Bayh-Dole initiatives to bring research to the marketplace much faster. In 1966, the Animal Welfare Act was passed, and I can't tell you how profoundly it affected the lives of people who do bench research with animal models. We do not have a national human experimentation act, and my group believes that that is a proposition worth looking into.

For instance, were we required to come up with a census of how many animals, and what kinds of research, are actually involved right now in the year 2000, we probably could. I do not believe anyone knows how many subjects are enrolled in clinical trials, what the outcomes are, even in profile terms, or how many trials, both private and federal, are being undertaken.

CR Care believes such a national human experimentation act is a topic which should at least be discussed, covering all of the issues that are up at this conference, but also the welfare of people who are enrolled in trials and what happens in the absence of nationalized health care, when someone is in a trial and there is no aftercare. That is the kind of information we do not have, and should have.

Thank you.

DR. MADDOX: We would like to thank all of our speakers for their thoughtful, enlightening and very challenging comments. As I mentioned to you earlier, these statements will be a part of the official record, and I want to thank the speakers in particular for adhering to the time constraints.

I also wanted to note that the period of public comment has been extended through September 30. So if you want to get your comments out there -- they will be posted on the Web -- you have until September 30 to do so.

Again, we want to thank you all for your participation, and to remind you that there will be six concurrent breakout sessions after this session is ended. I wanted to bring to your attention where these will take place.

There will be six of them. The first will be in Rooms E-1 and E-2, chaired by Dr. Lana Skirboll. The second will be in Rooms F-1 and F-2, moderated by Dr. O'Rourke. The third will be in the auditorium here, moderated by Dr. Temple. The fourth will be in Balcony A, moderated by Dr. Lappe. The fifth will be in Balcony B, moderated by Ms. Russell-Einhorn. The last of the six will be in Balcony Six, moderated by Dr. Livingood.

Again, thank you all for your participation. It was very exciting to hear the comments from the presenters and to have them be so challenging. We look forward to meeting with you in the breakout rooms.

(Whereupon, the session was adjourned.)