
THURSDAY, APRIL 20, 2006


Session 4: Children and Clinical Research

John Lantos, M.D., Professor of Pediatrics and Associate Director, MacLean Center for Clinical Medical Ethics, University of Chicago

CHAIRMAN PELLEGRINO:  It's time to reconvene.  We're actually five minutes late, but we'll get back on schedule.

Our next speaker is Dr. John Lantos, who is Professor of Pediatrics and Associate Director of the MacLean Center for Clinical Medical Ethics at the University of Chicago.

Dr. Lantos is going to address us on the question of children and clinical research, continuing something of the discussion we've had a little bit earlier.

John.

DR. LANTOS:  Thank you.

So I'm a general pediatrician and a former associate chair of the IRB at the University of Chicago, and that discussion at the end of Bob Levine's talk was pretty interesting, because I quit being chair of the IRB after they imposed the regulation on us that the IRB chair had to read, at a convened meeting, all of the amendments that had been submitted.  And if any of you have ever been on an IRB, the amendments tend to be wording changes in the informed consent form.  There would be 200 or 300 every meeting, and at the end of every meeting the chair would go through them, reading, "This was submitted.  Here was the old language.  Here is the new language," everyone yelling "aye," approved, and on to the next one, while the rest of us would go get coffee or cookies.  And this was read into the tape recorder so that when we were audited it was clear that this was discussed.

So the concerns about IRBs and their current function in relation to their mission, I think, are important ones to address.

In my own work, I look at innovative therapies in pediatrics, things like neonatal intensive care, growth hormone, cancer chemotherapy, bone marrow transplants and that sort of stuff, and really focus my scholarly work on that gray zone just beyond what's usually considered research, sometimes called innovative therapy, and on some of the dilemmas that arise in that area.

So I'm going to talk today about where I think we are in the regulation of pediatric research, where I think we should go, and come out at a little different place than Bob Levine came to.  I think there are some important things to be done that this Council might be able to speak to, and I'll tell you where I'm going so that you can then listen wondering how I might ever eventually get there.

I think today — and this is partly in answer to Leon's question at the end — I think today we have a marvelous set of moral guidelines for research in children and people like Bob Levine helped develop them, and a lot of the people who he cited helped put them together.  And they're marvelous, in part, because they're half written and, therefore, require interpretation in order to be operationalized.

And unfortunately, there are only about a couple dozen people in the country who understand them and are familiar with them, and they meet regularly and write articles about them and regularly disagree, quite often passionately, and they'll have discussions about 45 CFR 46 and Subpart D and what the latest 407 Committee did, and everybody else will sit there shaking their head going, "What in the hell are they talking about?"

(Laughter.)

DR. LANTOS:  But they're sort of important, subtle distinctions about what sort of research is permissible, and as a result, there's a set of robust but arcane understandings, held by this small group of highly trained professionals, which incorporate ethics but don't necessarily get applied in day-to-day work.

In a way it's a little misleading to have someone like Bob Levine come here speaking as if he's a typical IRB chair, since most IRB chairs did not participate in this process and are not steeped in the history of the ethics or where it came from.

Our chair at Chicago is an anesthesiologist who proudly claims to know nothing about ethics and who works mostly with the compliance officer in making sure that the institution is protected, from a risk management standpoint.

So what we have is a good set of rules and an abysmal system for operationalizing the rules, and that's, in my opinion, where the action is and what this Council should think about ways to address.  So let me explain how I think we got there and then offer some specific suggestions about how we might address the problem.

So the modern debate about the ethics of children's participation in biomedical research began as a theological debate.  Paul Ramsey and Richard McCormick, both of whom were rooted in specific religious traditions, but neither of whom was arguing on the basis of their particular theology, framed the issue, and this was in the late '60s, early '70s, as one of finding the balance between the obligations of adults towards children and the obligations of children to their community.

In a sense, their differences about where we should come down on this reflected their different views, in my opinion, of the meaning of children's innocence.  In Ramsey's view, we should treat the child as radically innocent and pure and, therefore, as someone with different obligations to the community than those demanded of adults with full-fledged community membership.  To be a member of the adult community is to be autonomous, and by implication both selfish and also capable of sacrifice or altruism for the community.

And this was important because for Ramsey, participation in research was seen as an altruistic sacrifice.  Innocent children were seen as in need of special protection from the moral taint of selfishness that adults suffered from.

So the moral dilemma of selfishness versus altruism was something an adult could struggle with, but a child could not.

McCormick's contrasting view was that the child was already a full-fledged member of the moral community even though he or she was unable to exercise some of the prerogatives of that membership, and as a full-fledged member, he or she deserved the same protections as an adult, but not more protections.

And more importantly, he or she had obligations to that community.  In particular, the child had an obligation as would an adult to participate in relatively low risk research projects that could benefit others.

So McCormick's view was different in that it shifted the notion of obligation and made it reciprocal from day one.  Note that neither theologian based his argument on any notion that there was benefit to children, or to the child who was participating in a research protocol, although both talked about the possible benefits to the community of children.  Research was seen by both as an endeavor in which the risks to the subject inevitably outweighed the benefits, and therefore as a selfless, self-sacrificing, altruistic act.

So Ramsey's view led to the conclusion that it was always wrong to conduct research with children as subjects, although sometimes we should do so anyway with trepidation but without, as it were, a clear conscience.

McCormick's view seemed to be that it was sometimes right because children, like the rest of us, have moral obligations to the community, and it's sometimes right for each of us to make sacrifices for the community.

Now, a number of things have happened in the world of research and research ethics since this Ramsey-McCormick debate that, in my opinion, shift in significant ways our understanding of the proper response to this dilemma of research involving children.  But their central insight remains powerful.  The fundamental issue is one of the relationship between children and the community, and the questions should be: what, if anything, do we owe our children; what, if anything, do they owe themselves and us; and what can we do theologically or ethically or legally or politically or economically in order to make sure that these obligations are duly and fairly discharged?  How do we best take care of them so that they will some day grow up and be able to take care of themselves and, as these things often go, take care of us some day as well?

Responses to this question, in modifications to the Ramsey-McCormick framework over the years, have been framed in three different ways:  one that I will call utilitarian, one that we might call epidemiologic, and one that is more bioethical.  And I'm going to talk a little bit about these three revisionist framings, which present the dilemma in terms that are somehow, I think, more conducive to, say, a secular society's preference for scientific or utilitarian language or other non-theological conceptual framings.

The utilitarian framing of the issue looks at the net welfare for children, and we talked a little bit about that in Bob's session.  We say that the community of children overall would be worse off if no children participated in research, and then by a sort of implicit calculation that is the central and centrally problematic feature of all utilitarian reasoning, we argue that since each child is a member of the community of children, each might be harmed by the use of inadequately studied medical interventions, and that each child would or could be incrementally worse off if no research gets done.

So the direct and immediate harms of participation in research are weighed sort of metaphysically against the potential benefits that might accrue if the results of that research benefit each particular child.  We carry out the metaphysical calculations in our head and end up with the result that some research must be permissible as long as it doesn't involve too much risk, or as long as the benefits to the individual research subject outweigh those risks.

And then we go to great pains to try to precisely quantify the risks even though we know that the very nature of research dictates that the most important risks may be unanticipated and, therefore, unquantifiable.

Sincere attempts to balance risks and benefits in order to minimize the net risk and to stipulate some threshold of permissibility seem to me to be the correct solution to this problem.  This is the line of reasoning that generated the federal regulations, which talk about minimal risk, and minor increase over minimal risk, and some increase over the minor increase over minimal risk that leads to 407 committee review, and those regulations incorporate, in my opinion, deep moral insight and are full of wisdom.  I even like the vague fuzziness of terms like "prospect of direct benefit" or "minor increase over minimal risk," and the wisdom here seems in part to be the wisdom of moderate ambiguity.

I was reminded of this at our Seder this year, where we were talking about what constitutes a cup of wine, since there is a moral obligation to drink four cups of wine, and at least according to one source the rabbis gave a precise answer to that.  A cup of wine has to be at least the size of two olives.

(Laughter.)

DR. LANTOS:  And that sort of answer that is a non-answer but that gives, you know, an indication, some guidance, is, I think, what we're looking for.

Another modification of the Ramsey-McCormick framing of the issues, and the one that I call epidemiologic, has been through discussion of a concept that I called, in a paper I wrote that's in the notebook, "The Inclusion Benefit."  And the concept of inclusion benefit is both an empirical one and a theoretical one.

Empirically, many people have observed that subjects in clinical trials who are in the placebo group do better than patients who are not in the trial at all, even though the patients outside the trial are, it seems, receiving the same treatment as the subjects in the placebo group.  And yet the subjects have better outcomes.

Why do they have a better outcome?  A number of explanations have been proposed.  All remain unproven, speculative, highly theoretical.  One explanation is that placebos actually work.  Another is that the accoutrements of study design, monitoring, standardization of care, attentiveness to side effects and that sort of thing are what lead to the improved outcome.

Some believe that it's all selection bias or artifact, that subjects who are recruited into studies are in some fundamental way different, even though they're not supposed to be, from patients who are not.

And all of these explanations and a bunch of others seem plausible, some more plausible than others, but epidemiologically it doesn't really matter because epidemiology doesn't necessarily seek explanations, just associations, and there seems to clearly be an association between inclusion in a clinical trial and better outcomes.

Now, the underlying ethical question raised by this concept of inclusion benefit is one of the true nature of the risk of participation in clinical trials.  If, for whatever reason, study subjects who receive standard therapy actually do better than patients who are not in the study, then the whole basis for the concerns of the Ramsey-McCormick debate, and the whole justification for the regulatory apparatus that those concerns spawned, based as it is on the idea that research is riskier than therapy, becomes irrelevant, invalid, and misguided.

Instead, research subjects are in a safer condition than patients receiving therapy.  They don't have to be altruistic.  Instead they are rationally self-interested.  They don't participate out of moral obligations to the community.  They don't need to be protected.  Instead, they should be required to give informed consent not to participate in a clinical trial, and children, whose best interests must be protected in ways that adults' need not be, might be mandated to be enrolled in clinical trials even against their parents' preferences.

Parents who refuse may be thought of as neglectful.  I push this to an extreme, but maybe not an outlandish one because I believe that in general, and across the board, subjects in clinical trials are today safer than the average patient getting treatment either in the hospital or in the out-patient setting.

And if that's true, then the way we're going about regulation is at least misguided, if not worse.

There's a third revision to the Ramsey-McCormick paradigm for thinking about research risk, one that might be called the moral dictates of equipoise argument, and it goes like this.  If a doctor or the medical community is genuinely uncertain about which of two treatments is best, then the most rational and efficient way to ensure that each patient gets the best treatment is to randomize patients, collect outcomes, analyze the data, and alter treatment choices only after one treatment has been proved superior to the other.

By this argument, randomizing as soon as possible and with each and every patient is the best way to do what's in the patient's best interest, and this is an approach that is essentially used in pediatric oncology today, where virtually all patients are treated on protocols because doctors believe and parents come to believe that the protocols are, in fact, the best way to make sure each patient gets the best treatment in a situation where doctors are genuinely uncertain whether Treatment A or Treatment B is better.  Randomizing them maximizes the chance for a good outcome, and by this approach, the Ramsey-McCormick sorts of conflicts disappear as well.

Interestingly, the concepts of inclusion benefit and the argument from equipoise, powerful as they are, have not changed the prevailing regulatory paradigm about research risks at all.  If anything, the paradigm seems to be moving in the opposite direction, imposing more and more protections, still regulating research as if participation added risks for subjects rather than diminishing them.

This seems irrational, but perhaps it isn't.  Perhaps instead it reflects a deeper fear of research than the one we generally analyze and regulate, that is, the fear of direct physical or psychological harm to study subjects.  And what might that deeper fear be?

I think it's a fear that has come up in a lot of the deliberations of this Council, that somehow the whole medical research enterprise suffers from a sort of hubris that we are up to no good, that we are going to get ourselves into trouble, that mad scientists are going to hijack not just our tax dollars, but our cherished notions of what it means to be human.  It's a fear that we're trying to steal fire from the gods, that we're upsetting the order of nature and that we will be punished for it, fears that perhaps medicine is not working to enhance human dignity but to undermine it.

And by this view the only way that we can avoid punishment is by recognition and anticipatory expiation of our sinfulness, by identifying it in our research enterprise and ruling it out.  We don't use hubris language generally, but we are obsessed with risk in ways that go beyond quantitative analysis of what these risks might be.

Even if you don't think research is beneficial, the operationalization of these concepts of minimal risk and minor increase above minimal risk has a quantitative absurdity to it that was highlighted in a recent paper in JAMA.  The authors tried to quantify the risks of everyday life, which is the standard against which participation in research ought to be measured.  They took all of the data on known risks of being in research projects, based on reported adverse events, and compared them to things like riding a bike or riding in a car or playing on a school football team, and found that our tolerance for risk in research is probably an order of magnitude lower than our tolerance for the actual risks of everyday life.

And part of what we're doing, it seems, if this sort of fear-of-hubris and seeking-of-absolution argument makes any sense, is arbitrarily designating one domain of medical innovation, the stuff we call research, which takes place in formal research protocols, as the only place where this sinful hubris rears its ugly head.  And then we treat those who engage in research as if they were doing something morally much dicier than what doctors doing routine clinical care are doing, and as if they need to be carefully supervised.  They are not safe, and by implication everybody else sort of is.

Now, even a cursory look at what goes on in pediatric medical centers today shows what a precious and inadequate view this is.  Most of what we do in tertiary care pediatrics today is radically and thoroughly nonvalidated.  Most drugs used in children's hospitals today are used off label.  They've never been studied in children and are not approved by the FDA for use in children.

We don't know, for example, such basic things as the proper dose of oxygen for a premature baby, or the best blood pressure at which to maintain those babies, that is, what use of dopamine or dobutamine or other pressors is best, or the best way to treat pain for children with complex chronic conditions or psychological problems.  We don't know the best medications, though we use scores of them, or the drug-drug interactions that might arise.

Nevertheless, we use such interventions willy-nilly, treating problems in ways that may make children better or may make them worse, and we do so with impunity as long as we don't have the hubris to think that we might better care for children if we studied the effects of these interventions in order to use them more rationally.

Better research could answer many of the questions, but oddly, we place daunting regulatory barriers in the way of those who would study those matters, but none in the way of those who would use the interventions without study.  Or as one doctor put it, "I need IRB permission to give a nonvalidated drug to half my patients, but not to give it to all of my patients."

As long as one doesn't have any desire to learn, that is, to create generalizable knowledge, one is free to do whatever one wishes.

The implicit moral premise seems to be that harms that result from our ignorance are more forgivable than harms that result from our attempts to lift the veil of ignorance.  Ignorance, it seems, has a pristine naturalness to it.  Attempts to gain knowledge have a corrupt ambitiousness to them.

I wish I could phrase these matters in terms less hoary or spiritual, but it seems that the phenomena in question, the actions and the descriptions of those actions that come down to us, aren't well explainable in other terms.

Now, my framing of the argument in these terms suggests something perhaps about my own view of where we should go from here.  My colleague in Chicago, Lainie Ross, recently wrote a book about ethical issues in pediatric research and her subtitle to the book was "Access Versus Protection." 

The subtitle implies that these are opposing choices, that if we enhance children's access to research, we somehow diminish their protection and place them at risk.  I fundamentally disagree with that framing.  I think the best way to protect children from the risks of uncontrolled medical innovation is to make pediatric clinical research easier to do.

That doesn't mean it should be unregulated.  It means it should be better regulated.  How might that be achieved?  How might we nudge the system that we have now towards a more generous, less inhibiting approach to research regulation?  I think we are today at a very interesting point in the public policy regarding research regulation in children.  As we discussed in Bob's session, we understand the problem.  We have some good and sophisticated federal regulations.  There are quite thoughtful debates about how best to implement these regulations, about what a minor increase above minimal risk or something might actually mean.  There's little disagreement today, I think, about the underlying paradigm.  Nobody really wants to blow the whole thing up and start over.  But what we need today is a way to best operationalize these standards of risk and the balance between access and protection, and an efficient implementation of the procedural paradigms that will help generalize the knowledge that is now, in my opinion, held by this small coterie of experts in the field.

What we need, in short, in my opinion, is to develop a more formal and robust casuistry of research ethics.  We need case-based studies that open up to us the choices about how the principles should be applied in each specific protocol, and we need to find a way, or a better way, to capture and to institutionalize the wisdom of these smart people who have thought a lot about this, in ways that, as Richard Epstein might put it, increase net social utility.

This can only be achieved through a formal, transparent and trustworthy system of case review, commentary and adjudication, something that does not exist in the current system of research regulation.

The balance to be struck now then in my opinion is quite different from that imagined by Ramsey and McCormick.  It's not about sinning bravely.  It's not about deciding when children should be exposed to risks for the benefit of others.  We've moved beyond that.

Instead what we seem to be debating today is the proper locus of regulation, whether it should be decentralized local regulation as embodied in the IRB system where each IRB sort of has its own autonomy, or whether there should be a more centralized and, therefore, consistent but distant approach that is some sort of appeals process for IRB decisions, reporting of the rationale behind decisions, and opportunity for public discussion.

The former, the local IRB approach, seems more like what bioethicists in the '60s and '70s had in mind, and their idea was that there would actually be sort of moral deliberation taking place at the IRB level about these protocols.  As we heard in the earlier session, we've moved away from that.

The latter approach, the one that I'm advocating, seems more legal than bioethical, and generally I think anything that's more legalized is a bad idea.  But unfortunately, I think what we've ended up with today is the worst of both worlds.  That is, we have nonaccountable centralized control, the federal Office for Human Research Protections (OHRP), which as best I can tell answers to nobody, but which oversees local IRBs that operate with inconsistency and idiosyncrasy.

Many studies show wide variation in the way local IRBs actually apply these regulations to things like the question that came up:  what counts as a minor increase over minimal risk?  People have done studies where they have sent various protocols to IRBs and shown that they're all over the map on whether a particular protocol would pass or not, based on their understanding of the criteria.

And these are published in JAMA or other peer-reviewed journals, and everybody goes, "Huh, yeah, it's random."  Not much you can do about it.

Many investigators, I think, correctly view IRB chairs as despots with absolute power and minimal accountability, and even if they are benevolent despots, there's no way to know it.

IRB chairs tend to view OHRP the same way.  That is, they see their task not as applying their own moral insights, but as protecting the institutions from random audits and the fear of draconian punishments for relatively minor transgressions, where, say, not reading out every amendment to a protocol at a fully convened meeting leads to the shutdown of entire institutions and all of their research protocols.

And so what we've come to is a climate of caution, self-censorship, and the domination of institutional risk management approaches over careful moral reflection.  It's a counterintuitive result, because the rules that led to this system were meant to preserve the local control of IRBs, to make them exquisitely sensitive to local conditions in a way that a federal or centralized IRB might not be.

But what they have led to is this inconsistent application of highly technical federal regulation in ways that are not reviewed, not disclosed, and therefore, not available for public critique.

The reasons why we have ended up with this are complex, but let me just briefly suggest two, and then I will try and finish up and hopefully have some discussion.

One of the reasons is that the mandate for IRB review is an unfunded mandate.  So IRBs are staffed by volunteers from academic institutions, and that may have made sense when the process was more informal and the group could meet.  But as it works today, with every protocol having to be reviewed every year, with every amendment having to be reviewed, with every adverse event having to be reviewed, we have an IRB that used to meet once a month, but the meetings were running ten hours.  So now they've split up into three separate IRBs, and even with that they've gotten the meetings down to about four hours.  There are about 15 faculty members on each of the three IRBs putting in four hours a month of committee time, plus all the review time, with no compensation.

And I think you get what you pay for.  In this case it's amateurish, idiosyncratic, and often bizarre reviews or the jobbing out of the real work of the committee to a paid staffer who is usually basically a compliance officer.

So that's one, the unfunded mandate.  If this system is going to work, there has to be some way to put some funding in it to make it work better.

The second reason I think they don't work is sort of more interesting in some ways from a legal perspective, and that has to do with the reason why IRBs preserve privacy and confidentiality.

It's hard to come up with a good explanation for this that's framed in terms of the protection of research subjects, and the usual explanation is that it is necessary to protect the careers and the intellectual property of the investigators.  There's no other reason I can think of to make these deliberations confidential, and that's possibly an important concern, but to the extent that research in children is to become a more publicly accountable enterprise, there must be ways to protect investigators' intellectual property that are consistent with a more open, transparent, and publicly accountable review process.

So in place of the current system, what I would propose is a system that is more formal in its adjudication methods, but also more transparent to research institutions, to investigators, to parents in the case of pediatric research, and to the public.  In short, we need something like an appeals court system, a system of what might be called pediatric research courts, and the goal of such a system would be to help researchers apply agreed-upon and quite satisfactory standards.  It would do this by hearing cases, publishing rulings, establishing precedents, and generalizing interpretations in a way that was truly public, meaningfully accountable, and transparent.

Such a system would have to be collectively subsidized in a transparent way, too, because it would certainly cost money, but I think it would be a tangible expression of our commitment to protecting children from both the risks of research and the risks of unstudied innovation.

And in that sense, it would return full circle to what I see as the central question raised by Ramsey and McCormick:  What do we owe our children?  And it would answer it by saying we owe them a functional system for the oversight of pediatric research that maximizes the opportunities for children to receive the latest treatments, the best treatments, and the safest treatments, even if and especially when these treatments are given as part of a carefully designed, carefully monitored, and adequately reviewed research protocol.

In my opinion, we owe our children this much at least.

Thanks.

(Applause.)

CHAIRMAN PELLEGRINO:  Thank you very much, John.

Dr. Lantos' paper is open for discussion.  Dr. Meilaender.

DR. MEILAENDER: Thanks very much.  That was nice in a lot of ways.

But I have two questions that may sound to you as if they are willing to reckon with blowing up the whole system, which you clearly don't want to do, but they have to do with the inclusion benefit, your discussion of that, and with the equipoise issue.

You said about what appears to be the case with respect to the inclusion benefit that it really makes the issue in that earlier debate irrelevant.  I don't actually think you believe that, because of some things you wrote in the pieces that we had.  But nevertheless you did say it, and I mean, I understand why you're saying it.

But doesn't saying that depend fundamentally on believing that the only important moral question is the one that has to do with harms rather than wrongs?  At least in some ways of thinking morally, it is possible to wrong someone without harming them.

And the structure of your argument with respect to the inclusion benefit is a structure of argument that deals entirely with the issue of harm.  As it turns out, they're not really harmed, so they can't be wronged in any way.  But it might be that using subjects who cannot consent in any ordinary way wrongs them nonetheless.

And that was, of course, part of that earlier debate.  I don't think you actually think that's irrelevant because you recognize in your written work that there's a difference between seeking generalizable knowledge and giving particular care to patients, and if I use my patient for generalizable knowledge, I might be wronging them even if in another sense they weren't harmed.

So I'd like you to think about the structure of that argument and whether the whole basis of your argument on the basis of the inclusion benefit already presupposes a certain approach to the moral life that may or may not be acceptable.  That's the one.

The other with respect to the equipoise issue, I sometimes wonder, you know, whether any two possible treatment modalities are ever in equipoise for a particular patient or how one knows that, but in particular, you mention that most of the care in pediatric oncology is done through randomized trials of one sort or another.

I remember reading an article a few years back about breast cancer research with adult women in which in several cases the researchers were delayed by years having trouble doing the research because they couldn't find women willing to be randomized in it, but the article mentioned that this wasn't a problem in oncology research with respect to children.  They had plenty of research subjects.

And it struck me that there's a reason for that.  The risks or whether the risks are not even — I don't know what the right way to describe them is, but the procedures that we're unwilling to run for ourselves we seem quite willing to let children run, and if you think about that, it raises once again the question about whether the issues at stake in that earlier debate don't remain real issues in a way.  I'm not saying necessarily they must be resolved in one way, but I do think that they haven't become irrelevant.

So maybe you could comment on both of those points.

DR. LANTOS:  Thanks.  Those are great questions.

Harm versus wrong, I mean, to a certain extent my discussion today focused on research that has a prospect of direct benefit, and I think the farther you get away from prospect of direct benefit the easier it is to imagine a wrong that might not necessarily have a harm associated with it.

I'm trying to imagine a research protocol, and maybe you can help me if you have one in mind, where there was a prospect of direct benefit, there was no harm, but a child would be wronged in some way by participating.

DR. MEILAENDER: Well, if you were Ramsey, unconsenting touching was a wrong whether it resulted in any harm or not.

DR. LANTOS:  Even Ramsey wouldn't have considered clinical care a wrong.  I mean, every day I touch children who don't consent when I do my physical examination on them and they scream and squirm and try to get away.  I'm not sure how my gathering data through my physical exam that I then try to generalize would sort of kick me up a level of wrongness in Ramsey's view.

DR. MEILAENDER: Well, he thought their parents had reason to consent to that because it was part of parental nurture of their well-being.  Now, you may be able to be on your way to an argument that placing my children into a clinical trial, at least if they're of an age where I can explain to them that they're helping others, would be similar parental nurture, but that's what made the touching okay there.  It wasn't anything to do with the consent of the child.

DR. LANTOS:  Well, in that case, if the concept of inclusion benefit has some validity, and the parent gave consent for a child to be in a pediatric oncology protocol because they correctly believed that that was the best thing for their child and, in addition, we would gain generalizable knowledge, then, if that's what Ramsey said, I would disagree that that's some incremental wrong.

Let me say one other thing though about inclusion benefit that's important and may explain a little better why I don't think the notion of inclusion benefit means we should blow the current system up.

I think part of the reason why there is an inclusion benefit, part of the reason why it's safer to be in research protocols, is that research protocols are highly regulated.  So a world in which we said, "Oh, it's better to be in protocols than not, so let's get rid of IRB review and all of this stuff about making sure protocols are safe," would clearly be nonsense.

Simply putting everybody into randomized controlled trials would not achieve that end.  Putting people into randomized controlled trials that have been scrutinized, that have been reviewed, I think, would.

As far as the argument for equipoise, I'm not sure I'd draw the same conclusion.  I mean, I think about people willing to subject their children to risks or wrongs or harms that they're not willing to subject themselves to.  I mean, I think the breast cancer example is an interesting one because I think if we're talking about the same study, it was randomizing to a lumpectomy or mastectomy, and I think what was driving that was people's assessment of the cosmetic results and saying, "I don't care if these two therapies from sort of a five-year survival perspective are in equipoise.  From the perspective of how I'm going to look afterwards, they're not, and that's why I'm not willing to be randomized."

So in that sense, the researcher's failure to take that into account as part of what should go into their equipoise calculation just made it a badly designed study, in my view.

I think there are clinical situations where it's probably impossible to get a group of people who are in equipoise.  It is an incredibly fragile and somewhat ambiguous concept, but I think the fact that parents are willing to sign their kids up for cancer chemotherapy protocols doesn't necessarily mean they're willing to allow their children to undergo risks that they wouldn't be willing to undergo themselves.  It may mean that they're making better decisions for their children than they make for themselves, as parents often do, I think.

CHAIRMAN PELLEGRINO:  Dr. Kass?  Oh, I'm sorry.  Paul.

DR. McHUGH:  This may be a question that's only the product of a long day, and since I've had a lot of the experiences that you've touched upon, and that have been touched upon before, on IRBs and their relationships to institutions and external forces and the like, I reverberate with all of the things you're saying.

For example, not to go too long on this matter, but I was on an IRB about 15 years ago for a number of years, and it was a wonderful experience because it introduced me to all of the different investigators (and investigations) that were going on in the institution from across all kinds of disciplines, and it was terrific.

Not only that.  I appreciate my institution better.  I also appreciate my fellow investigators and my fellow doctors in the sense of the imagination that they were showing and things of that sort, and things seemed to go quite well.

Now, it's very hard to get anyone to go on an IRB, because not only is it not of this sort anymore, it's a very, very long and arduous process, and doing a bureaucracy's job is very unpleasant for people who see and appreciate that they have much better things to do for the benefit of patients, given that most of us do still have a vocational commitment to helping patients.

So, as well, from what we said this morning, when we see this huge discrepancy between the availability of organs and the demand for organs, it's clear to me that the method that we're using, even if toned up, is not going to solve this problem.  It seems to me the same thing is happening here.

We've just got ourselves stuck into a process which is just going to get worse and worse and worse as the bureaucrats imagine further and further things that could go wrong.  No industry could survive with this method if it hoped to be productive and safe.

The car industry couldn't work this way, given that, you know, it makes mistakes, too, but ultimately makes progress and makes progress sometimes through the mistakes. 

Are we getting to the point where we should be thinking of another approach entirely?  As I say, this might be off the wall at the end, but you know, in medicine now, all of us who are in medicine are accustomed to the Joint Commission's demands on us to look at our practice, and particularly to look at the ethics of our practice, the quality of our practice, and where it goes wrong.

It does it with this root cause analysis that is, by the way, extracted from the automobile industry, and consistent with what you were saying, case studies in which, when something does go wrong, the case is studied from top to bottom, not just for who made the mistake here, but who gave this person authority to be in that position, what were the resources needed to do it properly that weren't made available, and the like.  And ultimately those get circulated around the country, and our medical practice, just like our industrial practices, gets better.

Has the time come for us, appreciating that being part of research is very helpful — by the way, there are plenty of examples where you can show that the placebo group really got better.  The whole study of insulin shock for the treatment of schizophrenia was brought to an end when it was demonstrated that the placebo people did just as well as the treated people, and both of them were doing well because they were back-ward schizophrenic patients who were now brought forward, and people were now worried about their blood sugars for several hours a day, and when you did that, they improved.

And now we have this ridiculous business where Paxil is producing suicidality in a small proportion of children, but no one knows what they mean by suicidality, and we have a black box label on this.  The pharmacologists are not being drawn into it.

I mean, that's a real signal, by the way.  It really is happening out there with certain forms of the SSRIs, but you know, we're not doing what we ought to do and we would if we were running an automobile industry, that is, saying, "Okay.  What's happening here?"

So the long and short of this tirade — I'm sorry about it at one level, and it's probably not very helpful — is to ask, if we are an Ethics Council, can we begin to blow the whistle on a process that is inherently going to run in a particular way and run us out of the business of doing better research for the benefit of our children and for the benefit of everybody else?

DR. LANTOS:  One quick comment, just to be clear, on placebo versus control.  I mean, many studies show the placebo group doing as well as the treatment group.  What I was talking about is the placebo group doing better than the people who weren't in the study at all.  So that's slightly different.

I mean, my solution to the overburdened bureaucratic thing would be to say 90 percent at least of what IRBs do could be done just as well by a single compliance officer.  Ninety percent of it could be done by the IRB administrator, and I'll bet if you did a randomized controlled trial you'd find that the outcomes were exactly the same.

It's basically making sure that the protocol meets fairly straightforward regulations.  So one interesting question is why, given that, we have this system where really expensive people with no expertise gather around and listen to the compliance officer tell them what they have to do.

That seems to be a remnant of an idea of what we wanted to imagine the role of a research ethics committee ought to be, with regard to some of these more interesting questions about harm versus risk and the moral ambiguity of medical progress and stuff like that.

So if there was a way to sort of flag the ones that need that kind of review, you could imagine using the resources of an IRB to focus on those, and those would be the ones that I think might end up going into my appeals process generating essentially case law that could then sort of capture the wisdom of those deliberations in a way that isn't being captured now.

What's happening now is it's both a waste of time while the committees are meeting and then it's a waste of any outcome because it's kept secret and nobody gets to learn from the deliberation.

So I mean, in that sense I think blowing up the system might work, but I guess I would see that more as a tweaking of the operationalization of —

DR. McHUGH:  And the bureaucrats would find some other way to consume your time even more, because it is the work of obsessional bureaucrats to imagine troubles.  I mean, that's what they do, and so they could imagine all kinds of stuff.  And if good cases are studied, studied thoroughly, about what really went wrong, so that you can appreciate how unique those were, and how everybody, once they read about them, would remind themselves of how they should practice and do research in the future, we could accomplish just as much, but it requires a trusting interaction that perhaps is impossible for these people if this method is persistent.

You know, at Hopkins we had a terrible disaster, but once it was really studied, where the mistake happened and what was the error and how the literature should have been better studied all came to light for all of us.  It was a distressing set of events, but it wasn't prevented by the IRB.  It wasn't going to be prevented by making the IRB do more things, which we now do.

Many people still don't know what went wrong at Hopkins in the best sense.  So at one level maybe we're — I'm repeating myself, but this system might just not be the right one for an industry that is so beneficial, as beneficial as engineering and other kinds of things that know how to do this better.

CHAIRMAN PELLEGRINO:  Dr. Kass and  Dr. Foster.

DR. KASS:  There are several things that I could raise with you, John, but let me pick up those quasi-theological remarks suggesting that you might have detected certain things in other works of the Council as possibly explaining why it is that we are so nervous about experimental subjects, through discussion of the fear of hubris and the expiation of that sin.

I, by the way, don't think that has in any form been operative here, but that's another matter.  I don't think that people who worry about the use of human subjects in research are primarily trying to use the research scientists as the focus of their anxiety about where science might be taking us.  Maybe some people do and think about the mad scientist.

I think the major thing really comes down to the human use of human beings, which you yourself address, in fact, toward the end of the article.  As I read the passage — I think it's the one Gil is alluding to — research is considered hazardous because the physicians who conduct research are using the patients as a means to an end.  Their goal is no longer solely the well-being of the patients.  They are also, perhaps primarily, committed to the creation of generalizable knowledge.

And let's grant the humanitarian ambitions.  Let's not dwell too long on the fact that in addition to the humanitarian ultimate goals there are careers to be made and self-advancement which depends upon this kind of research.

I think that what we're dealing with here is not so much nervousness about the ends, but real questions about the means, and about, quite apart from the hazards, what it means to make especially a child an object of investigation for someone else's benefit.

It's very different, as Gil points out, when you bring your child, sick, to the doctor.  There's a laying on of hands with a specific goal in mind.  In the other case, for better or for worse, however lovingly you arrange the situation, however lovingly you treat the child, and however attractive the situation is, the child is a research subject.

Now, clinical trials in oncology, where it is not clear which treatment is better, that's different.  But what about the things that Bob Levine says are somehow ruled out by the old principles, studies in pathology and pathophysiology, studies in neonates, where you have to sort of look at the parent and say, "I don't really know which treatment here is best, but in order for us to get to the point where we can even begin to imagine certain treatments for this, we've got to subject your child, for no benefit to him or her, so that we can gain this knowledge," even, by the way, if no harm comes from it.

Even if no harm comes from it, I as the father of a child would be reluctant, however much I understand the goodness of knowledge, I would be somehow reluctant to make my child an experimental subject, period.

And I think that in your final analysis of the things that we owe our children, there is the wisdom of this comment, and of this distinction that Gil is trying to draw between harms and wrongs — I'm not sure of the language I would use — there are ways of mistreating people even if you don't hurt them, even without laying a hand on somebody.

And I don't think it's egregious.  I think by and large people do a pretty good job.  I'm inclined to think that because the ethics has come through the law, through the regulations, we have now acquired a system which doesn't really do what it's supposed to do.  But I think that before you simply say what we really have to do is get out of the way and let the research go forward, we shouldn't lose clear sight of what the ethical good is here, in addition to harm.  That's terribly important whenever human beings use other human beings for purposes that are not somehow intrinsic to that relation, and especially with children, for whom we have to somehow give consent — I'm talking about small children — give consent for their being your objects.

However much you pretty it up, that's the fact of the matter, and I think that's a reluctance on the part of parents, and it's not because they're hostile to science, suspicious of scientists.  They have something vital to defend.  They don't even want their children to be subjects of voyeuristic experimentation.

I serve on — the IRB I served on was actually in the social sciences division of our university where — if anybody reads the transcript I'll be flogged when I get back — no one can make the argument that the research being done there is of immediate and important humanitarian benefit to anybody.

(Laughter.)

DR. KASS:  They don't try to make that argument.  They try to say they want to understand certain things, maybe in the long run, and people were scrupulous to a fault about things that I thought shouldn't bother anybody.  But when you start talking about taking children of 2 and 3 and even 4 for certain kinds of yearly observational studies where there's no harm at all, you're wondering, what is your attitude to the child, and what are you subjecting him to; what are you thinking about in these things?

It's those kinds of softer things that pale in comparison to the real value of finding cures for Wilms' tumors, but I don't think the people like Paul Ramsey who were concerned about these things were — I think they were onto something, something very important.  It didn't depend on a theological notion of childhood innocence.

I think you can give an account phenomenologically in terms of the experience of being a parent into whose care this new life has been entrusted, and what are you willing to subject them to and for what purposes.

I'm not sure I've done it very well, John, but I think in your writings and your thinking you've spoken more richly about this than you have in the presentation to this point, and I wonder what you say.

DR. LANTOS:  Thanks for pointing out what seems like a contradiction, although I think it's partly contextual, that is, both where it's at in the article and to whom that paper was addressed, in a pediatrics journal.

I think a lot of the problem for most pediatric clinical researchers is they don't see why any sort of regulation ought to be applied:  hey, I'm a good person; I'm just trying to do what's best.

And so I strongly support the need for rigorous oversight of research involving children for the reasons in that article and because I think things like the inclusion benefit only apply if there is rigorous clinical oversight.

But at the same time, I mean, I guess I should throw the question back to you in a way, but in my opinion, we have developed a stringent set of regulations that would prevent most of the risks, most of the harms.  The wrongs, I guess I don't quite understand that concept yet.  So that may be where we'd at least have to talk more to see whether we agree or not.

But I think this regulatory framework, with its ambiguous terms of minimal risk and minor increase above minimal risk, is quite restrictive, takes into account what we owe the children in terms of protection, and what we owe the parents in terms of explanation, getting their permission, their fully informed permission, and the child's assent when the child is old enough to assent, and that those kinds of protections adequately address the major risks that arise in treating the child solely as a means to the creation of generalizable knowledge.

The challenge today, it seems to me, is more one of figuring out how to apply those in ways that people would agree upon, and I mean, I think what will happen here tomorrow when you discuss an actual case protocol is people around this table will have differing interpretations of what those concepts mean, and you may find them inadequate to address some of the things, but in the end, you know, you'll take a vote and it will be eight to six on whether this protocol should be allowed to go forward or not.

And the discussion that you'll have will be quite illuminating and important, and then maybe in this one it will eventually get published, but it will have no precedential effect.  It will have no regulatory effect.  It will be sort of your opinion or this group's opinion or the President's Council's opinion and people will be free to agree with it or disagree with it, and that's been happening now for 30 years.

And so we have these concepts.  They're applied daily in pediatric centers across the country, and the same issues keep coming up again and again, and we don't get anywhere.  So if there are errors of overprotection or denial of access or if there are errors of enrolling children in protocols where they ought not to be enrolled, we don't know it, and we only find out about it in the sort of egregious cases that end up, you know, with the death of a subject or a lawsuit, and bad cases make bad law or bad precedent of any sort.

But a system in which all of the good deliberation and all of the casuistry, all of the case-based reasoning about what a minor increase over minimal risk means, in which the best insights can be reviewed, scrutinized, modified, and generalized so that all children benefit from them, doesn't exist now.

Is that sort of an answer?  I mean it's kind of —

DR. KASS:  Let it be.  It's okay.

DR. LANTOS:  Okay.

CHAIRMAN PELLEGRINO:  Dr. Foster and Dr. Gazzaniga.

DR. FOSTER: Let me change the direction for just a second.  In your presentation there are two things that seem to me to be a little bit — I'm not sure which way you want to go.  The last thing you said was — and I used to chair an IRB myself, like Paul, and so forth — you said, well, probably a compliance officer could do 90 percent of the work, or 95 percent of the work, just to be sure that, you know, the Ts are crossed and so forth.

And then, on the other hand, earlier you said that it might improve the system if we had some sort of — I think you used an analogy like an appeals court or something — in which there would be an increase in transparency, and maybe some way that decisions could be passed on.

I want to understand a little bit more how you think that court could be set up, and whether the fact that a compliance officer could do most of the work would mean that you'd have the freedom to have people serve on such a court, making assessments and so forth.

And finally, if you've given any thought at all about how one might come up with the money to do this.

One of the questions is whether the appeals court is going to be sort of local, or whether there's going to be some published thing, like a law report or an ethics journal or something, that would come out of it.  I just wanted to hear a little more about that aspect, which I don't think we've heard suggested by anybody.

DR. LANTOS:  I don't think it should be local.  I think it should be regional or national, and I think it should have regulatory teeth.  That is, once a decision is published, it becomes binding regulation.  So an interpretation of whether giving injections of placebo to somebody in a randomized trial of growth hormone is a minor increase above minimal risk or not could be adjudicated.  People could disagree.

But it doesn't seem like it serves anybody's interest to have sort of ongoing, never ending disagreement that never gets resolved in a way that has regulatory bite. 

So let's work on this together societally.  If there's a debate, have the debate.  Come up with an answer sort of like the Supreme Court comes up with an answer.  You may like it; you may not, but once they speak it's done, although you can bring another case that readdresses some aspect of the issue if there's confusion.

How would it be paid for?  I mean, the only way to pay for it would be out of tax dollars.  I guess you could imagine user fees or something, but some collective subsidization of this as an important societal function.

What it would do, I think, would be to make explicit the expenses that are implicit in IRBs and their functioning.

CHAIRMAN PELLEGRINO:  Dr. Gazzaniga.

DR. GAZZANIGA: I don't want to leave Paul alone in throwing open-ended questions into the day.  Dr. Levine, that was a terrific talk, I thought, very illuminating, and I want to go to Leon's question, if I may, and try to look at this harms-wrongs issue, because it's a tough one to get your head around, mainly because I guess we're not dealing with two or three examples here that drive the point home.

But let me try something out on Leon.  I want to see if he will see where I'm going and whether this might help illuminate it.  When Leon talks about caring for his child, aren't we really talking about the theory Leon has in his head about the well-being and care of his child?  Isn't that what is really being dealt with in this psychological situation, because the child is obviously oblivious to 99 percent of what's going on here?

So what we're dealing with is your theory of how all of this should transpire, and the harm or the wrong has to do with the potential theory you come up with about what might have transpired or what kind of future psychological damage may occur from the clinical trial you're in.

And it's so hard to deal with that.  Because let's imagine I was your son and I grew up to be who I am.  I would want you to put me in a clinical trial, because hanging around the medical community, you really realize how hard it is to do medical research; it's a completely difficult enterprise.

And I'll use as an example one that Paul can relate to.  I was once at a medical center with two of the world's leading neurologists who could not agree to do a randomized trial on the efficacy of Coumadin because one of them simply refused not to give it to his patients.

And so here we have adults all worrying about a clinical issue that should be resolved and still hasn't been resolved as far as I know, and it's hard to get it off the ground.  It's hard to do it with adults.

And now we're going down to children.  It's hard to do the research that is crying out to be done.  So what I'm suggesting — I'm really asking a question — is, in trying to think through this harms-wrongs issue, are we talking about you, or are we talking about the wrong to the child, or only your theory about the wrong to the child that you're carrying around in your head?

DR. KASS:  That's in contrast to the theory you're carrying around in your head about the difference between your head and mine.

(Laughter.)

DR. KASS:  If it comes down —

DR. GAZZANIGA: I guess where is the locus of the wrong?

DR. KASS:  No, it's late in the day, and I didn't come prepared for this.  My friend Robby will help me out, but let me fish for something for a moment.

By the way, I don't deny for a moment the importance of the research or its value.  I mean, if you're going to treat children, you should know what you're doing.  If you're going to treat anybody, you should know what you're doing.  And there's no substitute for good science to learn how to do that.  We don't have a disagreement about that.

The question is whether we make things easy for ourselves in doing that work by blinding ourselves to certain aspects of those human relations in which the people whom we hope eventually to benefit are, for the time being at the very least — and forgive me for making the point luridly — our guinea pigs.

It's not a nice way to put it, but they are experimental subjects for the gaining of this knowledge, and we can dress it up any way we like, but that's part of the essence of it, and it's much better if you're going to do it to have that clearly in mind.

And by the way, I do think that the regulations were in a way designed with part of that clearly in mind, and they are meant, in fact, to protect against the mere treatment of people in this way and to make sure that the harms to which they are exposed are minimal and justifiable in terms of some kind of benefit, however much you try to do that.

But you in a way raise a larger question about ethical conversation altogether.  I advanced the suggestion that there is a way to do wrong to a person without doing them bodily harm or psychic harm, and you want to know whether this is my peculiarity.  I'll wear it.

But part of what you do in these conversations is you try out certain arguments and examples in the hope that you're not just talking about your own nervous system, I mean, because we're looking for a kind of collective wisdom on this.

Now, I give you a biblical example, which means I'm going to lose with you right away, but —

(Laughter.)

DR. KASS:  — but a wonderful story of drunken Noah and his sons.  Noah has three sons after the flood, you know.  The poor man, it's an ordeal.  He planted a vineyard.  He drank of the grape.  He got drunk, and he lay naked in his tent.

And one of his sons comes in, sees the nakedness of his father, and goes out and blabs to his brothers.  Now, it's a trivial story in some respects.  I think it's a profound story.  The son who goes and uncovers his father's nakedness and blabs about it has participated in the unfathering of his father.  He has ratified that and celebrated that without laying a hand on Noah's head.  He has committed an act of metaphorical patricide.

The father lies there no longer as a father.  He's just a drunken sower of seed, and the son reveled in it.  The two other sons walked backwards with a cloak on their shoulders and covered the father's nakedness, refusing to participate in it.

Now, there's no harm in the sense of bodily harm.  Yet there is a wrong.  That would be a suggestion.

And part of it, it seems to me, is that in our interpersonal relations we want, not only in how we talk to people but even in the way we look upon them, the way we speak to them, the way we relate to them, to somehow do honor to the human person that's there.

Those things are at least as important, I think, to all of us, whether we would immediately recognize it or not, as finding the cures for the diseases that afflict us.

I'm sorry, Mr. Chairman.  I rose to the bait.

CHAIRMAN PELLEGRINO:  Oh, no, no, no.  John and then Dr. George.

DR. LANTOS:  I have a clinical analogy to the Noah story.  In the growth hormone trials, one aspect of the trials was that annually they'd take nude pictures of the kids in the trial against a background so they could measure arm length, and they'd cover the children's eyes with black tape.  One of the big debates in IRB review of this was, first, whether this portion of the study was necessary, and second, whether it was a harm or a wrong, sort of unacceptably demeaning, so that it should be negated.

I think that's an important debate.  So in that sense I think there are wrongs that researchers could propose that would be important for the study.  I mean, all of the endocrinologists said this was crucial, crucial information if we're ever going to evaluate the results.  And I think people should have debates and discussions and decide, given what the goals of the study are: Is there a less invasive way to find it out?  How important is it?  And ultimately come up with a judgment about whether it's a good thing to do.

But I don't think every center every time this is proposed should have to debate it again and again and come up with their own idiosyncratic result that's then binding without having to explain how they got to the conclusion that they got to.

That's all I'm trying to say.

CHAIRMAN PELLEGRINO:  Dr. George.

PROF. GEORGE:  Well, yes.  Thank you, Dr. Pellegrino.

I was originally jumping out of my chair to defend Leon from Michael's question, but Leon proved with that eloquent statement that he certainly doesn't need my help or anyone's help in defending his views on this very, very important issue about the nature of wrong and its relationship to harm and the possibility of there being wrongs that aren't strictly harms.

So instead of jumping to Leon's defense, since he doesn't need it, let me just toss something on the table for Leon and Michael to maybe think about.

Does it in the end come down to this: whether one's essential view of benefits and harms, and of those creatures who are capable of experiencing benefit and harm in the human way, holds that benefits and harms are in the end material or, if not material, psychologistic realities; or whether there is some category of human good, distinct from the material or the merely psychological, and therefore of the privation of human goods, such that this good can be offended against and thus a wrong done to a person in whom the good is instantiated?

And to test that, I might try an example somewhat different from Leon's biblical story since it will put it into the context of research and, indeed, pediatric research, and I also think somewhat different from the example that Dr. Lantos just raised about the children who were photographed in the nude, and that would be something like this.

Imagine that you are the parent of a severely retarded child who is also spastic in some ways, and a research team was seriously interested in understanding the phenomenon of other children ridiculing retarded children who are spastic.  So an experiment was reasonably designed under the terms of which retarded children were put behind a one-way mirror, and other children, ordinary children, were invited to come in, and we could see which ones ridiculed the child and laughed at him and made fun of him and so forth, and you as the parent were asked to volunteer, to contribute your retarded child.

Now, the retarded child, let us stipulate, is not going to know this ever went on, is not going to know that he was ever ridiculed, jeered at by these people, doesn't know anything about the reason for it because he doesn't know it's happening at all.

Now, on a certain non-materialistic and non-psychologistic account of the human good, it's reasonable to think — it would be a reasonable view — that that child, if the parent volunteers him, is being subjected to a wrong.

But on a view that really restricts the scope of our understanding of human benefits and their privations to the material or at least the psychologistic, it becomes puzzling at best and probably kind of superstitious to suppose that that child has been subjected to anything that could qualify as a wrong.

CHAIRMAN PELLEGRINO:  Paul.

DR. McHUGH:  Surely the children that are being encouraged to make fun of him have been done a serious wrong, and —

PROF. GEORGE:  Well, let's stipulate that they're not being encouraged to.  In other words, we're bringing in perhaps a randomized group of children because we're going to try to figure out what it is about backgrounds and so forth that leads some children to ridicule and perhaps other children to look with pity or sympathy of some sort on him.

So we're really sincerely trying to figure this out.  I mean, it sounds to me like not an unreasonable way to do it — I don't do this kind of work, so I don't know — but it doesn't seem to me that that's an unreasonable way of structuring an experiment, if we factor out the ethical dimension of the question.

DR. LANTOS:  In some ways we have a parallel.  We have an investigator at our place who is trying to study how pediatric residents' attitudes toward children with severe disabilities are formed or changed over the course of their pediatric residency, where the equivalent of the other children being brought in in your hypothetical study is the attending physicians, who are hypothesized to be the ones who teach the residents their attitudes.

So I guess the question would be: Would your judgment of that sort of study depend on the goals?  I mean, if the goal of the study was to develop educational interventions that could change discriminatory attitudes for the better, and therefore the research was designed to improve lives for all children with severe disabilities, would that be different than if the goal of the study was to see whether catecholamine levels rose in the child who was being treated in a disparaging way?

PROF. GEORGE:  Well, I think that is a very interesting and perhaps important distinction, but it wouldn't be a distinction that would be relevant to our getting at the question of whether there could ever be a wrong that's not a harm in materialistic or psychologistic terms.

DR. LANTOS:  Right.  It would take the next step: once we had acknowledged there was a harm, the question would be whether there could also be a benefit that balanced the harm.

PROF. GEORGE:  Sure.  It would open a range of questions.  I agree.

DR. GAZZANIGA: I remember, going back three years — how long have we been doing this?  It's four and a half — I remember talking to Paul.  The Council was talking about human dignity, and you know how you rotate us so that no coalitions are formed.

(Laughter.)

DR. GAZZANIGA: Anyway, so I was sitting next to Paul and someone was going on about human dignity, and I turned to him, and I said, "Paul, you work in a hospital.  I've worked in a hospital.  There's no human dignity in a hospital."

Can I quote you?

And you said, "You're damned right there's not.  The first thing you come in and you tell a guy to drop his pants, get on the table.  It's terrible.  You check it at the door."

So there's a context.  I mean, he's being his usual humorous self, but there's a context, and what is being undercut here, and I think what you were trying to get at, is that the guild of physicians is under oath to keep the patient's best interests in mind, and I think 99.9 percent of them practice that.

And we have this whole apparatus around it, produced by rogue doctors, that has made us supersensitive to some issues and is having the result of not allowing solid research to go forward.

And to footnote that, I think the concept of a non-physician bureaucrat running an IRB is the worst idea I have heard this year, because you will take away from it the whole understanding of the medical mission, the whole understanding of medical research, and the whole caring function that is truly interpersonal between a physician and their patient.

So what I'm saying is that if you look from the real context of medical research, knowing that dignity has been challenged by the very nature of our medical procedures, but, on the other hand, balance that out with the commitment that all physicians have made, these questions that we're talking about take on a different color and a different meaning, and I think they're hard to talk about in the aseptic sort of non-medical environment in which we're holding this conversation.

There's something in there somewhere.

DR. LANTOS:  I think it doesn't matter whether it's a physician or a non-physician.  All of the good things you just described have been taken away by the definition of the role, so the physician who does it can no longer pay attention to them.

But the idea of responsible investigators being punished because of the transgressions of the rogues is, I think, an important one, too.  While there are conflicts of interest for investigators, and people build their careers, and there's fame and fortune to be made by doing good studies, there is also the moral imperative to gain knowledge in order to improve the care of children as part of that very mission of the medical profession.  That, I think, is an important part of the motivation of most clinical researchers, and the sole motivation for many of them.

I mean, a lot of the people I know who are trying to do good clinical research aren't going to win any Nobel Prizes.  They're not getting famous.  They get a paper every couple of years, but their motivation is to take better care of their patients.  In the care of their patients, they come across unanswered questions that can be answered by good clinical trials, and often the clinical trials raise the kinds of issues that we're talking about today.  And the regulatory apparatus puts roadblocks in front of those responsible investigators, roadblocks that I think often don't protect subjects and, in fact, leave subjects exposed to all the harms of nonvalidated therapy, even though it's carried out by unconflicted, though necessarily ignorant, well-meaning physicians.

CHAIRMAN PELLEGRINO:  Thank you very much.

We've finished our time.

(Whereupon, at 5:17 p.m., the meeting was adjourned, to reconvene at 8:30 a.m., Friday, April 21, 2006.)