
Transcript of SACATM Meeting - June 18-19, 2008


http://ntp.niehs.nih.gov/go/32984

This transcript was generated automatically and has not been reviewed. Typically, automated transcripts will contain words that were not recognized properly by the software.

Contents

  Day 1

Event ID: 1024969
Event Started: 6/18/2008 8:10:35 AM ET
Please stand by for real-time relay captioning.

Good morning, everybody. If we could take our seats and try to get started.

I would like to welcome everybody to the Scientific Advisory Committee on Alternative Toxicological Methods (SACATM) meeting. I would like to call this meeting to order and start by asking participants at the table to go around the room and give their names and affiliations. Please remember to use the microphone; this meeting is being recorded and webcast.

I am Dr. James Freeman, toxicologist with ExxonMobil in New Jersey.

John Bucher, National Toxicology Program and NIEHS.

Grant Charles, toxicologist, Southern California.

Frank Barile, professor, toxicology division, St. John's University, New York.

Marilyn Brown, laboratory animal veterinarian --

Mary Jane Cunningham, recently joined Integrated Laboratory Systems in Research Triangle Park.

George C. George from NB Laboratories, research stock collection.

Dan Marsman, Procter & Gamble.

Michael Tong, toxicologist with the California Department of Pesticide Regulation.

Helen Diggs, University of California, veterinary medicine.

Don Cox -- university professor.

Independent toxicologist, Texas.

[indiscernible]

[indiscernible]

Karen Hamernik, from the U.S. Environmental Protection Agency's Office of Science Coordination and Policy.

George Cushmac --

Jodie Kulpa-Eddy, U.S. Department of Agriculture and vice-chair of ICCVAM.

[indiscernible] veterinarian with the Department of Defense.

Paul -- laboratory animal veterinarian --

[indiscernible]

Marilyn Wind, chair of ICCVAM.

[indiscernible] toxicological methods.

Richard McFarland, U.S. Food and Drug Administration, also a member of ICCVAM.

Sue McMaster, with the Environmental Protection Agency.

Doug Winters, program manager for ILS, the support contractor.

Eddie Ball, NIEHS.

Joe Tomial --

Diane, Patel Columbus.

[indiscernible].

Nigel Walker --

[indiscernible]

BASF.

Kim -- Brown University.

[indiscernible] St. John's University.

St. John's University.

Rod -- toxicologist with NTP.

Mike -- [indiscernible].

Cathy Sparkle --

Patricia Segal --

Judy Strickland.

Elainey sal --

Frank Deal, also with ILS --

Liz -- [indiscernible]

John -- DS, Incorporated.

Sarah -- with A gist LF and U.S.

Tom Burns, ILS, NICEATM.

John Hammond [indiscernible]

Debbie McCarly --

George Clark [indiscernible]



Please stand by for real-time relay captioning.

[Captioner lost audio.]

A photograph will be taken at the morning break. You must verbalize comments you wish to be incorporated into the minutes.

The members of the Scientific Advisory Committee on Alternative Toxicological Methods serve as individuals, not as representatives of any organization, and must exercise judgment at any meeting as to whether a potential conflict of interest might exist relative to the topic of discussion, due to your occupational affiliation, professional activity, or the interests of a spouse, minor child, general partner, or an organization with which you are negotiating or arranging future employment. Should there be a conflict of interest, the member is to recuse himself or herself on that topic and not participate or vote on any action. Thank you.

I would like to reinforce that if there are comments to be made, please register. Per our usual meeting standards, we will allow seven minutes for those discussions.

With that, we can move on, Dr. Stokes?

Dr. Stokes: [No audio]



Some of the things I want to talk about are the symposium in February, the five-year plan, and several different test method areas, concluding with some outreach activities that we're doing.

So in February we had our 10-year anniversary symposium at the Consumer Product Safety Commission headquarters in Bethesda. Over 100 people attended. It was a half-day program, with speakers from federal agencies and academia, and the five-year plan was presented at that symposium.

Sam Wilson was there; Bern Schwetz, former associate director of the NTP; Marty Stephens; and we had Dr. Dan --, the chair of the committee on Toxicity Testing in the 21st Century that you will hear more about this morning.

We also had a panel discussion with various stakeholders, including our European and Japanese counterparts -- Dr. Thomas -- from ECVAM, and Dr. -- joining from Japan; we were pleased to have that -- and a representative, the chair of the NTP Executive Committee.

We also looked ahead to the next five years, and we looked back at what has been accomplished over the past 10 years. I think it's important to do that occasionally so you have a bearing on where you are coming from. When we look back, 17 alternative methods have been accepted or endorsed by U.S. federal agencies since 1999. Twelve are non-animal methods, and 10 of the 17 are based on ICCVAM recommendations. Clearly we have targeted the areas where the most animals are used and where there is the most unrelieved pain and distress.

The committee has produced recommendations for research and development, translation, and validation activities. These are produced at workshops we hold and scientific symposia we put on, covering validation, acceptance, and the application of GLPs to in vitro testing. When the GLP regulations were created in the late '70s, in vitro testing wasn't going on, but as more of it is conducted, there needs to be guidance on applying GLP to non-animal testing.

We have come up with a definition of, and a process for, performance standards to facilitate more rapid validation of new methods, including those that are proprietary, and we have strengthened international partnerships with ECVAM and with JaCVAM, created two years ago.

The five-year plan was transmitted to the Senate appropriations committee the day before. We did have a press release announcing the availability of this plan to everyone. Again, the plan emphasizes priority areas for refinement of animal use in regulatory testing, and it emphasizes the use of new science and technology that can support the development and validation of alternative methods. You are going to hear more about those this afternoon from several of the ICCVAM agencies. It emphasizes partnerships with stakeholders -- it is not just a government responsibility to move forward in this area -- and it emphasizes international cooperation and harmonization.

Last September I participated, along with Thomas Hartung from ECVAM, in the International Cooperation on Cosmetics Regulation in Brussels, a voluntary cooperation of regulatory authorities from Europe, Japan, Canada, and the U.S. They asked us to come up with a plan that would promote better cooperation among the validation organizations -- those being ICCVAM, NICEATM, ECVAM, and JaCVAM. In response to that charge, we worked to come up with a proposal that we are calling ICATM, the International Cooperation on Alternative Test Methods, to promote international collaboration and communication among the organizations. It's not that we don't have that cooperation and communication now; what we realized is that it hasn't been consistent, there are sometimes gaps, and different conclusions have been reached as a result. We felt that a plan providing for consistent collaboration and communication would do several things. First, it would help ensure the optimal design of validation studies. In other words, before we start the studies we will come to agreement on what chemicals are to be used and what the validation study design should be, and this will help support national and international regulatory decisions on alternative methods. We want to be sure to get regulatory involvement before the validation studies are conducted. We want to ensure high-quality independent peer reviews that provide transparency and stakeholder involvement, and we want to increase the likelihood of harmonized recommendations by the national validation organizations. This is a key part of the proposal: we believe that if we are working together throughout the process, with agreement on the methods, what they can be used for, and their limitations, it will provide for more international acceptance.

Then finally, by working together we hope to leverage the limited resources available, to achieve greater efficiency and effectiveness in this whole process and avoid duplication of effort.

Moving on to some of the test method areas: considerable time has been spent on alternative methods for ocular safety assessments. In October of last year we forwarded recommendations to the ICCVAM agencies; that is a formal transmittal as required by the ICCVAM Authorization Act. The ICCVAM test method evaluation report covered in vitro ocular toxicity methods for identifying irritants and corrosives. In these recommendations, two methods -- the bovine corneal opacity and permeability (BCOP) assay and the isolated chicken eye (ICE) assay -- were considered useful for regulatory classification testing. When used in a tiered testing strategy, these methods can identify substances that can be classified as ocular corrosives or severe irritants without the need for animal testing. I will talk more about this in the next agenda item.

We have several ongoing and planned activities in this area. With regard to the BCOP and ICE methods, we plan to evaluate improvements to them. Obviously, what we are doing is trying to identify the most hazardous materials, those that can cause severe damage -- referred to as a top-down approach -- so they can be identified, classified, and labeled without animals. This is going to be essential: if you are then going to move to in vitro methods to find substances that cause reversible eye damage, or none at all, you need to be sure they are not in this highest category. We need a high level of accuracy for these methods. If you look at the reports you will see there are limitations; there are areas of performance that are not as good as we would like. A couple of planned areas are to evaluate a different corneal holder for the BCOP assay, to maintain better conformation of the cornea, and to evaluate histopathology assessments in the bovine cornea and the chicken cornea in those tests.

In the effort to evaluate in vitro test methods for substances that cause reversible or no eye damage, we are working in conjunction with ECVAM and JaCVAM to review documents and analyses of the performance of these methods, and we are moving forward with conducting independent peer reviews. NICEATM and ICCVAM have the lead for the HET-CAM assay; ECVAM has the lead for several methods -- the fluorescein leakage method, a release method, and the cytosensor microphysiometer -- working with the European industry organization.

Further areas are to assess integrated decision strategies using data from multiple methods and other activity information. As we work in this field, we need to continue not to make decisions based only on information from a single assay. It's clear that integrating information from different assays, and integrating different types of activity information, is going to be necessary to make more accurate predictions. One thing we want to look at -- I mentioned the top-down approach; the other is bottom-up. That is, if you have information from one or more assays and different types of biological information, can you make a decision that a chemical or product doesn't cause toxicity? The thinking is that if you do that in several assays, that's probably going to work. We now have a proposal we are looking at, the second bullet there, to evaluate a non-animal approach for determining hazard potential for antimicrobial cleaning products. This is a non-animal approach that incorporates both the top-down and bottom-up approaches. It was submitted in January by the Institute for In Vitro Sciences on behalf of several companies. It includes the BCOP assay, an ocular cell-based assay, and the [indiscernible] method. We are awaiting additional information from the sponsor; once that's received we will move ahead with scheduling an independent peer review.
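The top-down/bottom-up integration described here can be pictured as a simple decision rule. The sketch below is purely illustrative -- the function name, result labels, and thresholds are hypothetical and are not part of any ICCVAM procedure:

```python
# Hypothetical sketch of combining the "top-down" and "bottom-up" ideas:
# a severe result in any assay classifies from the top, concordant
# negatives across all assays clear from the bottom, and anything
# mixed remains inconclusive. All names and labels are illustrative.

def integrate_assays(results):
    """results: list of per-assay calls, each 'severe', 'positive', or 'negative'."""
    if any(r == "severe" for r in results):
        return "classify: corrosive/severe irritant"   # top-down
    if results and all(r == "negative" for r in results):
        return "classify: not an irritant"             # bottom-up
    return "inconclusive: further testing needed"
```

A chemical flagged severe in any assay is classified from the top; only concordant negatives across every assay clear it from the bottom; everything else needs further testing.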

A final activity I would like to mention: we plan to carry out a comprehensive review of topical anesthetics and analgesics to reduce or alleviate pain and distress when animals are used for safety testing. What we need to do is determine whether the available data will support routine use rather than the current optional use.

Moving to acute chemical safety testing, we had a workshop at the -- conference center, organized in conjunction with --, with over 120 participants. The goal of the workshop was to look at how the collection of in vivo key toxicity pathway information could be used to develop more predictive, mechanism-based in vitro test systems that are more relevant to human in vivo toxicity. You are going to hear much more about this tomorrow, so I won't go into a lot of detail.

We have also recently forwarded the final reports on two in vitro cytotoxicity test systems for acute oral systemic toxicity testing. These recommendations are based on the results of a joint NICEATM-ECVAM study, the first collaborative validation study between the U.S. and ECVAM. The recommendations state that the methods, where appropriate, can be used to provide for reduction and refinement of animal use. You will hear more about that.

We have several ongoing and planned activities in this area. With the workshop recommendations that came out, we will be working with the committee to implement those activities, and we look forward to your suggestions with regard to that.

We are working to develop and evaluate an up-and-down procedure for acute dermal toxicity. We are gathering data now, and we hope to model the performance of that procedure just as we did for the up-and-down procedure for acute oral toxicity.
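For readers unfamiliar with it, the up-and-down procedure is a staircase design: animals are dosed one at a time, with the dose lowered after a death and raised after survival. The sketch below illustrates that staircase logic only; the dose values, progression factor, and fixed animal count are illustrative stand-ins, and the actual acute oral toxicity guideline adds formal stopping criteria and a maximum-likelihood LD50 estimate, none of which are reproduced here.

```python
# Simplified sketch of "up-and-down" staircase dosing. The parameters are
# illustrative stand-ins; the real procedure uses formal stopping rules
# and a maximum-likelihood LD50 estimate rather than a fixed animal count.

def up_and_down(outcome_at, start_dose=175.0, factor=3.2, max_animals=6):
    """Dose one animal at a time; step the dose down after a death and up
    after survival. Returns the sequence of (dose, died) observations."""
    results, dose = [], start_dose
    for _ in range(max_animals):
        died = outcome_at(dose)  # outcome of dosing one animal at `dose`
        results.append((dose, died))
        dose = dose / factor if died else dose * factor
    return results

# Toy deterministic model: every animal dies at or above 500 mg/kg.
sequence = up_and_down(lambda dose: dose >= 500.0, start_dose=100.0, factor=2.0)
```

With the toy model above, the doses step up through 100, 200, and 400, reverse at 800, and then oscillate around the 500 mg/kg threshold -- the concentration of testing near the threshold is what lets the real procedure estimate an LD50 with few animals.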

Other activities we have planned are to assess reduction methods for --, to evaluate the usefulness of cytotoxicity data for selecting starting doses for the testing of mixtures, and to further evaluate an in vitro limit dose to identify non-toxic substances.

In the area of allergic contact dermatitis methods: as you recall, back in 1998 we held our first independent scientific peer review panel, where we looked at the local lymph node assay (LLNA). The method was reviewed, ICCVAM made recommendations to the agencies, and the method was subsequently adopted by U.S. and international agencies. It is an alternative method that still uses animals but virtually eliminates the pain and distress associated with the traditional method, which offers considerable advantages. It also provides dose-response information and can be conducted in less than a week, compared to over four weeks for the other method, which uses guinea pigs.

Well, over this 10-year period, several things have happened. People have looked for other applications of the method and looked at developing non-radioactive versions; the method we reviewed in 1998 uses tritiated thymidine as a marker for lymphocyte --

Moving toward a non-radioactive marker of proliferation: at the request of CPSC, we put together documents on three non-radioactive methods; a limit dose, reduced-LLNA approach; a report looking at data supporting use of the LLNA for aqueous solutions of metals; and a proposal to use the LLNA for potency determination -- in other words, could it be used to determine whether something is a strong or a weak sensitizer in terms of human hazard.

At the peer review meeting we had 20 international experts from eight countries serving on the peer review panel, and over 50 attendees. You will hear more about this tomorrow; Dr. Mike Luster, who chaired the peer review panel, will be here to talk about the panel's conclusions and recommendations.

We also have several planned activities in this area. With regard to the methods reviewed by the scientific peer review panel: for the limit dose procedure, we are going to be able to complete and finalize the BRD and the draft test method evaluation report, and have them forwarded to agencies by early fall. With regard to the performance standards that were being developed and reviewed by the panel, this is an effort that is also ongoing at ECVAM, and we have worked closely with them and their advisory committee, ESAC, the ECVAM Scientific Advisory Committee -- somewhat equivalent to ICCVAM -- to come up with harmonized performance standards; we hope to have those finalized this fall.

With regard to the other areas, we have become aware of additional existing data that was not available to us at the panel meeting in March. We have asked for the data, will be using it to update the performance analyses, and will reconvene the panel to look at the updated BRDs.

We hope to do that by late this year or early 2009. Once these activities are completed, we will be submitting updates for test guideline 429, the guideline for the local lymph node assay.

With regard to alternative methods for biologics testing: we previously held a workshop, co-sponsored with ECVAM in Washington, on alternative methods to reduce, refine, and replace the mouse LD50 assay for botulinum toxin testing. The report was finalized earlier this year and is online for those of you who are interested. It reviews the state of the science and knowledge of alternatives that might reduce, replace, or refine animal use for botulinum toxin testing, and what is needed to advance methods in development at this time.

Another area of biologics: we are awaiting completion of USDA studies on alternative methods for potency testing for leptospirosis vaccines. Dr. Jodie Kulpa-Eddy is our liaison with the USDA Center for Veterinary Biologics on that project.

In the area of endocrine disruptors, there is an ongoing international study with ECVAM and JaCVAM on the LUMI-CELL estrogen receptor assay, a human cell line that has human estrogen receptors and is transfected with a luciferase reporter system you can use for detecting when the receptor is activated. We have three labs: the U.S. lab here at XDS in Durham is the lead lab; ECVAM's in-house laboratory is conducting the study in Europe; and JaCVAM is coordinating with a Japanese laboratory. We are in the second phase, anticipate scientific peer review in early 2009, and will follow that with submission of an OECD test guideline for a [indiscernible] transfected transcriptional activation assay. It will include performance standards; this particular method is trademarked, but the test guideline will be generic, and the performance standards will be usable by anyone with a similar method to validate their assay with a smaller number of chemicals.

In the area of genetic toxicity testing, we have a genetic toxicity interagency working group. This group has been busy reviewing a draft test guideline for the in vitro micronucleus assay; several members participated in an expert consultation meeting in Atlanta in October of last year. The guideline is coming close to being finalized, and many of the members are contributing to finishing up the outstanding issues on it.

We are working with JaCVAM on a validation study; the working group looks at draft protocols, study design, and chemical selection, and provides comments back to the validation study management team for consideration. This will be followed by validation of an in vivo combination assay; I am sure Dr. Kojima will talk more about that tomorrow.

In the area of dermal safety assessments, we have several activities planned and underway. As you recall, EpiDerm and EpiSkin are methods that have been adopted for detecting whether chemicals can cause corrosion, or permanent burns. However, there are false negatives, and you need high accuracy in identifying whether chemicals cause permanent damage or chemical burns. We want to find out whether, when you put the false-negative corrosives in the assay, you will get a response that identifies the chemical as a corrosive. If you don't, that will make it hard to move completely away from animals. We have a small study planned to look at the false negatives -- the rate is 21% -- and to confirm whether a procedure developed to measure whether chemicals directly reduce MTT can identify them, and whether that will further reduce the false negatives from these assays. We will be evaluating a draft OECD test guideline for an in vitro skin irritation assay, expected in the next few months, and we have submitted proposals to OECD to update the test guidelines to include performance standards, so other companies or organizations that want to validate similar methods can use those performance standards to do validation studies with fewer chemicals.

With regard to outreach activities: the Sixth World Congress on Alternatives and Animal Use in the Life Sciences was very well attended, with over 1,000 attendees from 33 countries. The National Institute of Health Sciences and JaCVAM were instrumental in organizing the meeting. We had very good participation from ECVAM and NICEATM, with 20 presentations, 12 oral papers and 11 posters, and we chaired 11 sessions. We had two members on the organizing committee and five members on the program committee. I was glad to see a lot of interest by the ICCVAM agencies in this meeting. The Seventh World Congress will be held in August and September of 2009 in Rome. We hope to have a similar level of participation there.

At this year's SOT meeting we had five poster presentations on the work done by the center and the committee. By the way, the abstracts as well as the full posters are available on our website; you can look at those, and if you would like a hard copy we can provide one.

For next year's meeting we have submitted proposals for two workshops on alternative methods, and we hope those will be accepted. We think this is a good forum for providing information to the toxicology community about new methods that can be used for regulatory safety testing.

With that, I would like to conclude the update and see if there are any questions.

This is for you, any questions for Bill?

I have one, Bill. You mentioned early on the effort to improve the consistency of cooperation on approaches to developing alternative methods. There's a lot of effort being made there, and I think there is more international cooperation. Your emphasis was on consistency. My question is, how successful do you think that's been?

That's a good question. We were given this charge by the ICCR; we had to come together, including our representative from Health Canada. This forced us to say, okay, we need to come together, solve a problem, and come up with a response. So this was a good exercise in getting organized, working together, and coming up with a proposal we could all agree on. I thought that was a good start. Where we have worked together, it's been enormously successful. The validation study on the two cytotoxicity methods, started in 2002, went very well. We learned a lot from each other. By working together we developed standardized test method protocols, with language that wasn't just understandable by people speaking American English, but by those speaking international English. So that was very helpful. We had agreement on the chemicals that were used and on the validation study design, and that led to agreement on the outcome of these validation studies with regard to usefulness and limitations. That's really the key. If you work together from the start on something, then when you get to the end it's a lot easier to come to agreement on what a method is good for and what its limitations are. That's the information agencies need in order to make statements about using these methods to meet their requirements.

Roger?

Roger McClellan: This is a broad issue, one we are going to touch on at several points as we go through the individual agency presentations in the program. The real question I would like you to address broadly: much of our focus in test methodology is addressed to the question of safe versus not safe. It's a yes/no kind of issue. That's certainly very important in many areas, but for a wide range of chemicals, including chemicals we will introduce into commerce or into clinical usage -- and this is emphasized in the NCI, National Cancer Institute, presentation -- the issue is not whether something is safe or unsafe, but what the potency of the material is. I would like your comments on the extent to which, as the program moves forward, we are going to be able to devote additional attention to that very critical issue. As I will comment later, I think this was a serious deficiency in the National Research Council's recent report; it sort of dodged that issue. As we look to the future we have to recognize that it's going to be important to consider the potency of some materials; we can no longer use the artificial distinction between safe and unsafe.

That's a very good comment, and I would agree with you that that is a challenge we will have to address. The first test -- does it cause corrosion or not? -- is a yes/no answer. But as we move toward systemic toxicity, even looking at acute oral toxicity, there you have six hazard categories based on potency, each driving a different level of hazard labeling, child-resistant packaging, or safety packaging for transportation purposes. We are aware of that in that particular area. Certainly for all the systemic toxicity endpoints we will have to deal with it. We deal with it to some extent in the local toxicities; with ocular toxicity it's not just whether it causes an effect -- we have to determine the kind of lesion that occurs, and that's not a dose response, it's one dose you get. I think that is going to be very important. I know in our high-throughput testing, which Ray Tice, acting head of the biomolecular screening branch, is overseeing, they use -- I think nine different concentrations in those cell systems? Fourteen now. There's quite a significant dose response even in that approach.

Other questions for Dr. Stokes?

Dr. [indiscernible]?

Dr. Stokes, you mentioned ESAC and ECVAM and JaCVAM, and I was wondering, at the level at which you -- the executives -- interact with the other validation agencies: we realize ECVAM is probably the largest and the one with which we have the most overlap, and JaCVAM will be a full partner sooner or later. My question is, can you estimate when they will mutually accept each other's validation studies?

That's a good question. There have been suggestions like that for many years -- that we should have reciprocity: whatever we decide they automatically accept, and whatever they decide we automatically accept. But we have completely different processes for getting to that point. In this country, as you are well aware, when we have a new method and are reviewing its validity, it's a very open, transparent process: materials are made available in the public domain for people to look at and comment on, and we have an open public peer review meeting. It does take longer, but at the end everybody has had the opportunity to comment on the science of that method. That's what is expected in order to come to a conclusion. Right now they don't have similar processes. What we are looking at is trying to come up with ways to ensure that when they carry out a peer review -- which is different, not a public meeting -- we have the opportunity for those materials to be widely available for stakeholder comments, and for those comments to be provided to the people on the peer review panel. If those opportunities are provided, we won't have to duplicate the work with a completely separate peer review panel. We can still have all the opportunities for stakeholder involvement, but would be able to proceed in a more expedited manner. So the idea of automatic adoption is not something I think either side is willing to do; but if we work together, we think there's a very good likelihood we will come up with similar recommendations. That's what the International Cooperation on Alternative Test Methods is designed to do: work together, have discussions, share information, and hopefully, at the end of the process, have harmonized recommendations come out.

I don't know if you want to comment on the progress we made at the last ESAC meeting.

I think we have that on the agenda tomorrow; we can come back to it tomorrow at 2:00.

Right. ESAC agreed to make the -- documents available publicly at the same time they send them to their peer review panel. That will allow other stakeholders the opportunity to comment on them, and those comments can be provided to the peer review panel. That's very important progress.

Thank you. Bill, it looks like you are on the agenda for the next item.

Thank you, Dr. Freeman. What I would like to do now is go into a bit more detail on the two types of methods on which ICCVAM made recommendations and for which agencies have indicated acceptance. The first I will talk about is the ocular toxicity methods, and then the cytotoxicity methods for acute oral toxicity.

As I mentioned, the recommendations on the in vitro ocular toxicity test methods were sent to agencies in October; under the ICCVAM Authorization Act, those agencies have to respond within 180 days. I am pleased to report that we got responses within that time frame. All the agencies concurred, with the stated limitations, where applicable to their agency. As you recall, ICCVAM is not just regulatory agencies; it includes both regulatory and research agencies. I think it's important to note that despite the fact that alternatives for the Draize test have been pursued since 1981, these are the first scientifically validated ocular toxicity methods to be accepted by regulatory agencies.

Of the four methods I mentioned previously, two -- the bovine corneal opacity and permeability assay, as well as the isolated chicken eye assay -- use eyes that are a by-product of the food industry. The animals are not used specifically for these tissues; the eyes are by-products of the food industry that are collected in a specific manner and handled very carefully so they can be taken to the laboratory and used. The fourth method involves the use of embryonated eggs, the hen's egg test on the chorioallantoic membrane (HET-CAM) assay, where a piece of the shell is cut away to expose the chorioallantoic membrane, the chemical is applied directly to this highly vascular membrane, and you look for the changes that occur in response to the chemical application. Again, we will be reviewing all of these for their ability to detect whether something doesn't cause ocular irritation or whether it causes mild or moderate effects. We think some of these, such as the HET-CAM assay, may be more useful for those types of endpoints.

The first recommendation on these methods is that, in accordance with USDA animal welfare requirements, they should be considered before using rabbits for ocular safety testing, and should be used where determined appropriate by the study group. What ICCVAM tried to do in the test method evaluation report is provide analyses -- tables of the results of the testing done during validation. These show the chemical classes and types of physical properties of the tested chemicals and what the predictivity was for each of those types. When you have a substance and are considering whether you ought to use these methods, you can look at the tables and determine whether it falls into a category that is fairly well predicted or not. Obviously, if it is not, using the method probably would not be a good use of resources.

Under the Animal Welfare Act, principal investigators have the responsibility to provide a narrative discussion of the alternatives they considered before using animals. This has to be included in the animal study protocol. The Institutional Animal Care and Use Committee has to review this consideration and approve the use of animals.

The recommendation on the use of the methods is that the BCOP and isolated chicken eye assays were considered useful for classification testing of some types of substances. Their use is recommended in a tiered testing strategy -- a step-wise consideration of information. Even before you use these in vitro methods, you should look at all the available information on similar substances and on physical/chemical properties, coming up with an idea of what the expected hazard is likely to be.

When a positive result occurs, the substance can be classified as an ocular corrosive or severe irritant without the need for animal testing. Weight-of-evidence decisions apply to positive results: you take into consideration things such as what you know about the physical/chemical properties and whether there is a greater likelihood of severe effects, for example when closely related products also cause similar effects. Negative substances, however, require additional testing, because a negative result could be a false negative for an ocular corrosive; you need to progress to further testing, and right now that has to be done in an animal.

If that is negative in that one animal, then a second animal is typically used to determine if the substance can cause moderate or mild irritation. If you get consistent results, usually you can stop the study at that point.

Clearly, use of these methods will reduce animal use where you get positive results, and will provide for refinement, because you are no longer testing things that cause severe effects in animals. For the isolated rabbit eye and HET-CAM methods there was not sufficient data to substantiate their use for regulatory classification, but they may be used for other purposes. They are being used for screening substances for occupational health purposes in manufacturing facilities where you have chemical intermediates. These methods are being used as a screen so they can determine what kind of precautions their workers have to take.

The committee recommends users submit data -- in vitro, as well as the in vivo testing necessary for negative substances -- to NICEATM and ICCVAM. This data will be used to expand the current validation database for these methods. This may help identify a wider range of [indiscernible] for different substances. It will help us further define usefulness and limitations. If there is post-marketing adverse event surveillance, including human experience, obviously we would like to know about that as well. Obviously these tests are designed to come up with practices that will protect people who might be accidentally exposed.

We are also encouraging users to collect and process tissues for histopathology and forward the results to NICEATM for further evaluation. The goal is to evaluate histopathology criteria that we hope will improve the accuracy of these methods. If they do, the guidelines will be updated to include histopathology, not just for these methods: where it's necessary that animal testing be conducted, we are recommending that the rabbit eyes also be collected and evaluated. Future studies recommended at symposia, such as the one on mechanisms of chemically induced ocular irritation in 2005, should routinely be done.

There are other recommendations from that meeting that we hope will be incorporated by users. When the Draize test was designed in 1944, there weren't procedures routinely conducted such as slit-lamp examination, measurement of changes in corneal thickness, or detection of damage by fluorescein staining. There is other information available now to help design better in vitro tests.

We are working together with ICCVAM and ECVAM to prepare test guidelines for the BCOP and isolated chicken eye. Because histopathology decision criteria don't exist yet, we won't include those. We will develop a guidance document on how to collect and process those tissues for histopathology and hope the data will be generated. We expect to submit the guidelines in July, and we have been told processing will be expedited due to European Commission interest in this. Many of you are aware the recent legislation will require testing on thousands of chemicals, and the impending ban on the use of animals for testing cosmetic ingredients going into place next year has created quite a desire for increased availability of scientifically valid alternative methods.

Consideration of the guidelines is expected at the national coordinators meeting that will occur next March or April.

Moving on to the in vitro methods for acute systemic toxicity. In 1999, EPA asked us to review in vitro methods for their ability to predict the classification categories they require assessment for. We held an international workshop in 2000 to review the current validation status of the in vitro methods and to come up with recommendations on how to move forward. We came up with two cytotoxicity methods based on preliminary work that indicated they might be useful for predicting starting doses. We sent those recommendations to agencies, and EPA in 2001 announced the availability of those methods and encouraged submission of the in vivo data from High Production Volume testing. They notified over 1100 companies about their availability.

So, subsequent to that meeting we carried out an international validation study on the cytotoxicity test methods to confirm the usefulness suggested by the preliminary work. The results were formally transmitted to agencies and publicly released, as announced in an FR notice.

The responses from agencies are due in August. We already received two responses, which were positive. However, I would like to point out these methods don't require regulatory acceptance. The information is not being used for regulatory decisions or safety decisions; it is simply being made available to help determine the starting dose you should use for your regulatory safety study. The key is, the closer you start to the actual value, the fewer animals you will use. These methods have been shown to correlate with in vivo results, and we are mentioning them in test guidelines so their availability will be known to those folks who have to do this kind of testing. The recommendations are that they can be used in a weight-of-evidence approach to determine starting doses for current acute oral toxicity protocols; two are the up-and-down procedure and the acute toxic class method. They should be considered before using animals for acute oral toxicity testing and should be used where determined appropriate.

Where applicable, principal investigators should consider these, in accord with the U.S. Public Health Service Policy on Humane Care and Use of Laboratory Animals and the U.S. Government Principles for the Utilization and Care of Vertebrate Animals in research and training. Domestic rats are not covered by the Animal Welfare Act, so its provisions don't apply in this case. However, most testing laboratories and companies do voluntarily adhere to a higher standard: accreditation by AAALAC, the Association for Assessment and Accreditation of Laboratory Animal Care. When organizations are AAALAC accredited they agree to comply with existing regulations, including those listed here.

For unclassified substances, an LD50 of 5000 or greater, typically, in the United States, or 2000 in Europe: compared to starting with the suggested default dose, which you would use if you don't have a basis for starting your study, you would only use three animals versus six if it was predicted to be in the non-toxic category. Over 50% of substances would fall in the non-toxic category, so you can see this has a good possibility of impacting animal use for these types of substances.

There's also refinement in some testing situations. Where the in vitro result suggests the substance is highly toxic, by starting at a much lower dose you will have fewer animals that die or become moribund from the testing and need to be humanely killed.

[indiscernible] toxicity for classification purposes; they are only used to estimate the starting dose. With certain substances that have toxic mechanisms not expected, such as neurotoxicity or cardiotoxicity, these methods are likely to overestimate the starting doses, and therefore it may not be appropriate to use them for those types of substances. It's really important, again, to consider all the information that you know about a chemical in deciding whether you should use these methods or not.

We don't want in vivo testing done just to further characterize these methods; it should only be done where needed for safety. Where animals are required for safety and parallel in vitro data are generated, this can help us further characterize the validity of the methods. So we are asking that the parallel data be submitted so we can do that.

The questions that we have for the advisory committee are your suggestions, your advice on how to increase awareness of these methods, how to go about encouraging their consideration and use, and how to encourage data submission and optional activities such as histopathology that may aid in increasing the usefulness of these methods. With that, I will turn it back to you, Dr. Freeman.

Thank you, Bill. At this point in the agenda item, we have come to public comment, anybody registered for --

Not that I am aware of. Oh --

Okay. So -- all right, we have approximately seven minutes for public comment.

I appreciate it very much. I am happy to register for public comments; the difficulty in these situations is when you have an organic, dynamic discussion, and I don't know what's on Bill's slides, it makes it difficult to sign up when you don't know if you have anything legitimate to say. My first point: Bill, I have a question for you and Jodie. It goes to the very first question you raised of the members of SACATM. Has there been thought given, outside of, for example, the opportunity for AWIC literature searches, to having NIEHS proactively reach out to USDA, in conjunction with their own VMOs, and also to attending vets and chairs of IACUCs, to run a series of workshops on what the existing methods are, to ensure there's something more than a literature search that makes this whole concept of utilization realistic? You make excellent points with regard to it not always being just regulatory acceptance; the methods may not be used in that context, but you are in a situation where there's a baseline encouragement to utilize them. I spent time with Jodie yesterday in an animal welfare meeting. What a wonderful activity for the government to perform that outreach, to make not only IACUCs but USDA VMOs really aware of these activities. From my perspective I would encourage you to seriously think about what could be done. Second to that, I would also look to the possibility, and we have discussed this before -- there are two states now that actually require ICCVAM-approved federal methods -- and that being the case, other sorts of outreach activities with SOT and otherwise, to actually do proactive, hands-on workshops with the methods that are part of the vernacular now, would be considerably more important in the grand scheme of utilization. We know from USDA there is a grave underutilization of literature searches to meet the Animal Welfare Act. Let's teach usage, so you really have that common understanding of where these methods are at.
I also wanted to note one other thing: tell you how much I appreciate the fact that you consistently include not only the parameters of the Animal Welfare Act and regulations, but the requirements under PHS and AAALAC accreditation, areas that address all species of animals utilized. Thanks.

Could you please identify yourself and your organization for the record?

Sara Amundson with HSLF and HSUS.

Dr. Freeman is giving permission to respond. Certainly I think the idea of having workshops is very good. In the past when we have made recommendations on methods we have had implementation workshops to bring together the regulators who will look at the data from the methods, as well as the scientists and toxicologists who will be carrying out the methods or asking contract laboratories to conduct them. It's helpful to help them understand the results, the limitations, and how the method can currently be used. I appreciate the suggestion; it's a very good one. We in fact, as I mentioned, do have these workshops planned for SOT, if they are accepted -- both of those address these two methods, so hopefully they will be approved by the SOT. We have tried to engage the leadership of SOT so they would be receptive to accepting these types of things. But the suggestion for wet labs is important too, so the folks doing the technical procedures have ready access to training. I think Dr. Kulpa-Eddy can probably talk about informing VMOs who do inspections and look at records about how you go about doing that.

I don't want to take up everybody's time, but yes, this is Dr. Kulpa-Eddy. We have had Dr. Stokes come in the past to speak at our VMO research training courses. It is very informational to have them understand what's been approved through ICCVAM and what they can look for when they do their inspections at research facilities -- academic, pharmaceutical firms, and contract research organizations alike are included in that area. We have, in response to the ICCVAM recommendations, put this information on our AWIC, Animal Welfare Information Center, website, which is kind of our outreach to the research facilities in terms of consideration of alternatives, but we are certainly open to other suggestions and ways this can be done as well. We do realize it's another important method of providing education and information to our regulated community.

Dr. Brown?

Yes, Marilyn Brown. I wanted to clarify a comment made in public comment, and ask Jodie to clarify this. A comment was made, and I am paraphrasing it: we know from the USDA there's grave underutilization of literature searches. Is that correct?

Some of the adjectives I wouldn't use, but we know that citation is one of the most common ones we see from our inspectors at research facilities. It runs about 7% of the inspection reports. Not a huge number, but one of the more common ones we see.

[indiscernible]

Sure. We have about 1100 registered research facilities in the United States. We conducted about 1600 inspections last year, and of those, 70% had no non-compliant items listed whatsoever on their reports. Of the remaining 30%, one of the most commonly cited items was consideration of alternatives.

Thank you.

Then I just had a question for Bill.

[Captioner transition.]

I wondered about the impediments for someone --

Concerns regarding studies being done under GLP for submission, concern that -- contract environment, sponsor

[ Captioner Transition ] it would not be part of the GLP submission. It would be ancillary information. I wouldn't think it would have to be. We could check --

Any data that is collected in a study is owned by the sponsor.

Right.

It would have to be the sponsor's decision. If they were going to send it some place other than -- you know, just for the IND.

I understand that. It's really the sponsors who have to authorize collection of that ancillary data and authorize it to be forwarded.

That may be a target audience, if we're trying to convey the need for this information.

Yes. Very much.

What I would like to do -- there are a couple of other questions, and it's leading us into the questions that were posed to SACATM. I would like to pose the questions. I think there was a slide for the actual questions, wasn't there? That was the last slide? The lead discussants were Richard Becker, June Bradlaw and Helen Diggs. We have one member of SACATM here, so we need to add to the discussion. I would ask all of you to dig deep and think about the questions. I know there were comments about where we were just going; let's bring it into the context of the overall discussion.

The first question -- is about increasing awareness of the methods. Should we read the full question? How would you like --

[ Speaker/Audio Faint or Unclear ]

Okay. You will start with Dr. Becker's comments?

There we go. Dr. Becker has provided written comments; these apply to ocular and acute oral testing. A number of approaches may prove useful to be considered, like development of symposia or continuing education courses for SOT, the American College of Toxicology, or the American Association for Laboratory Animal Science. Collaborate with [indiscernible]. Development of web-based materials, to include identification of regulatory applications. Consider having each ICCVAM agency have a web page on agency activities. Focused outreach, through their newsletters, to sectors where such testing is most often employed, including professional societies and trade groups. Then, regarding any incentives that are envisioned, Dr. Becker mentioned referring to the discussion of breakout group 5 of the acute safety testing workshop. Those are Dr. Becker's comments.

Helen?

Again, some type of education guidance, an informational document that we can distribute to the different associations and organizations that need this material. Something that summarizes the changes, gives a history of the entire ICCVAM program, reviews the effort in a summarized fashion for people to read, and makes it clear where we're headed. Again, a description of the changes and the new options, and clearly outlining the expectations and standards for the program and for the program directors and the [ Indiscernible ] at all of our programs. Dr. Becker mentioned everything; another venue might be OLAW. Getting it into the hands of the IACUC chairs -- they are the ones reviewing protocols. Other organizations that might also need this would be ACLAM, and someone mentioned AAALAC as a group that would be able to keep an eye on the progress of this work as it moves forward. And USDA AWIC was mentioned. In addition to Dr. Becker's comments, those are mine as well. I have a question for you, Bill. Maybe I'm jumping ahead; it's going back to what Dr. Brown asked. If we're asking for this histopath information to be forwarded on, what do you expect the financial impact of that to be on our program directors? Has that been considered?

Well, obviously, the time that it takes to collect the tissues and process them does incur costs. You know, this is something that is completely voluntary. We are just encouraging this, and suggesting it because we feel that it is a valuable way to help further characterize the usefulness of the methods and generate the database that is needed to determine how you can use histopathology to increase the accuracy of our predictions. It's a voluntary effort; that's why you see words like encourage, not require. And so, if there are organizations that want to contribute to this effort, this is one way. It will involve extra staff time, which, obviously, incurs a financial cost.

You would also expect that those organizations with any concern that forwarding this information would somehow have a negative impact on their proprietary information are not going to send it either. You will lose those folks as well.

[ Speaker/Audio Faint or Unclear ] I'm sure there are concerns. All testing is proprietary; it belongs to the sponsor, as Dr. Brown pointed out. So we would really like to know what the product, or the substance, is that is being tested, but we don't want to be receiving confidential business information. What we have asked, for other data that is submitted to [ Indiscernible ], is that sponsors provide as much information as possible about the substance. Sometimes that may be limited to physical chemical properties, or a code name for a chemical. But that's okay, because it's still important data. The more information about it, obviously, the more useful it is.

I think of everything we're looking at here now, that's the piece that is the most problematic. But it's the piece that could be the most useful. That is a difficult area to focus on.

We'll start down here. Dr. Barile.

I would like to introduce question number 2, and maybe consider another word: mandate their use. Just as a preview, I was concerned with two response letters that are in the package. One from April 22 -- this is under tab 3 -- April 22, 2008, from the Food and Drug Administration. I compared that response with the letter dated April 24 from the EPA. Just to highlight one of the paragraphs down below in the April 22 FDA response, paragraph number 5: FDA does not prescribe specific test methods for cosmetics. Rather, sponsors have a general requirement to determine safety by those methods that they deem appropriate. I think, Bill, this is what you mentioned before: ICCVAM can only go so far, but can't mandate these requirements. From an agency that is responsible for product testing and drug development and overseeing the public health, this is a rather weak response to methods that have been worked on so hard, that have been evaluated. In contrast, the April 24 letter from the US EPA, paragraph 3, is much more definitive. I do understand that the EPA did ask that ICCVAM evaluate the ocular testing methods, but you can see the contrast in the responses. It says that the EPA agrees with the ICCVAM recommendations. They encourage that these serve as a full replacement for other testing methods. They go on to say that the EPA encourages the development of additional data to address these outstanding questions. This is a much more forceful reaction. I'm wondering about your comments: what do you see as the contrast in agencies? Is this a missing link, for instance, in our work and ICCVAM's work in making the public and the regulatory agencies more aware and encouraging their use?

Dr. McFarland, would you like to respond to that? You are here from FDA.

Sure, I'm glad to. I'm Richard McFarland from the FDA Center for Biologics. I think in this particular letter what you are seeing is a reflection of the difference in the way the underlying regulations are written between the two agencies. I'm not an expert on EPA regulations, so I'm not going to comment on those specifically. A few years ago -- ten years ago -- FDA did a fairly extensive rewriting of our regulations to remove, in many cases, prescriptive language. It may be seen as being weak here; in fact what it is is an understanding by the agency that, because of the products that we regulate, it is more effective to have regulations that require companies to meet general requirements for determining safety, rather than prescribing methods. That allows us, at the individual work unit level, to accept work from newly derived methods. I think this will be echoed in my presentation this afternoon in some specific examples from the Center for Biologics, and I think it goes throughout the FDA. I think the key there is, within the agency, communicating the availability of methods to the individual work units, with individual products, because of the diverse nature of the products and the regulations.

I think we're working down; we'll come back. Dr. Brown?

I just had a couple of comments. One is a suggestion: as we were talking about outreach efforts to let people know about the acceptance of these tests, perhaps develop a handout or a leaflet of some kind, something short that would be available on the web, but also in paper hardcopy. That could be given out at the resource table at things like the OLAW [ Indiscernible ], perhaps at meetings where you have AWIC exhibiting or [ Indiscernible ]. I think there would be lots of opportunities to try and get this information out. I also think it would be very useful to have some type of short document providing guidelines -- maybe you already have this -- for your expectations for people submitting this information: how it would be most useful to you, what kinds of information would be absolutely necessary, and what is more optional but would be very helpful. It might be the kind of thing that someone -- I'm speaking from experience really more in a CRO environment -- could take to a sponsor and say, this is what we're trying to do, this is what it involves. So that we're providing people with the resources to go forward, and trying to encourage people to do this kind of thing. The last comment I wanted to make was in regards to the letter from the FDA and the question about how strongly the regulatory agencies seem to be pushing or encouraging alternatives. Just as for our IACUC protocols, where we have to say why alternatives were not used or not appropriate, and how we made that decision, perhaps in submissions there could be a similar kind of question. The regulatory agencies would ask whoever is submitting: were alternatives available? How did you determine they could not be used? An important point that you made is they're not appropriate in all situations. I think we have to give scientists the leeway to use them only where appropriate, and use other methods where these are not.
And if they have to scientifically justify the use, or the non-use, of these alternatives, I think that would really go a long way to increasing people's awareness. If it came from the regulatory agencies, if it had to be somewhere in a submission, I think it would get people's attention.

Dr. Cunningham?

Yes. I wanted to echo Dr. Brown's and Dr. McFarland's comments on question 3. I know that the regulatory agencies vary in how they look at new alternative data on these methods. One suggestion would be to look at the way the FDA just recently put together a formal process for looking at pharmacogenomics data, even though that data doesn't have to be part of the submission. They've created a formal process to look at that optional data. That might be one suggestion: to look at how that process was set up, and what they're doing with that data.

Okay, Dr. DeGeorge. I think you had your hand up before.

Thank you. George DeGeorge. I also reviewed all of the accompanying letters back and forth between the various regulatory agencies and Dr. Stokes and the NIH in general. There's universal approval and support for all four of the alternatives, even though you did mention that two of them are not quite ready for prime time yet. With the exception of one agency, which stated, I believe, that it was not within their jurisdiction to make any claims or rulings regarding these alternative methods. But certainly there wasn't anyone who did not agree. So I noticed that there's a difference in the response strength in the letters. I point to the one from Dr. Lindberg of the Department of Health and Human Services, NIH National Library of Medicine, second paragraph: we do not have any regulatory or testing authority that would need to be in compliance with these recommendations. We do, however, heartily endorse their adoption. Then there's the EPA response, and another response like the former one: the NIEHS is not a regulatory agency and therefore does not promulgate regulatory testing requirements for which the recommendations will be applicable. However, they're in support of it. There's this constant running comment: we're not in charge of making the rules, so we support it, we think it's good science, but we don't know who will do that. I know that ICCVAM has limited powers as well. I'm going to hark back to the histopathology, for example, being integrated into studies. A target audience has been industrial companies doing testing anyway, in alternative or animal studies, to save the cornea for histopathology. Saving the cornea, preserving it for later histopathology, is a trivial amount of extra work and time, even under GLPs. By far the largest cost in time and effort is the pathologist's evaluation of the tissues and the pathologist's report.
Where we encounter problems, or apparent unwillingness to go forward, is if you have a test that has already given an answer with the existing endpoints that are supported by all of these agencies -- and only one mentions histopathology here; that's the strongest letter, I think. The companies that are asking for and paying for these studies often don't want to know any more. If they've got a negative answer, why search deeper? I'm just trying to put it out in plain frankness. The other aspect is the cost: they don't want to pay for something that is not necessary. They're going to say what is in all of these letters, because the regulatory agencies said it. The companies that will do it will be companies that make eye products, or cosmetic companies, who want to know -- because they don't want people showing up in emergency rooms, and they don't want recalls and dangerous eye products getting out there. In general, when it's done as a bevy of studies along with oral and skin testing, et cetera, it's done to the minimum of the regulatory requirements. Until the regulatory agencies insert that into their guidelines, I don't think you will see any action that will be reported to you. There may be some internal company action that will not be reported, which is your missing link now. There's another target: the pathologists. There may be a way to stimulate interest and education in looking at the cornea. It takes a year for a pathologist to get good at looking at a cornea, and looking at a chicken cornea is very different. The pathologists sit on committees. They would then have some input that mattered, if they were educated in that area.

Okay. I was going to save a comment to the end, but I think I will interject it now. That is whether thought has been given to having tissues sent to [ Indiscernible ]. It's just a consideration.

Great.

The NTP does have considerable expertise in pathology. We have pathologists on staff and via our support contracts, so that is something we could consider. I appreciate your comments about the different levels of expense: as you progress down the hierarchy of activities your costs go up. That might be one approach we could take to making it a reasonable activity. So, thank you for the suggestion.

Dr. Fox?

I had the pleasure of sitting on the ocular tox committee. I think one of the shortcomings is that we never gave any guidelines as to how the histopath would be done. I think this is a major limitation. I do pathology all the time on tissues in the eye. Certainly, there are a lot of different methods, from the number of minutes that you may fix the tissue to the way that you post-treat the tissue. I don't think we provided any guidelines or criteria. If we gave a standard procedure, and you could vary from it, I think there may be less resistance to applying it. You would have some organization and some standard protocol to look at to see how this worked, and compare it to some in-house standards or standards that you developed with collaborators or outside sources. At least it's a positive-control way of looking at tissue. I can just tell you, from doing confocal all the time, that the difference in fixing tissues is the presence or absence of a positive result. A 30-minute fix on some tissue and there's no more antibody detection; some tissue needs to be methanol fixed. There are all sorts of problems. We never directed ourselves to any of these critical scientific issues. Being a scientist, to me, that's the most important: to get the data and do the right protocol. I'm thinking we could help these folks, make it easier, if we had some guidelines. Which we may have to do testing ourselves to develop.

Great. That's a recommendation that your panel made: to develop guidance on this. We're doing that now, and hope to have a document on how to collect, preserve, and process those tissues.

Can I just ask, where are we on that? And who has input?

Right now it's early on. We have just begun working with the NTP pathologists. We would welcome your input.

I think you have sponsors who do this and may not mention it. I'm certain there are a lot of sponsors that do histopath in-house; I'm certain they're there. If you ask the right question I'm sure people would step forward. We like to talk about our results.

Dr. McClellan?

Yes, I will try to be brief here. The first, awareness: I think the bottom line always is to try to get something out in the open peer-reviewed literature. I would suggest you prepare an op-ed piece for Toxicological Sciences. You could prepare a broader review article, perhaps for Critical Reviews in Toxicology. The advantage of those is they will be captured in literature reviews. I think that's an important point. Second, I have a lot of concerns with regard to the issue of voluntary or optional collection and reporting of data. I would urge people to back up and say, what will we do with that data? It will be nonrandom; it's almost of no use from a scientific standpoint. I would urge people to read USA Today, or a success story on tomatoes and salmonella, and the importance of randomization. The third is what I see as this very uneasy alliance, or the issue of the role of specific agencies. We've heard excellent examples here of how agencies differ, and then the interagency, really cooperative, venture. There's very little in terms of muscle that requires these agencies to do anything as they participate in ICCVAM, except a good faith effort. If we went to your visual aid, you noted in 2001 EPA announced the availability of the in vitro methods and encouraged their use in terms of the High Production Volume program. I would be interested whether anybody from the EPA could tell me what the response has been to that. And then that links to the issue in terms of your visual 13, in which you said that there was encouragement in terms of in vitro and in vivo data being submitted to NICEATM. What did EPA find in response to their sort of encouragement? And then what is the linkage between that kind of action by a specific agency and the interagency kind of approach to go out and say, we would like to see this data? How does this fit together?

I will respond initially, and I would like to ask Karen Hamernik to also respond. When EPA put out that letter to the companies that would be generating data under the high production volume initiative, they did encourage that the data be submitted to NICEATM, and that data generated under the program be made available to the public. We did not receive any in vitro data at NICEATM. I believe there was only one data set that was made available in summary on the HPV website. There was not a large response to that. I think we need to look at why, you know, why we didn't get a response. I don't think there was a lot of testing conducted under that program; I think most of that had already been done for those chemicals. Karen may want to add.

Let me just say -- I'm disappointed there wasn't more. Even if you had gotten a significant number of submissions, one would have to view them with a jaundiced eye. They may have shown a cherry-picking of results. We've got real [ Interference ] here.

We do have to be cautious in understanding the biases in the type of data we may receive under that voluntary program. One of the precepts we follow is that we consider all of the available data that exists on a test method. So sometimes we do get data on certain product sectors or chemical sectors; we'll get a lot of data in some areas and none in others. That's just the nature of the business.

Yeah. Um, I'm not an expert on TSCA. That program in and of itself is by and large a voluntary program, and that's one of the difficulties of TSCA from my perspective. It is very difficult to get data anyway, even for EPA purposes, and getting data in some cases involves lengthy rule writing. My understanding is that the program within EPA is a voluntary program, anyway.

At our recent workshop on acute toxicity, breakout group 5 was a breakout group to deal with this very issue of how we get data from industry, and why the response to the EPA request was not met. The basic answer was that industry already had the animal data; there was no point in them doing in vitro tests for chemicals for which they knew the answer. So that's why there was no submission of data. But there was an interest in working together and seeing what we could do to facilitate such a transfer of data, and we've started to have some discussions about that.

Okay. I ask people to identify themselves for the record. Dr. Hamernik and Dr. [ Indiscernible ]. Dr. Hamernik?

This is Karen Hamernik. I had one other comment with regard to Dr. Fox's suggestions. I want to let him know that I'm the cochair of the ocular toxicity working group. We took the expert peer review panel recommendations very seriously with regard to developing standardized procedures. Actually, there was an interest in doing that before the peer review group met. I just want to let you know they were enthusiastic about following up on those recommendations.

Thank you. I think Dr. Charles is next.

I would like to give input from the industry perspective, taking the example of the importance of encouragement by the regulatory authorities. A colleague of mine was recently submitting a test plan, and the agency's comment on his proposal to use the guinea pig maximization test was, we suggest that you consider the LLNA instead. So basically, the input from the agency is important. Just a recommendation from the agency, or a suggestion, may be enough for a sponsor to go forward with an alternative. If the agencies, as I said, go forward with the suggestions -- if the agencies filter this information down to the people looking at the plans and reviews -- it's a help. It's a significant help. Secondly, in terms of the in vitro basal cytotoxicity evaluations for setting starting dose levels: from a sponsor's perspective these studies are typically done at CROs. If the CRO has such a system in place for setting the doses on the studies, the sponsor probably will not argue that much in terms of utilizing it. However, submitting data from studies, that's a whole other issue. In many cases the sponsor will use the CRO as their adviser, so to speak. If all you want is to encourage its use, then I would say an approach targeting the CROs and the large contract labs is one way of doing that. My final comment is with regard to promoting the use of the ocular irritation assays. Two other venues would be the Society of Toxicologic Pathologists and [ Indiscernible ], where you have a lot of people from the pharmaceutical industry. That might be another venue that you want to target.

I think Dr. Marsman was next.

Dan Marsman. I will start with my second comment first, based on what was just said. I would expand on Dr. Brown's great recommendation, which is, as we talk about these assays and as a [ Indiscernible ] is approaching the decision of which assay to choose, the simple rephrasing of the question from "justify why you didn't use the in vivo study" -- turning it around to ask the investigator proposing an approved method to explain why you did not use the alternative -- is an incredible turning of the tables in terms of the decision-making process. Right now, frankly, you always approach it as, how high is the hurdle I have to jump to use the alternative instead of the in vivo? We frankly choose to jump the hurdle most of the time. The reality is it becomes an easier jump if the person asking the question is asking why you wouldn't choose a method that ICCVAM has recommended. Um, the second one, regarding the suggestions about submission of additional end points, such as histopathology. Certainly we would love to do more histopath, but with the number of agents and chemicals for which you would potentially do this -- I think for most companies that number is too high. It's too vague, and it's too big for the people writing the checks to bite. For instance, when we were talking about the [ Indiscernible ] study it was suggested that the data set there is too weak to make a formal recommendation. I think that assay is plagued by the fact that it's one of the more broadly used assays; the data set is here and there. In one respect, the ability of ICCVAM to identify the gaps in the data sets so that the registrant knows where data is truly needed -- it's the antithesis of what Roger was saying, it's nonrandom, but it's directed at filling a specific void. I think there's a lot more buy-in.

Dr. Brown?

This is different. I'm wondering what the connection was between your recommendations for dose setting and using in vitro methods for dose setting and [ Indiscernible ], and how that was connected to the article that came out recently from the U.K. and the center for the three Rs questioning the usefulness of the acute oral tox test anyway. If the move is away from acute oral tox tests, then, I mean, it may be that this particular alternative came a little too late. [ Laughter ]

Well, the decision not to require or use estimates of LD50 for hazard classification categories is a regulatory decision, not an ICCVAM decision. So I think what will be important there is, if regulatory authorities move away from estimates of LD50, they're going to have to move to something that indicates some level of toxicity. And maybe whatever that target is, is what we want these in vitro methods to predict. Obviously, you know, an estimated LD50 is a pretty, pretty severe reaction. In many cases what companies and regulatory authorities are interested in is what dose, what level, is going to cause some kind of adverse effect, not necessarily death. I think that's the gist of the U.K. report.

It's been a while since I have looked at it. I think it mentioned that you get most of that information from the more chronic tests; you didn't really need the acute test, it didn't add that much to the body of information. I'm not a toxicologist. I guess one of my suggestions is that the report be looked at by an expert working group for this group, so that they might provide some information to us about it. It seemed a pretty important step in terms of the three Rs.

Just to clarify that for everyone, what Dr. Brown is saying is that they were suggesting 14- or 30-day subchronic studies that may provide you information about systemic toxicity.

[ Speaker/Audio Faint or Unclear ]

I think what we've done is we've answered the first question. There was also a parallel question on alternative methods for acute oral toxicity testing. I think the conversation would be largely the same. Do we need to read Dr. Becker's response to that?

His response to those topics was the same. I could read it again, but I don't think it's necessary.

We have the questions, if there's other comments on the second question I think we should bring those up now, or we will go to break.

Okay, good. Let's go to break. It's about five of 11:00 now. So 11:05.

The hotel would like a show of hands of the number of people having lunch here. Raise your hands if you are having lunch here at the hotel buffet.

Also our photographer will take the photograph of members right now. I think he wants us down front, so if you go down to the lobby. I'm sorry? Yeah. Okay. I'll explain --

A ten minute break. ICCVAM and SACATM out front for a photograph.

Session on morning break, estimated to return around 11:10 Eastern Time Zone.

Okay, I would like to call the meeting back to order. And our next item on the agenda is the overview of the [ Indiscernible ] ICCVAM five-year plan.

Okay, yep. I kind of feel like I am the last person on a relay team, and the whole rest of the field is way ahead of me and I have a lot of time to make up. As a result, since most of you, if not all of you, have seen the five-year plan and read it, I'm going to zip through my presentation. This is my required disclaimer. The five-year plan, as you all know, was requested by Congress and it was done by NICEATM and ICCVAM. We started working on the plan in August of 2006. It ultimately was released in February of 2008. There was an enormous amount of public comment that was incorporated, as well as SACATM comment.

The five-year plan is a plan to advance alternative test methods of high scientific quality to protect and advance the health of people, animals and the environment. Since the plan was going to Congress we were limited in its size, so you will see there are lots of appendixes. They only count the pages in the real part of the plan, not in the appendixes. That's how we got around it; there was too much information to squeeze into very few pages.

The five-year plan builds on the NTP road map and goal 2 is to develop and improve testing methods. Again, in the road map it says that activities and assay developed under the NTP road map will be done in cooperation and consultation with ICCVAM to maximize their value to regulatory agencies.

The road map and the five-year plan are consistent with the recent NAS report. The five-year plan builds on current U.S. laws. One of the most important things here is to reiterate that there are a number of agencies that have mandates to protect human and animal health and the environment. The agencies have to ensure that substances are safe or properly labeled if hazardous. It's the responsibility of the regulatory agencies to determine if alternative test methods can provide equal or better protection before they adopt or endorse those alternative test methods.

The ICCVAM authorization act of 2000 requires that new, revised methods must be determined to be equivalent for risk assessment purposes.

Again, Bill talked about some of the other U.S. laws and policies that require, prior to the use of animals for research and testing, that available alternatives must be considered and used where appropriate, and that they will adhere to the 3Rs.

To promote these activities, ICCVAM depends on others to conduct and achieve successful test method validation studies. ICCVAM then reviews the test method submissions for their validation status, that is, to determine the usefulness and limitations of the new methods. This is a list of the federal agencies that do research, development, and translation.

We framed the plan around four key challenges, and I will go through each of them. Chapter 1 was challenge 1: identify priorities and conduct and facilitate activities in these areas. Our ICCVAM test method prioritization criteria are the impact on the three Rs and applicability to multiple agencies. We recognize the priorities may vary across agencies and that priorities are going to change, and we need to be flexible so we take advantage of advances in science and technology and the availability of new methods. In terms of applicability to multiple agencies, if one agency has a strong need and it will advance the three Rs, that also is given a priority. So it's not limited just to multiple agencies.

The four highest priority areas are ocular, dermal, acute toxicity, and biologics and vaccines. The other areas are listed. The basis for the high priority was that multiple regulatory agencies require that ocular hazards be identified to warn consumers and workers, and there was potential for significant pain and distress to test animals. And you see here the planned activities. We've talked a lot about ocular toxicity and the things that we're doing to improve the nonanimal methods, so I will not spend time on that. We will be reviewing the use of topical anesthetics and analgesics. When my agency had our own lab, and even now when we have things done outside, we routinely require the use of topical anesthetics.

Chapter 2. We looked at research and development efforts that will support the development of improved alternative methods. There are 11 agencies that have R&D programs, and ICCVAM will be monitoring them for methods that may advance the three Rs. The areas currently identified as potentially applicable are high-throughput screening, other animal models and species, biomarkers, nanomaterials, testing strategies and the development of toxicology databases. Many of these areas will require several years of development, but ICCVAM and NICEATM will monitor the federal agencies' and other stakeholders' programs for additional areas of interest.

I'm not going to go through all of these areas that we've identified. You have the slides and you have the report.

In chapter 3 we addressed fostering regulatory acceptance and the use of alternative test methods. Clearly this is important because we can validate a test method, but if it's not accepted and used it will not have an impact on the three Rs. In order to foster acceptance and use we will be providing guidance on adequate validation study design, to ensure the validity of the data generated to support the test method acceptance decisions. We'll carry out high quality peer reviews and, as we discussed, organize implementation workshops.

Chapter 4 addressed partnerships and interactions with ICCVAM stakeholders. Bill discussed a little bit about what we're doing on the international scene, and he'll be discussing that further, so I'm not going to deal with that. But clearly there are interactions that are necessary to stimulate alternative test method research, development, translation and validation by stakeholders. We will be engaging in those, and developing partnerships to leverage our resources, maximize efficiency and ensure early exchange of information. We are collaborating with ECVAM and JaCVAM and talking about better ways to do that.

We had a workshop in February; I'll talk about that tomorrow. We have proposed international cooperation on alternative testing methods: on validation study designs, independent scientific peer reviews, and harmonized recommendations for regulatory acceptance. We've implemented an ICCVAM research and development working group and an ICCVAM five-year plan implementation subcommittee.

What do we hope to achieve? Further reduction and replacement of animal use where feasible, and continued and improved protection of public health, animal health and the environment. We look forward to SACATM's advice.

I want to acknowledge all of the people who worked really, really hard on developing the plan. Um, it wasn't easy; we worked on a short timeframe. ICCVAM, the ICCVAM five-year plan subcommittee, all of the agency program offices, stakeholders, public commenters, NICEATM and SACATM. This is just everybody in ICCVAM. And thank you for your attention. These are the questions that we hope SACATM will address. And I turn it back over to you, Dr. Freeman.

Thank you. This is the place for public comments. Are there any public comments? All right, it doesn't look like there are. We will turn it back over to SACATM discussion. The lead discussants will be Drs. Brown, Charles, and Marsman. Maybe I will read the question quickly. Please provide comments or questions on how to implement the plan described for the four key areas: identifying and promoting research for new test methods and approaches to reduce and eliminate the need for animals; fostering acceptance and appropriate use of test methods; developing partnerships and strengthening interactions with ICCVAM stakeholders. Dr. Charles, if I can call on you to kick this off.

Conducting and facilitating: ICCVAM should be commended; it's an ambitious plan. I recommend that you try to focus on the areas where you can have the highest impact and set metrics that you think you can achieve, with defined milestones. So later on you can say, we tackled this, these are the metrics and the milestones we met. Then five years from now, when you go through this process again, you have a much easier way of tabulating what was accomplished based on the plan that you elaborated.

[ Captioner Transition ] ...the guidelines, systems across international boundaries. That will be a very good aid to getting these implemented by industry, instead of having to do multiple test guidances for different countries or regions. It is also good to note you are going to be involved in the validation process; OECD-based guidelines and protocols really go into the test systems and methodologies performed to a large degree by industry.

Third, I would like to get back to, in terms of fostering acceptance, the CROs in the process. From an industrial perspective, more and more of the testing is being done by contract research organizations, not by the companies themselves anymore. The CROs are getting larger and larger, and some are involved in the test validation mechanisms, at least in test method development -- the micronucleus and comet assays, part of the validation process. Another example: the in vivo micronucleus test system, an acute rodent system, evaluates chromosomal damage when animals are dosed up to -- per kilo. It appears empirically that increased or decreased body temperature results in chromosomal damage. Recently, some have started requiring or suggesting temperature monitoring and the use of MTDs in the in vivo micronucleus study. One of the ambitious contract labs includes that now as a standardized evaluation for setting top doses. The sponsor sees, I can have a lower -- based on -- or -- and they are okay with that. I think emphasis has to be placed on interactions with the CROs, getting them onboard; as I reiterated earlier, a lot are acting as the experts in their fields for industry.

The other good idea is the toxicology databases -- a good one. I'm wondering, with the -- bank-type data concept, whenever peer-reviewed data is generated, whether the authors of that data could fill out a submission form where they could immediately submit that data to the toxicology database. Then they don't have to go through a separate process by which this data can be -- you don't have to keep calling for the data, and it facilitates updating the database more rapidly.

That's pretty much what I have for my comments.

Dr. Marsman? You want to go next?

Sure. I am going to restrict my comments to a couple of items. First, I think overall I have enjoyed the revisions to the five-year plan. I think the priorities under item one are reasonable, based upon animal welfare needs and investigator needs. In particular, I think the one area where there could be greater emphasis is this: for ICCVAM, moving out of some of the more complex systemic and in vivo end points is going to take time, but there are a lot of interim things that are possible. I would simply highlight, under refinements, as we talk about new methodologies under the second bullet, that those new methodologies can be refinements in terms of identifying such things as subclinical end points that may allow one to terminate the study sooner rather than later. Under reduction, those kinds of new technologies, microarray or whatever, give a bigger bang for our buck when we have to do those studies. I guess I would ask that ICCVAM use its voice where the workforce lacks the meat of the discussion. As we move into three and four -- I am struggling here a bit to separate this topic from the next one, the coordination across the various international agencies -- I think for me where the plan falls short is the lack of specifics on how to get to the end game. The way I think about this is to start by looking at the stakeholder needs for what the regulatory agencies around the world will be expecting. I understand the complexity of that, but partly, and maybe this is my myopic view from where I sit, in the luxury of a situation where we are dealing with just one agency within the United States: when we extend that to all of the different countries that we often have to deal with, the difficulty of moving in the direction of an alternative is often satisfying the various stakeholders and expectations. As we look at the U.S., to respect just the needs of the U.S. agencies, their expectations are different, and the greater clarity that ICCVAM can bring to what those expectations are would help us as we set the stage, so to speak, for the strategies of alternative methods. If we don't have that at the start, the assays will miss their mark for somebody, which triggers us back into a situation of default, having to run the in vivo to satisfy one agency or, in particular lately, one country. I can rattle off three or four pain-point countries -- Canada, Korea, Japan, China in particular -- which haven't necessarily aligned with some of the conversation going on today. When we get 99% of the way there, we still have to end up running an in vivo assay for a specific expectation. To summarize: look at the stakeholder needs for the registrant, and then the expectations of every place that an in vivo assay may be needed and all the varying expectations placed on that in vivo assay, as we look to replace it with non-animal methods that truly address the different agencies' and countries' risk assessment expectations of those assays.

Dr. Brown?

I think I actually made a lot of my comments earlier in the previous conversation, but it also jumped out at me, looking at number two, that we are talking about reducing, eliminating, replacing the need for animals, but refinement was not in that sentence, in that particular challenge. I agree with Dr. Marsman that there's a lot of room for improvement in the near term as we continue our longer-term goal. Dr. Stokes did talk about humane end points, and as mentioned earlier, Dr. Charles talked about using body temperature as an earlier time point, and I really think there's a need to get that information out there and some mechanism to validate end points in the studies that we are doing today.

We already talked earlier about partnerships with the stakeholders, not so much the international stakeholders, but here in the United States. We talked earlier this morning about providing more information to them in terms of guidelines for expectations if people are going to submit a study or a methodology for validation, as well as the kind of information you want to get and how you want to get it. Then sharing the information of what is available already and what's in the pipeline, I think, with -- and the scientific industry. Those are the key points I saw in this. I thought it was hard to -- there weren't a lot of details in here. It's the broad plan, and the devil is in the details.

We were limited to the pages we put in there.

Right.

Obviously this is a more vision and goals type thing, and the implementation is --

Comments?

First, I certainly extend my compliments to everyone involved in the process. In my experience developing plans of this kind, and I see this as primarily aspirational -- what Bill just described, it is not an implementation plan -- most frequently the process is worth more than the product, quite frankly; in other words, what the participants gain from their involvement in it. As we develop any plan we go back to Shakespeare, or the saying at the National Archives: the past is prologue. I will come back to that in a bit in terms of understanding where we have been and where we are going. There's an overlay in this where we have the challenge of two issues. One is how you deal with Congress. They specify certain things, like the length of the report, but also shun any mention of dollars, resources. All of us know that a plan without resources associated with it is, frankly, almost worthless. I still preach that to my kids, even as they move up to their early 40s: somehow you have to bring resources into the question. I understand your aspirations. The question is, do you have the resources that are matched to achieving those aspirations.

The other is, I guess, drawing on my experience -- I hate to admit it, now more than 40 years of involvement in interagency activities. They are really a challenge, because each agency has its own mandates and turf, and as you try to bring a number of agencies together, with 15 different statutes they respond to, it's a real challenge.

So I think what's most important now is, this is aspirational; now let's look at the implementation plan. The implementation plan should be documented and given as much scrutiny, in fact, as was this report mandated by Congress. It will be critical that it includes quantifiable, measurable goals, and that means quantifiable in terms of what is to be achieved and when it is to be achieved. Not the outcome, but the action on it. Then, I think it's important through all of this, and I saw it in a few places, to understand at the end of the day what our most important concern is: safety, safety of people and the environment. In the activity we are carrying out we are clearly concerned with the three Rs, but we need to recognize where we are headed.

I am concerned that throughout many of these activities, on the issue of validation, we have fallen into a trap. It's a trap that has existed in agencies going back to the origins of the two-year chronic bioassay used by NTP, which, as I think most people appreciate, grew out of the National Cancer Institute's cancer bioassay program, and the failure to recognize that what is needed is validation relative to known human toxicants -- in that case carcinogens. Secondarily, a comparison with existing methods. I see repeated reference to validation in terms of comparison with existing methods, and then, oh, by the way, we want to make certain we are protecting human health. I fail to see in the documentation, which I would have expected at the aspirational level, the clear linkage to known human toxicants in terms of validating methodology.

An additional point we touched on earlier is the distinction between hazard ID -- does the material cause an adverse effect or not -- and the much more robust data set required for what we typically call dose response. It's really shorthand for exposure dose response. To date the activities have focused on eye and skin, areas where the distinction between exposure and dose is not nearly as complex as with inhaled or ingested agents. So I would like to have seen at least a placeholder addressing that important issue, and clearly some attention to it in the implementation plan. In terms of the implementation plan, it is vitally important that we actually go back historically and do the best possible job of estimating what resources have been applied to this issue. I recognize that on some occasions I have been somewhat critical of the progress made; it's not nearly what I, or others, would have liked to see made. But when I look at the resources applied, it's probably recognizable that the effort is resource constrained. We need to go back and document what the resources have been; that will give us a better basis for anticipating what can be done in the future. So, I hope those comments are useful. As I say, I look forward to seeing an implementation plan, and I certainly urge that it be given the same kind of scrutiny and public review and committee review that this document was given.

Thank you, Roger. This is Jim Freeman again. Some of these comments reflect initial comments of a year ago, a lot of which were addressed. That brings me back to the question of the implementation plan, the status of that, and when we might have the opportunity to see it.

Sure. On one of the slides that Dr. Wind showed were initial implementation activities that have been undertaken, that were mentioned in chapter 1. The first of those actually occurred the first two days after this plan became available: the acute toxicity workshop. That was looking at how to collect more mechanistic in vivo information to inform the development of predictive in vitro systems, and to help identify earlier, more humane end points, as was just emphasized we need to focus on. Refinement was left out of the question, but not intentionally; when we mention any of the Rs, we mean all three. We recognize refinement can be attained more quickly than replacement and, in some cases, reduction. And we will work on that area.

As far as plans, each of the topical areas is given the task of coming up with detailed procedures to be undertaken to implement recommendations, and coming up with a timeline, working hand in hand with NICEATM. Whether it's recommendations, a report, or a study, we will provide more details at the next meeting, in writing, that outline those --

With regard to the comment about prediction of adverse human effects: when we look at a method, we certainly compare, if there's an existing method, performance relative to the existing one, but also with regard to any existing human data -- accidental exposures, ethical human studies. That's what we are interested in doing for human health studies. In the bioassay we will talk about tomorrow, that information is used to help assess things with the potential to cause cancer in people. We have some known human carcinogens --

Many of our existing methods are, frankly, not as robust as we would like to portray. By using them as the gold standard, I think we may in fact come to an erroneous conclusion with regard to new alternative methods, whereas if we at the first step look hard at the alternative method as a predictor of human toxicity, we may in fact come to a different conclusion. So, that's just --

I think it's a very important point, and I will go back to the local lymph node assay, reviewed in 1998. It was only 85% predictive of the traditional assay in guinea pigs. We had human test data for quite a few chemicals, and when you compared the performance for predicting human potential, both methods had comparable performance. It was on that basis that it was recommended it should be used as a substitute for the traditional test. Your point is well taken; we have used human data and experience in the past where we have it, and will certainly use it in the future.

Dr. Marsman?

I wanted to follow up with one more statement. As we look at the priorities ICCVAM laid out, in many respects acute toxicity is an assay whose value is waning. As we approach the endpoints in the workshop, which we will talk about later, many things came up which are important and will be reapplied to a lot of the systemic endpoints. In many respects it's the lowest-hanging fruit of the systemic tox endpoints. Many of the things we talked about, everything from [indiscernible] to stakeholder involvement, but particularly systemic tox, bear on what it's going to take to replace other endpoints. As we look at the comments about the bioassay, developmental neurotox, all these things that are mind-boggling, many can start with addressing what the relevant outcomes of the acute tox study are and how we would mimic those with non-animal methods.

Any other comments on the five-year plan?

Okay, well, thank you to Dr. Win, and to everyone for their commentary on that. That brings us to our next item, which is the National Research Council report on toxicity testing in the 21st century. Our presentation is by Dr. Kim [indiscernible].

Thank you to the chair, panel and audience for this opportunity. This says it all.

This is where we are: taking animals, running them over with cars, and I think everyone in this room knows this is where we will be sometime. The new biology we are all aware of will drive the processes we now undertake with animals. So what I want to do first is give you a little bit of a flavor for how this panel was set up and what its charge was, then go through some of the recommendations and give you some of my personal views. I want to emphasize up front that what I am saying is really my personal view. Dr. Win said this about her report, and it's really true here, because the panel that was involved in writing this report was very diverse, reached many different kinds of conclusions, and some members had different feelings about the results than others.

The panel was brought together as an NRC panel, 25 members or so from across the spectrum: academic, industry, animal rights, environmental activists, government individuals. Very broad backgrounds and experiences, and not a lot of restriction of that experience to actual knowledge of toxicity testing. This was a very broad panel.

The charge came from the EPA. The EPA's fundamental concern was: we are doing this kind of testing, and more and more issues are being raised, for example, endocrine disruption, developmental neurotox. The traditional response is to make a new test. We are doing lots of animal testing already; we have a new endpoint of concern, and the response is, let's do a new test and add that to our battery of tests in order to feel safe.

Fundamentally that's just not sustainable, and we already know that our coverage of chemicals, our ability to test, is much more limited than the number of chemicals being created all the time. So the charge to the committee was: let's look way ahead, be visionary, macroscopic instead of microscopic. How can we do this differently, and how can we move toward doing it differently, in a way that addresses the concerns of coverage and cost and animal use going forward?

That was the charge: animal testing at high doses, extrapolating to the human dosage. I emphasize, since this was an EPA-driven process we undertook, their concern was environmental exposure. We are not talking about high-dosage exposures such as pharmaceuticals. We are talking about ambient exposure, and how to do risk assessment on those low-level exposures.

The traditional approach is the MTD, animals, low throughput; the MTD database is in the high 500s, over 20 years or so. Expensive, time-consuming, pathology endpoints -- I love pathology, but pathology endpoints are probably not the future we are aiming for. Extrapolations over a large range: testing at the top, modeling the response likely in the human exposure range. Then, just to be really certain we feel comfortable, we apply uncertainty and safety factors, 10 for species, 10 for dose extrapolation, and we end up with a 1,000-fold factor just so we feel safe.
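The uncertainty-factor arithmetic described here can be sketched as below. This is a minimal illustration, not anything from the meeting itself: the NOAEL value and the particular stack of factors are assumed for the example (regulatory practice commonly multiplies factors of 10 for interspecies differences, human variability, and other extrapolations, yielding a combined 100- to 1,000-fold divisor).

```python
def reference_dose(noael_mg_per_kg_day, uncertainty_factors):
    """Divide a NOAEL by the product of the uncertainty factors
    to get a reference dose in mg/kg/day."""
    combined = 1
    for factor in uncertainty_factors:
        combined *= factor
    return noael_mg_per_kg_day / combined

# Hypothetical NOAEL of 50 mg/kg/day with three factors of 10
# (interspecies, intraspecies, additional extrapolation):
rfd = reference_dose(50.0, [10, 10, 10])  # 0.05 mg/kg/day
```

With three factors of 10, the combined divisor is the 1,000-fold margin mentioned above; dropping one factor gives the more common 100-fold margin.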

The other question, discussed a bit this morning: how do we know how good this system is that we are using? That's a really tough question to answer. One of the papers I looked at, Harry Olson's, on pharmaceuticals, has a good database: 12 companies gave coded data to ILSI, trying to compare animal study results with human toxicity results from pre-clinical and clinical trials. The overall true concordance was around 70%; the non-rodent species alone, 63%; rodents alone, around 43%. This is relatively high-dose exposure, targeted pharmaceutical testing, and we are, with all the species engaged, at up to 70% concordance. The concordance varied a lot among tissues, some better than others. Metabolism didn't really explain the problem. It wasn't metabolism differing across species that led to the discordance, but other issues.
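Concordance figures like those quoted can be defined in more than one way; a simple version is the fraction of compounds on which the animal call and the human call agree. The sketch below is an illustrative calculation only -- the toy data are assumptions, not the Olson dataset.

```python
def concordance(animal_positive, human_positive):
    """Fraction of compounds on which the animal and human
    toxicity calls agree (positives plus negatives)."""
    assert len(animal_positive) == len(human_positive)
    agree = sum(a == h for a, h in zip(animal_positive, human_positive))
    return agree / len(animal_positive)

# Four hypothetical compounds: agreement on the 1st and 3rd only.
score = concordance([True, True, False, False],
                    [True, False, False, True])  # 0.5
```

A score of 0.5 on a two-way call is the "coin toss" baseline one of the discussants mentions later, which is why the 63-70% figures are considered only modestly better than chance.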

All right, so is it dangerous or not?

The panel worked on this for two and a half years and 10 meetings. We are talking a long time and a lot of people's time, and still it's a very macroscopic, very big picture we generated. These are the panel members; Dan Krewski was the chair. Very diverse backgrounds. I think the critical decision the panel made in its recommendation was to move toward a new concept we described as toxicity pathways as the focus for how to develop knowledge about response. There was actually a lot of discussion about the term toxicity pathways, because people didn't really like it; they found it just sort of icky. But I think the concept is very parallel to what has happened, if you think about it, in cancer studies. In cancer it would be oncogenes and tumor suppressor genes; in toxicity it would be protective pathways, heat shock protein responses, or apoptotic pathways -- responses to exposure within cells, used to try to understand mechanism rather than phenotypic responses. The whole paradigm that was developed really developed around the concept of toxicity pathways. I think that's actually a key idea. If you think about the cancer analogy again, we have been doing cancer research looking at oncogenes since the war on cancer began with President Nixon 35 years ago. We have made substantial progress in 35 years. We now probably can robustly predict the behavior of a cancer based on its pattern of gene expression, in terms of forecasting how the cancer will behave, on par with pathology endpoints. It took 35 years to develop that kind of database, those understandings.

The visionary concept of this panel was operating in that kind of time frame. We are talking 25 years, 50 years down the road, being able to change the paradigm of the way in which we do business in terms of testing. So we are thinking along those lines in terms of time.

Looking at a dose-response curve: at low dose there is a homeostatic response; at a higher dose there may be adaptive or adverse responses that take place, cellular changes that are then responded to -- the cell can adapt and the pathway continues to function. At a higher dose yet there may be some kind of irreversible change in pathway function that leads to injury to the cell or organism, with severe consequences, and is therefore no longer adaptive, but adverse.

We don't have the vision slide -- probably an IBM-to-Mac problem. The vision actually is this diagram here, as you can see. I will go through it relatively quickly; it's detailed in the book, and the conceptual pieces are the most important. This vision has a chemical characterization component; a toxicity testing cassette, one component of which is the toxicity pathways piece; a dose response and extrapolation piece, to a population; exposure data, the human input into the process; and a risk context piece, so we know why we are doing this kind of measurement. In chemical characterization, as mentioned earlier today, we acquire all the information we can about the chemical, then predict, based on what we know, how that chemical might behave, using computational tools. Whatever physical properties of the chemical are available to us, we can use to understand better how it might behave and interact with biologic systems.

For the dose response and extrapolation piece, the idea is to develop computational tools to go from in vitro exposure settings -- say, a media concentration -- and extrapolate to human exposure at the target tissue level: kinetic models that allow that extrapolation to function, so that we can realistically assess, taking known levels and extrapolating from the in vitro cell system. Those kinds of extrapolations.
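The simplest version of the kinetic extrapolation described here is a one-compartment steady-state model, as used in reverse dosimetry. The sketch below is an illustrative assumption on my part, not the committee's model; the clearance value and concentrations are made up for the example.

```python
def steady_state_conc(dose_mg_per_kg_day, clearance_L_per_kg_day):
    """One-compartment steady state: Css = dose rate / clearance (mg/L)."""
    return dose_mg_per_kg_day / clearance_L_per_kg_day

def oral_equivalent_dose(in_vitro_conc_mg_per_L, clearance_L_per_kg_day):
    """Reverse dosimetry: invert the steady-state relationship to find
    the daily dose that would reproduce the in vitro active concentration."""
    return in_vitro_conc_mg_per_L * clearance_L_per_kg_day

# Hypothetical: an assay responds at 0.5 mg/L; assumed clearance 4 L/kg/day.
oed = oral_equivalent_dose(0.5, 4.0)  # 2.0 mg/kg/day
```

The point of the extrapolation is that an in vitro media concentration by itself says nothing about human risk until it is mapped back onto an external dose in this way.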

In terms of actual testing, I think the idea -- this is again personal -- was that we had toxicity pathways as our visionary endpoint, pathways we thought would work ultimately, if sufficient pathways were identified and used productively. But in order to feel comfortable we needed targeted testing along the way. This would be more complicated, more focused systems, and in particular this was driven by some concerns about developmental toxicity, something that couldn't be predicted by the -- part of the experiment. We needed to fill the gap between where we are now with the whole animal and where we might be with a cell in a dish, by having some kind of reassurance from complex systems -- less complex than the whole-animal systems, but more complex than a cell in a dish -- that would tell us about the responses of groups of cells or organs.

I think there was some disagreement on the panel about the need for that intermediate step.

There was a real strong feeling on the panel that the human exposure assessment and biomarker information piece needs to be more fully developed in general, and used in a more informative way to guide the testing that we do; that it has not been mined in the way it can be, particularly as the tools have developed. Sophisticated tools for measurement are available to us now that can be employed to gain greater information from people on ambient exposures and what happens with those exposures. So there's a real push toward incorporating more sophisticated methods into the modeling that we anticipate needing.

Finally, risk context: risk management settings drive a lot of the decision making. There was a recognition that it would be important to know whether it's a new agent or an agent that has been out there a long time, and the settings of exposure. This may drive the assessment process.

In summary, this is sort of how the evaluation and testing process was conceived. There's a chemical characterization piece, either predicted or observed, say by human exposure assessment. The biologic perturbation potential of the chemicals is assessed; pathways and their significance are interpreted, along with the in vivo levels at which they occur; this is brought into a validated computational setting, informed by population-based human testing and human exposure data, to develop an exposure guideline.

I already mentioned some of the thinking about time frame. I am serious when I say the committee was thinking in the 20- to 50-year time frame for this to come about. The idea is to develop a comprehensive suite of in vitro tests based on human cells or components, and computational models based on toxicity pathways and their responses. The intent is risk assessment: to be safe, and predictive about being safe. There's also a recognition that there would need to be quite a change in infrastructure, in the way the science of testing is done, to accommodate the anticipated changes in how the tests are done. There's a huge establishment in place to do animal testing: CROs, regulatory agencies, regulations. There's recognition that's in place and something different would have to happen, including in the kind of scientists doing the tests: a combination of computational skills and cell-based skills that allow the communication to take place to develop this new kind of science. And validation of tests and test strategies, which you are very concerned with: evidence justifying that the approaches are sufficiently predictive to be useful in decision-making.

I am getting close to the end, but want to spend a moment here. This slide is easy: the promises. Next is the conundrums. The promises are human relevance, using human cells or components, and dose relevance: going toward doses that are environmental, trying to measure the in vitro response at doses relevant to real exposures.

One of the most attractive features for all of us, myself in particular, was that the proposal focuses on mechanism and mode of action rather than phenotypic responses. It's the new science, and I think that's an important driver; it has been for the panel.

Cost effective in the end, not in the beginning; hopefully once the system is in place it's actually quite cost effective, and fast, and it also serves the three Rs.

Some of the conundrums. The discussion before I came up was: is this a screening tool for testing compounds, and therefore a screening tool that has to be validated against the traditional tests, or is it a stand-alone system? I think that's an extremely important question, and I am a strong advocate that it needs to be developed as a stand-alone system validated against the human exposure knowledge we have, as opposed to the existing animal tests taking place largely at high dose. That's a difficult issue, but one that I think is very important and worthy of discussion.

The recommendation is to use human cells, and there are a lot of advantages in that. Human cells carry the proteins that chemicals might interact with in our bodies, as opposed to rat proteins, and the biology taking place in them is ours; they can be propagated in vitro in perpetuity. We would need to know more -- 20 years, 50 years -- to make this system knowledgeable.

Mixtures: there's an advantage there. Right now we don't look at mixtures; it's hard and expensive. One could imagine that would be an advantage of this system: it would be relatively cheap to throw a bunch of stuff together and see the response. But mixtures have complex effects on each other; one compound can inhibit the metabolism of another. That creates problems. Metabolism is not an easy issue to address. Some of that we can get from human exposure studies in people, but it's another issue that needs to be addressed. What about unknown mechanisms, the other unknown biology out there? All of us probably feel a little more comfortable that an animal getting exposed is covering the unknown biology, more than a cell. I'm not sure that's true, but I think it's one of the comfort issues challenged by this proposal, one of the issues this new paradigm would need to address. What do you do about biology you don't know about?

Cell-cell and organ interactions: with one cell in a Petri dish, it's hard to have interaction. On the other hand, cells in a signaling loop have to signal, and cells stimulated to signal have pathway interruptions one could perceive and measure; that could be enough of a signal to act upon.

The distinction between adaptive and adverse: nice words, but the actual distinction is complex. That's a key piece of the argument for the new paradigm, and one that's not simple. The words are great; putting the meat on the bones is complicated.

Toxicogenomics: overpromised and underperformed. The proposal here rests on the history of toxicogenomics. People have been a little disappointed; it was going to solve all their problems next week, and here we are 10 years later and we still have a lot of problems. It's one of the conundrums: how to deal with that legacy, learn from it, and still use the molecular approaches that are so powerful.

Another conundrum, rats look like people, cells don't. We have a comfort zone around what we have done for so long that we trust, just because that's familiar. Doing something different is unfamiliar, scary. That's actually a real issue.

Finally, I would say: is this another war on cancer? We are 35 years out on the war on cancer, which was supposed to take 10 years. At least I think the time frame of the vision of this panel was good: 20, 50, maybe 100 years. Go back to the first slide; we would all agree we will end up on the right side sometime, if we are still here.

I will put one more conundrum up for your consideration and wrap up. That is: who is going to do this? Where is the money going to come from? The panel talked about this quite a bit and actually never reached consensus on that particular issue. The words in the book are quite fuzzy. I will tell you my opinion. I think it needs to be done in some kind of independent institute setting. My model would be the Human Genome Institute: given the task of sequencing the genome, given the money, it went out and did it -- the problem was solved before anyone could imagine it would be -- because they were given the charge of making the solution happen, independent of any constraint of existing agencies or the rules of existing agencies. My personal feeling is this development would best take place in an independent, newly established setting. It will cost a lot of money. The panel estimated $300 million, and a time frame of 20-plus years.

All right, back to the first point: stand-alone or screening tool. I would just like to make a comment comparing this figure. The thinking of the panel was: we take the knowledge we have from the studies we have done in laboratory animals, rodents mostly, in order to better understand the toxicity pathways, and interpret those as they apply to people, using information from people to interpret them correctly.

This is actually a different way to conceive of doing the work that needs to be done than came out in the Science article; Dr. [indiscernible] is here in the audience. The proposal there -- you will notice this looks very much like the previous slide -- at least one piece of the proposal in the Science article, is to use the information at the molecular level to prioritize the animal tests. I think that is going to take serious thought, because there's quite a bit of difference between this proposal and that proposal. This one is meant, ultimately, to be a stand-alone paradigm for testing by itself. With that, I will answer any questions, and look forward to your discussion.

Thank you, any questions from the committee, clarification, before I move to public comment and discussion?

I would like to make one quick comment. Under your conundrums, you almost touched on it: the conundrum that most scientists love what I call discovery science, discovery research. They are not very happy in many cases with what I will call issue-resolving science. This is issue-resolving science that builds on discovery science.

Right. It's applied as opposed to --

Any time you get a panel together there's a little concern that "this could be my next grant that went down the tube." As a scientific community we have to come to better grips with this, if we are going to sell the public on science-enabling progress in terms of human --

That's a great comment. There were not all that many academics on the panel, so not that much self-serving behavior happened from that perspective.

Questions of clarification only at this point, please.

Karen [indiscernible], EPA. I am curious about the notion that most of the toxicological data we have in the regulatory arena is from high-dose testing. I can think of some cases where that may not be the case. I am curious where that notion came from.

The panel was basing that notion on the MTD requirement for the high dose and extrapolation, usually with two other doses -- in general, orders of magnitude above what the environmental exposure typically is for people. The animal testing is done that way on purpose, because there's a constrained time frame in which to see a response, and I think the NTP animal studies were used as the model of that.

The cancer studies?

Yes.

Because I guess my point is there are many other types of toxicity testing designs that are used. The pesticide program at EPA is one of those. They don't just do carcinogenic --

I would suggest they are typically orders of magnitude higher than environmental exposures.

Other questions for clarification?

Dr. Marsman?

I am curious: to what degree does the panel feel its vision is limited by looking at the constraints of environmental exposure? As I look at it, I struggle a bit with the focus on that low dose. In reality, few of us have the luxury of doing assessment only for low-dose exposure. For any product we make, we deal, from the standpoint of a raw materials supplier, with high doses in the plant, transport, exposures in the making of the product, disposal after use, and the CSP -- all of that while in use in commerce. The exposures are all over the spectrum, and the "field" exposure may range all over that. I struggle a bit with the limitations of the vision.

I think that's a very good comment. The panel was empanelled -- speaking only for myself -- to address that question, an EPA question. Still, the idea was to do dose response in whatever paradigm is created looking at toxicity pathways. That ought to be robust across the response range as well.

The restaurant is closing at 1:30. To get to public comment, we may have to come back after lunch to finish. At this point let's see if there are other questions for clarification on the information presented.

Go ahead.

Marilyn Brown. My question is similar to Dan's. You talked about ambient exposure, not direct exposure such as pharmaceuticals. My question would be along the lines of what your vision is for using this approach in a pharmaceutical environment. Is that clarification or something for later?

You asked it, so -- it's okay.

I think it's a great question. The paradigm works even at those levels or could work at those levels; but that wasn't our charge, so --

Okay.

Okay, a simple question, and hopefully it falls under clarification. Three slides back, before the conundrums, on your toxicity testing and risk assessment slide: you were talking about the move from phenotypic, what I would call observational, endpoints of toxicity to a mode of action approach. Can you clarify, or succinctly explain, the reasoning for moving from one to the other?

Why? Why would one want to do that? There are steps in between, so why would one want to go from something so macroscopic to something so mechanistic?

I would argue that if we knew the mechanisms of compounds we would actually make a lot of progress toward the mixtures question. We could understand interactions better at that level than at the phenotypic level. I will use the cancer analogy again. As pathologists we look at tumors and grade them, fairly predictively, for outcome. You could do that at the molecular level, not worrying where the tissue was coming from, only about the pathway perturbations. Maybe not so great right now, but increasingly good in terms of prediction. You don't have to worry about the tissue, just the pathways driving proliferation or death, et cetera, that are so important in cancer. Fundamental properties of effect, as opposed to measures of effect that are not fundamental to how chemicals interact with tissue.

Any public comments?

Well, it's about 25 of 1:00. I think we agree to break for lunch; the restaurant closes at 1:30. I say we start here at 1:30 sharp, and we will adjourn for lunch at this point.

[Meeting to resume at 1:30:00 p.m. eastern time]


Okay, can we call the meeting back to order, please.

[ Speaker/Audio Faint or Unclear ]

Okay, thank you all for returning, more or less on time. I'm as much to blame as everybody else. The point we're at is the SACATM discussion on toxicity testing in the 21st century. I don't know if we can pull those questions back up; I will read the questions again. We do have written comments from Marion [ Indiscernible ]. One: the NRC report puts forth a new approach. What are its limitations and advantages? What impact might this new approach have on regulatory decision making?

Two: the strategy described in the report. Three: how might ICCVAM and NICEATM help ensure methods are valid for regulatory safety testing and further reduce, refine, and replace animal use for safety testing? Our lead discussants are Frank, George DeGeorge, and Marion [ Indiscernible ].

I have read the report and used it in teaching my graduate course this spring semester. [ Audio Cutting In and Out ] And on number three, the ICCVAM and NICEATM role: changing regulations would require validation studies. The agencies would need to be kept informed. That's all.

Okay, thank you. Um, why don't we go in alphabetical order. Frank, do you mind going first?

In response to question one: in terms of advantages, with this NRC report we can look at it as either taking a step backwards, or as reevaluating toxicity information the way we used to years ago, before the development of in vitro tests -- that is, looking at mechanistic toxicology. The idea of in vitro tests in the 90s was quick, short-term tests. They were developed to go along with the mechanistic studies. You can use cell tests for basal cytotoxicity, using minimal target pathways to explain toxic effects that are common to all cells. It seems that with the mechanistic pathways suggested in this report, the advantage is that if you can find a target-organ-like response in the cell cultures, this might help in developing either short-term tests, a stand-alone test, or a screening test. I'm not clear as to how that would work. It will take another 25 to 30 years to develop tests, screening tests or stand-alone tests, based on mechanistic information. So we can't expect to have too many developments in this field, within the three Rs, within the next five to ten years. I think that is important to keep in mind. The mechanistic tests have a lot of information in them; they're not short-term tests. The advantages are, of course, that mechanisms help in understanding the site of action. In terms of what role ICCVAM and NICEATM might have: the report gives some information about the formation of a new institute in this area. As an academic I don't believe too much in the value of forming large organizations; committee work at that level is daunting enough. Instead of a new institute I would suggest the expansion of the existing institutes here. Formation of a new institute means money and resources, all of which, as we have mentioned, we don't have enough of at any of these steps.
Taking the existing institutes and giving them more funding, more resources, more administrative clout would be a better way of using the resources than going the new route. And how might ICCVAM and NICEATM help to ensure the development and validation of assays? I think they have enough experience in knowing how to bridge those gaps. We have enough mechanistic information and biotechnology that could be used for the development of new tests; things like that have already been mentioned this morning. Even in our lab we use PCR analysis for detection of gene expression. If a small lab can do that, I think the information that is already generated can be used as a screening test for target-oriented mechanisms. And there was a mention in the presentation that there weren't enough academic representatives on the NRC report committee. I think there should have been. That way it would have been more self-fulfilling, and we would have asked where the money is going to come from. [ Laughter ] Thank you.

Thank you. Dr. George?

Well, I agree with everything that Dr. Barile said. I don't have much to add. With respect to question one, the potential advantages and limitations of the approach, which I describe as a mechanistic, mode-of-action approach, as we saw in the previous presentation: first, what came before it, I call in my lab "red and dead" toxicology. The animals are either four legs up or not; that's how you knew your LD50s. You knew whether skin or eyes were red or not. Moving from the macroscopic to the microscopic cell models is a huge leap. You can't see; you've got to use a microscope. There are fewer observations; it's the nature of the beast. You then have to measure and perturb the system, the cell or tissue culture, to know what is going on in there. Even putting in vital dyes affects cells. So going the further step from the microscopic to the nanoscopic, where you are looking at the biochemical mechanism -- I think 20 to 50 years is a realistic estimate. I think within 20 years you could do the cell-based assays, but I think we'll be held back. So the advantage of this mechanistic approach is that you could discover widespread mechanisms of toxicity. There's hundreds of millions of dollars of grants that have been going out for a decade or two to generate probes and antibodies, to measure cell death correctly and distinguish it correctly from other types of cell death. There are people who need to be trained in the differences between the types of technologies [ Interference ]. Perturb the system to see what the system did. We can discover fundamental principles that would hopefully explain the discordance in the existing classical toxicology tests, the accepted low accuracy of some toxicology tests when compared to an alternative, or to human data when you can get it. It was mentioned that guinea pig data compared to rodent data yielded high-80s percent correlation accuracy. When each one is compared to the human data, the accuracy is in the 72 to 74% range. That might sound good.
But 50% is a coin toss. Hopefully, by finding out more about when we encounter false positives and negatives -- I think the advantage of proceeding in this direction is that it will allow us to resolve conundrums from the past. It brings on a whole new set, but it will bring us up to speed on some older ones. I agree, I think that ICCVAM needs more staff and resources. I think it's got a lot of weight on it: coordinating the 30 or 33 members from the various regulatory agencies, and even suggesting funding something like banked corneas that might be coded from companies, so that they don't have to worry about what the answer really is later. [ Laughter ] That will take people. That will take money. And Dr. Stokes, Dr. Tice and their staff, I think, are overburdened. I think we should also, finally, on number three -- Dr. Marsman brought this up and others did -- remember there are three Rs. You don't have to go for total replacement of animal testing. Not everything can be tested in local [ Indiscernible ]. The 3T3 test that we use, that is accepted here, uses media-covered fibroblasts from a sarcoma of a mouse. What does that have to do with human skin, which is usually the target of phototoxicity? So we're going to have to use sentinel cell models and work our way down. I think if we try to shoot for too much we're going to have the end result of endocrine disruptors -- what happened there. It was the rage for a few years. They spent money, a lot of assays were developed. They tried to go high-throughput. That was a big mistake. One of the slides presented discussed the problems of going high-throughput too early. That is one of the problems that endocrine disruptor testing ran into: they encountered basic physical problems. I think you may have to put high-throughput aside. Learn from the endocrine disruptor experience, where we are now and where we were supposed to be in 2001. That's my comment.

Okay, thank you. Dr. Fox?

Thanks, Jim, for assigning me this. No, no, I'm serious. I have listened to this presentation now three times, and I must say, Kim, I thought you gave the most poised and reasoned approach to it. I applaud the committee that put it together. I've read the report in full. I've thought about it since I read it, several times. I have several comments. I'll start with a global comment: I don't think that most toxicity is like cancer. I'm looking at you in directing this. We learned this a long time ago. Neurotox and these things can't be modeled the same way. We're looking at a single endpoint in cancer; with all these other systems there are multiple endpoints, and the tissue is so complex. It really concerns me that the committee in some ways chose to use the cancer model as the global model on which to base their thought processes. I like the way it's written. I think it's brilliant; a lot of thought went into it. These are my overarching comments. I believe they left out some good cell biologists and systems biologists, and I think that's behind some of the problems we're going to face. I talked with Kim about this, and others: these systems are missing important points -- the pathways that we use to assess toxicity. Any academic toxicologist uses pathway-assist programs. These contain only curated pathways. If you use a curated pathway and you come up with a novel pathway, you are off the chart. None of the pathways that we've discovered fit the curated pathways. We have no hubs and nodes; in general we have no hubs or nodes for our cell-signaling pathways because they're novel. So when we put in the cell pathways we come up with nothing. I'm a little concerned about the use of curated signaling pathways. Some of my other concerns: reversibility. Reversibility is not detectable. I think this is critical. It goes to the two words that you used, adaptive versus adverse. Certainly there's adaptation.
You can actually find cells that, when you give them an insult, undergo a response. It may be just a heat shock response. It could be toxic, but it's short-term and reversible. Delayed effects. There are so many delayed effects now. They don't manifest themselves until 6, 12, 20 months of age. In humans they don't manifest until the 5th, 6th, 7th, 8th decade of life. I'm not trying to be overly negative. I'm trying to be a good scientist.

Maternal-fetal effects. Sensory, motor and cognitive effects. I've never seen a PC12 cell think on its own. And neurite sprouting can be viewed in these cells, but you have to add NGF, so now you are looking at drug-chemical interactions. Aging and susceptible populations. Cell choice: how do we pick these cells? You made good comments, Kim, about using human cells. Most of them are transformed. When you look at the Science paper that John and others are authors on, you see about 80% of the cells are transformed cell lines, and we know that neuroblastomas don't respond normally. Some you have to transfect to get a cytokine pathway into. Some of the cells don't have the complement of the pathways that may be important. Epigenetics you mentioned. Substrates: if you grow your cells on [ Indiscernible ] versus [ Indiscernible ] you get different results -- the substrate that one chooses. How do we do a matrix of this? I know that folks now at NIEHS are doing these studies; they presented them at the toxicology meeting in a nice way.

Data analysis I won't talk about much. There are going to be a hundred thousand runs a day. How do you correct for multiple samples being run at the same time? This is just a statistical question. Do we accept a P value of [ Indiscernible ]? How many multiple corrections? I always like tier testing. I thought that the tiered testing approach was a really good approach. I don't see this as an incorporation of that. If it could get an added tiered testing approach I would feel more comfortable.
I have not seen this proffered yet. Those are my -- I'm an in vivo biologist, I guess. Those are my limitations. The advantages are they make you think differently. That's a good thing. You can point out the limitations and overcome the limitations with enough thought and money. The impact on regulatory decisions -- I think this is where the strength of it lies. I think if it was approached in a risk assessment manner this is really a great approach. Again, these other decisions have to be made.

What role could ICCVAM and NICEATM play? I really think this should all be based at NIEHS. I'm a firm believer that this is the right institute for this. They can branch off and have their own independence in doing this. I don't believe we need a new institute -- extra money for NIEHS. I really think we need to get scientists interacting with the intramural people, and have the ICCVAM and NICEATM people. There were not enough real hard-core scientists on the committee. Question three: I think that the validation approaches that ICCVAM has done prove these folks know what they're doing and can do it. It's getting the right number of meetings and stakeholders. I think these kinds of assays need those kinds of people in this kind of room. If it takes two years or five years, we need the extramural people, we need the intramural people, and all of the stakeholders to discuss this. This is too big. It's a $10 billion project, minimum. We know that NIEHS is underfunded. This is important; it protects human health. These are my comments. Thank you for the opportunity.
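
A side note on the multiple-comparisons question raised above: one standard answer is a false-discovery-rate adjustment across the many endpoints run per day. The sketch below is a minimal Benjamini-Hochberg procedure in Python; the p-values and the 0.05 level are hypothetical, purely for illustration, and were not part of the discussion.

```python
# Illustrative Benjamini-Hochberg false-discovery-rate adjustment for a
# high-throughput screen. The p-values and alpha below are invented.

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha."""
    m = len(pvalues)
    # Sort p-values, remembering their original assay positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])

# Example: ten hypothetical endpoint p-values from one compound run.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.368]
hits = benjamini_hochberg(pvals, alpha=0.05)
print(hits)  # -> [0, 1]: only the two smallest p-values survive correction
```

A naive per-test 0.05 cutoff would call five of these ten endpoints significant; the correction reduces that to two, which is the substance of the concern raised.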

Thank you. Any other comments from the rest of SACATM? Dr. Mcclellan?

Again, start out by noting my compliments to those that put their personal efforts into developing the report. I do, again, as already mentioned -- I've heard presentations on the report, and I would say that the one we heard today was, to me, the best presentation I've heard. It was appropriately nuanced. I think that comes in part from his background. This sort of leads into one of my concerns with the NRC report. To some extent it does not have an adequate recognition of the range of human disease. To the extent there's a human disease orientation in the report, it uses cancer as a model. That in my view is a serious mistake. It's a trap that we wander into. Many people were trained with that approach in mind. We could go into the details of it. It's related to the fact that Nixon did declare a war on cancer. We had the cancer blinders on. You only have to look at national disease statistics to understand that even today we've moved cancer down that list. If we look at cancer as to what it truly is, a very diverse family of diseases, the individual cancers drop way, way down. We realize cardiovascular disease is up there.

The report is visionary. That's its strength. It lacks the broader view that I think is needed to understand that humans are afflicted with a broad array of diseases. It's almost mind-boggling to think of all of the pathways and interactions that can occur with regard to disease occurring, considering the genetic base and then the broad role in terms of environment and then environmental agents. Now, to join that, we've also talked about the extent to which the report discussed earlier today is a visionary report. It's lacking in terms of an implementation plan. By the same token, the NRC report, again visionary, comes up very short in my view, even on a hint of a strategic implementation plan for how it would go out. I see these joined in some sense, using one of those Venn diagrams.
There's a little slice of overlap, but not a big one. I think that the way by which ICCVAM and NICEATM can facilitate that process is they can continue to encourage the institutional structure to push ahead with trying to create an implementation approach. At the same time, as ICCVAM sharpens its implementation plan and its strategy, I think the area where it can be of great advantage is to give greater emphasis to a disease orientation. ICCVAM has not had a strong disease orientation. It's talking about a few effects. I think it needs to struggle to get that. The second is that I think it needs to develop that broader view of variability across populations, which is going to drive, I think, many of our risk assessment approaches in the future. So to the extent that ICCVAM sharpens its validation methods and links them to known human toxicity, it will provide a template for stepwise validation of the various approaches advocated within the NRC report. So I think we ought to commend the groups involved. Note there's an area where the activities come together. But I would not want that report to take ICCVAM and these important activities off track. I don't think there's a reason to hold up the stop sign. We need to sharpen our methodology.

Thank you. Dr. [ Indiscernible ]?

I have two comments that are interrelated. One is, why isn't the NRC approach to be done with animal cells first? And the reason I say that is because with human cells we might have the ethical issues of continuously getting human cells from -- especially from healthy humans. What if the human is already exposed to, say, an initiator, and agent B is what we're interested in and is a promoter? Which one is responsible for the disease?

Yeah. I think Kim could comment on this. But if you don't mind, I can. If you look at the report, almost all of the cell lines are transformed. Access to tissue is not an issue. These are growing up in laboratories. They're SV40ed to life. They're transformed. Is that right, John?

That's correct. There are some primary cells. I would say that in the initial evaluations that we've been doing at NCGC they've used animal and human cells from corresponding organs. They're seeing differences in their responses, unfortunately. But that aspect is being looked at.

I think that's an interesting point that John brings up. Two cell lines in the neuro field are used. One is much more sensitive than the other one. Here we have differences within the same -- within using two cells that are both neural-based. This begs a question. Is the one we have now a more sensitive one, or a less sensitive one? And which one do we use? You could be on the dose-response curve. You can find an effect at a dose greater than 100%, or go out on the dose lower than 100%. These things kind of concern me. Who you pick to be your sentinel -- which canary gets picked.

Dr. Brown?

I can't hold a candle scientifically to anyone that has spoken so far. It seems to me this approach would be helpful. One advantage of this approach is the potential of helping us refine animal studies. We can learn more about mechanisms and what types of cells or organs are targeted, and focus our observations and develop earlier endpoints. I am concerned about the potential for false positives, because it more or less seems that it's looking at toxicology as an event rather than a process. I think this goes to the comment that was made regarding reversibility. If you just look for a specific reaction, it may not give you the whole picture in terms of what would really happen in the whole organism. If we are looking at increasing false positives -- I realize we're concerned about safety, but also we need to be concerned with killing drugs that have the potential for doing great benefit to humans and animals. Because of false positives they get killed early on in their development, and don't go on and do their potential good. I think there's a real danger with false positives.

Dr. Charles?

Just a couple of comments to follow up on that. It's going to take a new breed of scientists, somewhat comfortable with in vitro test systems, to actually put this into effect. That's also true in a regulatory context. The people interpreting this data have to be educated as to how to interpret the data. It will have a significant impact on regulatory decision making, and education is key not just for the toxicologists but also for the regulators. Also, as Dr. Fox put it, I think this needs to be a single agency-driven approach. Somebody has to drive. And in terms of a multiagency approach to this, I don't think it's going to limit [ Interference ] what could come out of this. In terms of ICCVAM and NICEATM serving this vision and this strategy, I see this as a [ Indiscernible ] process, where these test systems are developed and used to generate this conceptual basis. In terms of when they go into effect and are implemented in developing new products or chemicals, I see ICCVAM as helping to implement the test systems so that when they go into effect we have more confidence in what we get out of them. I'm looking at them in that regard.

Thank you. Anybody else? Dr. DeGeorge?

I want to reiterate one point. We've made a lot of progress in the cell biology area of alternative toxicology. An example is 3T3; it's a good model system. Nothing human about it -- cells on plastic under salt water. I think the vision to get human cells where we understand how they behave, how they function normally or abnormally, is obviously the ultimate goal, at the cellular level and at the intercellular level, all the way down to the intranuclear. But I don't think we should throw away the time and effort with already established nonhuman cell types. Right now every time I do a 3T3 study I do 96 wells -- I don't do it, but somebody does it for me -- of cells that are treated with different concentrations of a chemical, and another 96 wells that are put under cell stimulation. I get two viability assays; they could be apoptosis assays, they could be necrosis assays at another timeframe. That's a lot of information. I have not changed to using human cells yet. There's a lot of data being generated by tests like that right now in that cell line that can be explored mechanistically.

I think we should ask why we pick the cells we do. Why do we use 3T3 cells? And why do we use L929 cells? They were banked 50 years ago -- one of the first types of cells that would grow in minimalist media. I think my main point is, don't rush to try to use human cells. There are plenty of human cells, either transformed or not. My donor card is filled out; I wrote everything on it. I just got my driver's license. I said you can take everything. I think that we have 50 years of knowing how a lot of different cell lines respond, and they've been fully genotyped. I think we should capitalize on that. It is hard to work on human cells; that's why the 3T3 assay is not run in normal human keratinocytes. It works in the cells that are mandated in the guidelines. I don't see any move to change that assay to human cells right away and to revalidate it.
If you're not going to do that, you know, I think we need a stepwise plan that gets us to human. We shouldn't fear still using rodent or whatever intermediates, constructs -- they're all available to us. It is a pathway, a mechanistic pathway. It might only cost $8 billion rather than $10 billion. Thank you.
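
The 96-well dose-response testing described above is typically reduced to a normalized viability curve and an IC50. Below is a minimal sketch of that reduction, with invented concentrations and readings -- not data from any actual 3T3 study:

```python
# Hypothetical reduction of a 96-well viability readout: normalize raw
# signal to vehicle controls, then interpolate the 50% crossing (IC50).
# All concentrations and readings here are made up for illustration.
import math

def percent_viability(signal, control_mean):
    return 100.0 * signal / control_mean

def interpolate_ic50(concs, viabilities):
    """Linear interpolation on log10 concentration of the 50% crossing."""
    points = list(zip(concs, viabilities))
    for (c1, v1), (c2, v2) in zip(points, points[1:]):
        if v1 >= 50.0 >= v2:
            frac = (v1 - 50.0) / (v1 - v2)
            log_ic50 = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_ic50
    return None  # curve never crosses 50%

concs = [1, 3, 10, 30, 100, 300]             # µM, hypothetical
mean_signal = [980, 950, 820, 470, 150, 40]  # mean raw reading per dose
control = 1000.0                             # vehicle-control mean
viab = [percent_viability(s, control) for s in mean_signal]
print(round(interpolate_ic50(concs, viab), 1))
```

A real analysis would fit a four-parameter logistic curve rather than interpolate linearly, but the normalization-then-crossing logic is the same.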

Okay. Thank you, everybody. I think we've expended the hour that we were allotted for discussion on this topic leading up to lunchtime. We're going to move on with the afternoon session. I think you received a range of commentary, from scientific to practical to don't throw the baby out with the bath water.

Yes, on behalf of ICCVAM I want to thank the panel for their comments. As we work toward an implementation plan we'll be considering the implementation strategy and plan that will be brought forward for the NRC report. Certainly there is overlap. We need to take advantage of that overlap.

Okay. It looks like the rest of the afternoon we will be discussing the federal agency research, development, and translation activities relevant to the NICEATM/ICCVAM five-year plan. We will have a series of presentations from the various agencies, mostly about 20 minutes each, I believe, with about five minutes allotted per presentation for a little bit of question and answer. We will have to watch the timing; we're running a bit late today. I just ask everybody to keep an eye on the time. With that we can get going. The first presentation is from the NIH, Dr. [ Indiscernible ].

Thank you. I very much appreciate the invitation to come and talk. It's the first time I've been able to come to one of these meetings. I had to muscle [ Indiscernible ]; she's watching us. I couldn't have asked for a better prelude. Kudos to the NRC committee for having a vision for the next 20 or 30 years. What is important about having a vision like that is not so much the details of that vision -- they're not there yet, obviously. I think what is important is it forces us to think beyond where we already are. It forces us to think about the things we could do. For those of you that have heard me before, you know I talk fast. In this case it may be an advantage. I have 20 minutes and 26 slides. I will buzz through a lot of the first part of the talk. A lot of it has to do with NIH and setting the stage and the context.

It did work.

I will divide the talk into three parts: what NIH is, the part I will buzz through, since most of you if not all know what that is; and the five-year plan, the part I will spend more time on, where I'll talk about some of the scientific projects we are supporting in relation to the five-year plan. I won't delve into the details; the last time I was in the lab was 1989.

I keep up with the science, in Nature, sometimes Cell, but we are talking about the 40,000-foot level. Closing will be the last part.

The mission of NIH: the old maps where you encompass the entire country, if not the world -- the National Institutes of Health, with the nucleus in Bethesda. The important part, which I want to emphasize: we are in pursuit of fundamental knowledge, the underpinnings of what will inform the latter part of our mission, which is application of that knowledge -- healthy life and reducing the burdens of illness and disability.

This is our vision for medicine: personalized, preemptive and participatory. Gene chips, for instance, will be one of the things we will tend towards. Personalized in that you can now pay, what, a hundred million -- less than that, a hundred thousand -- to get your own DNA sequenced. One of the things we will emphasize, to get ahead of the cost of medicine, is to get to disease before you even have it; identify it and get to it. Of course, participatory: without the people to participate in this new vision, either as patients or as participants in clinical trials, recipients of healthcare, we simply are not going to realize this kind of vision.

So we support the various models available, letting the science drive things: unicellular, multicellular like zebrafish, furry things with ears. Rats -- those are mice. These are primates, and of course the uber primate, as we think of ourselves. We support all of those model systems.

We have a number of evolving public health challenges. We have had a shift from acute to chronic conditions. I can't believe I lived into this century -- still mind-boggling. A shift from acute to chronic conditions: in the past, you got ill, you died. Now the medicine keeps you living longer, healthier. An aging population -- I am feeling that, turning 56, as the end of the boomer generation also ages, some more or less gracefully than others.

Disparities are a fact of life. They should not be; it's something we definitely need to address, and we are obviously working hard to do that. We have emerging and reemerging diseases -- Ebola, pandemic flu, other diseases that certainly provide a lot of challenges for us. Emerging non-communicable diseases: obesity -- the District of Columbia is one of the highest, per pound, I think, for children. Not a statistic we are necessarily proud of, but something we need to be cognizant of; there is actually a consortium of institutes at NIH working hard on the obesity issue. September 11 -- our world changed profoundly, and we have a number of projects, working with our DOD colleague there, NIH and other agencies, to address biodefense issues.

One thing that some people sometimes don't understand or forget is that NIH really, in a sense, has a dual nature. It's two institutions: there's the intramural research program, on campus, and the extramural world, where we support institutions and scientists. About 10% of the budget remains with the intramural research program -- the primary campus in Bethesda, with labs in other places, doing basic and translational research. Outside we support approximately 150,000 personnel -- a mushy number, hard to get; we don't have the data systems in place to get a good number. The budget outside of NIH goes to 3,000 institutions doing basic and translational research.

Someone mentioned earlier there was an independent institute; they are all part of NIH. Most get their own appropriation from Congress, but they are, in fact, considered part of the National Institutes of Health. The genome research institute, environmental health sciences, and the library of medicine are all part of NIH. We are a complex organization.

This is the 2008 enacted budget, which has been basically flat over the last few years -- difficult for investigators and institutions. Most of the budget, 82%, goes outside as research project grants or contracts; a lot goes out to various institutions.

I am going to buzz through this slide. ICCVAM, you know what that is: promoting new test methods, new science and technology. Where NIH comes in -- the research supported by NIH, and validation of regulatory safety tests -- is the key question here.

In terms of incorporating new science and technology, the research the NIH supports to understand biological systems and promote human health may open new possibilities for alternative toxicology tests. Here I'll have to start referring to notes and put my glasses on -- what happens when you get to be 56.

A friend of mine says you get new parts and lose old ones. This is a schematic of the NIH roadmap. You have heard of the roadmap, something Dr. Z put in place when he came on board in 2004, a way for us to do business involving all institutes and centers of NIH. The basic tenets are that the projects that go in cannot be done by a single institute or center; all participate. It provided a vision, a way for NIH to work together in a way it had not before. Congress was supportive of this; with the reauthorization act last year, or the year before -- 2006, I think -- they provided for what is called a common fund, which basically funds these projects. There has been an appropriation each year in the common fund that provides support for these kinds of projects. There are a number of projects here that I think will be of relevance to toxicology testing. Again, this is very fundamental research. It's not necessarily something ready for translation, but it does provide the seeds and the embryonic vision. I will say more specifically about one of those in a bit.

Examples from NIH research portfolio, I will buzz through, how am I doing for time? Okay?

Am I losing everybody? Everybody okay? Good. Okay, I will talk about zebrafish as a potential for the portfolio. Think of these as concepts, not necessarily something ready for -- review. As concepts for things that could be translated into things that could be applied for toxicology testing. The NIH Chemical Genomics Center, subject of a press release in February, I believe. Bill participated, others; EPA was part of this. Microarray gene chips, the promise of that -- a very interesting little story there I will have fun talking with you about. Three-dimensional tissue modeling, and basic intervention research as a source, a depot, for translation of some of these ideas.

Let's talk about zebrafish. I am a biologist, a gee-whiz biologist. With zebrafish you can get a lot in a small space -- I apologize for the cut-off; it's the translation to Mac. They develop quickly, they're highly fecund, very pretty; you can see the organisms, watch things. Better than that, there's genetics. You have a wonderful tool in zebrafish. They are ectothermic. All of those add up to a really powerful possibility for us as a model system.

This is the NIH Chemical Genomics Center. I won't belabor this because I am sure you know it was the subject of an earlier press release -- a collaboration between NIH and EPA, established at NIH to be a national resource in chemical probe development. The whole mission is to discover small molecules, probe biology, understand biological targets. This is a copy of the article in Science, and the whole point is that, with the EPA, the center is basically using chemicals known to have toxic effects to understand why they are toxic, what genes they affect, the mechanism by which they have that effect. If the techniques are useful they can be applied to large numbers of chemicals of which little is known. This is part of the roadmap process for other things like this that could be useful.

This is my favorite slide. How many of you know what nosology means? It basically means classification of disease. In the past, the way we thought about and classified disease was by phenotype -- basically by symptoms that are similar or, if you are a pathologist, by the way they look under a microscope. This actually is a new way to understand and classify disease: by the genes they are related by. With the genetics, the sequencing of the human genome, this is now possible -- the work of Dr. [ Indiscernible ] in California, Stanford, I believe. This may really alter the way we think about disease.

So the thought is to try to use gene chips and microarrays in a new way. For example, muscular dystrophy and heart attacks: there are common genes in those, characteristics that, if you view them in this way where you make those associations, are actually remarkably similar. It doesn't make a lot of sense intuitively, but these are relationships you can get to once you sequence genes -- the underpinnings of disease, commonalities. It's a new way to think about disease, and to think about how you can use microarray data. That's something for you who are in the toxicology business: how can we use microarray data in a new way? They are calling this -- we all have -omes -- the diseasome: basically the collection of all diseases and the genes associated with them.
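
The diseasome idea described here -- linking diseases by the genes they share -- can be sketched very simply. The disease-gene map below is a toy example for illustration, not real annotation data:

```python
# Toy "diseasome" sketch: diseases become linked when they share genes.
# The disease-to-gene map below is invented for illustration only.

disease_genes = {
    "muscular dystrophy": {"LMNA", "DES", "EMD"},
    "dilated cardiomyopathy": {"LMNA", "DES", "MYH7"},
    "progeria": {"LMNA"},
}

def shared_gene_links(dg):
    """Yield (disease_a, disease_b, shared_genes) for each overlapping pair."""
    names = sorted(dg)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            common = dg[a] & dg[b]
            if common:
                yield a, b, common

for a, b, genes in shared_gene_links(disease_genes):
    print(a, "--", b, ":", sorted(genes))
```

The resulting pairs are the edges of the disease network the speaker describes; applied to microarray data, the same projection would link toxic responses by their shared affected genes.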

Tissue models. 2-D culture doesn't mimic the complexity of tissues; we talked about that. We are not two-dimensional animals. [indiscernible] They provide important tools, but this happens to be native -- you can see biomarkers, gene expression, pathway analysis, and we know how to do this. We can culture -- even I could still do a cell culture. So we can do skin, lungs, the liver, all key sites of impact. 3-D data will be more physiologically relevant: you have interactions, barrier functions, temporal gradients, mixed populations of cells in relevant configurations. You don't have unicellular cultures in your body; other cells interact with them. By now we have all bought into the really high importance of the extracellular matrix.

If you think about an idealized 3-D tissue model, you have a scaffold and cells. You need to think about, for example, respiratory tissue: patterns, tox effects on tissue integrity, cell polarity, barrier functions. In a 3-dimensional model, perfusions are possible. If you are looking at a lung tissue, gas exchange is something that can be measured. [indiscernible] to look at host responses, cyclic stress, tissue growth and maintenance, and real-time monitoring for studying disease progression. It does not do what a real live furry creature with ears will do, or a person will do, but it's a step in the right direction in terms of trying to find alternatives, trying to find the limitations, the positives, the uses we can put to this kind of model. The perfect model may end up being a person, but you can't use a person for all the tests. This is a good start.

The thought is to think about this as a 3-D tissue: the scaffold, the cells. Think about the structure, the porosity, topography, gradients -- it's getting increasingly complex here. As the complexity increases, the difficulty in creating these increases. Perfusion, channels, vascularization, bioreactors, optimizing cell culture conditions, biomechanics, innervation -- nirvana. One of the most difficult things that we can do here is signal propagation, a localized response in a tiny piece. Host response is another part we could think about. Functional readout: sensing, non-destructive, a way to monitor various aspects of this 3-D model. And computational design -- systems were mentioned earlier, an area that holds great promise and is very complex. The promise is in the future; there are many things we need to work out still.

This is a 3-D liver model from a researcher, Linda Griffith -- a perfusion system. She has created a scaffold, put liver cells on it, used that to seed some single tumor cells from breast cancer or from prostate cancer, and has shown they form a solid tumor on the liver [indiscernible] -- a model for cancer, obviously not toxicology, but it's worth considering what kinds of applications such a model could have for toxicology testing, which I know you are interested in. What toxicologist is uninterested in the liver? Like the carburetor in your car --

Two publications: the policy paper from Science, and the NICEATM -- environmental toxins. The 3-D models can be complementary to these activities at many levels. So the ultimate outcome is going to be more predictive models of human responses that incorporate pharmacogenomics, as well as reduction of the use of animals in research. An opportunity to transform and accelerate drug discovery.

I said I would mention SBIR, the small business innovation research program -- a set-aside for innovative development of projects that are translational. It is targeted to small businesses, and in fact many biotechnology companies are applicants to this program. The point of this slide is not so much the science in this model; it's primarily a 3-D model of respiratory epithelium from a small biotechnology company, being used for screening for cancer, with a grant from NIEHS to validate it as an in vitro airway model for toxicology applications. That's one of the things the SBIR program supports in this particular area.

In closing, a couple of things I want to say. First, I want to be careful not to overpromise. We support fundamental research. Much of what I talked about is, I would say, years from being applicable to the high-throughput kinds of screens that toxicologists would use. However, the promise is there, and I think it's worth thinking about and exploring, so I applaud the NRC for having the vision to articulate a vision. We need to capitalize on the new technology, the new science. I think if we don't we will be left behind. It's just as simple as that. These are concepts, a vision -- not a finished proposal, not ready for study section review. As a community we need to think hard about what our direction is going to be. In the short term, is there any low-hanging fruit here, anything we could emphasize translation on? In the long term, minds much greater than mine, scientifically active, need to think hard about where we need to be in the next five, 10, 15, 20 years with this project. Thank you very much.

Thank you. I want to say the rate of delivery was just perfect, and I want to encourage the remaining speakers to go at the same rate. I can, I don't know if everybody else can.

Anyhow, I want to remind the group that we have a public comment and discussion period scheduled at the end of all the talks this afternoon. We have a few minutes after each for clarification; if there are any, this would be the time for that type of question.

Dr. Fox?

Nice presentation. I just have a clarification question. When you talk about these 3-D tissue models, these are distinct from tissue slices -- these are reconstructed models, although tissue slices are truly 3-D models themselves. Do you view, in your picture, in addition to the throughput, that tissue slices have an equal or parallel place in this kind of process? I am just curious from your perspective.

I would not want to close the door on anything that works. From that perspective, yes. Tissue slices have been around a long time. Even I have used tissue slices back in the day. I would think we need to start thinking about what are the limitations, if they haven't progressed beyond tissue slices, what are the limitations in terms of throughput, everything else, if they are going forward to the next 15, 20 years. I would not want to close the door on any of these things. Yes.

That reminds me -- thank you for bringing it up; I neglected to mention that 3-D models like this are going to be part of the NIH roadmap. A solicitation, part of that, for innovative, transformative research -- one of the areas of research will be 3-D tissue models.

Other questions?

Thank you very much.

Our next speaker is Dr. John Bucher, NIEHS/NTP.

The National Toxicology Program for 30 years now has worked under the four goals you see here; two of them clearly relate directly to things we have been talking about today -- toxicology, developing and validating test methods. We have been in business 30 years, coming up in October. These are the numbers of chemicals we have looked at with different endpoints; you can see an enormous amount of work. It's not really going to be sufficient to take us into the future. Recognizing this a number of years ago, we developed the NTP vision for the 21st century: moving from the observational science we have been practicing to more predictive science based on mechanism-based biological observations.

We articulated this in 2004 in the NTP roadmap, and three of the elements relate specifically to things we are talking about: further evaluate, develop high-throughput capabilities, and -- to toxicology information to add value and understanding.

You heard an excellent presentation by Kim. Another report, an NIEHS-sponsored report, addressed the application of toxicogenomic technologies to risk assessment. Basically these two documents had in essence the same goal, which was to begin to articulate the ways in which what I would like to call high-density data development, through high-throughput screening and genomic technologies, can bring information from these kinds of technologies into the risk assessment realm.

As I mentioned, the high-throughput screening part of the NTP roadmap had this program in mind, with three main goals: identify mechanisms, develop predictive models, and prioritize substances for further in-depth toxicological evaluation. In 2005 we began a partnership with the NCGC you just heard described, to identify batteries of cell-based and biochemical assays to probe toxicity pathways. In the past years we supplied chemicals, assays, and financial support. You will see in a few minutes the data generated from our high-throughput screening assays for the toxicology testing program: toxicity pathways, immune function, and cancer. I know it's been criticized in the past few minutes, but toxicity pathways are common to many diseases. We picked immune function because we have databases in animals, and there's also a large amount of information from rodent models on the processes involved in immune function and disease in humans.

In 2007, to aid the progress of these activities, we created a branch called the Biomolecular Screening Branch, for high-throughput and medium-throughput testing. This branch is headed up by Dr. Ray Tice, the acting branch chief. Under this program we are developing the analysis tools and approaches to allow integrated assessment of the high-throughput endpoints and their associations with the traditional toxicology and cancer models. We are also carrying out the automated -- with --

The permanent branch chief search is currently underway, and we are recruiting bioinformatics staff to help out with this activity. Some of the activities of this branch so far: the first chemical set, about 1,400 chemicals, has been sent to the NCGC and screened in the 96 assays currently underway there, in relationship with our activities with EPA. An additional 4,000 chemicals have been selected to send to the NCGC for evaluation. As we continue to send assays to the NCGC, we are working primarily with commercial suppliers to provide them, and we have scheduled a public meeting on September 11 and 12 of this year to have commercial suppliers come in, present their wares to us, and in essence argue the merits of the various assays they have developed to date, such that we can make some selections to get the most we can out of the NCGC activities.

Targets on the toxicity pathways you have heard about have been identified, and we are working toward covering as broad a set of categories as we can in this arena. The interagency agreement we have is being increased in terms of the funding we are providing, and we created a memorandum of understanding with -- and --, signed back in February. We are in fact a cosponsoring agency for the 3-D tissue model program you just heard described in the last talk.

Under the NTP interagency agreement with NCGC, we provided a number of assays related to toxicity: apoptosis, the antioxidant response pathway, DNA repair pathways, cytotoxicity in chicken cells deficient in DNA repair, and ion channel pathways. These are part of the current battery of 96 assays now underway. You can see the breakdown of pathways they believe are part of the overall armamentarium related to toxicity; the bottom one is 10 nuclear receptors provided by the EPA, providing an enormous amount of very interesting information.

It's already been mentioned, the MOU signed back in February covering high-throughput screening, toxicity pathway determination, and interpretation of findings, a joint program of the genome research institute, the chemical genomics center, and EPA. I put the slide up to show the pictures taken at the press conference held in conjunction with the signing of this memorandum of understanding, to show you that Dr. Zerhouni actually came to the press conference, participated in it, and is very interested in developing the activities under this memorandum of understanding. He didn't have to be there, could have been any number of places, but chose to be there that day. That meant a lot to us.

What do the partners bring to this? I will not get into the discussion Kim brought up about whether this should be carried out by one organization or a conglomerate. In effect, the reality of funding at this point is that if we want to go forward, we need to do it in conjunction with our partners with similar interests. I would love to start up a new agency with $300 million; I don't see that in the future right now. The NTP, NCGC, and EPA all bring complementary strengths to the program. We covered the boxes in all the areas: historical toxicology data, and experimental toxicology capabilities to go back and analyze, in standard animal models, the output from the high-throughput screening models. What's really important here is that we have a structure that allows us to do iterative testing. We are not going to understand the high-throughput screening data output unless we can go to animal models, take it back, and understand where we are, what the high-throughput screens are able to tell us and not able to tell us.

There is a niche for the lower-organism models, the C. elegans and zebrafish. The reports call for a targeted testing program that allows you to move from the very lowest level of cell-based assays up to an intermediate level before you get to mammalian models; there's a place for developing that in this system so toxicology is covered. There is also some effort to begin to look at the effects of genetic backgrounds, genetically defined cell lines that differ in certain genetic properties.

Also, NICEATM obviously brings the validation experience, and a format in which you can actually deal with it.

How is this activity being carried out? The three agencies, I think, have been working together remarkably well. We created four different working groups to address the issues we have to deal with. There's a working group on pathways and assay development and selection. You will notice these are headed up by folks from NTP, NCGC, and EPA in every case. That group identifies suitable assays, with a focus on human cells, and is looking at trying to figure out ways to address reduced biotransformation capabilities and at evaluating reliability.

A second group is looking at defining and creating a chemical library of 7,600 compounds to run our assays through. The NCGC is providing a drug library of 2,000 compounds, and FDA is in possession of human data in relation to the toxicity of these 2,000 compounds. We are also trying to establish a library of water-soluble compounds; most of what we are able to analyze in high throughput has to be soluble in DMSO, so we have to take appropriate action there, and there is progress in that area.

There's a group looking at bioinformatics; the strength there primarily lies with EPA at this moment. It is defining positive versus negative calls, which is difficult when you have single-endpoint assays, as some of them are. The NCGC is running 4-point response curves, and sometimes it is hard to tell a negative from a positive. The group is also evaluating patterns of response and their relationship to adverse health outcomes, the data integration step.
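
A toy sketch may help picture this positive-versus-negative call problem. This is not the actual NTP/NCGC analysis pipeline: the grid-search Hill fit, the micromolar concentration range, and the 30-unit efficacy cutoff are all invented for illustration.

```python
def hill(conc, top, ac50, n):
    # Three-parameter Hill curve: response rises from 0 toward `top`
    # as concentration passes ac50, with steepness n.
    return top / (1.0 + (ac50 / conc) ** n)

def fit_hill(concs, resps):
    # Coarse grid-search least-squares fit (a stand-in for a proper
    # nonlinear optimizer); returns (top, ac50, n, sse).
    best = None
    for top in sorted({max(resps), 50.0, 100.0}):
        for x in range(-8, 9):                  # AC50 from 0.01 to 100 (uM, assumed)
            ac50 = 10.0 ** (x * 0.25)
            for n in (0.5, 1.0, 2.0):
                sse = sum((hill(c, top, ac50, n) - r) ** 2
                          for c, r in zip(concs, resps))
                if best is None or sse < best[3]:
                    best = (top, ac50, n, sse)
    return best

def call_activity(concs, resps, efficacy_cut=30.0):
    # "Active" only if the fitted plateau clears an efficacy threshold
    # and the curve beats a flat zero-response baseline.
    top, ac50, n, sse = fit_hill(concs, resps)
    flat_sse = sum(r * r for r in resps)
    return "active" if top >= efficacy_cut and sse < flat_sse else "inactive"

four_pt_concs = [0.01, 0.1, 1.0, 10.0]          # a 4-point dilution series
clean_curve = [hill(c, 100.0, 1.0, 1.0) for c in four_pt_concs]
noisy_flat = [1.0, -2.0, 3.0, 0.0]
# call_activity(four_pt_concs, clean_curve) -> "active"
# call_activity(four_pt_concs, noisy_flat)  -> "inactive"
```

With only four concentrations, a low-efficacy noisy well and a genuinely weak active can fit almost equally well, which is exactly the ambiguity the speaker describes.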

A fourth group, which I am on, is the targeted testing group. This group is trying to understand whether the best approach to evaluating a positive response in a particular assay on a particular platform is to reproduce that assay, or to take that endpoint and look at it on another platform that has a whole different cell type and set of conditions around it. Those are the kinds of questions we are dealing with in this targeted testing group. We also would like to use this group to decide which findings should go back to in vivo testing, the iterative testing I mentioned earlier.

What is the universe of chemicals we really need to worry about? You always hear about 80 to 100,000 in commerce, but in use in daily life, only about 13 or 14,000. The EPA has come up with a specific number -- I don't know if I put credence in that, but if you sort through that enormous number, of the structures with known chemistry you have about 10,000 undergoing assay development currently. Looking at those with the appropriate physical/chemical properties to be analyzed in high-throughput screening, there are about 7,000 we could conceivably be working with in this activity. That's what we are targeting: to get all these chemicals into the system now so we can begin to run them in all the assay systems we put together.
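
The winnowing from tens of thousands of chemicals in commerce down to roughly 7,000 screenable ones can be pictured as a property filter. The fields and thresholds below (DMSO solubility, volatility, a molecular-weight cap) are purely illustrative assumptions, not EPA's actual criteria.

```python
# Hypothetical inventory records; names, fields, and values are invented.
inventory = [
    {"name": "chem-1", "mol_weight": 250.0,  "dmso_soluble": True,  "volatile": False},
    {"name": "chem-2", "mol_weight": 1200.0, "dmso_soluble": True,  "volatile": False},
    {"name": "chem-3", "mol_weight": 180.0,  "dmso_soluble": False, "volatile": False},
    {"name": "chem-4", "mol_weight": 95.0,   "dmso_soluble": True,  "volatile": True},
]

def hts_compatible(chem, max_mw=1000.0):
    # Keep only compounds that dissolve in DMSO, won't evaporate out of
    # the assay plate, and fall under an assumed molecular-weight cap.
    return (chem["dmso_soluble"]
            and not chem["volatile"]
            and chem["mol_weight"] <= max_mw)

screenable = [c["name"] for c in inventory if hts_compatible(c)]
# screenable -> ["chem-1"]
```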

Karen will be talking later about the ToxCast program, so I won't go into detail. It's the EPA's arm, if you will -- I look at it as the EPA's arm of this effort, though they believe it's a stand-alone program. They have taken the approach of going to a number of commercial organizations, issuing contracts with those organizations and providing them with chemicals, and they have come up with a number of different assays -- in fact, over 400 different assays -- in which they are currently running 300 chemicals for the variety of types of information you see here. One of these assays is actually a genomics-based assay, so there will be integration between high-throughput screening and genomics. That is the one they are making the least progress with at this point, but it is the attempt to integrate the two types of information.

There's a phased development of ToxCast. I won't go through this in detail, but it gives an idea of the timeline, on the right-hand side, for when they believe they will be able to start running thousands of data-poor chemicals through all of these assays to allow them to generate prediction models and prioritize chemicals for further evaluation. This is in the fiscal year 2011 to 2012 range.

I will spend some time talking about WormTox, a program within the NTP looking at C. elegans and its utilization for mid-throughput screening. There goes the movie of the worm. You know about worms, so I won't tell you much about C. elegans; it is one of the most well-understood, well-studied animals we can use in toxicology screening. What can you monitor in the medium-throughput format? Growth, feeding, reproduction, gene expression, development. You can screen these using wild-type C. elegans, a whole battery of C. elegans strains with genetic mutations, transgenic nematodes whose genes we know, and C. elegans that have had genes knocked out through RNA interference. You can order them; they are frozen. You can expose them to hundreds of compounds and look for changes in phenotype and protein-based responses of C. elegans exposed to a variety of toxicants. You can use microarrays and adapt methods for high-throughput screening in C. elegans. We are developing informatics tools; the database developed by the national center for toxicogenomics is just about ready to be utilized to integrate information from all of our databases at -- and NTP. This database is called the Chemical Effects in Biological Systems knowledge base. It can include information from in vivo assays, in vitro assays, genomics, proteomics, and high-throughput screening, and allows information to be integrated in a way that lets you gain knowledge you could never otherwise tease out, from the associations you can put together in this kind of database.
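
The kind of cross-domain integration described for the knowledge base can be sketched as collating records by chemical. The records, field names, and values below are invented for illustration and are not the actual CEBS schema.

```python
from collections import defaultdict

# Invented example records from three data streams.
in_vivo  = [{"chemical": "chem-A", "study": "2-year bioassay", "finding": "liver tumors"}]
hts      = [{"chemical": "chem-A", "assay": "ARE pathway", "call": "active"},
            {"chemical": "chem-B", "assay": "ARE pathway", "call": "inactive"}]
genomics = [{"chemical": "chem-A", "platform": "microarray", "signature": "oxidative stress"}]

def integrate(*streams):
    # Collate every record under its chemical so associations across
    # in vivo, in vitro, and genomics data can be queried in one place.
    kb = defaultdict(list)
    for stream in streams:
        for record in stream:
            kb[record["chemical"]].append(record)
    return dict(kb)

kb = integrate(in_vivo, hts, genomics)
# kb["chem-A"] holds the in vivo, HTS, and genomics records together.
```

The point of such a chemical-keyed view is exactly the association-mining the speaker describes: once all data streams hang off the same key, cross-domain patterns fall out of simple queries.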

So this is moving forward. As I mentioned -- I don't need to go over this, we just heard about it -- we are one of the agencies signed on to support the 3-D tissue models under the roadmap, and we are pleased this was selected to go forward.

Finally, I have to show this slide since everybody else has. Again, without getting back into the issue of whether it should be one agency or many, at least what we tried to do was lay out a logical way, under the conditions we face now, to accomplish, or at least begin to address, the challenges laid out in the NRC report.

Whether we can pull this off with all the changes that will be coming at NIH, I don't know. We are going to go forward as best we can with this kind of approach. Finally, what do I see as how we would approach an evaluation of toxicants in the 21st century? I envision staged evaluation: initially examining agents in cell-based in vitro assays, then taking a certain number of those agents through the model organisms, C. elegans and zebrafish, and then, at whatever stage, whole-animal studies. One of the things this type of approach brings to toxicology that we haven't had before is that we will have an idea of the endpoints, the genes, what effects chemicals are having biologically in vitro, before we design animal studies. Instead of putting a chemical into an animal, into a black box that we think will cover all the endpoints and give a readout we can understand in terms of human health, if we take a chemical we know hits six different nuclear orphan receptors, shows a pattern of response similar to another set of chemicals, and then design around that, we could bring an enormous amount of power to the field of toxicology that we currently don't have.

This is the iterative process I have been talking about, to develop the predictive models to take us forward into the future. As for goals and timelines, I have probably taken up too much time at this point, but I will just tell you that we managed to get it all into five years. I would be happy to answer questions at this point. Yes?

Obviously I was pleased to see there are 100 known human toxicants identified within the EPA ToxCast program, but are those compounds now present within the NCGC?

Yes, all the compounds that are going through the EPA process are going through the NCGC.

What is the array of diseases represented by those 100 --

Great question, and I have no clue.

I wonder if somebody could give me that listing and [indiscernible] the basis of their selection. That would be important. It's critically important that we join the world of toxicology to the world of human disease -- not with a hand-waving, scare-the-hell-out-of-people approach, but by understanding the world of human disease and how these agents fit in. I think it will be important to make certain we don't have just a list of chemicals, but a list developed in a very thoughtful way.

Nice approach. In reality, your previous slide, penultimate slide, it's systems biology, predictive -- that's --

I don't have anything against systems biology.

That's the Leroy Hood definition. To parrot what Roger says, expand this: you say there are six known pathways surrounding -- toxicants; the sentence I would like to hear after that is that these six known pathways are also involved in obesity, neurodegeneration, or -- not just that another group of compounds activates them, but that they interact in a Venn diagram like --

This is the tiered testing approach that wasn't presented in the original document as well as you presented it today. Thanks.

Other questions for Dr. Bucher?

Okay, thank you.

Question: [indiscernible]

I don't know that I can come up with a correct number, but they will tell us that they put about half of their man-hours into the toxicology program in terms of assay miniaturization and running the assays -- a much larger number than the amount of money we provided to them.

One more comment, and then we will go to break.

How does this work? Push the green button?

Between the last few speakers, my talk has almost completely been given. I have a different perspective that I will talk about; I am at a slightly different level. I will talk about a group we formed at EPA that has decided to take on the challenge of the paradigm presented in the 21st-century toxicology document from the NRC. So I am basically looking at it more from the perspective of a senior scientist, a person with experience in the trenches as a regulator, and so my talk is a little more descriptive. I realize there are a lot of issues and a lot of things that have to be taken into account, but EPA has decided to take the plunge. We have taken a lot of time to try to put a strategy together, and I will talk about that briefly. So I will try to give you an update on what we have been doing. This talk concerns the highlights of current and future directions related to alternative methods research and development at the EPA.

I want to thank EPA's office of research and development for providing some of the slides used in this presentation. As you know, Hal Zenick of ORD gave a talk on these activities at ICCVAM's tenth-anniversary presentation. He allowed me to borrow slides, and I want to thank Bob Kavlock, head of EPA's -- for providing information for the talk.

EPA and ICCVAM have a long history, since 1994, of working together with other federal agencies toward the goal of advancing the principles of the three Rs, recognizing that this needs to be done in the context of the agency's mission to protect human health and the environment and to promote high-quality science. I think this agency mission statement is something we all need to keep in mind as we consider the challenges of the NRC report and recommendations. EPA has benefited from the use of ICCVAM analyses of strengths, reliability, and applicability domains.

We have had a lot of collaborations with ICCVAM, past and ongoing. We have been actively involved in the development of various types of guidance documents, such as guidance document 34. We have certainly been involved in a lot of validation activities; we have always had representation on all the work groups and workshops, with people who worked very diligently to help ICCVAM and NICEATM plow through the projects and issues, and we helped develop performance standards for alternative test methods.

The person who is the national coordinator for the OECD toxicity test guidelines program is an EPA employee, so we certainly work together with the various program offices at EPA and with ICCVAM and NICEATM to develop better interaction with regard to projects of interest in the guideline program. There are many examples of that currently ongoing; I think there are BCOP and ICE test guidelines being considered, among many projects.

EPA has participated in many task forces and workshops on toxicity testing issues with ECVAM and in other ways.

Of course we are also involved in following up on the various nominations that EPA put forth for ICCVAM and NICEATM evaluation. These are some of our recent activities: exploring scope expansion of the ocular irritation and corrosion assays. We have a letter to ICCVAM and NICEATM from OPPTS, our overarching office, requesting technical review of a non-animal approach to eye irritation for cleaning products. We also have plans to fund a workshop to update the humane care of -- The office of science coordination and policy, of which I am a part, has entered discussions with the EPA's office of research and development to jointly try to fund this particular project.

We also have participation on the -- advisory board, and the upcoming -- did I say that right? Developmental toxicity [indiscernible] testing, right?

Now I want to get into what I just mentioned, our efforts to develop a strategic plan. EPA established an agency-wide future of toxicity testing work group, with the abbreviation of the --, to develop a response to the NRC report. So we developed a strategic plan for the future of toxicity testing at EPA.

Components of the plan include a number of strategic goals which involve -- and you have heard a lot of this before, but I will summarize -- toxicity pathway identification, prioritization, pathway risk assessment, and institutional transition: what we are going to have to do in order to incorporate these things into our programs as we make progress in implementing the strategy. The plan also includes applications and impacts, as well as drivers for the proposed new approaches; I will talk about that more when I talk about ToxCast. Computational toxicology approaches and a toxicity pathway knowledge base are key aspects of the plan. We got good news from our agency's science policy council, which reviewed the plan; it's been endorsed by them. There were a few comments they made, tweaks here and there, but basically it's been endorsed. The big question, I guess, is money. We know OMB is being a little tight with funding right now.

We do see, however, that the plan is a critical step in a long-range process to move forward with the strategy for a testing paradigm shift at the agency. So we're excited anyway.

So, as you have heard, the agency is collaborating with NTP and NCGC under the memorandum of understanding. You have heard about that. We hope that the strategy will help to facilitate some of the interactions we have with partners in the MOU. John talked about some of the groups established as part of the MOU partnership. My information may be a little different than John's -- I was given a figure of 6,000 chemicals -- and the four working groups were as follows: similar. I am not sure about the number of chemicals, but more or less that's the situation.

Now I want to go into some aspects of the ToxCast program. Why do we want to develop a new approach? We commissioned the NRC report and want, of course, to follow up on the recommendations, but we have some specific needs at the agency that we are trying to address. One of those needs is to obtain information and address information gaps in chemical hazard and risk assessment for many different agency programmatic needs. Examples would include different exposure scenarios, mechanisms of action of toxicity, and life-stage sensitivities to better understand and aid in assessments. You heard about mixtures before; sometimes in cleanups we need to know something about them, and there are other mixture issues as well, and species extrapolation. These are just a few examples of the many things we do at the agency. We have many, many chemicals, many classes, for which we don't have as much toxicity information as we would like. I worked many years in the office of pesticide programs, where we are very data rich. A lot of other programs, because of the statutes they operate under or for other reasons, don't always have the opportunity to have the data they need, or would like, to make decisions. They try to do the best they can and use the best available science for decision making -- that's an agency mantra -- but depending on the program, there may be more or less data available. We would also like to exploit recent advances in toxicogenomics; there are statutory authority differences that can impact the amount and type of data we have.

Depending on the program, the cost of the conventional approaches can be considerable, which is sometimes an issue, and of course we want to reduce animal use where possible. We need the best information we can get for ecological risk assessment as well, and to refine hazard and risk assessment where possible.

ToxCast is a major player in our strategy. As you can see, it's also playing a role in the MOU work. What is ToxCast? It's really a computational toxicology approach that uses a variety of high-throughput screening assays and techniques to derive chemical profiles, signatures for hundreds of endpoints, which may have potential relevance for carcinogenicity, developmental toxicity, neurotoxicity, chronic toxicity, and other things.

I wanted to put up the link to our website; there's the linkage between -- and the ToxCast program, which is associated with our national center for computational toxicology. What's the utility of this approach? Characterization of toxicity pathways, and the ability perhaps to use the resulting hazard predictions for screening and priority setting for further testing. I think this is seen as very important. If we could get information that would help us make assessments about the priority a chemical might have for further information gathering, or help us figure out what kind of further information it might be good to have, this would be a positive thing. It would perhaps be useful in obtaining mechanism-of-action information, and this can help us refine a risk assessment, so that's why we are excited about that kind of potential. It will allow us to share data; the data will be publicly accessible, and we can obtain data for targeted testing. We have relational databases to store information, and information from ToxCast will populate them. Another slide shows a couple of the databases associated with our efforts to use new technologies.

You have seen this slide before. I put it in basically to illustrate how high-throughput screening can shave a lot of time and cost. I am not sure about the human relevancy issue -- it depends on the toxicity question whether something is relevant to a human or not -- but it shows you can screen a lot of compounds more quickly than with conventional testing.

This is a pretty important slide. I will say I am taking the dates as tentative; the 2008 dates may be reached. The development is going to be phased. The first phase has about 300 chemicals or more. What's really important is that we are going to use data from the pesticide program, which I already told you is a data-rich program: data from lots of different assays -- chronic, subchronic, lots of systemic, developmental, mutagenicity, and carcinogenicity data -- a big data-mining effort to pull data from the pesticide program. I really do not have a detailed idea of how it's being done or how well it's being done. The purpose, of course, is to get this signature development activity up and running. I don't need to go into the rest of the slide, but phase one is very important, and part of the success of ToxCast is going to depend on how well we can meld the new approaches with the existing data we have -- or integrate them; integration may be a better word.

These are some of the databases we have; when you get the slides you can look at them and get more information.

What are some of the benefits of toxicity pathway elucidation? One of the hopes is to use these computational models to assess pathway risk when you might not have as much data as you would like to do an assessment, or to improve current risk assessments. One thing we have is the development of virtual tissues, organs, and systems with the goal of linking exposure to response and effects; an example of this is the virtual liver project. The hope is to be able to predict liver injury by chemicals.

There was a diagram there -- did it not convert? IT people? I can assure you there was a diagram there. You have already seen it, so we will move on -- there it is. I see, okay, I needed to keep pressing the button. You have seen this; it basically shows one of the ways we envision this paradigm shift functioning. You have already heard people talk about it, so I will move on.

Moving forward with the vision, we know there are going to be challenges in the vision's development and acceptance. It's an iterative, long-term process. There will be complexities of data analysis and interpretation. We will have to look at the adequacy of the new approaches to meet regulatory needs, and look at them relative to existing approaches. Of course expertise and training will be involved.

There are these other considerations too: in order to make the whole thing work you have to have a comprehensive suite of in vitro tests and targeted tests for adequate data for decision-making, adequate models, infrastructure changes in the agency to support the research, funding, and outreach to stakeholders to explain all of this. People may feel nervous about some of these new approaches. One of the things we will do, as I understand it, is use case studies -- work at this in small chunks and see how our approaches can work in small case-study scenarios.

Of course, there are validation issues and challenges for these rapidly evolving technologies. Some approaches are multi-layered or multi-leveled; we may have to think outside the box a little bit about ways to approach the validation challenges associated with some of these new technologies and their applications in the regulatory arena.

You have seen these ideas about validation before: the in vitro systems do need to meet a bar in terms of how they compare to existing techniques that may already be useful. Another idea is to look at validation as fitness for a purpose. These are some validation needs that may be associated with these new technologies: platform validation, software analysis validation, biological validation, regulatory validation, pathway knowledge validation, et cetera. There are a lot of things to think about when we think about validation. So basically, one size may not fit all.

As I said, we may need to think outside the box a little bit about what we need to do and about the issues involved in validating a particular item.

This is my last slide coming up. Many years ago, I and other people, one of whom was Len Schechtman, were participants in an ICCVAM/NICEATM event that produced a meeting report on the validation of toxicogenomics-based systems for regulatory use. I was co-chair of the group, and privileged to be on it; Len was the former chair of ICCVAM. We were looking at ways to think about how we would validate toxicogenomics technologies that were rapidly evolving. One thing we came up with is the idea that we would be moving from a lower level of complexity to higher and higher levels where you have to integrate more and more information. Basically, from one level to the next you would have a prediction level and would have to substantiate or validate that level: you go from the gene chip all the way to relevance for human health. There's the reference, and this was an ICCVAM-cosponsored activity. This is all I have; I'm glad to take any questions.

Questions for clarification?

Question: I am interested in understanding where those 100 known human toxicants fit in. My basis for that: as described previously, they are not in your system. To me, what EPA is doing here is quite frankly an end run around these things -- taking a series of tests and starting to apply them without any clear validation of the tests. The validation pathway is "the data-rich rodent toxicity data", primarily for rodents. We are not interested in protecting the health of rodents; we are interested in people. I ask where those 100 known human toxicants are in there, and if they are not in there, why aren't they in the first series?

First of all -- I have to find out exactly which -- you are talking about. Bear in mind I sat on the -- but I don't know all aspects of what -- I can go back, try to find an answer to your question. I disagree with you a little about the use of the data mining -- databases, that's -- I worked with -- many years -- all I am saying is we use that --

Can you use the microphone?

Sorry. To make regulatory decisions. At least to me, based on what I know of what's gone on, it makes sense to use a rich database to try to compare information you may be getting from some of these systems that are trying to simulate biological systems.

I would agree with her on the utility, but say those databases need to be preceded by, or at a minimum paralleled by, a look at known human toxicants. That's what we are interested in.

We may very well be doing it, maybe I can talk to you offline to get your exact question so I can take it back, get the answer.

I will be pleased to provide the exact question, and I would like the question and response entered into the record for the reader.

The question I am going to ask, and your response, so it's a part of the record.

I can go back, get a response. I need to know exactly what your question is.

Any other questions for clarification? Okay, let's take a quick break, 10 minutes, be back here -- couple minutes before 4:00.

4:00, thank you.


[ Captioner Transition ]

Session on afternoon break until approximately 4:00 Eastern Time Zone.

Unless I'm totally wrong, others will claim it too. I'm going to start really high, like a flight heading into Dallas: I'm going to come down from altitude, but not talk about specific projects. I want to give you an understanding of why we do what we do, and that will give you a clue how we relate to ICCVAM. I appreciated the talk about the NIH budget. If you take a zero off the NIH budget, that's roughly the budget of the FDA. Take two more zeros off and that's our budget. The FDA is responsible for advancing public health by helping to speed innovations.

In that regard, the FDA regulates about [ Indiscernible ] of the GDP. It is divided into centers accordingly and operates under federal laws. I apologize, the Mac doesn't like some of my fonts. As far as foods, foods are regulated; we have a center for food safety, which runs under certain laws. We have a center for veterinary medicine. We have a center for drugs, which is the largest component of the FDA. We have a center for devices and radiological health. And then we have a center for biologics, which regulates vaccines and other biologics.

In 2004, Janet Woodcock and the rest of us all realized that the time it takes for a drug to get from concept through the FDA was too long. We came up with the Critical Path Initiative. It was a way to innovate the process, to shorten the time and expense to market. It's a path to enhance medical product development by creating better tools, and this is where it overlaps with ICCVAM. Remember that the FDA also regulates the manufacturing process.

In 2006 there was a Critical Path Opportunities List; I provided the website. It lays out 76 different opportunities for what researchers, the drug industry, and others can do to help speed up this process. The plan describes the accomplishments of the critical path but also specifies opportunities that could help to speed development. It is broken into six specific areas. One: the FDA needs better evaluation tools, better biomarkers and standards, biomarkers that are predictive of disease, specific biomarkers for certain disorders, safety biomarkers in the preclinical area, et cetera. It also asks for advancements in streamlining clinical trials; harnessing bioinformatics, which is key to understanding these high-throughput, high-information-generating tools and assays; modernizing the manufacturing process, both how to modernize it and how to monitor it; and dealing with the upcoming slew of products that will come through the agency from nanotechnology. The FDA also requested assays to help with rapid pathogen identification and better predictors of disease. The FDA is focusing on pediatric issues, including BPA; the young in this country are more susceptible to toxins. Again, keep in mind the FDA wants to shorten the time without sacrificing safety. NCTR is one of the six centers of the FDA, but our mission is different: ours is to conduct peer-reviewed research and provide technical advice and training. 
We do a lot of animal-based research for the other FDA centers. As part of that, our research is focused on understanding critical biological events; that's where we overlap. NCTR has six research goals: to advance scientific approaches and tools, which is the wave of the future, so you can personalize drug modalities in people; to develop best practices; to develop new methods and standards to help understand regulatory toxicology; and to develop testing platforms in food safety, biosecurity, food biodefense, and counterterrorism.

NCTR has both hypothesis-based and investigator-initiated research. In our case, most of our research projects are designed to meet Critical Path Initiative or other agency requests. The most common way a project is generated is that an FDA center says we need to understand the toxicity of this, or we need a better way to measure the toxicity of this. That's where most of our projects originate, along with meeting one of the 76 opportunities. It also occurs that other agencies request our research expertise or capabilities to address their questions.

Now, if I look at the four key challenges identified in the ICCVAM document, there's overlap in two areas: identifying priorities and conducting and facilitating test method development, and incorporating new science into new test methods. I don't think we overlap much with the last two goals. When you look at it, it overlaps a lot with our goals and programs. What I did not want to do is come up here and spend 20 minutes going through the projects; I guarantee it would put you to sleep, especially at this time of day. I will go through them a little bit to give you some perspective. Our research divisions are divided by discipline: biochemical toxicology, genetic and reproductive toxicology, neurotoxicology, systems toxicology, and veterinary sciences.

We're also using [ Indiscernible ] assays and developing biomarkers. Within the same assay we're trying to validate biomarkers of exposure and biological impact, rather than using separate animals for each project. The same goes for Tamoxifen: we're correlating those biomarkers with biological outcome. For antiretroviral drugs, this involves neonatal human exposure to reduce the possibility of HIV infection; we're using biomarkers of effect to try to predict that in humans. The next several are right within the scope of what ICCVAM is trying to do. For some of these food-borne toxins, such as saxitoxin, the mouse LD50 is the only available assay. We have an investigator trying to validate [ Indiscernible ] as an alternative to the mouse assay. We're also looking at real-time PCR assays; that's been working quite well.

Again, another project on ricin. You can see a lot of the assays are in the food biodefense area, but they are also moving toward alternative methods. In one we're using a modification of the local lymph node assay: instead of applying the material to the ear, it's an injection into the skin.

In this particular one, what we've discovered is that several of these compounds found in dietary supplements have a common element. We're looking at gene expression in Big Blue rats to see if those signatures are predictive of the biological outcome.

We have an epigenetics group, and I can go on and on about these. We're looking at assays that have the potential to be in vitro assays predictive of animal models, in this case a [ Indiscernible ] assay, and looking at the accepted [ Indiscernible ] assay to see if there are certain [ Indiscernible ] of outcome in that assay. In our personalized medicine work we're looking at mathematical models for personalized medicine outcomes. As far as nanotoxicology, a group is looking at PC12 cells in nanomaterial toxicology. We don't have a clue right now; we're starting to move in that direction from an in vitro standpoint. So on and so forth. The MitoChip assay uses a gene chip to determine the outcome.

One of the things that happened last year: Jim [ Indiscernible ], who is acting director of [ Indiscernible ], gave a presentation at the Sixth World Congress. This is based on [ Indiscernible ] technology. In that presentation, one of the questions that comes up with genomics is: how good is it? We've known that we need calibrated samples, we need metrics and thresholds, and we need thorough information. One thing that happened at NCTR was this project (please don't try to read all of this), called the MAQC project. Many different agencies and many different platform manufacturers participated, and there have been several publications on the MAQC project. This all feeds into the FDA's voluntary genomic data submission program for drugs; this effort supports that, in trying to understand how each platform performs in multiple hands. I won't go through the details; you can email me. It was a very large project, now completed, to try to codify the genomics databases in use. From Jim's perspective, the ability to go to a smaller number of animals and relate the results to humans makes these good, useful tools. We've started codifying how to use these assays. The goal is to come up with biomarkers, or to take a systems toxicology approach to understanding the impact of any test material on animals and how that may predict human response. It's not only systems toxicology and the omics; it's also the assays that are being conducted: understanding the techniques that are used, such as DNA methylation patterns; better understanding of the statistics; understanding polymorphisms and SNPs, to look at the role of SNPs in outcome; and also alternative tests.

So my conclusion is that, in fulfilling our role in the FDA and the Critical Path [ Indiscernible ], NCTR is addressing some of the components: conducting [ Indiscernible ], but also incorporating new science and technology into existing methods. With that, I believe that's my last slide. Thank you.

Thank you, any questions for Dr. Howard?

I have a question. What has been the success of the program on voluntary submissions of the genomics data?

That's a good question, but it's not my area of expertise. I think there has been pretty good participation from the various companies; they see it as the future. Richard, do you want to comment on that?

It was mentioned earlier this morning. I think it has been quite successful in terms of allowing FDA to look at the data, to test-drive the data, try to see what is there and what might be used, and to have informal discussions with industry components on that. In a related process, we've published an ICH guidance that drew on a lot of our experience with the voluntary genomic data submissions. I think it's been helpful, a worthwhile model, and it is still continuing.

Any others? Okay. Thank you, Dr. Howard. Next we have Dr. Rich McFarland, also FDA, but CBER.

It's a pleasure to be back here, not in my capacity as a working group chair, but as the ICCVAM representative, to give an overview of our three Rs research. Dr. Howard did a wonderful job presenting the overall FDA mission, the components of the FDA, and the Critical Path Initiative; I will not spend time on that, but it is also applicable to CBER. I'm going to highlight, without going into much detail, a few graphs that are illustrative of the research we do at CBER that is product-related but also related to the three Rs: a couple of examples from marketed product areas and a couple from emerging product areas.

What is our mission? It's similar to the FDA mission in general; this is right out of our statutes. ICCVAM's mission we're not going to dwell on, except to point out that protection of human health is also an ICCVAM mission. If you take the two missions together, they are complementary. The areas are promoting public health, prudent regulation, and stressing the scientific validity of the data. We had discussion this morning about an FDA letter in response to an ICCVAM recommendation; I want to point out our flexibility with respect to achieving these missions.

And our general regulatory practices: because the regulations are written the way they are, they allow us to accept data from new and varied nonclinical testing methods. We don't prescribe; we don't have many prescribed testing methods. These are the products in CBER's regulatory portfolio; we have three product offices. You can see we have a diverse range of products, many of which are in the early stages of development as therapeutic fields.

Biologics regulations are focused on endpoints. We do require provision of data, but the regulations are not very prescriptive. What that allows us to do, in our presubmission meetings with sponsors, which we offer all sponsors, is address the typical questions about the data needed to get the product into the clinic. A significant reduction in the use of animals occurs during those meetings. Unfortunately, it's all confidential information with the sponsors. But frequently a sponsor will come in with a particular proposal wanting to cover [ Indiscernible ] and slow their time to the clinic, and we not infrequently say: you really don't need to do this in X number of animals, or in this nonhuman primate; we can accept data from other models. This goes on in one-on-one discussion, so it's a little hard to provide data to put on a screen.

Now I'm going to highlight a few of our selected research areas, first from established product areas, which are [ Indiscernible ] and vaccines. I want to tell you how we do research at CBER. We are the Center for Biologics Evaluation and Research: evaluation is our first activity, research is our second. We do this by having a cadre of people called researcher-regulators. They spend 50% of their time doing laboratory peer-reviewed research, which is reviewed by a subset of our office's advisory committee on a periodic basis. Part of their regulatory work is development of policy and guidance documents, meeting with sponsors, presenting in committees, and the typical participation in prelicense inspections and review of the various applications that come into CBER. I won't quiz you on all of those abbreviations.

The other 50% of their time, they perform research. The research is focused on being relevant to product safety, efficacy, and manufacturing, and on developing scientific tools and knowledge. We try to concentrate on specific niches that are not typically filled by academia or industry, or, if they are filled by industry, on things that would not be shared for commercial reasons. Here are three of our research priorities for the center for this fiscal year that relate to the three Rs; many of these result in refinement. This is very similar to some of the things Dr. Howard mentioned for NCTR. We're finding that facilitating new biological products is causing us to do research into long-established processes in manufacturing and processing, in vaccines for instance.

First concrete example: in vitro assessment of [ Indiscernible ]. Adjuvants boost the immune response. A researcher is working on detector cell lines. When one has a new adjuvant, there are lots of questions that are raised; the list goes on from here. What are the potential benefits of establishing nonanimal testing? Reduction of animal use, refinement, cost, and it allows you to control for species-specific adjuvant effects. So a rapid screening assay could help in prediction.

What does it have to do with people? Well, we know from regulating vaccines for 50 years that most of the common reactions relate to pain, [ Indiscernible ], with rare cases of systemic reactions. There's much animal data suggesting [ Indiscernible ] cytokines act at the brain. The assay being developed is based on MM6 cells, used to look at a panel of adjuvants licensed either in the U.S. or in Europe. That slide doesn't work, we'll skip it. This shows a summary of differentiation responses with various delivery systems and different doses. Aluminum is the most common adjuvant in the U.S.

It's not so true with the Toll-like receptors. The adeno vectors can be used as vaccine vectors, but of course are also delivery agents. They're beginning to develop this panel of detector lines, and they have a fairly good correlation, used in a nonstatistical sense. They're now working on testing multiple [ Indiscernible ] cytokines; this is the first screen for novel adjuvants. This particular lab is working on adjuvants.

Another example from the Office of Vaccines: this is a vaccine against botulinum toxin. This slide shows ELISA data; the graph across the top is one of the commercially available vaccines. What he is showing is that the lines denoted CBT are toxoids he has developed to more closely mimic the [ Indiscernible ] of the active toxin by looking at the toxin structure. The hypothesis is that if you more closely mimic the active toxin structure, you will more closely mimic its activity, both in the in vitro and the in vivo response. This is protective immunity in vivo.

On the far left you have [ Indiscernible ] animals; you see a fairly steep survival curve with a challenge. The commercial toxoid is somewhat less steep, and the toxoids the lab is developing have much better survival curves. This is the antigenicity. He tried to develop a panel that spans between the active toxin and the commercial toxoids, to look for specific changes in activity versus changes in structure.

So he's generated several of them at this point and is currently characterizing them. His hypothesis is that if the values correlate, then the [ Indiscernible ] assay will help to reduce or replace [ Indiscernible ] testing. It may serve as a basis to develop similar ELISAs.

We see applications from multiple sponsors, so we're able to take trends that we see by looking at the applications and from talking to sponsors in a way that very few other agencies can, and it feeds back and forth between our regulatory role and our research role.

So, the first two examples were of research done in established product fields. The last two are examples from emerging product fields, the first being in vitro methods. I want to point out that this research is a collaboration between CBER and NCTR, through the NTP, with some academic collaborators on the in vitro part.

The question is: we want to develop preclinical assays to evaluate the cancer risk of new and existing gene therapy products. The idea occurred about a decade ago as gene therapy products were just beginning to move to the clinic. We were specifically interested in integrating vectors, and we began to think about how to monitor that in vivo, because the expectation was that there was some frequency, presumably rather low, for integration to cause mutagenesis. There was a circumstance in Europe where some children in a gene therapy trial with a specific vector, for a specific disease, developed T-cell lymphomas, which led us to continue this approach. It also provided a lot of information, as those events were examined, to add to and modify our research approach in this area.

This is the study that's ongoing, in its initial stages at NTP. It involves mice with [ Indiscernible ]-negative cells. One of the advantages of this assay is that when the data are available they can be shared with many researchers in the field; we're leveraging these data across this entire emerging field. We're also archiving tissues from these animals, and those will be used to help inform the in vitro assay that we're developing at the same time, in collaboration with some academic centers, to look at assessment in a replating assay as an in vitro tool. The same vectors being run in the animals will be run in this system. It will also allow us to investigate new vectors with different promoters that theoretically should be less prone to causing tumors when used as treatments. We'll be developing this assay at the same time, with the same cells and the same vectors, looking at replating with marker genes so that we can measure the frequency of mutants in the replating assay, using marker genes for detection. We'll also be able to manipulate the transduction and the multiplicity of infection to simulate what is being done in the animals and what has occurred in previous clinical trials. So the NTP study will include side-by-side assessment [ Indiscernible ] to determine the sensitivities and reproducibilities of both methods. As Dr. Howard mentioned, this is Critical Path Initiative work, and we have funding for it. One thing to keep in mind: this is going on at CBER at the same time as industry research into therapeutic uses for these vectors and evolution of the target vectors. It's really important that we have an assay in which we can change the vector relatively rapidly, to keep up with the industry. Of course, all of this information will be shared with the industry.
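The marker-gene replating readout described above boils down to a mutant-frequency calculation. As a rough sketch only (the function name and counts below are hypothetical, not taken from the study), one common form corrects the mutant colony count for plating efficiency measured without selection:

```python
def mutant_frequency(mutant_colonies, cells_plated_selective,
                     colonies_nonselective, cells_plated_nonselective):
    """Mutant frequency: mutants per cell plated under selection,
    corrected by the plating efficiency measured without selection."""
    plating_efficiency = colonies_nonselective / cells_plated_nonselective
    return (mutant_colonies / cells_plated_selective) / plating_efficiency

# Illustrative counts: 12 mutant colonies from 1e6 cells under selection;
# 200 colonies from 500 cells plated without selection (PE = 0.4).
f = mutant_frequency(12, 1_000_000, 200, 500)
print(f)  # 3e-05
```

Comparing this frequency across vectors, transduction conditions, and multiplicities of infection is one way an in vitro replating assay could be put side by side with the in vivo data.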

Just a couple of more minutes.

Okay. Another emerging area: we're looking at various biomarkers for cell quality. This is an example; we're looking at precursor frequency.

Tumorigenicity is a potential issue, of more or less concern depending on the actual product. As we talked about earlier, these cells are being used as a model in our office, but they're also being used as potential therapeutic products. The aim is to use biomarkers of differentiation to understand what is going on at the cellular level and to characterize the products. To conclude, our research program addresses ICCVAM's priority areas, and our research programs relate to all four of the five-year plan challenges. I would be glad to talk more about how ours is consistent with the five-year plan. And there's my contact information.

Okay. Thank you. Any questions for Dr. McFarland?

Frank?

How does your office decide which biologics to work on? What is the process by which you select testing for these products?

Our priority setting for research is an annual activity. It involves public health concerns, primarily, moderated by the availability of our expertise to address the concerns in a timely fashion. But it's an ongoing thing; if you had asked us seven or ten years ago, emerging public health threats would not have been on our list. It's in response to our general mission. Our priorities are peer-reviewed within the center, with the FDA Science Board, and with our advisory committee.

Other questions? Okay. Thank you.

Next we have Dr. [ Indiscernible ] from the National Cancer Institute.

To help us develop cancer drugs. I'm not going to go through the outline. What I'm going to do is indicate why this testing is important for the NCI and then move on to the future. We're trying to move things from the bench to the bedside as rapidly as possible. Some of this is carried out at the NCI, either in the intramural program in the NIH Clinical Center or through our vast array of groups that do clinical trials for us. To show you that we're not a fly-by-night organization: back in 1955, Congress allocated $5 million to create the Cancer Chemotherapy Center. This is a timeline from a few years ago. Everything above the timeline is an approved cancer drug that the NCI was involved in developing; through that period of time we've contributed to 40 drugs.

In 1992, Bristol-Myers licensed the agent from the NCI and it was approved by the FDA. Our program is built on the order of a typical drug company: there's a discovery and a development component. This is the discovery end of the machine, the Screening Technology Branch run by Bob Shoemaker. The NCI60 is 60 tumor cell lines, used on a routine basis when agents are first submitted to the program, to decide whether we have an interest in moving them forward. Agents then move into limited animal testing, from single-animal studies up to [ Indiscernible ] studies with multiple tumor types, so we can look at drug in close proximity to the tumor and at a remote site. These are the groups involved in the developmental aspects. We have a biologics group. They do the small molecule production and clinical product, and we do what is necessary to put the drug into the clinic in the first place.

What I wanted to talk about is the change in our responsibilities over time. We progressed through AIDS drugs, imaging agents, and other therapeutics. The majority of the agents used to be given by the intravenous route; more recently, many of them go by the oral route. In either case, combinations, cocktails, need to be used to stop the disease from progressing.

The Drug Development Group is a forerunner of the Decision Network Committee. Any organization can submit compounds. NCTDGs, the [ Indiscernible ] program, was created about ten years ago for the purpose of supplying material, whether formulated drug as a clinical product or [ Indiscernible ] for the [ Indiscernible ], so that an outside investigator could submit his own [ Indiscernible ]. More recently, we've become like a CRO for the NIH. NIH created a pilot program back in 2005, which has now been extended to 2013. We refocused our efforts to develop agents for an earlier, phase zero type of study, looking at PK and PD. We're also developing a program called the Chemical Biology Consortium [ Speaker/Audio Faint or Unclear ] so we can move these interesting probes toward the clinic.

Also within the NCI, the Division of Cancer Prevention has a couple of drug development programs similar to DCTD's, but much smaller. The RAPID program is a takeoff on our [ Indiscernible ] program; this is the website for the [ Indiscernible ] program. Nanotechnology has become a big name, so NCI created a Nanotechnology Characterization Laboratory, which is doing a whole variety of studies on new products being produced.

Why is prediction important? I have to apologize for the double phrase here, but it emphasizes a fact: cancer drugs are some of the most toxic compounds that we purposely administer to man. If any of you have undergone chemotherapy, you know the side effects are terrible. We're taking sick patients and giving them toxic molecules that only make them sicker. The death rate in a phase one trial is 2% or less, which, considering the patient population and the types of agents, is not a bad record. What do we need for the clinic? We need to predict the [ Indiscernible ]. That's easy if you know the NOEL, but in oncology we don't use the NOEL; we need to be able to predict maximum tolerated doses. Historically, you saw toxicity at the same time you saw efficacy. Some of the newer agents do not have that situation, but the majority of clinical trials are escalated up to MTDs.

Of course we need to be able to predict [ Indiscernible ] toxicities; you need to know what to monitor for in a clinical setting. This slide outlines what the FDA requires for preclinical pharm/tox. We use a rat and a dog, and I will explain why. We follow the clinical route and schedule: if the clinical plan calls for a single dose, that's all we do in terms of tox. We can get an agent into the clinic without doing [ Indiscernible ] or [ Indiscernible ], with 120 to 150 animals total.

For biologicals, the endpoint is to use the most relevant species, and more and more we're able to use the mouse in many of the studies. The clinical route and schedule are important; you need to know where the molecule distributes. I have to emphasize that study designs are agent-directed, not designed to check a regulatory box. We don't do 14-day range-finding studies and 28-day definitive studies.

What is the cost of drug development and of the failures? The main reasons drugs fail are safety and lack of effectiveness; the latter is more important, and I will show you why shortly. The drug industry estimated that a 10% improvement in failure prediction could save $100 million in developmental costs per drug. The number usually quoted for drug development is $800 million; the most recent number I heard was about $1.4 billion.

I hate to interrupt. You are about half way through your time.

I thought I was talking fast enough. One of the reasons this is important: in oncology, about 5% of agents are approved, while the overall average is 11%. Efficacy plays a major role; PK used to. Toxicity has been an increasing cause of failure over the last few years.

So one of the questions we ask is: are animal data sufficient? You've seen this from Kim, so I won't reiterate it. What is important is the fact that the dog and the primate are more predictive of human toxicities and give fewer false positives than the rat or the mouse in this particular study set. We match the route and the schedule.

If you pick a starting dose, you will use 1/10 of the rodent [ Indiscernible ], whatever you want to call it, or 1/6 of the nonrodent toxic dose low. In this situation you would have had 14% of the agents unsafe to move into the clinic if you had simply used mouse data alone. If you tried to use one species, you would have a lot of very sick and potentially dying patients in a phase one study.

Now, if you use the most sensitive species plus other information, you can reduce that dramatically. What does that mean? You can have a high degree of safety for a starting dose, 98 to 99%.
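The starting-dose rule just described can be sketched in a few lines. This is a hedged illustration, not NCI's actual procedure: the function name and dose values are hypothetical, and real calculations are done on body-surface-area-normalized doses (mg/m^2) from the actual studies.

```python
def phase1_starting_dose(rodent_toxic_dose, nonrodent_toxic_dose_low):
    """Conservative Phase I starting dose (same units for both inputs,
    e.g. mg/m^2): 1/10 of the rodent toxic dose or 1/6 of the nonrodent
    toxic dose low, whichever is lower."""
    from_rodent = rodent_toxic_dose / 10.0
    from_nonrodent = nonrodent_toxic_dose_low / 6.0
    # Using the most sensitive species means taking the lower of the two.
    return min(from_rodent, from_nonrodent)

# Illustrative only: rodent toxic dose 50 mg/m^2, dog toxic dose low 18 mg/m^2.
dose = phase1_starting_dose(50.0, 18.0)
print(dose)  # 3.0 -> 1/6 of the dog value is the more conservative choice
```

Taking the minimum across species is what gives the high degree of starting-dose safety the speaker cites; relying on the mouse fraction alone would have produced an unsafe starting dose for a meaningful fraction of agents.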

If you look at MTDs, the numbers shown here are the numbers of studies conducted. If you look at [ Indiscernible ], which is comparable, the dog is the best predictor, followed by the mouse and then the rat. The conclusions are that the dog is more predictive than the mouse or rat, with wide variability. If we look at dose-limiting toxicities, once again the dog did a better job than the mouse, but the rat did surprisingly well in this particular case. DLTs are well predicted; other toxicities are not predicted as well. That's a real, serious problem as we move into noncytotoxic agents with other types of toxicities.

So one of the questions we asked is: can in vitro toxicity data increase the safety margin, and does it reduce the number of animals used? This is the set of bone marrow conditions we used, and this is an actual colony that gets counted. When we use our bone marrow assay, the endpoint is [ Indiscernible ]. We advocate IC90s. Why? Because the IC90 tracks the MTD well, and a lot of investigators don't have faith in [ Indiscernible ]. This allows for determination of dose-limiting toxicities. In a series of studies we found that [ Indiscernible ] was equivalent to about the IC35, and the MTD over all of the studies correlated with the IC90. This is a quantitative analysis we generated. If you use mouse data alone, you can predict the human MTD for 78% of the drugs; if you add dog data, the number moves up about 10%, to 88%. How does that compare with the in vivo studies? The mouse alone in vivo only predicts about 69%, so the in vitro assay predicts better; but for the use of the dog, the data is [ Indiscernible ].
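One published way to use such bone marrow (CFU-GM) IC90 data, and, as an assumption on my part, roughly the kind of adjustment being described here, is to scale the mouse MTD by the ratio of human to mouse in vitro sensitivity. The function name and numbers below are illustrative only:

```python
def predicted_human_mtd(mouse_mtd, human_ic90, mouse_ic90):
    """Scale the mouse in vivo MTD by the ratio of human to mouse
    in vitro CFU-GM IC90 values; a lower human IC90 (more sensitive
    human marrow) scales the predicted human MTD down."""
    return mouse_mtd * (human_ic90 / mouse_ic90)

# Illustrative numbers only: mouse MTD 100 mg/m^2; human marrow is
# 4x more sensitive in vitro (IC90 of 2 uM vs 8 uM for mouse).
mtd = predicted_human_mtd(100.0, 2.0, 8.0)
print(mtd)  # 25.0
```

The point of the adjustment is that the interspecies in vitro ratio carries information the mouse MTD alone lacks, which is consistent with the speaker's numbers: the in vitro-adjusted prediction (78%) beat the mouse in vivo data alone (69%).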

How do we use this now? We use it in prospective studies; we use these data to help select the development candidate to move forward with. This website leads you to the [ Indiscernible ] assay. The main difference between here and Europe is that we use human marrow.

I will skip through this. You were talking about potency and whether or not concentrations are relevant; if you look here, these are nanomolar concentrations. This is a data set we're using now, in conjunction with other data, to select which of these three agents we would take into the clinic. This would indicate to us that there's no differentiation between mouse and human, so the odds of this drug being active in the clinical setting are high.

This is from a paper from 2003. Part of the data set that was generated.

What are we doing right now? We started a grant program about eight to ten years ago. Part of that program produced applications for liver and lung slices from the same PI. That investigator died about five years ago, unfortunately, and the lab doing the work was not interested in continuing it. So we started our own lab about three years ago, picked that up, and are moving it along right now.

This outlines the methodology we use to slice, in this case, lung. This gives you an example, taking an exposure out to 28 days. This is a control, untreated; this is a BCNU-treated sample. This is looking at some inflammatory cytokines, for an agent that produced inflammatory lesions in the lungs of the animals in which it was studied. The TNF data correlate nicely. There was no effect; they did IL-1, we did IL-6, so we don't have the data to correlate that. This looks at the viability of the [ Indiscernible ]. I will move to this: in the rat, in which the drug appears to be less toxic, the viability is at a higher level. Unfortunately, we have other assays that can be used in the clinical setting to monitor.

This is an example of some liver slice work using [ Indiscernible ], which is a very hepatotoxic agent. These are HSP90 inhibitors. They all indicate that [ Indiscernible ] itself is more toxic than [ Indiscernible ]. You see the toxicity sooner than with 17-AAG. That correlates nicely with the animal data that we see.

This is some additional data showing similar concentrations.

What are we doing in the future? Why do we need to worry about that? In the past we worried about using single agents. We have an MOU in place with NCGC. We're going to be doing a lot of screening work and a lot of iterative chemistry. These are the groups with which we're talking about some of that. I will make this move very rapidly. When we first started the tox branch we just did [ Indiscernible ]. When we merged we started doing a lot of kinetics. As we continued we started developing in vitro human toxicology assays. Now we're at the point where we're trying to evaluate different [ Indiscernible ] screening for pharm and tox. Using human toxicants, we'll work our way backwards and use these to move forward into the program.

To validate assays in our opinion animal and human data is required. There's a wealth of animal and human data available. Thank you.

Okay. Thank you very much. Any clarifying questions?

Roger Mcclellan. Could you clarify the drugs that have gone through your program that are now in the NCGC program [ Speaker/Audio Faint or Unclear ].

What we're doing with NCGC is very different. We're using them primarily for target assay development for activity, for efficacy. We're going into toxicity assays at a later point. Our initial concentration is looking at new agents.

They could take some of these compounds that you have studied with animals and humans --

Right. The different types of assays we would utilize, and the agents. If we don't have an agreement with the company to use them, we would work out an agreement to do that.

Point of information. The [ Indiscernible ] chip that you are developing, is that in-house intellectual property? Or commercial?

That was a previous speaker.

Sorry. I thought you mentioned it too.

The one that you are using, I thought I heard it there, too.

[ Speaker/Audio Faint or Unclear ]

Dr. Barile?

I'm wondering, in the timeline that you've laid out throughout the presentation, with the evolution of the advancements in biotechnology, would you say the time from bench to bedside has improved, has stayed the same, or, even if it has stayed the same or increased, is it safer? You are incorporating different technologies.

If you look at paclitaxel, it took 23 years. Velcade took about eight years. It was evaluated using a biomarker because the toxicity could not be firmly linked to a methodology that you could assay for. If you knew that you could inhibit [ Indiscernible ] in the white blood cells one hour after administration and you didn't go beyond 80%, then you could see death as an endpoint without any premonitory signs. That was the reason that it had to be incorporated.

In other words. Not only is the timeline shortened, but the drugs are safer and more effective.

We have an active PD biomarker program that we developed. Unlike many people, we're using PK and PD as the endpoints in those phase zero trials.

[ Speaker/Audio Faint or Unclear ]

Do you have formal databases of the human data that you mentioned for the cancer drugs? Or databases that could be incorporated into sort of --

The unfortunate part is all of the data that we have is on paper. We are trying to reverse that trend. But until recently there was not a lot of interest in putting it into the computer.

Okay, thank you. Next, for the Department of Defense.

Good afternoon, or is it good evening? Admiral Stokes, members of [ indiscernible ], fellow representatives and guests. I am one of three Department of Defense representatives. I direct the Animal Care and Use Review Office within the Office of Research Protections at the headquarters of the U.S. Army Medical Research and Materiel Command. In that capacity my office provides oversight of animal research funded by the command, and also by the Defense Advanced Research Projects Agency, the Defense Threat Reduction Agency, and the Army Research Office, representing about two-thirds of all DoD-funded animal research. It is an honor to address this group today. My goals are to provide examples of basic science and regulatory toxicological testing in the Department of Defense animal care and use programs, discuss the present use of some effective alternatives, and lay out the future direction for alternatives. These are our customers, the war fighters: service members of the Army, Navy, Air Force, and Marines. At the 36 Department of Defense laboratories, the medical research and development mission is to discover, develop, and field medical products to protect war fighters. These include [ indiscernible ] that would otherwise deplete the fighting force, or a medical treatment such as a catalytic chemical warfare agent bioscavenger that will degrade large doses of a chemical warfare agent such as [ indiscernible ] without self-degradation. The goal is to provide optimal medical protection and treatment for all war fighters wherever they serve. Specific laboratories take the lead role in the study and development of specific medical product lines. For example, the U.S. Army Medical Research Institute of Infectious Diseases takes the lead role regarding select infectious agents as well as toxins such as ricin and botulinum.
So the other Army, Navy, and Air Force laboratories specialize in the areas of chemical and biological medical defense, radiation and nuclear medical defense, and trauma treatment. Despite an established culture within the DoD to maximally use alternatives, the lethal nature of many of the toxins and infectious agents under investigation necessitates the use of animals to establish medical product efficacy: anthrax, plague, [ indiscernible ], viral hemorrhagic fevers and filoviruses, as well as nerve and corrosive agents. To provide substantial evidence of medical product effectiveness, DoD laboratories submit data from animal studies to the FDA under the provisions of the FDA Animal Efficacy Rule finalized in 2002. The rule applies to the development and testing of drugs and biologicals where human efficacy trials are not feasible or ethical. Aside from the need to use the Animal Efficacy Rule, DoD laboratories make best use of replacement, reduction, and refinement alternatives in the course of developing medical products. A systems biology approach to research might not traditionally have been viewed as an animal alternative, but its use has an enormous capacity to reduce, replace, and refine animal use in medical testing. A systems biology approach obtains, integrates, and analyzes complex data from multiple experimental sources using interdisciplinary tools such as [ indiscernible ], [ indiscernible ], bioinformatics, in silico modeling, physiologically based pharmacokinetic studies, mathematical models, and other tools to study the diverse molecular networks resulting from the interaction of the agents and the human or animal subject. The U.S. Army Medical Research and Materiel Command has undertaken to use a systems approach. The result will be an increasing move away from reliance on animals.
Remote telemetry is another powerful tool with the potential to have a major reducing effect on numbers of animals while providing superior data collection, and the potential for [ indiscernible ]. It provides critical data that will be necessary to validate in vitro models in research and testing situations. Telemetry refers to the use of sensors implanted within an animal, or the placement of external sensors on an animal, to collect physiological data over time. The data are sent wirelessly to a receiver in the animal cage and then to a computer for data analysis and storage. Telemetry equipment in use allows the collection of large volumes of high-quality data that more traditional physiological data capture methods cannot match. In essence it records the conduct of the entire animal study, to be viewed in real time or retrospectively. At the U.S. Army Medical Research Institute of Infectious Diseases, remote telemetry is used in the study of select agent animal models, where it supports a total systems physiology approach to animal model and vaccine research. Telemetry is essential to the development of critical algorithms in determining pathways for medical countermeasures. It provides a much more complete data package toward licensure of vaccine and therapeutic products. Some of the parameters are electrocardiogram, ventricular and arterial blood pressure, intrathoracic pressure, respiratory rate, and body temperature. Detailed data are collected via those routes. This is a radiograph of an implanted telemetry system in a nonhuman primate. You can see the radio down below is the unit itself. It has leads going forward to the heart and the chest cavity, and the circular structure that you see is actually a radio switch that can be controlled from the outside and acts to preserve the battery when it is not needed. Units are surgically implanted well before the beginning of the study, allowing time for healing.
Some of the advantages of integrated telemetry are that the system monitors multiple parameters continuously, 24 hours a day, seven days a week, every second of the day. The standard battery life is 15 to 24 months of continuous use, with use of an on-off switch. There is improved gain for the signal, improved fidelity, bandwidth, and frequency response, and an improved signal-to-noise ratio. These are different ways of saying more and better-quality data are collected. Multiple frequencies allow for monitoring individual animals within a group housing setting, which addresses animal well-being considerations. The system's multiple-sensor capability reduces the number of animals required in experiments by acquiring the maximum data from a minimum number of animals. The bottom line is that remote telemetry allows the collection of multiple physiological parameters as well as animal movements without a human entering the animal housing area, and therefore without affecting the parameters being measured. Telemetry allows the collection of data in a quantity and quality that would otherwise go uncollected and which could not then contribute to the integrated emerging picture of a disease process. The use of telemetry, along with extra pain and distress monitoring by veterinary research staff, allows the real-time identification of endpoints and [ indiscernible ] limits. I want to say that there are also smaller units available for rodents, and they collect data such as blood pressure, temperature, heart rate, and activity. Study charts show trends over time for various parameters, here at the beginning of a plague study. Actually, these are physiological parameters being measured in real time: collected, plotted, and evaluated on validated software. This is the study chart showing trends over time at the beginning of a plague study. You can see that the values are the temperature, [ indiscernible ] pressure, left [ indiscernible ] pressure, and [ indiscernible ].
This is a chart of the animal's respiratory rate over the study period. This shows breathing rate changes over time. Day three is the purple one. There is only part of the day showing. You can see how it changes from day zero to day one.

By day three, look at how the parameter has changed. This is a similar one for changes in body temperature over time. This charted data may also identify valid study endpoints: you can see the animal is crashing. It goes from a high temperature and then crashes suddenly. I'm not suggesting that would be the best of endpoints. My own feeling is that by that time the animal has perhaps done all the suffering it is going to do. At that point it may perhaps already be unconscious. It may not be feeling a lot of pain at that point. It might be that we can identify earlier endpoints that do that animal more good. One other thing to think about in these studies is: is it possible to use analgesia? We typically don't, because of the concern that either nonsteroidal anti-inflammatory agents or even opioids could affect the inflammation process, could affect the breathing rate, and thus interfere with the experiment. But there are other possibilities, such as low-dose ketamine being delivered by an automatic pump, or [ indiscernible ]. These are only a few examples of alternatives being used to study disease processes and develop medical countermeasures. The hoped-for results are faster and less expensive development of the most effective medical products to take care of soldiers, sailors, airmen, and Marines. EpiDerm, [ indiscernible ], and the corneal [ indiscernible ] are in vitro cell culture constructs, all of which are [ indiscernible ] at the U.S. Army Medical Research Institute of Chemical Defense. Prior research focused on the use of [ indiscernible ] cultures or animals to study chemical effects. These commercially available multicellular tissue constructs serve to bridge research gaps between cell and organismal studies. The data obtained from this study [ indiscernible ] inflammatory [ indiscernible ] due to mustard agent effects. Further research may enable the evaluation and selection of the appropriate tissue models for incorporation in future studies.
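The sudden temperature crash mentioned above as a possible endpoint marker can be illustrated with a minimal sketch. This is not the validated software referred to in the presentation; the function name, sampling interval, and the 2-degree, 4-sample thresholds are invented purely for illustration of the idea of flagging a rapid drop from a febrile plateau as a candidate early endpoint.

```python
def detect_crash(temps_c, window=4, drop_c=2.0):
    """Scan a series of body temperatures (assumed hourly samples) and return
    the first index at which temperature has fallen by more than `drop_c`
    degrees within `window` samples, or None if no crash is seen.

    Both thresholds are hypothetical placeholders, not validated values."""
    for i in range(window, len(temps_c)):
        if temps_c[i - window] - temps_c[i] > drop_c:
            return i  # candidate early-endpoint time
    return None
```

For example, a trace that climbs to a fever plateau and then falls sharply would be flagged at the first sample where the drop exceeds the threshold, while a stable trace returns None.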
On the downside, it was only the highest dose of mustard that initiated the inflammatory cytokine response, and that was only in the epidermal model. There are limits to these models that need to be overcome. The U.S. Army Center for Environmental Health Research has developed an aquatic biomonitor. The unit monitors fish behavior using pairs of electrodes above and below the gills of eight bluegills, each swimming in its own flow-through water chamber. As the fish move and respire they generate electrical signals in the water that are monitored; the endpoints include ventilation rate and body movement rate. At the heart of the system, an electronic brain integrates [ indiscernible ] that may affect fish behavior. If six or more of the eight fish are behaving abnormally, an alarm indicating a possible acute water event, as pictured here for aldicarb, triggers the automatic sampler for follow-up chemical analysis. Testing of the response to many chemicals at acutely toxic levels has been conducted. The aquatic biomonitor can be used at a number of sites, with applications such as source waters, community water reservoirs, or finished product waters after dechlorination. It is commercially available as the aquatic biomonitoring system. A second generation of in vitro aquatic biomonitoring systems is now evaluating cell types including bovine microvascular endothelial cells. The electrical [ indiscernible ] varies in response to exposure to toxins. The unit is smaller than the fish monitor and is more portable, facilitating use in field situations. With support from the U.S. Environmental Protection Agency, the U.S. Army Center for Environmental Health Research also tests frogs for the effects of endocrine [ indiscernible ]. The tadpoles are allowed to develop to early adulthood, whereupon sex ratios, growth, reproductive, and other developmental parameters are studied. [ indiscernible ] disruptors and [ indiscernible ] have been tested to date.
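The six-of-eight fish alarm rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the fielded system's logic; in particular, the abnormality cutoff of plus or minus 50% around each fish's own baseline is an invented placeholder, since the actual detection criteria were not stated.

```python
NUM_FISH = 8
ALARM_THRESHOLD = 6  # alarm when six or more of the eight fish are abnormal


def is_abnormal(ventilation, movement, baseline, tolerance=0.5):
    """Flag a fish whose ventilation or body-movement rate departs from its
    own baseline by more than the given fraction (an assumed +/-50% cutoff)."""
    def outside(value, base):
        return abs(value - base) > tolerance * base
    return outside(ventilation, baseline["vent"]) or outside(movement, baseline["move"])


def acute_water_event(readings, baselines):
    """readings: one (ventilation, movement) pair per fish. Returns True when
    the group response should trigger the automatic sampler for chemical
    analysis, per the six-of-eight rule described in the talk."""
    abnormal = sum(
        is_abnormal(v, m, b) for (v, m), b in zip(readings, baselines)
    )
    return abnormal >= ALARM_THRESHOLD
```

Requiring a majority of the group to respond, rather than any single fish, is what makes the alarm robust to individual variation while still reacting quickly to an acutely toxic exposure.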
A thyroid disruptor will be tested in the coming year. FETAX is a 96-hour test that uses early-stage embryos of the South African clawed frog to measure the effects of chemicals on mortality, malformation, and growth inhibition. It was proposed as a screening assay to identify potential human teratogens and developmental toxicants. In late 1998 it was requested that [ indiscernible ] evaluate [ indiscernible ]. A developmental toxicity working group subsequently agreed to further consider the method. An expert panel meeting in May 2000 reviewed the current status and future directions for research and development of the method. The panel concluded that it was not sufficiently validated or optimized for regulatory applications. The panel recommended further standardization to improve variability, and recommended expanding the number of endpoints assessed to increase the performance for identifying developmental toxicants. The expert panel concluded that the use of this species is reasonable, since the embryogenesis processes are highly conserved. There is widespread interest in, and need for, alternative approaches to detect potential developmental toxicity. It is not sufficiently optimized or validated for regulatory studies, and problems exist with result variability that need to be better controlled. The use of a different frog species should be explored, it said. A different frog species was explored and a change was made. This is a smaller species of clawed frog with the ability to reproduce more quickly, making housing and breeding the frogs more efficient. Unfortunately, due to extreme assay variability, difficulty of use, and concerns about the applicability of the assay for human health screening and evaluation, we do not have plans to develop this further. However, the frogs continue to be used in other toxicity tests. The final toxicity study example contains a possible endpoint study. This study evaluates the specificity of a neurotoxin to be used in the pivotal animal efficacy studies.
Study results will be used to support a biologic license application to the FDA. Mice are injected with the toxin in a 96-hour test. There is no early endpoint used. Clinical signs include the lack of response to external stimuli and death, with the line separating dyspnea from death being difficult to discern. Surviving mice are euthanized at 96 hours. This is a possible study for an early endpoint which, if identified, could support an alternative in vitro [ indiscernible ] tiered test. In 2006, a collaboration co-sponsored a workshop on alternative methods to replace the assay for potency tests. The consensus was that the reviewed methods could be used in specific circumstances or in a tiered testing strategy to reduce and refine the use of mice in the current toxin test method. However, none of the reviewed methods could be used as a complete replacement for the test method. The panel discussions noted that with additional development and validation efforts one or more of the reviewed methods might be useful as a replacement for the test in the future. It was noted that additional validation studies were needed for most methods. In the absence of a presently acceptable replacement alternative to the mouse neutralization test, an interim measure might be to establish an early endpoint in toxin studies toward minimizing animal pain and distress. As has been previously pointed out, the USDA Center for Veterinary Biologics has published a policy stating that animals exhibiting clinical signs consistent with the expected disease pathogenesis that are unable to rise or move under their own power may be humanely euthanized and counted as deaths, as outlined in 9 CFR. Could an allowance be written in to limit animal pain and distress associated with the mouse neutralization tests? This is the end of my presentation. I'd like to acknowledge and thank Lieutenant Colonel Brett for the telemetry information and slides.
Also [ indiscernible ] for their contributions on the aquatics monitor. I welcome your comments and questions.

Thank you very much. Any questions?

Colonel, I have two questions. Has that been replaced with any other assay?

A different teratogenesis assay?

This is an area of interest.

It may have been. I just don't know about it. That is something I can find out.

This is a tough assay. It has a lot of variability. The developmental timing happens in a matter of hours, and the variability is really due to the fact that there are multiple effects going on. In 96 hours you go through some of the stages. Each chemical seems to affect different stages of development. These are pretty hard-wired developmental stages. I am curious about how well it was applied, and if it was applied well. I would like to have an answer on that.

Are you aware of that being used at another laboratory?

[ indiscernible ] because it is hard wired, and we certainly know almost every signal transduction pathway that is activated during development in certain tissues. It is like the worm in some ways, but the nervous system is much more advanced. It is a beautiful system, but the variability comes into play because 96 hours is way, way long. There are a lot of different exposures for servicemen.

How might [ indiscernible ] help and advise in promoting these?

The lead discussants are Doctors Cunningham and [ indiscernible ].

As far as the areas, I think that there are some areas, like the high-throughput assays that are being done, where I had a concern as to why some of those were being done, and whether there has been a lot of thought as far as the objectives and goals and milestones after the data have been put together, and how that could help with the five-year plan. I also had a concern as far as what the data would tell you and why you are doing it: is it for the purpose of having assays that are more predictive, or were there other reasons why that would be done? I also saw a lot of overlap with those high-throughput screening assays between agencies. I was wondering, and this kind of leads into the second question, how could that be coordinated more? My concern here is that funding is limited. Everybody has a very restricted budget. Having so much overlap between agencies, I was wondering if there was a better way to coordinate that. Is this a way to actually help coordinate that? Anyway, that was a concern. I do think that the memorandum between the three agencies -- three groups -- that are putting that together is a great first step. I really applaud the efforts. I guess I just had more concerns about further down the line, what you would do with that information. The other thing is, with those types of projects, I think thinking through how you are going to go from start to finish and what you're going to end up with is really important, because the compounds that are going to be coming through from manufacturing are going to be more and more complex, and we are trying to figure out how to test them safely. Specifically, I'm alluding to nanomaterials, which have a high capacity for turning a lot of these traditional assays on their side because their properties are so different.
To go through compounds that are already out there, that people know a lot about and that have a lot of historical databases built around them already, to be able to get that screening process down and optimized, I think that is a good thing to do before we start really looking at the safety testing of new compounds. Just to make that a little bit more efficient, so that you can optimize it for the new compounds when they are going through the process. Those were my comments.

Okay. Dr. Fox?

Yes. I will keep my comments brief. It's getting late. I agree with Mary Jane. I'm curious. I think one of the gaps is the [ indiscernible ] testing for most of these compounds. Except for the last presentation, which was a combined EPA and DoD effort, I have not really seen a lot of developmental work. I certainly know EPA is doing some of this work, but I have not seen a lot of development and testing. I think in a variety of ways we should be looking at developmental toxicity and fetal toxicity as well. I think these are some gaps. Are there areas to be strengthened? I'm not sure how I can answer the question. The presentations were so diverse that I would just not comment on that. I think that Mary Jane has pretty well touched on the role here. I think there should be a little bit more interaction between the agencies. It seems like there may not be enough. Some more interaction to avoid duplication would be a strong suggestion. Otherwise, I think I'm done.

Dr. Mooresman?

I will try to be as brief as possible. I really applaud the efforts that are currently underway. I know this is a slice of the total, but the presentations today were truly impressive in the amount of work going on. My first comment would be related to things we talked about under the five-year plan: methods to communicate the ongoing work to the broader audience, because I am not sure everybody is very aware of all of the things that are going on. And it would be great, because I think there would again be opportunity for broader stakeholder involvement and, potentially, collaboration. The second one is related to that. Because of all of this good work going on, I think that these individuals, these groups that are doing this kind of work, become ambassadors, if you will, for the implementation of these methods into decision-making processes. So I think as these methods develop and become validated and accepted, then indeed, as Bill talked about earlier with this bottom-up approach, these labs, these individuals become a part of the solution of really thinking with the end in mind, and making sure that the assays really address the decision-making process. And then finally, if I were to address issues of gaps or areas to strengthen, it is probably easiest for me to poke holes in the FDA presentations, only because some of the centers weren't represented. So it is probably more a reflection of maybe not being able to share some of the details. But I think of some of the emerging technologies and issues, like, for instance, the naturals, the botanicals, the herbals, things like that.
As those things are increasingly pressed into commerce and come under greater scrutiny and regulation around the world, as they work their way into the system, the hope is that the default isn't a whole bunch of new animal tests, and that this kind of integrated research program can be a part of the solution to make sure that we are applying the best science and the best technology as we seek to protect human, animal, and environmental health from these agents. Thanks.

Okay. Dr. McLeod?

Let me note, in terms of my current service on the committee, that I had the opportunity to serve on the predecessor committee before this activity was congressionally authorized. In that whole time I have never had the opportunity to have this kind of a series of presentations. I applaud the organizers for having put it together. I think it was an excellent presentation of a very fascinating array of activities. I found it certainly very informative. I think the agency relevance of the activities presented was very clear to me. However, it was not always clear to me how the activities of the different agencies related to the activities of other agencies. I think in some cases it was very apparent that there were purposeful relationships that had been developed and were ongoing. I certainly applaud those. In other cases, it appeared that if there were relationships there, they were perhaps more fortuitous than the result of any planned coordination. As I look at these things that are going on, sort of as somebody standing on the sidelines, I see some of these activities in different agencies that I advise. I see them in terms of clients. It is apparent to me that we have some pretty good interactions that occur on a scientific level between investigators that are interested, for example, in ocular irritation or dermal toxicity. Those things seem to be occurring quite well. I am not certain, at the grassroots, bench-level investigator level, of the extent of the interactions that are occurring. That wasn't really apparent to me. What I am suggesting is that perhaps a lot of what has occurred within this nesting of activities has occurred through this network of agency representatives. I am uncertain as to what is happening down at that other level.
I guess, just seeing a glimpse of what is going on here today, if I look at the second question here, what could [ indiscernible ] do to strengthen their leadership role, there might be two things. One, really assemble the portfolios. I am not sure it is clear that there is a portfolio other than what was presented as a series of snapshots from different agencies. Perhaps that could be pulled together in some manner. I have seen that happen before in interagency activities. If it could be done efficiently and effectively it could be useful. The other is, I think there is an awful lot to be gained in terms of the breadth of exchange of information. What I have said is, I think people who are digging in the same ditch, if you will, pretty well know what is going on. But I am not certain of the transfer of information across ditches, and I think that is where we have to improve, because there are some common themes that need additional attention. I have already made that clear today. One of those, in my view, is validation. Again, I appreciate all the efforts that people put into pulling these presentations together. I found it very useful, and I hope that in the future this can be done on a fairly regular basis with the community. I think it would be very useful.

Anybody else? Dr. Brown?

I really liked that idea of a portfolio of the tests. I was very impressed with the breadth of types of tests and the numbers. I think it would be nice for there to be a portfolio, and I think that should include the status of these tests in any sort of validation process. It could then be disseminated outside of government agencies and be used by others.

A point I did not get across entirely in my talk: with regard specifically to some of the things you've talked about in terms of coordination, those activities are really working well and could be a model for other interactions between agencies. We have a governance committee. It is responsible for pulling this together. There are monthly meetings between the agency people working on these. It is really a remarkably cohesive group and is moving things forward very quickly. When you have programs of this type there is overlap; we actually see that as an advantage. There are many ways of approaching the same toxicity pathways. We don't know the best pathways. We are hoping that these groups can sort out the best platforms, and we can take that information and move it into the other programs. I hope I did not leave this point out. I think there are some things that we could learn from the successes we have had so far. Thank you for all your comments. They were very carefully considered and thoughtful.

Other comments? Dr.Stokes?

I also want to thank the committee for your comments, and also all the presenters from the different agencies who have talked today. When we first started on this plan, one of the things we did is we went out and surveyed agencies for their activities. We did get responses back. We invited representatives from the agencies that have the most robust portfolios to talk with you today and make you aware of those activities. The suggestions have been good. As was shown in the implementation activities, we are establishing a research and development committee. One of the purposes is to report on the portfolio of activities discussed and share that information, because apparently it doesn't get shared; there is not a good mechanism for sharing it across agencies. We hope that will be effective, and that the group can serve as a nucleus to report back on those portfolios.

Well, the chair would like to thank everyone. I think this series of talks was uniformly excellent. I think we had a very good discussion today. We have a 6:30 dinner, which is on the second floor. One unfinished piece of business: Roger and [ indiscernible ] were agreeing to a question and answer that they would like to go on the record. I am not quite sure how that happens, but Roger, if you could write that in, tomorrow morning at the beginning we will read that into the record.

The dinner is actually rather close. It is on the third floor, right across the hallway.

I might not be able to get an answer to that question by tomorrow. I also want to point out that I was discussing that as a research and development activity. That is why I wanted to make sure that was the context of that particular activity. I will request information and do the best I can.

Understood. I thought we would at least read the question in.

Okay. With that, I think we are adjourned. We will see everybody again tomorrow morning at 8:30.

[Relay Event Concluded]

  Back to top


Day 2

Event ID: 1024972
Event Started: 6/19/2008 8:19:21 AM ET
Please stand by for real-time captioned text.

If everyone could take their seats, please.

Good morning, everyone, and welcome to the June 19th meeting of the Scientific Advisory Committee on Alternative Toxicological Methods. This meeting is called to order. I would like to go around the table and ask all the participants to give their names and affiliations. I am Jim Freeman with ExxonMobil Biomedical Sciences.

John. I am the director of the National Toxicology Program.

[ indiscernible ].

Frank, professor of pharmaceutical sciences at St. John's University, New York.

Marilyn Brown.

Mary Jane Cunningham, integrated Laboratory Systems.

George Research Laboratories, director of toxicology.

Dan Morrison, Procter and Gamble. Mike, with the California Department of Pesticide Regulation.

Helen, lab animal veterinarian, University of California, Berkeley.

Donald Fox, university professor.

Roger McClellan, independent consultant for toxicology and risk analysis. Let me just note that I came from three weeks in southern Africa. I have been adjusting my time clock. I thought I would speed it up by going to my home in New Mexico and coming back here. I think I am already at noon today.

Richard McFarland, Food and Drug Administration.

Hajime Kojima, Tokyo.

Jens Linge.

Karen M. Burnett, U.S. Environmental Protection Agency.

George, the U.S. Department of Transportation.

Jody with the U.S. Department of Agriculture.

[ Audio/Speaker not clear].

Paul, lab animal veterinarian, National Institute for Occupational Safety and Health.

Laurie white, NTP.

Marilyn, Consumer Product Safety Commission.

Bill Stokes.

Doug, the Program Manager supporting NICEATM.

Diane, [ indiscernible ] Columbus.

Mike Luster, senior adviser at NIOSH.

Ray Tice.

Sue McMaster, EPA.

Tom Burns.

Judy Strickland. Steve.

John [ indiscernible ].

Debbie.

Thank you, very much. I do not think there are any announcements for this morning. Before we get into the actual listed agenda, I want to finish up what we were talking about yesterday during the session when we were hearing about some of the different federal agency research activities that are relevant to the five-year plan. During the explanation of the five-year activities, Roger McClellan posed a question, and we wanted to get this into the record. Roger has written down the question and will read it into the record, and Karen will have the action, not today, but to come back to us with an answer from the agency. Roger, I will turn this over to you.

Thank you, Dr. Freeman. I appreciate having the opportunity to clarify and put this into the record. This follows the presentation given by Dr. Karen [ indiscernible ] yesterday. Thank you for an excellent presentation on a significant new EPA program. I wish to comment and then follow up with several questions. I am concerned that the ToxCast program delegates toxicological assessment to an array of in vitro methods that have not been validated. This raises serious questions as to whether the results can be used to predict human toxicity. One approach is to make sure that ToxCast includes a large and diverse array of chemicals known to be human toxicants, chemicals with well-established potential for causing human diseases, both cancer and other diseases. If this approach is used, it might be possible to validate some of the tests that are used in ToxCast. In my judgment, the ToxCast approach of comparing the in vitro test results to toxicity data largely obtained in rodents is not a valid approach. That approach might suffice for predicting rat or mouse toxicity; however, the real objective is to predict human toxicity and protect human health. I am encouraged by the reference made to including 100 or so known human toxicants. How are the chemicals selected?
What are the diseases they are known to cause and will the same chemicals be included in the cooperative program being carried out at NIEHS? That is the summary of the material.

Thank you. You have a written copy that you can give to the secretary, right?

I will.

Thank you. Karen, go ahead.

I do want to point out that this is a research and development activity; it was discussed as such. Also, EPA is a member of a group of agencies that are participating in a memorandum of understanding (MOU) to consider moving forward with certain strategies, and that incorporates ToxCast. If you can give me a written copy of the question, I will get back to you and the committee.

Thank you. I assume that Karen's response will be put into the record. I think this is a very important matter, and I appreciate the attention we are giving to it and the attention that Karen and her colleagues at EPA will give to it. Thank you.

Thank you, Karen, and thank you, Roger. The first item on today's scheduled agenda is the report on the independent scientific peer review of the validation status of new versions and applications of the Murine Local Lymph Node Assay, or the LLNA, the test method for assessing the contact dermatitis potential of chemicals and products. First we have an introduction and overview of the proposed methods and applications. Dr. Marilyn Wind will be the presenter. Marilyn?

Before I start the actual presentation, I want to indicate that I am doing this presentation for Joanna Matheson, who is the co-chair of the ICCVAM working group and could not be here today. I also want to publicly thank the members of the Immunotoxicity Working Group for all of the hard work they did; as I am sure you noticed in your notebooks, the background review documents and the peer review panel report are enormous and contain a huge amount of information. Both the Immunotoxicity Working Group and [ indiscernible ], ICCVAM, spent a huge amount of time working on those. I want to thank them. I also want to publicly thank Dr. Luster, who will be presenting the recommendations from the peer review panel. It was a huge amount of work for the peer review panel, and he did a masterful job of running it. We really appreciate it.

That being said, my disclaimer: everything that I am saying represents my views and not those of the Consumer Product Safety Commission. Okay. I am going to zip through this, but you will see that we started with the nomination from the Consumer Product Safety Commission in January of 2007. Considering the amount of information that was available and the amount of data that came in, the fact that we were able to hold the peer review panel in March of this year was truly amazing; as I said, it was only due to the hard work of the committees. The peer review panel looked at a number of things. They looked at the LLNA limit dose procedure. They looked at the applicability domain; when the LLNA was originally evaluated, there was not enough data to make any conclusions on mixtures, metals or aqueous solutions. We looked at the three nonradioactive LLNA methods and the draft NICEATM LLNA performance standards, and at whether the LLNA could be used as a stand-alone assay for potency determinations. The draft background review document was a comprehensive review of all of the available data. In this case, people were very, very forthcoming in giving us data that were not out in the open literature. That increased our database enormously.

We had the draft ICCVAM test recommendations that dealt with the usefulness and limitations, the recommended protocol, future studies and questions for the peer review panel. Again, I think all of you know that the LLNA protocol was initially described by Kimber in 1986, and the purpose was to identify chemical sensitizers through quantification of lymphocyte proliferation. There is a minimum of three dose levels, and the highest should be the maximum soluble concentration that does not cause systemic toxicity or excessive local irritation. The stimulation index, or SI, is calculated as the ratio of the radioactivity incorporated into the cells of the draining auricular lymph nodes of treated animals to that of vehicle control animals. The threshold for classifying a substance as a sensitizer is an SI greater than or equal to three. In order for the study to be considered acceptable, the concurrent positive control must yield an SI equal to or greater than three. This is the test method protocol, which I will not go through.
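
The SI calculation described here can be sketched in a few lines. This is an illustrative sketch only, not an official protocol implementation; the per-animal counts below are hypothetical disintegration-per-minute (DPM) values, and the SI >= 3 decision threshold is the one stated above.

```python
def stimulation_index(treated_dpm, vehicle_dpm):
    """SI = mean radioactivity in treated-group lymph nodes divided by
    the mean in vehicle-control lymph nodes."""
    treated_mean = sum(treated_dpm) / len(treated_dpm)
    vehicle_mean = sum(vehicle_dpm) / len(vehicle_dpm)
    return treated_mean / vehicle_mean

def classify(si, threshold=3.0):
    """A substance is called a sensitizer when SI >= 3 (threshold above)."""
    return "sensitizer" if si >= threshold else "non-sensitizer"

# Hypothetical DPM values for five animals per group:
vehicle = [450, 500, 480, 520, 490]
treated = [1600, 1750, 1500, 1680, 1590]

si = stimulation_index(treated, vehicle)
print(round(si, 2), classify(si))  # SI of about 3.33 -> "sensitizer"
```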

The first thing that we looked at was the LLNA limit dose test method protocol. The sole difference between this method and the traditional LLNA is that it uses only a single dose, which is the highest dose that does not induce systemic toxicity or excessive local irritation. The information included in the BRD is based on a retrospective review of additional LLNA data that were either submitted as part of the original submission back in 1999, extracted from peer reviewed publications, or submitted to NICEATM in response to the FR notice. We have data from 471 studies representing 466 substances. 211 of these substances were included in the 1999 ICCVAM evaluation.

The results with the limit dose test procedure almost always agree with the traditional LLNA. You can see that when we looked at the accuracy, for the 211 substances that were in Kimber et al. 2006, it was 98.6% accuracy; with the additional substances, we had 98.9% accuracy. The limit dose procedure should be used for the hazard identification of skin sensitizing substances if dose response information is not required. All of the protocol specifications recommended by ICCVAM should be used. The user should be aware that the limit dose is the highest soluble concentration that does not induce overt systemic toxicity or excessive local irritation. A small possibility of false negative results exists: 1.6% when compared to the traditional LLNA. Using the limit dose test saves animals; that is the reason we looked at it, and it is recommended. The second thing that we did was to look at the applicability domain and do an updated assessment of the validity of the LLNA for mixtures, metals and aqueous solutions. We did a comprehensive update of available data and information regarding the current usefulness and limitations for mixtures, metals and aqueous solutions, using traditional LLNA data that were either submitted as part of the original evaluation, extracted from peer reviewed publications, or submitted in response to the FR notice. The current database represents over 500 substances.

There were a total of 18 mixtures: 10 are pesticide formulations, four are dyes, and the remaining four were not identified. Some were tested in aqueous solutions. There were 17 metal compounds represented by 13 different metals. A total of 21 substances were tested in aqueous solutions; six are pesticide ingredients and the remaining 15 represent a variety of product classes. For mixtures, the LLNA performance was compared to guinea pig data only; no human data were available for mixtures. The LLNA had less than 60% accuracy, sensitivity and specificity compared to the guinea pig data, and the false positive and false negative rates were 50% and 44%, respectively. There were improvements in accuracy, 64%, and sensitivity, 100%, when the performance evaluation was restricted to those mixtures that were aqueous. For the aqueous solutions, the LLNA had 50% accuracy, 33% sensitivity and 100% specificity compared to human data. The false negative rate was 67%. However, only four substances were available for the analysis in aqueous solutions. By comparison, in the original analysis, LLNA performance compared to human data for all classes of substances was 72%. The LLNA had 50% accuracy, sensitivity and specificity compared to the guinea pig data, and the false positive and false negative rates were high at 50%. Again, the N was six. By comparison, in the original analysis, LLNA performance compared to guinea pig data for all classes of substances was 86%.

For the metal compounds, excluding nickel, the LLNA had 86% accuracy, 100% sensitivity and 60% specificity compared to human data for all metal compounds, with an N of 14. The false positive and false negative rates were 40% and 0%, respectively. The LLNA had similar accuracy and sensitivity when compared to the guinea pig data, with an N of six, but the false positive rate was 100%; that was based on a single substance.
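
The performance statistics quoted throughout this discussion all derive from a 2x2 comparison against a reference test (human or guinea pig data). As a sketch, the counts below are my own inference chosen to reproduce the reported metal-compound figures (N = 14, 86% accuracy, 100% sensitivity, 60% specificity); the BRD itself lists the actual substances.

```python
def performance(tp, fn, fp, tn):
    """Standard performance statistics from a 2x2 confusion table,
    where the reference test defines the 'true' calls."""
    return {
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
        "sensitivity": tp / (tp + fn),            # true positive rate
        "specificity": tn / (tn + fp),            # true negative rate
        "false_positive_rate": fp / (fp + tn),    # 1 - specificity
        "false_negative_rate": fn / (tp + fn),    # 1 - sensitivity
    }

# Inferred illustrative counts for the 14 metal compounds (excluding nickel):
# 9 true positives, 0 false negatives, 2 false positives, 3 true negatives.
stats = performance(tp=9, fn=0, fp=2, tn=3)
for name, value in stats.items():
    print(f"{name}: {value:.0%}")
```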

The draft ICCVAM test recommendations for the LLNA applicability domain were that more data are needed before a recommendation on the usefulness and limitations of the LLNA for testing mixtures and aqueous solutions can be made. The LLNA appears useful for testing metal compounds, with the exception of nickel. However, the false positive rate of 40% should be considered when evaluating positive results for metal compounds in the LLNA. In that situation, LLNA results should be subjected to a weight of evidence evaluation using supplemental information. Where false positive results are suspected, confirmatory testing in another accepted skin sensitization test method should be used.
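
The reason a 40% false positive rate pushes you toward a weight-of-evidence evaluation can be made concrete with Bayes' rule: the chance that a positive call is a true sensitizer depends on how common sensitizers are in the tested population. The prevalence values below are hypothetical, purely for illustration.

```python
def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """Probability that a substance called positive really is a sensitizer,
    given the test's sensitivity, false positive rate, and the fraction of
    true sensitizers (prevalence) among substances tested."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# With 100% sensitivity and a 40% false positive rate (the metal-compound
# figures above), a positive call is much weaker evidence when true
# sensitizers are rare among the substances tested:
for prevalence in (0.5, 0.2):
    ppv = positive_predictive_value(1.0, 0.4, prevalence)
    print(f"prevalence {prevalence:.0%} -> PPV {ppv:.0%}")
```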

This is the LLNA: DA test method protocol. The data from Daicel Chemical Industries were generated in one laboratory. We received the individual animal data for these tests after release of the BRD; they were provided to the peer review panel on January 30th, 2008. Two of the 31 substances [ indiscernible ] were tested in the LLNA: DA at varying concentrations in three different experiments in order to assess intralaboratory [ indiscernible ]. They evaluated the reliability and relevance of the LLNA: DA. In the first phase there were 10 laboratories with 12 coded substances; the second phase was seven different laboratories with five coded substances. The combined 17 laboratories used 14 different coded substances. Two substances were not among the original 31. Individual animal data were not used and have only now been received; they were just received on May 17. The LLNA: DA had at least 90% accuracy, sensitivity and specificity when compared to the traditional LLNA. The false positive and false negative rates were 10% and 5%, respectively. Performance of the LLNA: DA was identical to the traditional LLNA when compared to human data. The LLNA: DA had slightly lower performance when compared to the guinea pig data: 80% accuracy as opposed to 88% for the traditional LLNA.

The recommendations, the draft ICCVAM recommendations for the LLNA: DA, were that it might be useful for identifying substances as potential skin sensitizers and non-sensitizers. Where results are suspect, confirmatory testing in the traditional LLNA or another accepted skin sensitization test method should be considered. These recommendations are contingent upon receipt of the additional data and information requested. There needs to be a discussion regarding the potential reason for the negative result in [ indiscernible ], which is commonly used as a positive control for the traditional LLNA.

As noted before, we did not receive the additional data, some of it until too late for the peer review panel to consider. The LLNA: BrdU [ indiscernible ] is identical to the traditional LLNA protocol except there is an IP injection of BrdU instead of [ indiscernible ] on day five before harvest of the lymph nodes. Lymph node cell proliferation is measured by flow cytometry [ indiscernible ]. For the SI, the proportion of BrdU-labeled cells in the treated group is divided by the proportion of BrdU-labeled cells in the control group. Test groups with an SI greater than three are [ indiscernible ]. There were data submitted by the research labs where 45 substances were tested; three substances had no traditional LLNA data. The original study records have not been obtained. Three of the 45 substances produced equivocal results in this method, one of which has been commonly used as a positive control in the LLNA. The rationale for the repeat testing of these substances and possible reasons for the results have been requested but not provided. There has not been an evaluation of interlaboratory reproducibility.

This test method had at least 90% accuracy and sensitivity compared to the traditional LLNA. The false positive and false negative rates were 21% and 0%, respectively. This method had slightly higher accuracy, 69%, and a lower false negative rate, 27%, but a higher false positive rate, 44%, than the traditional LLNA when compared to the human data. It had a lower accuracy than the traditional LLNA, 76% as opposed to 86%, when compared to guinea pig data. The draft test method recommendations were that it could be useful for identifying substances as potential skin sensitizers and non-sensitizers. At this time, more information and data are needed before a recommendation on the use of this method can be made: the rationale for repeat testing of the three substances that produced equivocal results in this method, one of which has been commonly used as a positive control in the traditional LLNA; an evaluation of interlaboratory reproducibility, which is critical if this test method is to be used in laboratories other than that of the test developer; and the original records, including the original animal data, [ indiscernible ] in this evaluation.

The next test method protocol that we evaluated was the LLNA: BrdU-ELISA test method. The lymph node cell proliferation is assessed by measuring the incorporation of BrdU using ELISA. Data were available for a total of 29 substances that were tested in one laboratory; 24 of the 29 substances had been previously tested in the traditional LLNA. There were intralaboratory data for five substances tested multiple times in one laboratory. The two-phase interlaboratory study has been completed, but the data have not been provided. In phase one -- in phase two, 10 chemicals were tested across seven labs. They all tested the same concentrations of the coded chemicals. A note here is that the dosing solutions were already diluted to the requisite concentrations and provided to the laboratories by the study management team.

Just a note here that they looked at a number of different SIs. One thing that is not on this slide, but was discussed at length by the peer review panel, was the power of each one of these SIs. Note that as the SI cutoff decreases, the number of animals you need to use increases in order to get a statistically valid result. For the draft test recommendations, this method might be useful for identifying substances as potential skin sensitizers and non-sensitizers. However, at this time, more information and data are needed before it can be recommended: a detailed protocol, including defined and adequately justified decision criteria for distinguishing between sensitizers and non-sensitizers; quantitative results for all of the studies included in this evaluation, which were provided on February 25th, after the peer review panel meeting; and a formal evaluation of intralaboratory reproducibility -- the studies have been concluded, but we have not gotten the results. The study design and protocol were provided, again, in February but after the peer review panel.
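
The point about power can be illustrated with a textbook normal-approximation sample-size formula for comparing two group means. Everything here is an assumption for illustration: the standard deviation of 0.6 in SI units, the two-sided 5% significance level, and 80% power are hypothetical choices, not values from the BRD.

```python
import math

def animals_per_group(si_cutoff, sd=0.6, z_alpha=1.96, z_beta=0.84):
    """Approximate animals per group needed to detect a treated-group mean SI
    of `si_cutoff` against a control mean of 1.0, assuming a common standard
    deviation `sd` in SI units (two-sided 5% test, 80% power).
    n = 2 * (z_alpha + z_beta)^2 / d^2, with d the standardized effect size."""
    d = (si_cutoff - 1.0) / sd
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A large SI cutoff (3.0) is a big effect and needs few animals;
# a cutoff of 1.3 is a small effect and needs far more:
print(animals_per_group(3.0))
print(animals_per_group(1.3))
```

Under these assumed numbers the group size jumps from a handful of animals at SI 3.0 to dozens at SI 1.3, which is the panel's concern in miniature.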

Draft LLNA performance standards: these were proposed for the assessment of versions of the LLNA that vary from the ICCVAM-recommended LLNA only by using nonradioactive rather than radioactive methods for assessing lymphocyte proliferation in the draining [ indiscernible ] lymph nodes. A modified method should adhere to the ICCVAM LLNA procedure in all other aspects. Examples include the strain of mouse, the timing of exposure, the route and site of exposure, and the measured endpoint, which is lymphocyte proliferation. All procedural modifications should be accompanied by a scientific rationale. Other, more significant changes to the traditional LLNA would necessarily be subject to a more extensive evaluation and validation process. The essential test method components should follow the LLNA procedure described by ICCVAM and the EPA health effects guidelines. These require a concurrent positive control, five animals per dose group, and collection and analysis of the individual animal data. The only change would be the method used to analyze lymphocyte proliferation. There is a proposed list of reference substances that includes 22 substances, 18 required; of the 18, 13 are sensitizers and five are non-sensitizers, and there are four optional substances for demonstrating improved performance over the traditional LLNA. The chemicals are representative of the full range of responses in the LLNA, ranging from negative to strongly positive. These substances have available LLNA, guinea pig or [ indiscernible ] data.

We proposed accuracy standards based on a chemical-by-chemical match. An alternative protocol must obtain the correct call for all of the required reference substances on the list. For sensitizing substances, it must also obtain an EC3 threshold that falls within 0.5x to 2.0x of the EC3 included in the reference substances list. The set of optional substances could be used to demonstrate improved accuracy. The proposed intralaboratory reproducibility standards are that the ECt values for [ indiscernible ] should be derived on four separate occasions, with at least one week between tests, to make sure that they are independent. Acceptable reproducibility would be an ECt value for HCA that is in the 0.5x to 2.0x range. The proposed interlaboratory reproducibility standard is that two specific chemicals with known skin sensitizing potential, DNCB and HCA, are to be tested. The ECt values for DNCB and HCA should be derived at least once in at least three separate laboratories. Acceptable reproducibility would be within the 0.5x to 2.0x range.
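
The 0.5x-to-2.0x acceptance window described for these draft standards is easy to express directly. This is a sketch of the acceptance check only; the reference value of 10% and the four run values below are hypothetical.

```python
def within_reproducibility_range(value, reference, low=0.5, high=2.0):
    """True when a measured EC value falls within 0.5x to 2.0x of the
    reference EC value, the acceptance window in the draft standards."""
    return low * reference <= value <= high * reference

# Hypothetical HCA EC values from four runs, against a reference of 10%:
runs = [7.0, 12.5, 19.0, 21.0]
for value in runs:
    ok = within_reproducibility_range(value, 10.0)
    print(f"{value}% -> {'acceptable' if ok else 'outside 0.5x-2.0x window'}")
```

Note the window is multiplicative, so it is asymmetric around the reference on a linear scale: for a reference of 10%, values from 5% up to 20% pass.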

We then looked at the LLNA and its use for potency categorization: an evaluation of the usefulness and limitations of the LLNA as a stand-alone assay for the hazard categorization of skin sensitization potency. When we originally looked at the LLNA, it was evaluated for its validity in determining a yes/no response; either something was or was not a sensitizer. Here, we want to know if we can differentiate between strong and weak sensitizers. It was evaluated for its ability to categorize substances based on the EC3.
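
The EC3-based categorization being evaluated can be sketched as a simple cutoff rule. The 2% cutoff below is the GHS sub-category 1A convention for strong sensitizers; treat the function names and the example EC3 values as illustrative, and note the discussion that follows concludes this should only feed a weight-of-evidence evaluation, not stand alone.

```python
def ghs_subcategory(ec3_percent):
    """Illustrative GHS-style split on the LLNA EC3 (the concentration, in %,
    producing a stimulation index of 3): EC3 <= 2% suggests a strong
    (sub-category 1A) sensitizer, EC3 > 2% a weaker (1B) one."""
    if ec3_percent is None:
        return "non-sensitizer (no EC3)"
    return "1A (strong)" if ec3_percent <= 2.0 else "1B (weak/other)"

# Hypothetical EC3 values; None means no SI of 3 was reached at any dose.
for ec3 in (0.05, 1.5, 8.0, None):
    print(ec3, "->", ghs_subcategory(ec3))
```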

This is the proposed classification category for a strong sensitizer. This categorization has been proposed in the GHS, the Globally Harmonized System. We wanted to see if you could make this determination. The data that were analyzed included 170 substances with LLNA and human or guinea pig data: 112 substances with LLNA and human data, 97 with human NOELs and 15 non-sensitizers; 105 of these substances had LLNA and guinea pig data, 52 sensitizers and 53 non-sensitizers; and 47 substances had LLNA, human and guinea pig data.

At the optimized calculated LLNA EC3 value, there is approximately a 60% correct call for the two human cutoff values that were being considered, with better performance at the human cutoff value of 250 micrograms per square centimeter. The categorization for strong sensitizers is better than for weak sensitizers: 75% strong versus 53% weak. The guinea pig classification had poorer predictive performance, about 50% correct, than the LLNA EC3 for strong sensitizers, but better predictive performance for non-sensitizers. The draft ICCVAM recommendations were that although there is a significant positive correlation between LLNA EC3 values and human sensitization threshold doses, this correlation is not strong; there was an R squared of 0.405. The LLNA should not be used as a stand-alone test method for predicting skin sensitization potency categorization but could be used as part of a weight of evidence evaluation to discriminate between strong and weak sensitizers. The independent scientific peer review panel was held in March. It evaluated the modifications and new applications of the Murine Local Lymph Node Assay. It included international experts in dermatology, toxicology, biostatistics, immunology and veterinary medicine. Over 50 people from five countries attended. As I said, they had a mammoth amount of data to look at, and they did an absolutely outstanding job. Now, Dr. Luster will tell you what their recommendations were.

Okay. As Marilyn said, the panel review was held in early March in Washington, DC. The charge to the panel included reviewing the BRDs for completeness and identifying any errors or omissions, as well as determining how well the validation and acceptance criteria for toxicological methods were addressed; reviewing and commenting on the test methods' usefulness and limitations, on the recommended standardized protocols and on the method performance standards; and commenting on the proposed or suggested future studies.

These are the seven topics that were covered: the LLNA limit dose procedure; LLNA testing of mixtures, metals and aqueous solutions; the three non-radioisotopic methods; the performance standards; and last, the potency determination. I will present some of the highlights from the panel discussion; the details you can find in the text that is provided. Regarding the limit dose, again, this follows the traditional LLNA except that it uses only the highest dose instead of three doses. The panel agreed with ICCVAM that the limit dose should be used for hazard [ indiscernible ]. We thought it could be used whenever dose response data are not needed, to identify negatives versus positives, or following range finding studies. This would, ultimately, lead to a reduction in the use of animals. We suggested that the term reduced LLNA be adopted, rather than limit dose LLNA; this would be consistent with ICCVAM. We also suggested that a better description of what constitutes a high dose be provided. Normally, it is considered minor irritation, but some description of what that irritation would be would be helpful. We also commented on the stimulation index of three, which should include statistical comparison of control and experimental groups to determine the level of significance. Regarding the applicability domain, this is to identify metals, mixtures or aqueous solutions. We agreed with the ICCVAM recommendation on mixtures that there was insufficient comparative data with guinea pig or human sensitization tests to make a recommendation at this time. The panel appreciated the vastness of what constitutes a mixture and recommended trying to separate the mixtures into categories. This might provide an opportunity for some accuracy analysis on groups or different types of mixtures. Regarding metals, the panel agreed with the recommendation that the LLNA can be used to test metals except in the case of nickel.
The inconsistency with nickel sensitization might be the culprit. The panel would have liked to see more data on commercially used metals such as palladium, as well as some [ indiscernible ]. Regarding aqueous solutions, this was the same as for mixtures: there was insufficient comparative guinea pig and human data to make a recommendation. The panel also suggested trying to better define aqueous mixture solutions and, again, trying to subgroup these.

Regarding the three non-radioisotopic assays, the panel generally agreed with ICCVAM that all three of the assays reviewed might be useful for identifying sensitizers but had issues that preclude them from recommendation at this time, largely based on the fact that the original animal data were not available for review. We thought this was important. Regarding the LLNA [ indiscernible ] as a measurement, again, it might be useful, but the recommendation is contingent on review of additional data and information. The test requires pretreatment with 1% SLS to enhance sensitivity. This was discussed considerably; it might simply alter the skin permeability. We thought what was needed was to know whether the SLS pretreatment affects the lymph node directly or alters penetration of the test substance, as this might have the potential to induce increased numbers of false positives. There was also a concern over the length of treatment and whether clinical evidence of [ indiscernible ] had occurred. We thought that the accuracy analysis and range of chemicals tested were adequate, but there was a need for further interlaboratory validation.

Regarding the LLNA: BrdU, we thought this would be a useful test; again, this is contingent on the review of additional data and information. The data had a broad range of chemical classes tested but a fairly high rate of false positives; you can see three of 18 in the sensitivity range. I will mention it later, but the panel did not feel that one needed to have 100% [ indiscernible ] as far as sensitivity is concerned. As I said, I will talk more about that in a minute. There is a need for interlaboratory validation. We felt that the [ indiscernible ] phenotype was useful to reduce false positives as well as provide a mechanism of action, but it is probably too costly for routine analysis in most laboratories. Regarding the BrdU-ELISA, its use -- the above recommendation is contingent on review of additional data and information; in addition, a detailed protocol is required. The accuracy analysis was conducted appropriately, but there was a need for a broader balance of reference chemicals. The SI, the stimulation index, to obtain a positive response in this assay was reduced from 3.0, which is normally used, to 1.3. The panel does not advocate that a 3.0 must be used for a modified assay; they prefer more of a statistical approach. They were concerned that the 1.3 value was probably too low, as power calculations indicated that the group size would need to be increased to well over five. Marilyn had indicated the numbers in her slide of what that would bring it to. It would be a significant increase in animal use if you needed to use the 1.3 as a cutoff for positive stimulation. The interlaboratory validation was limited, or at least required additional tests -- it was limited to one class of chemicals, [ indiscernible ]. There was a need for interlaboratory validation, or at least that was not available at the time of the meeting.

Regarding the performance standards, the panel agreed that the use of non-radioisotopic methods constituted a minor modification, but thought that other modifications might also be minor and suggested defining those that are functionally similar to the LLNA; for example, measuring [ indiscernible ] proliferation only during the induction phase and not during the [ indiscernible ] phase. We also recommended expanding what would be considered a minor modification to include such areas as [ indiscernible ] or the strain or sex of the animal.

Continuing with the performance standards, we strongly agreed with the recommendation that individual animal data should be collected, not pooled data, as this would allow one to establish statistical [ indiscernible ]. There is a [ indiscernible ] that pooled data were adequate; it was previously indicated that there is an OECD guideline that uses pooled lymph nodes. We agreed that group sizes of five were okay. Positive controls should be run concurrently during the development of a modified test unless one is using a known positive. We thought that once the assay is established and being run routinely, a positive control can be run periodically, particularly if that laboratory is performing the LLNA routinely. We felt that the current draft should not include the EC3 value as a component of the accuracy evaluation. At this time the EC3 requirement means, again, that you have to match the concentration in your reference sample to what was previously published; it has to fall within a range around a particular concentration that was previously published in order for the reference sample to correlate. For use in hazard identification, we felt a modified method should be evaluated with all 22 substances on the ICCVAM list; this would include the 18 they suggested plus the four optional substances. We also felt that, ideally, a modified LLNA protocol should be equivalent to the traditional LLNA but felt this would be unlikely; we should not expect 100% concordance with the reference samples. However, we felt that if it was not fully concordant, then the provider should at least attempt to provide some rationale for why those substances were not positive, and give some priority to those based upon their potency; for example, if [ indiscernible ] is not concordant, that is a super strong sensitizer [ indiscernible ] that did not match with the previous LLNA.
I thought this would be a point of further discussion, but the panel all agreed on this: we felt some weight should be given to the balance between animal welfare and human safety. Even if the rationale cannot be explained, one does not have to reach 100% concordance if there is a significant savings in animals. However, obviously, you would not want to take this too far. We did not make any specific suggestions but wanted to have that flexibility in the future, where one does need a certain degree of concordance for a method to be approved. The panel considered using the EC range for the intra-laboratory reproducibility analysis. That is because there is a large database available for the positive control used in the intra-laboratory reproducibility test, which is HCA. We also agreed with the use of the EC range for the other analysis, again for the same reason. That analysis includes testing HCA and DNCB, both known positives; there is a large database, and one wants to find a consistent range in which those chemicals are positive in the traditional LLNA. As I mentioned before, we did not feel that one could use the EC3 range at this time for the reference substances. That is because there is not a large database for many of those chemicals in the literature as far as the actual concentration that will be positive. We suggested exchanging some of those compounds for ones where there is sufficient EC3 data for that particular chemical in the [ indiscernible ] assay, or retaining those reference standards without using an EC3 but adding an EC3 later, once historical data accumulate and we can get a very good idea of what the exact EC concentration would be that would be positive. Regarding potency determinations, the panel agreed with ICCVAM that the assay should not be considered a stand-alone assay for potency categorization, but rather should be used in a weight-of-evidence approach; one should also be looking at human patch testing, potency determinations from peptide reactivity, etc.
We thought the reason for the inability to make that correlation at this time was not that there is insufficient data with the LLNA but that there is [ indiscernible ]. That is why the analysis could not be fully conducted. Did I cover everything? I just want to mention that this peer review panel was a very good group, very diverse, that worked well together. I want to mention Kim Henrick and [ indiscernible ], the panel subgroup members who put this together. A lot of writing had to be done in subgroups, and these people chaired those subgroups. I thank them for that. I thank Marilyn also; she sat next to me during the whole four days and made sure I did not fall asleep.

Thank you, Dr. Luster. Are there questions or points of clarification for Dr. Luster or Dr. Wind?

Can someone tell me what the dose of BrdU and the sacrifice time are following the application of the chemical? How much BrdU is injected into the animal? It is on a milligram per kg basis, I am sure. Is it 50? Is it 100? I do BrdU assays in the brain and retina. I am wondering what the concentration is for the assay and the sacrifice time. I read the whole report and could not find it anywhere.

[ Audio/Speaker not clear].

It makes a big difference because the BrdU has to be [ indiscernible ]. I also just note for the record that BrdU is very expensive. Radioisotopes are very expensive and hazardous, but BrdU is expensive and has animal hazards as far as handling of the animal and [ indiscernible ] of the animals. The veterinarians here are much more qualified to talk about this. I have to file special [ indiscernible ] procedures for the BrdU. I would like to follow this up later. I am curious.

[ Audio/Speaker not clear].

You should answer into the microphone. Would you hit the microphone, please.

I can tell you that the collection is 24 hours after the injection. I need to find the dose for you.

This is an IP injection, right, and it needs to get to the ears. I have a few comments that I can make later. Just thinking about it after reading this; we will talk later.

Any other questions or clarification?

Identify yourself, please.

[ Audio/Speaker not clear].

So, we actually do not have a dose per weight of animal, but we do have a volume: 200 microliters per mouse.

It is mice, correct?

I misspoke. It is actually 200 microliters per mouse, and five hours after the BrdU administration the [ indiscernible ] lymph nodes are processed. That should be clarified as well. We are talking about the flow cytometry protocol that you mentioned; the one that measures the BrdU by ELISA is a 24-hour post-injection collection.

If you could get the concentration, I would be interested to know that.

I do not know if it is appropriate for him to answer that question, but he could probably give you the specifics.

Do you have a dose? Do you know what the dose is, George?

The dose is administered by the weight of the animal and not the [ indiscernible ]. It is a dose, not a concentration. It is 20 microliters per gram of weight. For a 25-gram mouse, they would get 250 microliters.

Can you give me the milligrams per kg?

The concentration of the BrdU injected into the animal is -- I believe it is 100 --

It has to be per body weight. It is the milligrams per kg.

I will give you the dose. The dose is the concentration times the volume. The volume I gave, which is 20 microliters per gram, times 100 milligrams per mL. By the way, for the IP injection, the kinetics have been done and are identical between a two and 10 hour range, where five hours is the common sacrifice time. Does that answer everything?

Yes. I think that answers the question. You have to do the conversion. It sounds like they make a standard solution and vary the volume of solution by the weight of the mouse.

It is in the protocol, which is within the documents, the background review documents.
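The conversion described in this exchange (a fixed injection volume per gram of body weight times a stock concentration) can be sketched as a small calculation. The function name is mine; the input values (20 microliters per gram and 100 milligrams per mL) are the ones stated in the discussion above.

```python
# Sketch of the BrdU dose conversion discussed above: a per-weight
# injection volume (uL per gram of body weight) of a stock solution
# (mg per mL) implies a fixed dose in mg per kg, independent of the
# individual animal's weight.

def brdu_dose_mg_per_kg(volume_ul_per_g: float, conc_mg_per_ml: float) -> float:
    """Convert injection volume per gram and stock concentration
    into a dose in mg per kg body weight."""
    # mg per gram of body weight = (uL/g) * (mg/mL) / (1000 uL/mL)
    mg_per_g = volume_ul_per_g * conc_mg_per_ml / 1000.0
    return mg_per_g * 1000.0  # grams of body weight -> kilograms

print(brdu_dose_mg_per_kg(20.0, 100.0))  # 2000.0 mg/kg
```

So varying the injected volume by the weight of the mouse, as described, keeps the delivered dose constant at 2000 mg/kg under these assumed values.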

Okay. Thank you. Are there any other clarifying questions for either Dr. Luster or Dr. Wind? Okay. What we will do is move on to the discussion period. We are going to move into the public comment period. We have some reviews from Mr. [ indiscernible ] and, on the local lymph node assay, comments that would include Dr. Cunningham, [ indiscernible ], and Dr. DeGeorge. Is anyone registered for public comment?

Dr. DeGeorge has registered as a public commenter for this time.

Dr. DeGeorge will comment not as a member of SACATM but as a public commenter.

Unfortunately, the comments on the two slide presentations that you saw are still being duplicated for distribution to the members of the panel; they simply consist of small annotations to a few of the slides that you have received in the handout. So, that should be forthcoming. I have to proceed without that. I will go ahead. Although my laboratory conducts the LLNA, I am not specifically representing a lab. I am here on the basis of my experience of conducting over 400 local lymph node assays on over 200 individual, separate chemical entities.

I was going to bring up, as was already mentioned, that IV dosing of BrdU can be done. It is technically more difficult and is prone to more mis-injections than the IP injection, which is why the IP injection was chosen. However, it can be done either way, and they can be compared side by side. [ indiscernible ] it is less expensive than any radioactive compound, whether it is [ indiscernible ]. I would like to ask the SACATM panel, as they address the discussion questions, which are on the first page of Section seven, to keep in mind and to make specific recommendations that are lacking in what has transpired in previous expert reviews and the tremendous amount of work that has been presented to you in compressed form. Question two of the discussion questions: do you have any comments on the panel's conclusions and the extent to which the ICCVAM [ indiscernible ] acceptance of the test methods has been addressed appropriately? I think there is a problem. Alternately, throughout the documents, the background review documents, and the presentations, which you will get annotated, there is a reference to using 18 performance test substances. Then there are references to using 22 performance test substances. The difference, and here come the photocopied, annotated slides, the difference is the inclusion of two false positives and two false negatives for those who would like to prove that a modified LLNA is better than the existing LLNA. Although it is not mentioned, I would assume that if you did not get a correct answer on one of the 18 known positives and negatives, of which 13 are positive and five are negative, but did the four optional ones and got those correct, you could make up for not getting 100%. Today was the first time, having attended all of the previous LLNA reviews, that I heard Dr. 
Luster mention that 100% chemical-for-chemical identical results would not be necessary for modified LLNAs to be accepted. In the background review documents, alternately, the statisticians, a subgroup of the expert peer review panel, and ICCVAM specifically say that you should conduct accuracy calculations and statistics. If you were required to get 18 out of 18 chemicals correct, there would be no reason to require, in seven separate areas, calculations of accuracy, selectivity, and sensitivity; that number would always be 100%, and if it was anything less than 100%, you would fail. I believe that the true intention is not to hold the modified LLNAs to a higher standard than the original, traditional LLNA, which had an accuracy of between 72 and 86%, depending on whether you were comparing it to the guinea pig or to humans. With respect to the flow cytometry LLNA, which I know the most about, it was originally designed to use a wide range of chemicals, over 40. In retrospect, that hurt; we included what are termed equivocal substances. This is a misnomer; the term [ indiscernible ] should have been used. In the future it should be discouraged to pick compounds that are not clearly positive or negative in the gold standard, which at that time we considered to be the radioactive LLNA, and we wanted to get at least 90% the same as that. Now the gold standard has switched to the human and has swung back, but the human data is not really complete. We have one study demonstrating that a performance standard positive sensitizer is positive. So out of 18, five are non-sensitizers and 13 are sensitizers, and five of those have only been tested once. In addition, another 13 have only been tested twice.

Two more minutes, George.

So there will be more data on the modified LLNA than the data it is being compared to, which, by the way, is derived from the literature and not studies. So I call upon SACATM to espouse criteria for validation that specify a minimum accuracy. I offer as a reasonable number 90% concordance or accuracy, and in the case of specificity and selectivity, or positive and negative, 80%. These are well above the original standards and are commonly recognized to be acceptable. Question number four, please address C and D. SACATM, please address the test method performance standards. We have chemicals in the performance standards, but in draft form; there is not a lot of confidence in the data. It is mentioned throughout the background documents that substitutes or alternative compounds could not be used, even where they are robust. I ask that SACATM put into the record that that be allowed. Letter D: there is a discussion of proposed additional studies. The flow [ indiscernible ] assay as well as the other two were all judged to be in need of additional studies. However, there was no indication whether this was a requirement --

Okay. You need to wrap up, Dr. DeGeorge.

If this is a requirement, it should be explicitly specified, like which additional studies need to be done, because otherwise it would be in conflict with the bulk of the documents, which say that 18 chemicals need to be tested. Last point, sorry: inter-laboratory validation is noted on the handout I gave you. You cannot move into inter-laboratory validation with animals until an intra-laboratory validation is completed. The second priority should be inter-laboratory validation, which must be based on the completion and approval of the intra-laboratory work. So that is a bullet point that should be moved elsewhere and should be considered and discussed here.

Thank you very much. Okay. We also have public comment from Kate Willet. Please go to a microphone and introduce yourself and your affiliation.

Hello. I am Kate Willet. I represent animal welfare. I am in the surprising position of congratulating ICCVAM. This was one of our major concerns last year, that an enormous amount of resources had been diverted. It has happened very quickly and I appreciate that. I have one comment on the performance standards; this was included in written comments that were provided to ICCVAM, and it concerns the number of reference compounds. I know Dr. DeGeorge just discussed this. From our perspective, the important point is that these performance standards are for an alternative detection method; that is the only difference between the protocols. I can understand developing performance standards for the method in general, but if all you are comparing is the detection method, the radioactive versus the BrdU, it seems to me that the number of reference compounds is excessive. If you are only comparing detection methods, you should use a few compounds that have highly reliable data and challenge the ends of the spectrum in terms of sensitivity. And that was the charge of this set of performance standards. That is my comment. The other is a question for ICCVAM and SACATM: what are your plans for follow-up on some of these assays? A couple of them were left with no recommendation, pending additional data. It sounds like the additional data is coming in. How is ICCVAM going to review that? Is there a schedule or plan? My concern is that, while I understand the value of expanding the domain of the LLNA because it reduces animals, it is an animal reduction method, and it would be nice to spend ICCVAM resources elsewhere. The sooner you can get on with other things, the better. My question is: what are the plans for wrapping up this review?

Thank you. Would you like to answer that question?

You are correct that we have had more data come in. Our intention is that when we get all of the data, we will reconvene the panel and have them look at the new data so that we can make recommendations.

Okay. Thank you very much. Any other public comments? Okay. What we will do is move on to the SACATM discussion. If we could have the questions put up, that would be great. I will read the discussion questions. One: do you have any comments on the panel's conclusions and recommendations on the BrdU BRD? Two: do you have any comments on the panel's conclusions and recommendations in terms of the extent to which each of the applicable criteria for validation and acceptance of alternative test methods has been addressed appropriately in each draft test method BRD or BRD addendum? Three: do you have any comments on the draft test method recommendations for the seven methods and applications? Four: do you have any comments on the panel's comments, conclusions, or recommendations for the LLNA methods and applications regarding their usefulness and limitations, their recommended test method protocols, test method performance standards, or their proposed additional studies? Five: do you have any comments on the panel's comments, conclusions, or recommendations regarding the ICCVAM draft LLNA performance standards? Our lead discussants for these questions are Drs. Brown, Charles, Don, and Ehrich. Dr. Ehrich is not here, but she did provide written comments. I think we will start with those and read them into the record.

These are written comments submitted by Dr. Ehrich. On the LLNA limit dose procedure: 153 of 153 non-sensitizing agents detected, 318 sensitizing agents detected. The numbers make this assay look good. For testing aqueous solutions, metals, and mixtures: 18 tested, some without guinea pig data validation. Seventeen tested; 12 or 14 sensitizers, with two of five false positive. Not enough to say how good this will be for metals. Twenty-one agents, at least 20 percent water, tested. That is not good enough, so I can't offer opinions about this. On the non-radioactive LLNA protocol, the LLNA test method: performance > 90% for the 19 plus 10 sensitizer [ indiscernible ] non-sensitizers examined, with false positives less than 10%. I am not sure if this would be good enough for metals, mixtures, or aqueous solutions. For the non-radioactive LLNA protocol, the LLNA-BrdU test method with flow [ indiscernible ]: used with 3-5 test cases, some gave equivocal results, and no multi-lab studies yet. Reference studies need work. This is promising, but not ready yet. On the LLNA BrdU-ELISA test method: this is still in progress. Twenty-three [ indiscernible ] tested with accuracy of 80%. No detailed protocol yet; hard to make judgments. On the draft ICCVAM LLNA performance standards: no comment. On the use of the LLNA for potency determinations: the purpose is unclear. Was this for a validation study? That is all.

Thank you. Now, for the lead discussants that are here, I will try alphabetical order. Dr. Brown, you are first.

I was a bit overwhelmed by the amount of material we had to go through. I admit I focused on the final conclusions of the panel, relying on their expertise and the number of individuals and the expertise brought to that panel. I have to say I was impressed with the process, the number of individuals, and the thoroughness of this report. I don't have a lot of comments other than disappointment that the data did not come in in time and that we can't make more conclusive recommendations from all this information. When you are bringing together 50 individuals globally for this amount of time and effort, I don't know if there is a way to make sure we have all of the data that maybe could have made for a more conclusive report on the part of the review panel; that data came in just fractionally too late. If there is a way to make sure we have the data before we actually set the meeting, but maybe that is not possible based upon people's schedules. I can understand the logistics of that. I also share some of the comments that were mentioned by the public, which is: what are the next steps? I think closing this out and making concrete recommendations is important. From what I hear in the field, there is a reluctance toward the LLNA based upon the radioactivity. If we could get to something that did not have a radioactive basis for the test, we could get a lot more acceptance. That is the bottom line for me: getting this out there and getting people using it. I would urge this to be a very high priority, to get this information out and get this finished. I thought that the process was very well done, and I think that has a lot to do with the first three questions. That is, were things complete? Were there omissions? I did not catch any. I was also unclear on the purpose of the performance standards and how they would be used. I think another one of the public comments was: what is the gold standard? 
I think we need to be sure we make that clear when asking people to provide data. Is it concordance with human data, the approved alternatives, other animal tests; what are we asking them to compare to? It would seem to me that the platinum standard is really comparing with what happens in humans, because that is what we are trying to mimic. If we are relying on animal data or other alternative data when we don't have the human data, then I think we can use that as an alternative, but one of the purposes is to try to improve our predictive tests. So if you are comparing to an animal test or another alternative, maybe it is not as good; but if you compare to human data and it does well, then it is a better test. That is sort of the platinum standard. Early on with these alternatives, we have to accept the fact that we may have small n's, because some of the alternatives have not been accepted or actively used as much as we would like. There may be nothing we can do about that.

Thank you, Dr. Brown. Dr. Stokes has comments.

Thank you for your comments. I just want to reiterate a couple of points. Number one, as was pointed out, the community did work very swiftly once this nomination was made. We were creating the background review documents; these were not submitted to us. That is the difference. This was a huge undertaking.

Maybe that is why they were so thorough.

We did not anticipate that we would have difficulty obtaining individual animal data. Based upon our review of the original local lymph node assay we knew that this was -- it is in our submission guidelines. We undertook this hoping that the data would be readily available. Just to give some insight into that: their practice is that they will not provide you their data until there is a peer-reviewed literature publication. That is not typically how we operate in the United States; we go ahead and provide that information. That is one of the reasons there was a delay, and some of the data we did not receive; we had to defer. And then Dr. DeGeorge shared with us that this data had been created over the past eight years. It was a huge undertaking in terms of time and effort to obtain the original records that we were asking for; they did not have sufficient time or resources. Subsequent to that meeting, we were hoping to be able to finish this. We asked them to provide that to us. We have gotten some of that data; hopefully we will get the rest. We will schedule another expedited peer review meeting to follow up on that. We know that there is a lot of interest in these methods because of the advantages that they offer. We want to get that information out. As I mentioned yesterday, agencies have a traditional method that they use to make decisions right now. So when we have a new method, we always compare its performance to that. In this case you see the comparisons of the new method to the LLNA, which is an accepted method; you see it compared to the traditional method, the guinea pig test, because those are what agencies will accept right now; and we compare it to all the human data we have. We are fortunate that we have considerable human data for sensitization testing. We will continue to do that. As I mentioned today, the LLNA was able to be accepted not because it could predict the traditional method so well but because its performance for predicting the human was comparable to the traditional. 
We will continue to do that. Again, it depends upon the data provided. I do want to comment that we were very fortunate in getting a most robust response from industry. You saw the database of over 400 sets of data for the LLNA, compared to the two hundred that we have with the original. We were very pleased with the willingness of industry to contribute.

Dr. Charles

I would like to commend the expert panel for going through all of the data and coming up with recommendations in the timeframe that they had. I have been through this before, and it is not an easy task. To comment specifically on what we had to look over: in terms of the limit dose, I concur with the inclusion of a discussion on how to determine the maximum dose. If you are only going to be using a single dose in a screening process, you have to be able to define what you consider excessive irritation. You have to really define your endpoint; otherwise you have a bell-shaped response curve, you are on the wrong point of the curve, and you can have false negatives. I think that needs to be in there. Also, I agree with the panel on modifying the requirement that a concurrent strong positive control should be performed for every single test. I know that this is also being proposed for the revised toxicity test guidance. I think it would be adding at least 10% to the number of animals used in the test if the positive control is run for each test. The positive control is merely to see [ indiscernible ]; all it is telling you is yes or no, so you could use just a couple of animals instead of five. Or, as some of the panel recommended, if you are doing this on a continuous basis, doing this every week, do you need to do this more than four times a year to prove that your methodology is consistent and works? In terms of the LLNA, I concur with the need for the weak sensitizers, especially with regard to the fact that you are adding in 1% SLS. However, even with three animals there is pretty good correlation with the traditional LLNA, so that goes to the panel's recommendation that you need five animals; I concur that you probably only need four, especially if you have, at least so far, adequate power in the alternative test systems that you are looking at. 
Again, development of [ indiscernible ] is a desirable outcome. I propose that we follow up in the hope that we can have an alternative to the radioisotope method. One more point. In regard to the test method performance standards and the number of chemicals used to validate: five of them were what they considered equivocal or had only one test performed on them. But even the panel said that validating the new against the old was going to be extremely difficult. It is reasonable to replace chemicals with chemicals where you have more robust data. That is all I have.

Thank you. Dr. Don?

Needless to say, the panel did a wonderful job. I only have one or two comments. The tables summarizing the power analysis for the [ indiscernible ] modified LLNA methods are not as transparent as they should be. That is, more footnotes or annotations are needed for tables 1.1 and 5.1 in the report. For example, the mean response and the standard deviation for the control group are not given in each of the tables; of course, they can be back-calculated if one is familiar with the analysis procedure. This information is important up front because the standard deviation of the response of the control group has a direct impact on the power calculations. This is because the standard deviation for the control group is assumed to be the standard deviation of the treatment group as well. But more importantly, the standard deviation, or the variance, of the control group seems to be vehicle driven, or vehicle specific, as demonstrated, for example, by the power calculated for the data sets shown in tables 4-1, which was low because one vehicle was used as the control group. But when another vehicle was used as the control group, the standard deviation for the control group was much smaller, and hence the power calculated was much higher, up to 95% with only five animals. To me the important point being made here is that if the standard deviation, or the variability, of the response of the control group is vehicle driven, then it is likely that the accuracy of the method could also be vehicle driven. If it is too late for this analysis, this is a message for the future, something worth considering for future studies. That is all I have to say.
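The point about the control-group standard deviation driving the power calculation can be sketched numerically. This is a minimal two-group power approximation under the stated assumption that the treatment SD equals the control SD; the function name and the numeric inputs are illustrative, not values taken from the report's tables.

```python
# Minimal sketch: power of a one-sided two-sample z-test with n
# animals per group. Halving the control-group SD (e.g. a different
# vehicle) sharply raises the power for the same mean difference.
from statistics import NormalDist

def power_two_sample(delta: float, sd: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power to detect a mean difference `delta` between
    two groups of size n, each with standard deviation `sd`."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1.0 - alpha)       # one-sided critical value
    ncp = delta / (sd * (2.0 / n) ** 0.5)   # standardized effect size
    return nd.cdf(ncp - z_alpha)

# Illustrative values only: same effect, two different control SDs.
print(round(power_two_sample(delta=2.0, sd=2.0, n=5), 2))  # low power
print(round(power_two_sample(delta=2.0, sd=1.0, n=5), 2))  # much higher
```

This mirrors the observation above: with only five animals per group, a vehicle that halves the control variability can move the calculated power from well under 50% to above 90%.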

Thank you. Other members of SACATM?

I would also like to commend the peer review panel on the tremendous job with the amount of data submitted. However, I think that perhaps they should have taken a little more time to evaluate it, rather than trying to meet a deadline or do it within this few months' time frame, because overall I found that some of the conclusions, the statistical analysis, and the data that were presented were, from a scientific and toxicological point of view, rather confusing. The conclusions weren't exactly matching the data. For instance, there were major changes throughout the study. Major changes. Reading through most of the report, it seemed that chemicals were added to and taken out of the protocols. I'm not sure if other chemicals were added in. If chemicals were taken out, what would be the results of the analysis during the conduct of the studies, especially if the study has been going on for so many years? There is also perhaps a bigger problem with the reference standards. As was mentioned in the presentation, about 10 of the 22 chemicals were only tested in one study. That I would find very difficult to compare. Another four had just two performance studies. That makes 14 out of 22, the majority, that were done two times or fewer. This would not be acceptable in our lab if my students did an experiment on one chemical just one time. I also found it a little confusing as to the specificity and sensitivity in comparison with the LLNA, the non-radioactive methods, and the lack of the human data. On the one hand, for the traditional LLNA, we are quoting specificities with false positives of up to 40%. But yet in the presentation of false positives, it was 17%. There seems to have been a correction: it said three out of 48 substances, while the presentation had three out of 18. I presume that was a typo; three of 48 doesn't come out to 17%. The BrdU study did mention that it presented over 40 substances. 
So it is not clear what percentage is being used, whether it is three of 48 or three of 18, to determine the false positive rate. If it is in fact only 6%, 6.25% actually, then that would seem a better false positive rate than the traditional LLNA. Fewer false positives are often considered to be an advantage. Another thing is the use of 18 chemicals versus 22 chemicals. Those other four chemicals are considered optional, and I am not sure what the purpose of the optional chemicals is. They are false positive and false negative chemicals; perhaps that should be accounted for, because if they are false positives and false negatives, in order to get concordance with the traditional LLNA you would have to make sure that your false positives and false negatives with the non-radioactive methods match the traditional LLNA. That means that four out of the 22 would have to be false. Does that constitute 100% concordance? Finally, in terms of additional studies: I am not privy to the cost of the studies, but I presume, with the number of animals and the labs that were asked to do these studies, this must have cost NICEATM and ICCVAM millions. To require that additional studies be done: wouldn't it be more feasible and cost-effective to wait for the additional information to come in, considering the time constraints of the peer review panel? Or ask the original laboratories to look back, give them more time, or even perhaps reduce some of the studies to a limited number and clarify the data that has been presented. That is all I have to say. Thank you.
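The arithmetic being questioned above is easy to check: three false positives out of 48 substances versus three out of 18 give very different rates. A quick calculation (the function name is mine):

```python
# Check of the false positive rates discussed above: 3/48 vs 3/18.

def false_positive_rate(fp: int, n: int) -> float:
    """False positive rate as a percentage of n substances tested."""
    return 100.0 * fp / n

print(round(false_positive_rate(3, 48), 2))  # 6.25, the "6.25% actually" figure
print(round(false_positive_rate(3, 18), 2))  # 16.67, roughly the 17% on the slide
```

So the 17% figure only follows from the three-of-18 denominator; three of 48 gives the 6.25% rate the speaker cites.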

Thank you. Dr. Stokes? Comments?

With regard to the data I think there is a little bit of confusion. We did get summary data on but the results of the three studies. It was just that we did not have the individual animal data. So we have summary data on responses. Where there were three individual or five individual animals we did not have the records. We typically require that there be a quality assurance reports that it was done. We also wanted is to be provided to the peer review panel. We did have summary data in terms of sensitivity and specificity. For the optional chemicals there were 18 chemicals selected. There were four additional optional. So 18. There was a considerable amount of time spent by the immunotoxins a working group in coming up with those 18. We started out looking at all of the chemicals that were available. We applied the different criteria that are listed under our performance did it's as to what characteristics should accompany those. We picked chemicals that did not produce equivocal response is. We picked chemicals that had data in but the traditional method, at least one of the candidate methods. It also had a human data. When we applied that criteria that's definitely reduced the number of chemicals we had to choose from. We also wanted to provide a range of diversity in terms of the vehicles that reduced, the chemical characteristics of each of the substances. And we also wanted to have a range of potency and terms of responses that are shown. So of the 13 positive chemicals with only 13 positives and those kinds of criteria being applied you can see that it was tough to come up with us. So as a result some of those substances only have in fact one study. I think everyone would agree that ideally it would be better to have multiple studies for each of those substances. Again, I remind you that these are draft recommendations. We will be taken into consideration your comments. 
We will also be taking into consideration all of the comments, conclusions, and recommendations from both the peer review panel and the public. So we have not made those revisions yet. We will do that beginning shortly after this meeting. We do appreciate your comments. Those will help us in going back and revising the performance standards that we have developed.

I just wanted to comment and add to that. One of the slides concluded that it may not be necessary to reach the same level of accuracy. I'm not sure what "level of accuracy" means. I think there should be numbers associated with accuracy, specificity, and sensitivity. 90% accuracy would be considered acceptable; 80% sensitivity and specificity, I think, would also be scientifically justified. Things like that would make this summary, the future summaries, and the evaluation of the rest of the data much more clear.
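To illustrate the point about attaching concrete numbers to performance, here is a minimal sketch of how accuracy, sensitivity, and specificity are computed from confusion-matrix counts and checked against the 90%/80% targets mentioned in the comment. The counts below are invented for illustration, not taken from the LLNA evaluation.

```python
# Illustration only: the confusion-matrix counts below are invented,
# not data from the LLNA evaluation.

def performance(tp: int, tn: int, fp: int, fn: int):
    """Return (accuracy, sensitivity, specificity) as fractions."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = performance(tp=12, tn=4, fp=1, fn=1)
# Compare against the thresholds suggested in the comment:
meets_targets = acc >= 0.90 and sens >= 0.80 and spec >= 0.80
```

Stating targets this way makes an evaluation reproducible: anyone can recompute the three fractions from the published counts and see whether each threshold is met.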

Thank you. Any other comments from SACATM? Dr. Fox?

I have a few questions and comments. I'd like to ask Dr. Lester to give us the biological basis of this from a molecular and biological perspective. In addition to your review, I would particularly like to know: this is cell cycle reentry. Do you know whether or not mitochondrial DNA is also being measured at the same time? Can you give us an overview of the actual biological basis of this? I have a couple of things after that.

The bottom line is that it is looking at the induction of the response. The antigen is picked up by the dendritic cells in the skin and translocated into the lymph node. At that point it interacts with cells that are coming through the circulation, and if a cell recognizes that particular antigen it undergoes proliferation. So it is a proliferation event that eventually leads to elicitation, the clinical response you're familiar with in hypersensitivity. And I don't think the mitochondrial DNA does much proliferation; that is, I don't think it undergoes a lot of replication. It's mostly nuclear DNA that they are measuring. That method, I think, will pick up all DNA proliferation, but it measures primarily nuclear DNA. Is that sufficient?

I just wanted read into the record exactly what it detects biologically, because I wanted to follow it up with two other questions. Like the ocular tests that we reviewed for validation, we recommended that histopathology be involved, and I don't see any recommendation for histopathology here. When I read the whole assay description, it says no histopathology was done, although it does make a note that it should be considered. That is a pretty weak recommendation. I think, consistent or parallel with our previous recommendations for ocular irritancy, we should think about establishing histopathology if we are going to continue with the LLNA. I guess my final question really is: there may still be a better alternative than this. It seems to me realistically that there has to be a way to assess toxicity and skin irritation better than applying the chemical to the guinea pig or mouse and looking at lymphocyte activation; there must be a better way to do this. I saw no mention at all of any alternative to using whole animals in this report. Maybe I missed it; it is a huge report, and I did try to read it before I came. But it would be really important for us to discuss a non-animal alternative, it seems to me. One final comment: I did calculate the dose, and it is a huge dose, by the way. BrdU actually damages the nucleus. That is a pretty high dose, but I am more interested in a complete discussion of alternatives and wondered if we could take a little bit of time. Some of the people here probably know more about it than anybody else. Thank you.

I just want to respond. [ indiscernible ] looked up one of the publications from Japan. It was 5 milligrams -- 5 milligrams total. The other thing: Dr. Kojima will be talking about this. There is a validation study currently being planned, which we are also providing input into, that will be led by [ indiscernible ] on an in vitro method. We are in fact pursuing that and will provide input into what chemicals are used for that study.

Can you give us any information? Is it public information?

I will defer. If you could maybe briefly describe the in vitro sensitization method.

So during the validation study --

That doesn't give me any information on any of the biology. Mike, do you have it?

I would guess it is similar to the supporting documents. They are looking at activation of dendritic cells, so that process is involved in activation, looking at activation markers. I think CD1, is that correct? CD1 is the antigen that is activated. CD86. There are several. And just to respond, I think that was something we discussed during the panel meeting as well. We made a very strong suggestion that there be some histology associated with the reduced LLNA, in order to determine how high the dose is and whether it is excessive. So histology was part of that discussion. I don't think it came out prominently, but it is embedded.

Okay. My final response would be: maybe there is a way to encourage the alternative to animal use. We want to provide much more data and details about the alternative, whether it is a [ indiscernible ]. I would be really interested in knowing, as a scientist. I wanted to applaud it. I think this approach is the correct way to go. This is pretty easy; this is routine.

We can certainly provide that at the next committee meeting. A more detailed description --

[Audio and video feed have stopped]

Was it the change in time? I am not sure if this was clear to me. I just find that there were multiple questions. We come back and, I use this chance to say, we have to generate all this data, all these different compounds. Then it's almost like we are starting over with each one of these in terms of evaluating. I think I can envision some much simpler approaches if the question being asked was: is BrdU a more appropriate indicator, a more appropriate measure? Am I making my point clear?

I think so. Thank you very much. I think you just threw a long pass.

I think I can try to answer that as concisely as I can. In every evaluation we do, we are looking at how reliable a method is and how accurate or relevant it is in predicting the particular endpoint used for classification. So the question was, given that you are only using one dose level rather than three in the case of the alternative methods, each method was compared independently against the traditional radioactive LLNA. And even when you take into account the small changes in protocol, one of the issues being addressed was whether those changes were considered minor changes or major changes, where a major change might have an impact on the performance. In both the ICCVAM guidelines on the local lymph node assay and the test guideline and the EPA guidelines, it is specified that you use female mice. So one of the things is that there is a statement in there that you can use another strain of mouse, or another sex, if you can demonstrate that it doesn't impact the performance. And performance is assessed through accuracy and reliability. That is what we did with all of these. In addition, we did two other things. We developed the draft performance standards, because they weren't around at the time the original local lymph node assay was evaluated; performance standards were something we introduced later. Performance standards are used to help accelerate the validation of an alternative test method that is functionally and mechanistically similar. If those performance standards had existed, they would have been used both in the development of the nonradioactive methods and in the evaluation. Considering that they didn't exist, we are not holding those methods to that standard, but we are looking and seeing how they perform in that context. The other thing we did is we looked at the applicability domain, because the traditional local lymph node assay was said not to be used for metals.
There wasn't enough data on complex mixtures, and there wasn't enough data on aqueous solutions. That was a re-evaluation of the radioactive method which might have impacted the nonradioactive methods as well. So it was a fairly complicated scenario that the panel had to go through. In fact, we tried to set it up in sequential fashion as to which test method they looked at first, in order to prepare them for what they evaluated later during the meeting. But it was very perplexing.

I think Dr. Wind would like to supplement that answer.

Yes, I just want to make sure that everyone understands that when the request was made, we knew that there were nonradioactive test methods being developed. As indicated, one of the reasons the LLNA wasn't being used is that there are a number of countries and a number of places where the use of radioactivity is not allowed. Also, the difficulties in using radioactivity and conforming to the rules around it make it difficult to use. So we thought it was important that we look at the nonradioactive LLNA methods. That being said, we did not develop those methods. They were methods that were under development and were brought to us when we indicated that we were having a review of the LLNA. So the three methods that we reviewed, we reviewed because they were at a stage where they were sufficiently along that we could review them. So there were a bunch of different questions being asked. In terms of the performance standards, I think Ray was right on. We have performance standards that make it easier for methods to be developed without having to go through the same rigorous validation process. In terms of potency, that was something that was being pushed by the Europeans. All we needed to do was [ indiscernible ] and you could make a determination of different potencies. And we felt that was very important, particularly since under the GHS there was an expert group looking at the question of how to use the LLNA in determining classification. So that is why we did that. But that was a totally different question being asked. The problem is that this panel addressed numerous questions, which is why it is so confusing.

Thank you. That was helpful. I want to get the elephant out in the room. There is a continuing concern in some quarters that we have created such a complex structure for validation of these new tests that when we come back 10 years from now, at the 20th anniversary, we are only going to have a few additional tests. We are going to have other agencies follow suit. They call it RSD. So we will have more and more of what I will call ad hoc tests that will be only marginally validated.

I am certainly agreeable to that. When you say our role -- my view is that it is an advisory committee. Sometimes you respond to requests to address issues; in other cases you are free to offer unsolicited advice when you think it appropriate. In my view, this meeting has been one of our best, because I think we got a good mixture of both. I don't put too many bounds on advisory committees, at least not in my service.

I appreciate your insights and precautionary concerns. What ICCVAM has advocated from the very beginning is interacting with people who want to develop an assay. We would like them to come and talk to us. We connect them with regulatory scientists who have experience with that particular endpoint, before the validation studies. That way we can work with them and talk with them about the chemicals that can be used to convince regulatory agencies that this is as good as the method it is proposed to replace. If you just step back and look at the number of chemicals and the number of laboratories that have been used for these three methods: if the performance standards had been available, significantly fewer animals would have been used and it would have been a lot less expensive. They generated probably three times as much data as we have proposed in the draft performance standards. So this is our attempt to get ahead of that curve and get these out there. We are doing this routinely with every new method; we are providing performance standards. If we had done this in 1998, it certainly would have benefited and expedited the development and validation. So we are taking the time for every new method.

Dr. Fox?

Just as a follow-up to that, it seems that we often buy into the question immediately, before sitting back far enough and asking if the question is a valid question. There is that saying about your urgency isn't my emergency. Sometimes somebody asks us to look at something, but maybe something else is more important, instead of just jumping in and starting on this one. So I would concur with Roger. I don't understand this: just because you're doing the assay a little differently -- the half-life is only two hours anyhow. Maybe there is a different way to approach it up front: bring some of these things to your advisory committee and ask some of the more scientifically oriented people, the basic scientists, whether or not this is an appropriate question or assay, or something more scientific. Maybe there is a better way to be thinking about the question, as opposed to buying into the question right up front. Does that make sense?

It does. That sounds like the ideal way to proceed in the future whenever possible.

Any other comments from SACATM? Okay. Thank you all very much. I think at this point we will take our break. Why don't we try to get going promptly at 10 after 11.

Actually, the photographers would like to take another photograph. I would like us to go right across the hall. If we can do that quickly and then get on with the break.

[Session on break until 11:10 a.m. ET]

[No audio feed]

We are gathering information that will help us make advances on the replacement side of this strategy. I do want to note ongoing activities within Europe. They have a project called the ACuteTox project. This implements most of the longer-term recommendations developed. The overall aim of this project is to develop a simple and robust strategy for predicting acute human toxicity. It is a European R&D integrated project, a partnership of a consortium doing evaluation-guided development of test batteries; ICCVAM is a partner. This partnership started in 2005 and is estimated for completion in January 2010. Some of the other actions were the review of the validity of the up-and-down procedure in 2001. This test actually reduced animal use by over 70 percent compared to the traditional test. It is widely adopted; it is used and has approval internationally. As I mentioned, the in vitro methods we talked about yesterday can further reduce animal use by up to 30 percent where appropriate for use. Then we have the workshop that we just sponsored. The rationale for the workshop: in addition to poisoning being a significant public health problem, alternatives for acute systemic toxicity testing is one of ICCVAM's highest priorities in our five-year plan. Worldwide this is the most commonly performed product safety test, so there are actually more animals used for this test than for any of the other tests that are conducted. There are procedures, like long-term bioassays, that use more animals per study. This test can also result in significant pain and distress to test animals when severe toxicity occurs. One of our concerns is trying to refine procedures to reduce or eliminate pain and distress in testing.
Because of that, this workshop actually begins to implement an activity we included in our five-year plan, which was to convene a workshop to identify standardized procedures for collecting mechanistic information from in vivo acute toxicity testing to aid in developing batteries of predictive in vitro test methods that can further reduce and eventually replace animal use, and to seek more predictive and more humane endpoints that may be used to terminate studies earlier in order to further reduce pain and distress. These were recommendations made by an independent peer review panel that met in 2006 and reviewed the cytotoxicity methods. We have also proposed several ideas for future studies that would help advance this testing area. Another rationale is that in Europe there is an impending ban on the use of animals for testing cosmetic ingredients. Animals won't be able to be used for acute systemic toxicity testing of those ingredients; if they are, those ingredients won't be able to be used in cosmetics. And they have also just implemented the REACH program, which will require additional testing on thousands of chemicals. Yesterday you heard about Toxicity Testing in the 21st Century, the report from the National Research Council, which envisions a significant reduction and replacement of animal use with batteries of predictive in vitro assays to evaluate alterations to toxicity pathways using a systems biology approach. What they are saying here is very similar to what was envisioned, if not in exactly the same language. You will also recall the language in the NTP vision, which is to support the evolution of toxicology from a predominantly observational science at the level of disease-specific models to a predominantly predictive science focused upon a broad inclusion of target-specific, mechanism-based biological observations in cell systems and perhaps animal studies.
So, keeping in mind these two visions of toxicology in the future, the acute toxicity working group talked about how we could address this in our workshop. We felt that the development of predictive pathway-based methods could in fact be a proof of concept for applying these visions to regulatory testing, and to acute systemic toxicity versus the local toxicities we have been working on in terms of dermal irritation and corrosion and ocular irritation. We proposed that the workshop discuss approaches to identify key toxicity pathways for acute systemic toxicity, including the collection of in vivo information that could be used to identify and develop the necessary methods needed for accurate predictions, and in vivo mechanistic information that might identify predictive biomarkers that could be used while it is still necessary to use animals for this type of testing. So the workshop addressed the collection and use of mechanistic data to target the development of predictive in vitro methods and earlier, more humane endpoints. Again I want to emphasize that we organized this in connection with partners in Japan and Europe. It was very well attended, with over 120 in attendance from six different countries. We had 14 speakers who made presentations on various topics, and five breakout groups. The workshop goals were, first, to review the state of the science and identify knowledge gaps regarding key pathways involved in acute systemic toxicity -- pathways at the whole organism, organ system, cellular, or molecular levels. Second, to recommend how these could be addressed by collecting mechanistic biomarker data during currently required testing. Third, to recommend how key pathway information could be used to develop more predictive mechanism-based test systems and to identify biomarkers that can serve as predictive earlier endpoints.
Fourth, to recommend how the mechanism-based test systems and earlier, more humane endpoints could be used to further reduce, refine, and eventually replace animal use for acute systemic toxicity testing while ensuring continued protection. We organized this into four different sessions. The first one talked about regulatory testing needs. That is where we talked about how significant a public health problem poisonings are at this time; then we went through the specific testing that is required by the different regulatory authorities. Session two addressed humans who are poisoned as well as animals that are used in safety assessment studies: what data are collected from those, and also what types of data might be collected that would give us better insights about what is happening, what the pathophysiological processes are. We talked about the current markers measured as well as potential other markers, and talked about key pathways. We talked about humane endpoints, what those are, and how you go about validating them so that they can be used. We talked about the state of the science of methods that are currently being developed for acute systemic toxicity. There will be a workshop report posted on our website this summer. We plan to publish a workshop summary so that at least the recommendations that have been made by this scientific group will be in the literature for others to take a look at and consider if they are working in this area. Again, I just want to acknowledge all the people who contributed and participated in this workshop. There were many people who came: toxicologists, physicians, researchers. I also want to acknowledge the acute toxicity working group, who served as the organizing committee for this meeting, and of course the ICCVAM agency representatives who provided oversight for the entire effort, and our staff. These questions actually will be used after Dr. Wind gives her presentation.

Thank you. Next we have Dr. Wind with the workshop recommendations.

One of the important things that the working group learned from this workshop is that we should not spend as much time as we did wordsmithing the questions that we wanted the breakout groups to answer. Every one of the breakout groups started its report with: we recognize that these were the questions; however, we answered some other ones. So it was kind of interesting. Let me start, since all of you have copies of this. The first slide of course is my disclaimer that the views presented are mine and not the Commission's. Slide number three is the workshop breakout groups. Breakout group one was key pathways for acute systemic toxicity. Breakout group two was current acute systemic toxicity injury and toxicity assessments. Breakout group three was identifying earlier humane endpoints for acute systemic toxicity testing. Breakout group four was the application of mode of action and mechanistic information to the development and validation of methods for assessing acute systemic toxicity. Breakout group five was industry involvement in test method development, validation, and use. Breakout group one, on the key pathways for acute systemic toxicity, was cochaired by Drs. [ indiscernible ]. The objectives were to discuss the current understanding of key pathways for in vivo acute systemic toxicity; identify knowledge gaps for in vitro pathways and the various chemicals and products; identify and prioritize future research initiatives to address the knowledge gaps and what was necessary to advance development and validation of in vitro methods for assessing acute systemic toxicity; and review the molecular [ indiscernible ] that are or could be measured [ indiscernible ] further study to better understand the toxic effects of chemicals and to better understand and treat acute human poisonings. General cellular function; neuronal transmission, both central and peripheral.
Sodium-potassium, xenobiotic metabolism, cardiac conduction, and aerobic metabolism. Oxidative stress, receptor activity, immune response function. One comment: having physicians on these breakout groups who actually deal with acute poisonings in emergency rooms was really interesting in terms of what they said they needed in the emergency room when actually treating patients. What they said they were needing was helpful to a certain extent, but not helpful for addressing our particular concerns, which were how to identify the early changes where we could look at in vitro approaches and use those instead of the whole animal for testing. The data gaps related to the diagnosis and treatment of human poisonings -- here is some of what they were concerned about: the definitive identification of toxins; toxin serum concentration versus time exposure data; accuracy of patient history reports; laboratory confirmation of known toxins from reported cases; the time course of acute life-threatening poisonings, which they felt was really important -- they could treat patients better if they had that; and chemical interactions. For toxicological evaluations and measurements to address these gaps, it was suggested that we needed biomarkers of organ system damage. Among the recommended research and development activities, the highest priority was given to mode of action and human cell based systems, high-throughput screening initiatives, and computational toxicology and associated data management capabilities. Of lower priority, but still important, were determination of toxicokinetic information, methods to evaluate recovery and reversibility, and methods capable of evaluating organic and hydrophobic materials. Breakout group number two was cochaired by Dr. Hayes and Dr. Martin.
Their objective was to discuss and identify observations and quantitative objective measurements that could or should be included in current acute systemic toxicity tests to elucidate key toxicity pathways that would support the future development of predictive methods. Their conclusions and recommendations -- and you will see a bunch of overlap in the recommendations of the various working groups: biomarkers, clinical observations, and quantitative measurements are expected to provide more information and a better understanding of the pathophysiological effects and the modes and mechanisms of acute systemic toxicity in current animal tests. They recommended that noninvasive or minimally invasive methods be developed for collecting additional biomarker measurements, to maximize the use of the limited number of animals currently required for acute toxicity tests. Early time points after dosing are best for biomarker studies. Given the vast number of potential measurements and the relatively small blood and serum volume available, there is also a need for reducing sample volume while increasing the number of tests. Target tissues should be properly stored for future research and development studies related to new biomarkers. As for activities for obtaining more information on key [ indiscernible ] pathways, the short-term activities recommended were to standardize the procedures for sample collection and processing as well as biomarker detection and quantification; investigate noninvasive devices and procedures for obtaining detailed physiological data in animal studies; develop biomarkers for pathophysiological effects and [ indiscernible ] mechanisms of acute systemic toxicity; and develop alternative test systems to model key pathways.
Long-term activities included the use of [ indiscernible ] technology to identify sensitive biomarkers; imaging, pursuing noninvasive techniques such as ultrasound or other imaging techniques; and nanotechnology for the early detection of toxicity. Breakout group three dealt with identifying earlier humane endpoints. That was cochaired by Dr. Diggs and Dr. Nemea. Their objective was to discuss what in vivo data should be collected to elucidate key pathways which might lead to the identification and validation of earlier humane endpoints for acute systemic toxicity testing, and what data should be a priority for collection to aid in identifying earlier, more humane endpoints. Their conclusions and recommendations -- and I would like to point out that their objective was very focused on humane endpoints. You will see in the course of this that some of the recommendations that were made may not always be applicable in terms of the necessary regulatory requirements, but as I go through this I will note those. They focused on identifying data to be collected to elucidate the key toxicity pathways that may lead to the identification and validation of earlier and more humane endpoints. They discussed the concept of using evident toxicity as an earlier, more humane endpoint rather than death. They recommended the fixed dose procedure as the preferred acute oral toxicity test method for routine use, unless adequate scientific justification is provided for use of one of the two alternative methods, the up-and-down procedure and the acute toxic class method. Evident toxicity would minimize or avoid the use of moribund condition or death as an endpoint, which are the endpoints for the UDP. They indicated that there was a need for objective criteria to characterize evident toxicity and for internationally harmonized guidance to support its use. OECD TG 420 is not equivalent; the test guideline now includes death along with evident toxicity.
OECD TG 420 introduced an animal override at every test dose, allowing for a classification based on the outcome. Some workshop participants stated that this component of the FDP was not validated, and that posed a problem. They recognized the need to develop globally standardized scoring systems that allow for weighting of observations for evident toxicity -- both evident toxicity scoring systems and

They recommended applying the fixed dose, fixed concentration approach for the dermal and inhalation acute toxicity studies in order to use evident toxicity, again, as an earlier and more humane endpoint for such studies. Some of the U.S. regulatory representatives at the workshop did not agree that the FDP should be the preferred method for any acute systemic toxicity testing, including potential applications to acute dermal toxicity and acute inhalation toxicity. The recommendations for using the FDP were made only in the context of identifying humane endpoints, but there are scientific and regulatory reasons for using a method other than the FDP. The UDP for acute oral toxicity is preferred to provide [ indiscernible ] values to satisfy U.S. regulatory requirements. Biomarkers considered sufficiently predictive to be used as humane endpoints included observations of reduced activity, body temperature decreases, body weight decreases, and changes in feed and water consumption and hydration status. They recommended the routine observation and reporting of clinical signs of pain and distress, and identified several types of data to collect during future animal studies. These data could aid in identifying earlier, more humane endpoints. More objective measurements are needed versus the traditional subjective evaluations, and objective measurements would facilitate the collection and interpretation of mechanistic data. The use of earlier endpoints would avoid found-dead animals and [ indiscernible ] tissues. Systematic collection of standardized data will aid in addressing potential earlier endpoints. They recommended research, development, and validation activities to address humane endpoints. These should address the knowledge gaps currently associated with predictive early humane endpoints.
They highlighted the need to develop objective criteria with which to characterize evident toxicity and to publish internationally harmonized guidance on these criteria before initiating routine use of the FDP.

One note is that an expert group that has been meeting to look at the OECD guidelines for inhalation toxicity has addressed this. This is one of their concerns: that there is not internationally harmonized guidance on these criteria, and people saying that you will know it when you see it does not quite cut it, scientifically. It poses an issue. On the implementation of their recommended activities: they emphasized the importance of approaches to data mining and sharing of information among international stakeholders to make use of existing and newly generated data where possible, and emphasized the need for opportunities for additional training in applying the recommended measurements and observations as well as interpreting their results. They said that dedicated funding is central to future progress and recognized the need for other incentives to motivate stakeholders to pursue these activities. They said that well-defined strategies will facilitate implementation. Breakout group four dealt with the application of in vivo mode of action and mechanistic information to the development and [ indiscernible ] of in vitro methods. The cochairs were Drs. Andersen and Elmore. Their objectives were to discuss how key toxicity pathways indicated by in vivo measurements or other physiological and clinical biomarkers and observations are currently or could be modeled using alternative in vitro test methods. Their second objective was to identify and prioritize research, development, and validation activities for in vitro test methods that model the key in vivo toxicity pathways and more accurately identify acute systemic toxicity hazard categories. The in vivo toxicity pathways that they felt should be modeled by in vitro systems, through integrated batteries of test methods, included pathways such as basal cytotoxicity, [ indiscernible ] cytotoxicity, and cell proliferation.
They felt that there should be in vitro tests that could assess the interactions among tissue types, and that those presented a challenge. Such interactions would include activation of inflammatory responses, allowing toxicity in one organ to lead to enhanced organ toxicity or toxicity in remote tissue targets, and interactions between the initial target tissues and immune system components. High-content, high-throughput screening could be used to identify cellular targets, and in vitro models of these targets could be developed for routine application. Also, the use of the known cellular and tissue-level pathways that are altered within animals undergoing acute systemic toxicity testing. The long-term goals for the in vitro modeling of in vivo acute systemic toxicity included identifying initial cellular pathways, such as oxidative stress, loss of membrane structure or function, and interaction with receptors; developing quantitative modeling of the cellular cascades that follow these initial interactions, leading to acute systemic toxicity; and developing dose-response models based on the patterns and dose-response characteristics of pathways altered by chemical treatment. Also, the use of in vitro test methods to evaluate toxicity pathways to assess both specific endpoints and dose-response characteristics, looking at integrated cell responses and interaction with cellular targets, such as overexpression of transporters in cell lines and examination of uptake rates of chemicals into these cells. The knowledge gaps that were identified included a major knowledge gap, which is the understanding of the in vivo mechanisms of action of chemicals, which would help direct the selection of in vitro test method systems. There is little experience in examining correlation between the LD50 and integrated cellular responses other than basal cytotoxicity. There are no quantitative procedures that have been developed to describe cascades of responses and predict the LD50.
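The dose-response modeling of cellular pathways discussed above typically starts from a simple concentration-response curve, such as a Hill-type model fit to basal cytotoxicity data, from which an IC50 can be estimated. A rough, illustrative sketch; the function names, parameter values and test concentrations are hypothetical, not from the workshop:

```python
import math

def hill_viability(conc, ic50, slope):
    """Fraction of viable cells at a given concentration (Hill model
    with top fixed at 1.0 and bottom at 0.0)."""
    return 1.0 / (1.0 + (conc / ic50) ** slope)

def estimate_ic50(concs, responses):
    """Crude IC50 estimate: log-linear interpolation between the two
    tested concentrations that bracket 50% viability."""
    pairs = sorted(zip(concs, responses))
    for (c_lo, r_lo), (c_hi, r_hi) in zip(pairs, pairs[1:]):
        if r_lo >= 0.5 >= r_hi:  # viability falls through 50% here
            frac = (r_lo - 0.5) / (r_lo - r_hi)
            return 10 ** (math.log10(c_lo)
                          + frac * (math.log10(c_hi) - math.log10(c_lo)))
    raise ValueError("50% viability not bracketed by tested concentrations")

# Simulated assay: viability measured at half-log concentration steps
concs = [0.1, 0.316, 1.0, 3.16, 10.0]  # e.g. mM
responses = [hill_viability(c, ic50=1.0, slope=2.0) for c in concs]
print(round(estimate_ic50(concs, responses), 2))  # prints 1.0
```

In practice a full curve fit (e.g. nonlinear least squares on a four-parameter Hill model) would replace the interpolation step, but the idea is the same: reduce each pathway assay to a potency estimate that can later be compared against the rodent LD50.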
The use of human-based [ indiscernible ] cell systems is addressed to--would allow a targeted approach to predicting target tissues and the LD50 for acute toxicity. They recommended R&D activities: collecting standardized information from the in vivo studies conducted for regulatory purposes to better understand mode of action, and using this information to guide selection of in vitro test methods; identifying--specific cellular models to assess the critical toxicity pathways and incorporate genetic variability; applying a broad array of in vitro test methods to screen for modes of action; convening meetings of experts in each of the associated target tissues and their cellular pathways to address the development and validation of in vitro test methods for measuring the cellular response pathways underlying toxic responses; and convening expert panels to address the issues of development, and cell lines using appropriate biomarkers, test implementation and data analysis procedures. --Direct their differentiation to express the biomarkers normally expressed in the tissue of interest. Select and use test chemicals that are active in a toxic response pathway as well as negative controls. Develop databases of genomic changes and develop computational systems biology approaches to predict acute toxicity.

To implement the recommended activities, they suggested identifying model cellular systems possessing chemical activity in the pathway and identifying agents that relate to toxicity in those systems; interpreting the results using standardized test panels to compare with the rodent LD50; using statistical tools currently being developed and implemented to facilitate interpretation of the association between potency in specific pathway test methods and the rodent LD50; determining the effectiveness of each system alone and in combination to predict in vivo toxicity; and considering incorporating these test methods into the assessment of acute toxicity in parallel to the in vitro basal cytotoxicity method that is currently used. Also, to develop appropriate analysis procedures to compare performance of the new test methods in relation to the basal cytotoxicity test for predicting the rodent LD50. The last breakout group was industry involvement and test development, which was co-chaired by Drs. Stott and Scala--in order to advance the development and validation of more predictive in vitro test methods and earlier, more humane endpoints for acute systemic toxicity testing. As I mentioned yesterday, when they discussed the current uses of in vitro cytotoxicity testing by the industry, they said that since large reductions in animal use have already been made for acute systemic toxicity testing, the impact of in vitro test methods on further animal reduction--the in vitro test methods could eventually replace the in vivo acute toxicity test methods if a full battery of in vitro tests were available to take into account the many mechanisms and modes of action of acute toxicity. The availability of a validated in vitro test method for acute toxicity and the inclusion of such a test in a formal testing guideline would facilitate its widespread usage. However, industry tends to follow the most efficient path.
In other words, they use the standard in vivo test methods that regulatory agencies are sure to accept. Industry is concerned about how the resulting data from the in vitro test might be interpreted by regulators and whether they might result in an unfavorable regulatory action. That was one of the impediments that they saw in terms of sharing any in vitro data that they have with us.

They noted that the current cost-benefit ratio does not justify using a validated in vitro test method to set starting doses, because the number of animals used is already at a minimum. They indicated a willingness to provide data to ICCVAM to advance the validation of more predictive in vitro test methods. They explicitly stated that they would need guarantees about how those test method results would be used and that incentives of some sort would likely be necessary.
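The starting-dose approach mentioned here is usually implemented with a regression between in vitro basal cytotoxicity and the rodent oral LD50, such as the Registry of Cytotoxicity millimole regression cited in ICCVAM's in vitro starting-dose guidance. A minimal sketch, assuming that regression; the example IC50 and molecular weight are hypothetical:

```python
import math

# Registry of Cytotoxicity (RC) millimole regression relating an in vitro
# IC50 to a predicted rodent oral LD50:
#   log10(LD50, mmol/kg) = 0.435 * log10(IC50, mmol/L) + 0.625

def predicted_ld50_mg_per_kg(ic50_mmol_per_l, mol_weight_g_per_mol):
    """Predict a rodent oral LD50 (mg/kg) from an in vitro IC50."""
    log_ld50_mmol_kg = 0.435 * math.log10(ic50_mmol_per_l) + 0.625
    ld50_mmol_kg = 10 ** log_ld50_mmol_kg
    # mmol/kg multiplied by g/mol gives mg/kg
    return ld50_mmol_kg * mol_weight_g_per_mol

# Hypothetical chemical: IC50 of 1.0 mmol/L, molecular weight 100 g/mol
print(round(predicted_ld50_mg_per_kg(1.0, 100.0), 1))  # prints 421.7
```

The predicted LD50 is then used only to pick the first dose in an in vivo procedure such as the up-and-down procedure, which is why the cost-benefit argument above turns on how many animals that first-dose choice can actually save.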

Companies are also likely to consider any mechanistic information to be proprietary. That was another concern, because if they submitted some of these results and another company was able to use them to gain acceptance of their method, then that would pose a financial disincentive to the company. They recommended creating a public/private consortium that would facilitate data collection and submission. I am pleased to say that subsequent to this, Dr. Stokes and I have had some initial discussions with the industry to start developing such a consortium and see how we can aid in addressing their concerns so that they can provide us with the in vitro data.

The other thing that is not on the slides and addresses why they did not provide information to EPA, that I think I mentioned yesterday, was that most of the companies had already done LD50 studies in the past. So, they did not need to do them again. Therefore, they were not likely to do such studies at that point, since they already knew the LD50. In addition, they stated that based upon the chemicals that they developed, they knew a lot more about the chemical. They were able to pick starting doses based upon their knowledge of those chemicals. They did not need to use the in vitro test method in order to set the starting dose. That is the end. I think we need the questions back up from the end of Dr. Stokes' presentation.

Thank you, Dr. Wind. Are there any clarification questions from SACATM for Dr. Wind or Dr. Stokes? Okay. Thank you. We will move on to the public comment. I know that Kate has requested to make public comment. If you could go to the microphone and introduce yourself. I am Kate from PETA. I went to this meeting and sat in on several of the breakout groups. What I noted from the outside was that, while I cannot say that there was no productive conversation that came out of it, and some of the sessions were quite productive, I think that the charge questions and the people in the room were not the right combination of things to actually progress the level of discussion very far. What I noted in the breakout groups is that a lot of them ended up discussing the same things: what are the potential biological pathways involved in acute toxicity? This came up again and again, and I felt there was a distinct lack of expertise in this area among the involved people. There were a lot of people there. That is true. For example, as Bill mentioned in the [ indiscernible ], they have been studying this for some time, and I am sure that many of the key pathways have already been identified. There are in vitro methods in development to study some of these pathways. I think the discussion could have been started at a higher level if some of these experts had been invited to this workshop and, perhaps, the breakout groups or study sections could have been organized around the in vitro technology that was already in progress. That is just one comment: perhaps experts in the field from around the world could have been invited to elevate the discussion. We would have spent a lot less time going around in circles about what the possibilities might be. Thank you.

Thank you. Are there any other comments from the public? Okay. We will move on to the SACATM discussion. We do have a [ indiscernible ]--on the next steps from the workshop. I am not sure what the delineation is there. We will try to keep an eye on it. The lead discussants are Helen Diggs and George DeGeorge. Dr. Becker has submitted written comments.

These are the written comments from Dr. Becker. Regarding question one, it says: please see the discussion of breakout group five of the acute safety testing workshop, [ indiscernible ] to develop cost-effective techniques to enable such measurements. The reality is that methods must be cost effective. If they are not, this will be a major barrier to their use. [ indiscernible ] consider having a central laboratory to generate the data [ indiscernible ] and serve as a repository for this data and the in vivo data. We need a defined set of procedures to collect meaningful data for the validation, which suggests a centralized approach. It will not help if [ indiscernible ] methods are used. If results could be generated with little or no cost or transaction effort, then this would be more likely to be successful. The fourth bullet under this: a public/private consortium to facilitate data collection and submission, as proposed in the discussion of breakout group five of the acute safety testing workshop. Regarding question two, recommended biomarkers: measures that are predictive and ready to use now, from the acute safety testing workshop in February of 2008. Bullet one, behavioral observations already conducted, [ indiscernible ] and clinical observations. Most labs have an FOB for these. If this is a full FOB, it will be costly and not likely to move forward. Bullet two is body weight changes, [ indiscernible ] already included. Would it be possible to have such data generated by the industry? The reality is that industry testing of this type is largely dictated by regulations. Therefore, when such regulations incorporate the endpoints into the guidelines and the protocols required by agencies, [ indiscernible ] data will be generated.

--That are not currently done routinely, there is considerable need for research to develop cost-effective techniques for obtaining such measurements. The reality is that these methods must be cost effective. If they are not, then this will be a major barrier to their use.

Consider having a central lab, government-funded, to generate data and histopathology and serve as a repository for this data and the in vivo data. We need a defined set of procedures to collect the meaningful data for the validation, which suggests a centralized approach. It will not help if [ indiscernible ] methods are used that have different variations, etc. Ad hoc efforts to generate validation data are not very useful. Different labs can use different protocols, etc. This can completely prevent data being used for validation review. For example, the ICCVAM peer review of the relatively straightforward [ indiscernible ] bioassay. If the data can be generated with little or no cost or transaction effort, this would most likely be successful. For the biochemical histopathology endpoints, a study is--per usual procedure. Samples are taken to a central laboratory and they conduct the new measurements. The study lab reports the usual procedure to the central lab, and the central lab--I am not sure what he means there--with the results of the new endpoints. With question three, consider specific funding for research to standardize and validate these methods. Right now there are considerable funding sources in the U.S. for basic research, but identifying sources of funding for standardization and actual validation studies is difficult. As part of the SACATM working group, it would be very beneficial for NICEATM and SACATM--for a high-level set of priorities that speak to the specific elements of research and development, translation and validation. He lists the report URL there. From page five: the focus of the current federal funding seems to be basic research; therefore, new, revised and alternative methods will have a very difficult time making it from the researcher at the bench to a regulatory [ indiscernible ]--must be achieved for a test method to be adopted and used in a regulatory program.
This raises the question: are there gaps that exist in the planning, or a lack of planning, for these validation activities? If so, the five-year plan should identify these gaps. Furthermore, the translation and validation studies should be given greater attention. Without a strategic plan in place, NICEATM and SACATM activities will not have a clear path forward to focus these activities on the highest priority methods or areas. That is all.

Thank you. Okay. Do you want to go first, please?

There were certain strengths and weaknesses in this report. I will start out with some of the encouraging strengths that I see have been mentioned, very similar to the report yesterday on toxicity testing in the 21st century. I understand that this workshop followed up on that toxicity testing report. That is, incorporating, now, mechanism of action targeted to the organ toxicity in cell systems and alternatives, and coupling that with the volumes of information on basal cytotoxicity that have been accumulated over the years. This is a step forward. It re-emphasizes a lot of the efforts that we have been espousing in the last few years: the development of in vitro tests for prediction, not just for screening, but also for looking at the mechanisms. We can get a lot of information that can be used to mimic human toxicity as-I practiced early on in my career as a clinical pharmacist and in clinical toxicity. I can appreciate the idea of trying to predict the human toxicity, whether it is-I do not think it is going to be helpful for either animal or in vitro tests to be able to predict diagnostic measures in humans. I think those are still, predominantly, observational and clinical case studies. However, in the treatment and follow-up and understanding of the toxicity, that is where the alternative methods could be of great value. For instance, mimicking one-hour, 24-hour, up to 24-hour acute toxicity tests; understanding reversibility and recovery in the cell test, the cumulative effects. You can even go to the cell tests and have the advantage of time. You do not need to do two-year studies. Tests can be reproducible; population doubling within the culture system, allowing these cells to double over a week's time, can be equivalent, sometimes, to up to generations of cells. You can measure the cumulative nonspecific binding to macromolecules. These are methods that can be used to help understand the toxic effects and then be used to understand the human toxicity.
The weaknesses, and the parts I feel pretty strongly about and that I disagree with, are with the industry breakout group. That is, the industry response to using and developing acute systemic toxicity tests.

It is pretty much summarized in slide 25: that the current cost-benefit ratio does not justify a validated in vitro--because the number of animals used is already at a minimum. This is not accurate, and it is unacceptable. Whether the number of animals used is already at a minimum depends upon the species that are used. It is true that the higher species, fewer [ indiscernible ] species like dogs and primates, have, perhaps, at best, been kept at the same numbers or gone down. The numbers of rodents, which we consider lower species, have not gone down. I see this as an impediment to implementing alternative methods. This is completely misguided as far as an industrial response. I understand the regulations, historically, and the constraints they are put under in order to market for product development. However, progress has to go on and the world has to move. There is a biotechnology effort that is advancing, and this kind of response hinders progress; it is a hidden way of hindering our progress here, as far as developing alternative tests.

I suggest, then, that rather than forming a public/private consortium, which will probably be more of, like you said in the public comments, circling around the same issue over and over, I suggest we move forward as this group has been doing and try to develop tests that target the mechanisms as well as screening tests. Finally, the only thing that I think has not been addressed or mentioned in some of the breakout groups is the implementing of these activities. That is probably the most difficult part of this. That is, how do we foster research and training in this field? It is going to require that the regulatory agencies and the congressional committees be aware that the funding that is available for development, research and training has to increase. This is an important venture, not only for the near term, but also for the future. This workshop is based on the future. I have two graduate students in the audience who, 20 years from now, perhaps, will be sitting at these tables. I would like to see that in 20 years they will be able to comment on the progress we have made. If we allow the industry and industrial concerns to just be concerned with the bottom-line figures, then the whole field will not progress. Thank you, very much.

Dr. Diggs?

I would like to say that I was pleased to be a part of this February 2008 workshop. I thought it was very productive, and a lot of good things did come out of it. I think that identifying earlier endpoints in acute systemic toxicity evaluation will have a significant impact on the reduction of animal use in research of this kind and, ultimately, minimize pain and distress. This is an area that does need significant focus. For question number one, I think ICCVAM might need to identify the specific mechanistic data of interest they are looking for and, perhaps, even request identified data from specific industries or institutions. They need to go after the material that they are interested in if they are going to be effective and efficient in getting that material back and getting the specific endpoint information they seek. That is number one. Number two, how can these recommendations be implemented? I think that there needs to be additional work in identifying the specifications and refinement of the biomarkers. The group that I was in and another group went through specific biomarkers of interest. Although that was a good start on the list, I think there are two groups here. There are biomarkers, as Dr. Becker mentioned in his comments, that are already there and available and being collected, like weight and temperature. There are other biomarkers that could also be used if we put more research and investigation into them. I think that those biomarkers, again, need to be clearly identified. Once they are, and once we know they are credible and doable, they need to be communicated, clearly, to the agencies and industries for implementation. Again, I think that ICCVAM needs to present those to industry. I think they are not going to pick those up on their own. I think they need to be handed to them as options that can be provided, and the agencies should receive those as well to communicate down.

I think that we could expect that the use of earlier endpoints might actually save time and money, as Dr. Becker mentioned. I would assume, and I think this would be right, that if we put an end to the studies earlier, we would be saving money as a result, and that could be, in and of itself, an encouragement to the groups to actually adopt and accept these and implement them. That might be one of our implementation strategies. Number three, data gaps. Again, I think someone mentioned this. I do believe, again, that the data gaps need to be identified by ICCVAM. We had a good start and a good list of ideas. They need to be fine-tuned and clarified and, again, communicated back. Those that are a high priority should be communicated back to the agencies and to groups of interest that might be able to focus on those. It goes back to yesterday: which groups need to hear that these gaps actually exist, and the list that we presented yesterday of organizations and agencies that can take that information forward, present it back to their groups to work on those areas, and bring that information back to ICCVAM. Again, my overall thought is that ICCVAM needs to go out and present both the biomarker information and the data gap information to agencies and individual identified groups to get that information back and to expedite this. I think that is it.

Thank you. Next would be Dr. DeGeorge.

Can you hear me? With respect to discussion question one, how might the industry be encouraged to adopt alternate strategies and earlier endpoints, I think that we need to try to identify areas of industry that might be more easily approachable, to make a [ indiscernible ] in the wall. We have already come across proprietary information being a concern in certain companies that have technology. I have broken down industry into, first, product companies. There should be a mechanism, which I will explain further, for blind submission of additional data that cannot come back and bite them. Second, service companies. These are CROs, which are doing more and more of the testing as labs that were large and belonged to oil companies and chemical companies have closed, and the testing has moved to CROs, large and small, and even international: China, India, etc. We all know about China. That is an industry. That might or might not be easier. CROs are going to look for some kind of financial support in order to be able to contact all of their sponsors and spend the time to encourage them to code their data and submit it. The sponsors can be encouraged. We often hear, and I have said it a lot, that sponsors do not want to give data. If you are willing to do the work as a CRO and willing to code it, and there is a policy in ICCVAM, NIEHS, NIH that maintains confidentiality, there are cooperative companies that would like to get that data out. They want safeguards. We have to get past their lawyers. Number three in the industry are non-profits. CIIT, the American Chemistry Council, the former CMA, I think I have those acronyms correctly, and the largest nonprofit CRO in the world; they do a lot of the government work. Not all of it is strictly national-interest work with concerns about secretiveness. They could be encouraged as a leader, as a pioneer.
I think-I do not know what relationships ICCVAM has with [ indiscernible ], but I think they should get together and the Army and other DOD departments can work with that. [ indiscernible ] meeting can help.

So, I start by breaking down the industry. The smaller nonprofits will be looking for money. Battelle has a lot of money. They can be encouraged to include, along with their quoted project cost to the military, which is one of their biggest customers, the collection and delivery of mechanistic data. Discussion question number two, incorporating humane endpoints: please comment on how industry can efficiently do so. Well, bear in mind that I have broken up industry into product and technology companies, service companies and non-profits. Each might require a slightly different approach. Some common elements: for animal tests, the NIH guidelines, the animal use and care guide, which is updated fairly frequently but could be updated more frequently, could include some guidelines on additional things to incorporate into the studies. For example, collecting serum and banking it. I know that ICCVAM does not have money, but they could bank all of these serums and they can analyze them, or [ indiscernible ] of cells and tissues to look for mitochondrial DNA, things going on in the [ indiscernible ]. So, along with that is the recommendation that the NIH guide to animal-it is called the guide to animal use and care--the guide to animal use and care. There is, right there, a prerogative, a prioritization on use. And so, there is where we can put additional considerations for humane endpoints and collection of data that would allow us to do things. A simple example, flowing back to question one, is blood glucose levels, because the tests have become incredibly cheap. The Army can measure over 100 different parameters in microliters of blood on little strips that are connected to a PDA. It was supported by SBIR grant money. That is like 100 or $200 per [ indiscernible ]. Why not just take blood samples and record the data and bank it? That is banking data. That is cheaper than having them store cells or serum.
We are talking about death of animals; looking at lactate levels, glucose and [ indiscernible ], and I think she would probably have a level of expertise and can chime in on that, I think would be a good early endpoint. How ultimately predictive it is remains to be seen. I think it shows that there is a lot of promise there. It is one of the few things that came out of the emergency room physicians, who wanted to know the respiratory rate and telemetry of the cardiac system and all kinds of things that go on in humans that are hard to measure in rodents; this is not hard to measure. Simple: electrolytes and blood levels. You need a tiny little bit of blood. Another easily integrable endpoint would be that whenever you observe an animal, you weigh it. The body weight changes, [ indiscernible ] morbidity. There are a lot more observation points during studies than there are [ indiscernible ]. I see that underused. It is easy and fast and requires very little training. That can be integrated into the animal use and care guide book. [ indiscernible ] recommends it as well.

An abbreviated functional observational battery. The whole functional observational battery is a test. There are simple things, like the righting reflex: you will take the animal out of the cage to take 10 microliters of blood; turn it over and see if it rights itself in a normal manner. You are supposed to be doing cage-side observations and doing open field observations in certain studies. The encouragement is to do open field observations even if they are not required in acute studies. Maybe the guidance should suggest that. You can get the body weight and assess lethargy, another early endpoint. So, there are other FOB endpoints that are easy to do besides the righting reflex one that I mentioned. Continuing on, between questions one and two, somehow we are going to need to have a repository to save the coded, blinded tissues, media and [ indiscernible ] and bank them somewhere at NIH, NIEHS, ICCVAM, etc., so that we can do retrospective studies. Today we talked about how it would have been great if we had had the performance standards and knew what we wanted to measure six years ago in the LLNA. Well, let's keep those. Fortunately, in my lab I have lots of studies going back many, many years. I do not know how good all of them are, but some of them are really good. I think that we need to look into banking. Continuing on answering one, alternative method predictive strategies: some things you will not know. You will have to bank. Put in a policy that non-validated R&D mechanistic data submitted by industry, especially product companies and technology companies, will not be used for later adverse consequences on that industry's potential products. ICCVAM can lead the way on that: get some lawyers to write up confidentiality agreements, protection agreements that say, what you submit to us, if we find later it has some kind of possible potential toxicity marker, we will not pull your product off of the market and we will not make you decode everything.
Now, the temptation, if there is something bad, is to get it off of the market. If you do not do this, you will not get any data. At least have the data and know, and have the opportunity to know, rather than have no submissions at all; a solid, airtight confidentiality agreement. CROs have to do it all the time with the companies, the clients that they serve. I reemphasize that. Develop a clear and thorough confidentiality policy to protect current and potential future technology products from the industries that you seek to gain mechanistic data from. Once that is on paper, they will be more likely to hand over stuff that they normally throw away.

Moving on to three. This is a short one. How might data gaps be filled? This is asked in almost every meeting. It is so hard to answer. The only thing that I can come up with: there seems to be a lack of participation of [ indiscernible ]. That is, the lab animal science community. These people have a wealth of information and understand earlier endpoints, when an animal is going south, when they are not behaving normally, besides measuring body weight. In the laboratories I have worked in throughout my career, there are people who can grab a rat and say it feels thin, feels lethargic. Look at the eyes. They look dry. They look wet. I do not own horses, but people who own horses can just look at a horse and tell what it is feeling. There are rat whisperers. Also, recruit the veterinarians and the veterinary community and provide training for veterinarians in toxicology fields and practices. There are veterinarians who know what to look for. They will look for the things that we want them to look for. They will not look for a shiny coat, say, for example. They will look for what we think are true indicators. Thank you.

Okay. Thank you, George. I have a couple of comments. One is about reaching out to industry. I think some of what we talked about yesterday might be relevant. I know it was hinted at but might not have been specified: reaching out to the community that we would reach out to for some of the ocular things, the [ indiscernible ] organizations and that sort of thing, as far as humane endpoints. You have a balance in those industries' labs; programs have the science side, to get the data to fulfill what is needed, and you have the [ indiscernible ] and animal welfare side. A lot of these things are driven by the animal welfare side. That might be a good way to reach out. The other thing is about hearing about the fear factor of getting new data and the repercussions and the need for confidentiality agreements, whatever. I do not want to say that that is overstated, but there is another factor here. For the companies that generate the data, there can be, depending on what part of the industry, [ indiscernible ] requirements. If you internally think that this is important, you might have a responsibility to report that. That responsibility within the company might be interpreted in a more conservative fashion than it is by the regulators. That might also play out internally as, how do we report that on the material safety data sheets, and how does that translate into product selection? It is not just about regulatory repercussions--it goes beyond that, too. It is insidious. I think you should be aware of that. Are there any other comments from SACATM before we finish up for lunch?

I will try to be real brief on this. I find the whole discussion extremely frustrating. It brings me back to the earliest days of my research career in the late 1950s and 60s, when I did put on a white coat or white coveralls, went into the clinic, examined animals, went into the necropsy room. I experienced the frustration of working with animals and trying to predict when they are going to die. I did it with literally hundreds of test parameters or whatever. I find much of the discussion to be, to be absolutely blunt, totally naive. Many of the participants have never had experience working with a living subject, whether it be a laboratory animal or a person, as it moves into extreme [ indiscernible ] and then dies. It is a very complex process, but it is also very much governed by the nature of the insult, which in this case, since we are talking about acute intoxication, matters greatly. I find much of the discussion drifts away from that. Even in terms of acute intoxication, I suggest someone go into a large poison center and see the nature of the cases coming in. They are so broad that the way one makes that evaluation is very much driven by professional experience. It varies greatly. I look at this and, in some ways, I have to feel that you have the wrong group of people. They all brought their black bags with their favorite themes to the party and wanted to talk about those. There are, literally, thousands of different mechanisms of acute intoxication and death related to the range of different agents that you are working with. I think you have to start at the starting point and recognize that. Somehow these people have oversimplified this and think they are going to come up with some magic solution approach. My summary is, again, I am very frustrated with this. I think there are many misstatements of fact. I could go through those.
We had some grievous misstatements of fact today about different sectors. I think that does not advance the cause here. I would almost say that this is not a real high priority issue. If you want to do one important thing, it is to emphasize the rationality that needs to be brought to the table when you do acute toxicity studies and to recognize that in some cases the likelihood of an exposure occurring that could result in acute intoxication is so remote that it is not worth expending one rat on. That, I would say, would be the single most important thing that you could do to reduce the number of animals used in this area.

Thank you. If we can keep it to one more. Marilyn, I think you had your hand up.

I have just a few comments. Let me start with this: I think I misinformed Dr. [ indiscernible ] when he asked me about the number of animals. When we look at the number of rodents used, we have to look at the fact that only a small percentage of them are used in toxicology. I had indicated to him that there were increases in the number of rodents. I was not thinking strictly of the toxicology environment, as the transgenic explosion has occurred in terms of trying to understand mechanisms of disease and such. I might have misinformed him in the preparation of his comments. I apologize for that. One of the things--there was a comment made about validation funding. That is, really, if there were some way of having a place where people could go when they were trying to validate something, an alternative method addressing any of the three Rs--if there were places to go for funding other than the small amounts coming from private organizations--that would be fantastic. When people were talking about the accumulation of information, of data, of samples, I am wondering if there could not be some other way of this information being submitted to the agencies. Again, I concur that it would need to be blinded in some way. I do not think it is fair for a company to feel that they will be punished for something unknown that they could not have predicted without advance knowledge. They did not know that something was problematic. If they submit this information, it does need to be coded. It needs to be looked at for prediction and correlation with, not necessarily a specific product, but maybe a class of chemical, a type of injury, a type of clinical observation, so that if you had, for instance, biomarkers that could be correlated with some type of tissue injury, you would be looking at those kinds of correlations, not necessarily a correlation to a specific agent or chemical.
There was a comment made regarding the Guide for the Care and Use of Laboratory Animals. There is an effort afoot to revise that guide, and there will be several public comment periods. For things like suggestions for increased commentary on the three Rs and humane endpoints and things like that, that would be a good time and opportunity to make those comments. I would also comment that when you talk about weighing animals at every clinical observation, you have to remember that everything is not a rodent. Just reaching in, picking up a non-human primate, and getting it on a scale is not an easy thing, and it is not without stress and risk of injury to both people and animals. So we have to remember that we work with a spectrum of species in the toxicology environment. We cannot make rules like that without thinking across a broader type of field. That is it.

Okay. Thank you. Can we call that a wrap?

No apology needed. I did understand the use of rodent species today. I did not, for the record, mean to imply that the increase in the number of rodents is in the toxicology field. There is an increase in rodent use within the pharmacology, toxicology, and biological sciences, where this field could also overlap and interact with those other fields. Thank you, anyway.

Okay. Thank you, all. We will cut into lunch a little bit. We will call it 45 minutes, which would take us to about 1:35. I would like to round that down to 1:30. Let's see if we can get everyone back here then.

[Lunch break until 1:30p ET].

Okay. If everyone could take their seats we can get going with the afternoon program. Okay. Thank you. Our first agenda topic for this afternoon is the nomination regarding the rodent carcinogenicity bioassay. Dr. Tice is going to make the presentation. On October 24th the nomination was received via our online nomination process, asking that the validity of the bioassay for predicting human carcinogenicity be evaluated. There has been additional [ indiscernible ] provided by the public for consideration that contains some additional information. The nominator requested that his or her name remain anonymous. That was the process that we followed. However, I think you can tell from the recent information that was provided who the nominator was. We considered information on usefulness and limitations and proposed that the evaluation be assigned a low priority, in keeping with our five-year plan and the various priorities that we have established. We reviewed the draft recommended priority document on the 23rd of April, which was our next meeting, and published a Federal Register notice. We listed the nomination as an agenda item and requested public comment. You are considering the draft today. As quickly as we can, we are going to consider the additional nomination materials and public comments in establishing the final evaluation priority. The proposed priority is that, based on the available information, an evaluation of the ability of the assay to accurately identify carcinogens should have a low priority at this time. There is one caveat we have added: while this represents the proposed current priority for evaluating the performance, we recognize that future planning and priorities must be flexible in order to take advantage of opportunities resulting from advances in science and technology and the development of new test methods, and to respond to testing needs.
The question that is before SACATM is: do you agree with this recommendation, and if not, please explain.

Thank you. Next we would have the opportunity for public comment. Are there any comments? Okay. There are none. We will take this to the SACATM discussion. I just lost track of who the discussants are. I'll bring that back. Before we go to the discussants I have two questions. First, we are being asked to vote. Rather, you are being asked to vote on this particular nomination. I would like clarification of why we are being asked to vote. In other words, when is a vote required and when is it not required? My second -- well, let's just answer that first.

In 2003 ICCVAM published a process by which anyone could make nominations for any kind of activity related to our mandates. This could be the nomination of a new method that we would evaluate the validity of, one that they think might be promising. It could be the nomination of a test method to undergo validation studies. That is the background by which nominations can occur. Again, we will accept those nominations from anyone. Once they come in, we provide background information and refer them to an appropriate working group for development of a draft priority and draft recommended activity. Once that is made -- and in this case the committee voted unanimously to give this a draft low priority and not to proceed with the nominated activity -- the next step in the process is to solicit public comment. The next part of the process after that is to bring the draft priority and draft recommended activity to SACATM for their comments. That is what is being presented to you to evaluate. A vote is not needed in this case, given the priority that we decided to apply. Obviously, if we were considering a new alternative method we would in fact have to look at existing data and come up with a comparison to the newly proposed method.

Okay. We do have [ indiscernible ] for the record, Dr. Cunningham. With that, the discussants will be Michael, Daniel, and Roger. Why don't we start out.

Thank you. I get to be the first. This is really the shortest and simplest assignment, but it is one that is very difficult for me to address or deal with, because both sides did not provide enough information or detail regarding their positions. For example, based on the priorities described, any further evaluation of this should have a low priority at this time. Unless we have a comparative analysis done on these priorities, there is no ground for us to say we don't want to deal with anything else anymore. Nonetheless, I am not ready to bump the nomination up to a high priority. This is a moderate priority. We are too busy with high priority items already. In any case, there is always room for evaluating the current protocol for the 2-year bioassays. For example, if the pharmacokinetic studies show that there is no difference, do we still want to do a two-year study? I can go on with another example. If, in the past, we have enough studies showing that only the studies with a higher response rate would be associated with human carcinogens, then do we still need 50 animals per group for a two-year study? Basically, what I am trying to say is we always have room for modifying the current plans. This all boils down to the priorities within the next five years. That is all.
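[Editor's note: the question raised above, whether 50 animals per dose group are still needed, is at bottom a statistical one. The sketch below is illustrative only; the incidence values are hypothetical and were not presented at the meeting. It shows how the probability of observing at least one tumor-bearing animal depends on group size.]

```python
# Illustrative only: a back-of-the-envelope look at why group size
# matters in a 2-year bioassay. The incidence values are hypothetical,
# not figures from the meeting.

def detection_probability(p: float, n: int) -> float:
    """P(at least one tumor-bearing animal in a group of n,
    given a true lifetime tumor incidence p)."""
    return 1.0 - (1.0 - p) ** n

for p in (0.01, 0.05, 0.10):
    for n in (10, 50):
        print(f"incidence {p:.0%}, n={n}: "
              f"P(detect >= 1) = {detection_probability(p, n):.2f}")
```

At a true incidence of 5%, a group of 10 animals will often show no affected animal at all, while a group of 50 detects at least one affected animal most of the time; shrinking group sizes trades animal use against sensitivity to rare tumors.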

Okay. Thank you. I forgot the written comments. I think everyone is aware, but just for the record, there were written comments on this provided by Dr. Frank Johnson. I think you all have a copy of that. Dr. Markman?

I guess I would like to first comment that I really respect and appreciate the nomination, or the suggestion thereof, because in principle I have been there on many occasions. This is a research-directed test that originated back, probably with NCI before the days of NTP, in the context of beginning to understand the carcinogenicity of agents. It was only later turned into more of a regulatory tool for understanding exposure to humans and the risk thereof from these agents. So in that respect I am not sure that it ever did have the kind of validation toward humans that it maybe could have or should have. However, having caveated my response that way, I would say that I concur with the placement of carcinogenicity in the five-year plan. It is important, but I would put it into that second tier. There are other endpoints that are of more immediate need and concern. I think some of these other endpoints will begin to address the complexity of the needs that will face us as we try to develop non-animal alternative methods for toxicity. Secondly, I guess I would say that ideally -- and I agree with Roger on some of his comments -- all of the methods should be validated against human data in the light of this response, but I think this one in particular may be one of the more difficult ones to do that for. We can certainly address it in the context of the known carcinogens, but much of what this is used for is to address carcinogens for which the human data is lacking. The low dose exposure data for theoretical or presumed human carcinogens we just don't have. We have a wealth of information on the background incidences of cancers in humans, but no direct causal evidence linking them to a particular agent. I think its full validation will be limited in that regard.

I looked at this with considerable interest and with a long background of experience, having conducted a number of these, some of which were conducted by organizations I directed, before the National Toxicology Program and others. This includes the core protocol of NTP and modifications, including extending the time frame. It has a remarkable impact on what one sees when you have that additional six months of observation. The other is studies that we did, particularly others, that were also modified. We would do the fate of the material and ancillary studies that give us insights into the mechanisms of action. Now, when we tried to get those incorporated into protocols it was a tough sell. In fact we got very little in. They just took a contrary view: let's see whether something is there and then go to the mechanistic studies. We had an alternative view: let's do it in parallel, because it's less expensive and makes better use. So I was delighted yesterday when I saw how things were emerging in terms of this kind of alternative approach. I am also very familiar with the extensive analyses and evaluations that have been done. They are basically gold mines of data. They clearly point out the bluntness of the tool, the extent to which carcinogenic outcomes can be predicted from the effects in terms of the maximum tolerated dose. In my opinion the bioassay yields a large number of false positives. I think it is also important to recognize that the studies and their results have a wide impact. They play a major role in the biennial report. They play a major role in terms of IARC and other government agencies. And there is a lesson that I want to bring out on this. An awful lot of emphasis is given to what is basically a yes or no answer as to whether you do get cancer or you do not. I have noted many times that I think it gives a misleading impression to other scientists and the public about the toxicants.
So the message, I think, is that it is critical that you move beyond a yes or no answer to understand the potency. I am delighted when I see that reflected in some of the work done. Another point I would make is that if this is turned down for the present time, I think it should be done so indicating that it is recognized that this is not a validated method. The importance of that is in terms of any future actions that come to the committee when looking at something in the future related to carcinogenicity or other endpoints that come out of these studies. It must be recognized that it is important to compare the results to the human data, not start with this as the gold standard. So while I have considerable enthusiasm for a critical review of the 2-year rodent bioassay using the rigors of validation methodology, I have to say that at this time it would be inappropriate to proceed with the validation. I do not think that the resources are at hand to do the extensive review that is needed, but with the caveat that, when a carcinogenicity assay or other chronic effect predictor comes forward, it is recognized that this may be a flawed in vivo assay and that an evaluation should start with the human data and only secondarily this test.

Thank you. Any other discussion?

I am pretty much in agreement with the general comments of the rest of the group. Given the restricted and limited resources that are available, and that ICCVAM has done a monumental job at promoting alternative methods with the available budgets, and also in consideration of the mission to advance alternative methods, I think that setting this at anything but a low priority would place a considerable strain on their resources. You probably can at least make use of the available databases.

[Audio and video feed have stopped]

There is a good chance of coming up with a more harmonized outcome. Okay. We were charged by this group with coming up with a proposal. The way we went about developing that was first to look back on our cumulative experience with validation, which each of the organizations has, and of course that goes back up to 15 years. We have had many lessons observed, and unless we learn from those and put them into our practices we really haven't learned; those aren't lessons learned. We had a couple of concepts that were put together, and we talked about them informally at a meeting in February. We met twice during the meeting to further talk about that. Then we had a working group meeting. It has been Dr. Marilyn Wind and myself. So now I would just like to go through some of the aspects that we are considering for inclusion in this proposal. The goal, obviously, is to ensure that new alternative test methods that are put to use will provide for equivalent or improved protection, and that reduction, refinement, and/or replacement of animal use will be done wherever scientifically feasible and consistent with providing for protection. As noted, the purpose of this agreement then is to promote international cooperation, collaboration, and communication among the national validation organizations in order to ensure optimal design and conduct of validation studies. The idea here is that the studies will generate the information to support national and international decisions on those alternative methods that are proposed for regulatory use. Secondly, to ensure high-quality independent scientific review as part of the process for achieving regulatory acceptance of scientifically valid alternative methods. When we had our workshop and asked for public comments, transparency and the opportunity for stakeholder involvement was something that we heard over and over, loud and clear. That was something that all the stakeholders wanted to be able to have the opportunity to participate in.
Thirdly, to enhance the likelihood of harmonized recommendations by national validation organizations that would be submitted to both national and international regulatory authorities. This in turn would lead to a more rapid international adoption of those methods. And lastly, the purpose is to avoid duplication of effort and to leverage limited resources to achieve greater efficiency and effectiveness. Compiling all of that data and carrying out careful reviews of it is a very time-consuming process. We can share that burden. As for the proposed membership of this agreement, first and foremost it is a voluntary group. Nobody is required to do anything other than what they have agreed to do. It would be the U.S., Japan, the European Union, and Canada: four members initially. As I mentioned, these organizations today are somewhat equivalent. The third member is [ indiscernible ]. The fourth is Health Canada. The inclusion of other members, and the appropriate status, would be decided by consensus of the members. This is very consistent with, and is modeled after, the ICCR agreement, where shortly after their formation they were approached by other countries that are also interested in participating in that effort. So this cooperation provides a framework for cooperation, collaboration, and communication across three related but independent critical stages. The first is test method validation studies. The second is the independent peer review of the validation status of those test methods after the validation studies are completed. And the third, taking into consideration the outcome and deliberations during those peer reviews, is the development of formal test method recommendations.
They recognize that consistent and effective cooperation, collaboration, and communication are essential during all three stages in order to support and achieve international regulatory acceptance of alternative test methods within the shortest possible time frame and in the most efficient manner. As for other details related to the process, the heads of each member organization, such as ICCVAM-NICEATM and Health Canada, are responsible for ensuring coordination in accordance with the agreement. All decisions would be by consensus, and all decisions would respect the laws, policies, rules, regulations, and directives of members. So I would like to just briefly take you through each of these three critical areas of cooperation and talk about some of the key aspects and the desired outcome. With regard to validation studies, the key aspect there is to share information prior to validation efforts. Typically there would be a lead member validation organization. They would provide to the other members the study objectives, the specific regulatory testing purpose, the proposed validation study design, the detailed study protocols, and the substances to be tested, including the rationale for their selection. The objective here is to develop consensus on these critical aspects before the validation study starts. We think that this is very significant. It is something that, if done, will greatly expedite agreement on what these methods are good for and what they are not so good for: their usefulness and limitations. Okay. The second critical area is independent scientific peer review, both the meetings and the reports of those peer reviews. Number one is public availability of the review documents. One of the things that we think is essential is that all of the documents, all the information and data that is seen by an independent review panel, should be made available to the public.
It is also desirable for all of the groups to have the opportunity to comment on the materials. Whether that can occur at a meeting, and whether that meeting is held in public, is something that doesn't currently exist in other countries. We need to have discussions about how that might be addressed. A fourth key aspect is that the peer review panel report would be made available to the public, and obviously to all of the members, so that it can be considered in developing final recommendations and so that public comments on that report can also be considered. You all have received the peer review report presented this morning. That is how we proceed in our process. That report is out now for public comments that are due by early July. That is an example of that. Once all that information comes in, it will be considered in coming up with final recommendations. But the goal here is to conduct these meetings in a way that will meet the needs of all the members and hopefully avoid the need for repeated peer review in each of the different countries. The third critical area is the development of final recommendations. So this is being developed in response to that charge. The proposal will need to go back up, but basically what they are looking for is for these groups, working with the regulators, to come up with a proposal that they can present. This agreement would then operate independently in the future. It would also be available to address issues that might be referred to it. I think there are additional comments.

I would like to also emphasize that creating an international agreement is not an insignificant endeavor from the standpoint of getting permission. As outlined, this is a proposal. It is really here for your comments on the concept and the elements of it. It is not a done deal.

Within the U.S. government structure, the State Department has the responsibility for all international agreements. So anything of this nature would have to be done consistently with State Department regulations.

So can you say this one more time, your response to Dr. Cunningham: this group would be -- their relationship -- it starts out as a working group? Then it progresses from there?

What has happened is that it was mandated that the groups come together and develop a proposal. So they are developing this proposal as a working group, with representatives as I pointed out on the slide. The working group reports back. Once we have finished our deliberations we will send forward a proposal that we are in agreement on, for comment.

Has there been any discussion on funding mechanisms for this group?

No. Right now the idea is that each of the four groups would be responsible for obtaining the funds for their level of participation as agreed on. And that is a very important part of this: just as ICCVAM's recommendations are not binding on anyone or any organization, this group's recommendations are not going to be binding either. It simply serves as a framework for each of the four centers, a framework to make sure that information is exchanged and that each group hopefully will agree on the same thing. That is why there is no binding decision.

And the peer review process you talked about -- you would actually sponsor that, and then hopefully it would not need to be peer reviewed again in this country. Do you really think that would be the case?

If the independent scientific review occurred outside of this country, that is one of the things that we would have to take a look at. We do have requirements for scientific peer review. That is something where we would have to determine the extent to which it applies. Those recommendations will still come back for comment. That could perhaps be considered a level of peer review that would meet the requirement. That is still being worked out.

Could you just very quickly, roughly summarize what the experience has been over the last half a dozen years in terms of reviews conducted by the four different entities?

Sure. Speaking on behalf of Canada, I don't believe that you have undertaken any reviews of alternative methods. Is that right?

That is true. We basically participate in exercises organized and run by ICCVAM.

What has happened in the past is that review panel reports have not been publicly available, except out of the United States. By having those made publicly available and forwarding them as part of the dossier to the organizations, that information will help speed the adoption of these methods. We do know that methods where we have had independent scientific peer review and thorough background review documents go through the international review process much faster than those that do not. They have gone through it in probably as short an amount of time as any. On the other hand, we do know that where there are disagreements, that kind of documentation doesn't exist. Sometimes the review process takes four or five or more years at the international level.

Going back to my original question, how about Japan?

In the Japanese system, I think, five or six members, five or six peer reviews. [ speaker unclear - audio low ]

And how about --

Maybe you would like to respond to that.

In Europe, over the last 17 years we had 34 methods for which our scientific advisory committee issued specific statements. Twelve of those methods were replacement methods. Sixteen were refinement and reduction. So in total we are talking about 34.

And we have how many?

Well, if everyone wants to get into numbers --

I'm trying to get a feeling here. You know, let me preface it by saying I am enthusiastic about this. I would like to see it go forward. But I think I struggle with how you get this moved forward and yet maintain the independence of each of the entities. I think that is going to be a real challenge.

As a scientist I am supportive of the international cooperation and coordination, fine, but at the end of the day I have to think that each country, including the U.S., has certain responsibilities they have to look after. That is kind of roughly what I thought I had there. Europe has been out front on this with numbers. We have some differences that must be behind this that are going to have to be resolved.

Sure. I just want to address the numbers. As I listed yesterday, 17 alternative methods have been adopted since 1999. [ indiscernible ] of those are based on technical evaluations that included detailed background review documents and peer review through the process. ECVAM issued validity statements on 34 methods. It is my understanding that 11 of those have been adopted by regulatory authorities. Four of them are now included in the European [ indiscernible ]. There is a little bit of difference. Of the other methods, some of them do not have regulatory applicability. Some of them just have not been accepted yet. There is a difference. I think overall, generally, where there has been a thorough review, that has expedited regulatory approval. I think that the independence is important. When we have gone back and looked -- let me give you a quick example. The first disagreement occurred with [ indiscernible ]. Their view was that these were valid replacements. As for the basis for that, we both agreed on the performance of the test with regard to sensitivity and specificity, false positives and false negatives. But their assertion was -- and it was not backed up by scientific data -- that the rabbit test had a 20 percent false positive and false negative rate for dermal corrosivity. But there wasn't any basis for that; those were just the views of experts. When we looked at it, we found that there was a 5% possibility of a false negative for things that were borderline. As far as false positives, it was very difficult to come up with a biologically plausible way that things could be false positive. That is just an example of where, if we had been working together early on, we would have had a discussion about the basis for those kinds of things. We would have been able to work past that and avoided that difference in opinion. So as a result of that, we recommended that it could be a screening test, where you could accept positive results, but negative results would have to be confirmed.
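[Editor's note: the rates debated above come from standard 2x2 comparisons of a test against a reference method. The sketch below is a minimal illustration; the counts are hypothetical, chosen only so the resulting rates resemble the 5% false negative and 20% false positive figures mentioned, and are not data from the meeting.]

```python
# Hypothetical counts (not data cited at the meeting), chosen only so
# the resulting rates resemble the 5% false negative and 20% false
# positive figures discussed for dermal corrosivity testing.

def performance(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard 2x2 comparison of a test against a reference method."""
    return {
        "sensitivity": tp / (tp + fn),          # true positives found
        "specificity": tn / (tn + fp),          # true negatives found
        "false_negative_rate": fn / (tp + fn),  # positives missed
        "false_positive_rate": fp / (tn + fp),  # negatives over-called
    }

stats = performance(tp=19, fp=4, tn=16, fn=1)
for name, value in stats.items():
    print(f"{name}: {value:.0%}")
```

Note that the false negative rate is one minus the sensitivity and the false positive rate is one minus the specificity, which is why the two sides could agree on test performance while disagreeing about which rate mattered for regulatory use.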

Just to follow up, my concern is that this is actually going to bog some processes down, slow things down, just the opposite of what you're hoping for. When you get into situations where there is disagreement, everything stops. I am just expressing my concern. This may have a reverse effect.

I appreciate that concern, and I think that is a valid concern.

Let's come back to discussion. We still have a public comment period. Are there any discussion points or clarifications?

Has this been run up the State Department flagpole?

Not yet. This is being done under the auspices of the ICCR agreement, which is being led by FDA. So it is being done under that process. And in fact there is a public meeting this afternoon.

Go ahead.

You mentioned that this is the ICCR. It seems it is being driven by [ indiscernible ]. Has there been any discussion with any others who also have international concerns as to whether they would concur with this kind of arrangement?

To the extent that there has been the opportunity for discussion at this meeting [ speaker unclear - audio low ] knowledge by the industries regulated.

[ speaker unclear - audio low ]

Well, the only impact it could have would be to expedite the agreement. The recommendations that would go to these organizations, instead of being different coming out of validation, would be similar. Okay, we want to do this on a consistent basis, because when we do it on an ad hoc basis it works well and is beneficial.

I had three other comments. I will just read them and then you can answer all at once. One had to do with setting priorities: today we had a situation where we were asked to comment on setting priorities for something moving forward, and I was just wondering how setting priorities is going to occur in this scheme. I also wondered if there were other models for this type of collaboration, in areas other than alternatives, where there has been this kind of international collaboration. The third is that, in general, as we have these discussions they come with a recommendation, and I did not specifically see that there was a recommendation here, though I think I can figure out what it is: you are asking whether we support moving forward or not. Those are the three points.

Okay.

We would appreciate your comments about this. Any concerns? Expressions of support or non-support? That is what we are looking for. With regard to other models, having been at the meeting in September, it appears to me that this is a very good mechanism. We were only one topic of the five. There were other areas, such as the safety of materials, that might be included; ours was one of those topics. Another topic is standardized ingredient labeling and nomenclature. And so, based on the discussions that occurred, this looks like a very good framework. I think, from my perspective and those that I have talked to, that just having a consistent framework for people to get together to talk about things and share information is a very positive thing to do. Setting priorities: because the priority would be for an activity to undertake, it probably would not go to this group for additional discussion. I mean, it certainly could receive their input, but typically the only ones that would be relevant would be if there was a proposed validation study, or a review of a test method that was nominated. And then it would be useful to seek, informally, the views of the others. Usually it is very well known what the priorities of these things are in other areas.

I just wanted to say that although this was presented in the context of cosmetics, this kind of activity is called for in Chapter four of the five-year plan and fulfills many of the items that were requested: to develop international partnerships and to promote harmonized adoption. So this goes a long way toward setting up a framework that allows us to achieve this.

You need to understand what Bill said.

I have a question as far as the role of SACATM moving forward. Would SACATM be morphed into [ indiscernible ] where you have Japanese and Canadian scientists being on an advisory board?

That's very deep.

Well, you have ad hoc nonvoting international representatives here at the table right now from Europe and Japan. And similarly we are chair of ICCVAM and chair of NICEATM. This is international -- we are doing this to exchange information. We want to create awareness. I don't foresee that there will now be a new advisory board or new rules being made, but whatever activities go on in the context of this international cooperation, we will certainly be presenting them here, just as we have done.

The functions and authorization obviously come from the Authorization Act of 2000. We will continue to carry out those activities, and your functions would not change.

Do you have something to say?

I have one other comment. If you look in the appendices of the five-year plan, under the duties -- this is the law -- it says: consistent with the purposes described in subsection B, carry out the following functions, which do say to facilitate appropriate interagency and international harmonization of acute or chronic toxicological test protocols that encourage the reduction, refinement, or replacement of test methods. So what we are doing already, and what we are proposing to do on a more consistent basis, is to fulfill that mandate.

Okay. That brings us to a public comment. We have one request. That is from Karen.

Okay. It has been stated that ICATM will finalize and forward recommendations to regulatory authorities, which I assume would include the ICCVAM member regulatory agencies. My question is how individual ICCVAM member agency concerns, and possible disagreements with recommendations, will be handled -- will the regulatory needs of the US member agencies and their statutory mandates be considered? The structure that has been presented seems to be one level removed from individual ICCVAM member input and representation. That is my first question. Do you want to answer that first?

Go ahead. I'll speak after you're done.

I am wondering if there is another potential conflict, in that individuals from ICCVAM and NICEATM involved in developing recommendations would also be interacting with member agencies to possibly get them to accept recommendations for the sake of harmonization. There might be a pressure aspect that, while possibly not intentional, might occur.

In response to the first question, I believe that is from slide 16, which has the goal for development of final test method recommendations for regulatory acceptance. The goal is harmonized recommendations forwarded to the international regulatory authorities. That is, the goal is that the recommendations from all four entities would be the same. If they were, then you could refer to those as harmonized recommendations. That wording is a little misleading; we probably need to change that, because there are no ICATM recommendations -- there are ICCVAM recommendations. What that refers to is the goal that all four of those would be the same. With regard to whether this might conflict with the statutory authority of member agencies: again, these are simply recommendations. They are not decision-making. They do not have any legal status as far as regulations or policies. They are simply science-based recommendations. As far as the pressure to accept, certainly there are always pressures to harmonize. I think that is no different from an expert consultation or any other organization where there are international representatives. There is always pressure. But the ultimate decision on ICCVAM recommendations lies with ICCVAM. The ICCVAM committee will in fact make the final recommendation. It will be done in consultation with the other organizations to make sure we understand their positions.

Considering that ICCVAM is not a consensus body, how will minority opinions, if they occur from ICCVAM member agencies on any potential recommendation under consideration, be handled?

All the decisions are documented. If there were a minority view by a member agency, it simply would be reflected in our test method evaluation reports. To date, there has not been a case where we could not achieve consensus. That hasn't occurred.

Other public comment?

[ indiscernible ] with the Humane Society. I want to jump back to provide a bit of public policy history because it is important. It was specifically mentioned that this is in the interest of the U.S. State Department. Further, I think it is very important for the members to know that the number one trade consideration of the European Union, as expressed by the Transatlantic Economic Cooperation Council -- which is a sort of State Department consideration, both in Europe and here in the United States -- is ensuring that we have consensus around the cosmetics directive because of the impact on U.S. industry. And that began this dialogue, which stretched out more specifically to the implementation of the directive and harmonization here in the United States to ensure that our trade was not going to be negatively impacted. It also stretched more proactively to address those harmonization considerations under REACH. Why is this important? It is important because the governments recognize that there is not currently a process in place, outside of the OECD, to really ensure that this harmonized activity takes place at the very onset. Certainly from that perspective the animal protection community is supportive overall of what is being considered. My one consideration has more to do with the EU, both the cosmetics directive and [ indiscernible ], and the fact that they have deadlines they have to meet. I don't know how, in utilizing this sort of opportunity for harmonization, they can avoid placing themselves, quite frankly, in a situation where they are bogged down in meeting what is their legal set of deadlines. And I think it would be good for you, Bill, to address that, and then also the gentleman who is representing ECVAM.

Thank you for your comment. We do know that there is quite a sense of urgency in Europe, with the impending deadline of the cosmetics directive in March of next year for single dose studies. They do have an exemption for repeat dose studies until March of 2013. The single dose studies do include acute oral toxicity, as I mentioned. They do include ocular and dermal irritation. I am sure this will be echoed. I know that [ indiscernible ] has provided the time lines on what he projects the best case scenarios are. There is a sense of urgency. When I was at the meeting in March, I had discussions with some of the individuals from the European Commission. They did express their sense of urgency. As a result of that, we have talked about how we can help. Part of that is moving the BCOP and isolated eye test methods forward as test guidelines rather than as guidance. We had originally proposed these move forward as guidance documents. There were still a lot of gaps as far as the usefulness of those test methods. But we feel that, in order to assist with moving these forward so that they can be used for regulatory purposes, we would work on these and then submit them as test guidelines. So the ocular toxicity working group has been very active. We have had numerous calls. NICEATM staff has been working very hard to put these together, and we hope to be able to forward those in July. That way they can

Maybe I can just give you my [ indiscernible ], if you want. First of all, we all agree that the deadlines are really tight and we want to speed up validation and regulatory acceptance. When I look at your presentation, you speed up the process by doing only one peer review instead of doing several peer reviews in parallel. Especially for complex biological systems, two peer review panels might come to slightly different conclusions and recommendations. That is always possible. Also, by harmonizing the peer review process, you will be able to overcome this problem [ indiscernible ]. However, I also think there is a lesson here: if you have too many stakeholders, at one point you start slowing down the process. You have to consolidate too much, and that goes with the comments by Dr. [ indiscernible ]. In conclusion, in view of the tight deadlines, we cannot afford to slow down the process. Thank you.

Thank you. May I make one comment?

Just for clarification, my understanding from internal discussions and from your presentations is that, for these purposes, this ICATM, as it would be constituted under the ICCR, would be under the cosmetic part and not under the other parts; that is the FDA purview at this point, is it not?

Not exactly, Richard. The ICCR is the group that set out to come up with a framework for all of the organizations to work together to come up with consistent, harmonized recommendations on the validity of test methods. Their interest is in test methods for cosmetics. In reality, the test methods for cosmetics are, to a large extent, the same test methods that you use for other product sectors. So, the idea is that while this framework is being developed currently under the ICCR umbrella and mandate, in the end it would be a freestanding agreement among the four organizations that would apply to all of their work on test methods, not necessarily limited to cosmetics. Because there are no regulations and no test methods limited specifically to cosmetics. There just are not. All of the ones that we have considered are applicable to cosmetics, but also applicable to consumer products and to pesticide products. It is being driven by that product sector at this point, but it is not limited to that.

Okay. That sounds reasonable. There is not a specific tie-in to ICH at this point, necessarily, but certainly the information developed around this table will be brought to our ICH representatives the same way we do it now in context. It would not limit things directly to cosmetics.

No. Any ICATM recommendation coming forward could be taken by the U.S. FDA, which is the U.S. organization for ICH, to the ICH for consideration, or by your Center for Devices and Radiological Health, the CDRH, to the International Standards Organization. ICCVAM recommendations are made to the regulatory agencies, who can move them forward to the respective international organizations that they deal with. We would not normally deal with them.

I think we are now going to go on to the SACATM discussion. Let's go on to the lead discussants. We have Richard Becker, Mary Jane, and Daniel. Richard Becker is here. I do not think we have written comments.

He did not provide written comments.

Mary?

I agree with this whole concept. I think it is a great step forward. Part of my concern is in the details of the how-to as we are going forward. One concern that I have is that right now there are four initial members, and agreement will be by consensus. I think that might be difficult in some aspects of this process. I was wondering if there could be some consideration as to, when decisions are made, whether it should be a consensus or a quorum decision. That might be something to look at as far as the details of the how-to process. The reason being that I do not know if you are going to get consensus on all of the decisions that you will need to make -- not just on draft recommendations, but on all of the process ahead of that. That was a concern.

My second concern is, as you are adding members, how that will take place. If this is a working model that works really well, I foresee other countries that are not right now involved coming on board, and how are you going to maintain your voting and decision ability for recommendations and drafts going forward as the numbers increase? Another concern I had was, I know this is a draft document right now and all of the parties have not really had a chance to look at all of the details, or work out all of the details yet, with the processes being different for each of the member organizations. I was basically wondering how you were going to come to a consensus on the different processes and how you were going to facilitate that consensus. I think there should be a lot of thought put into, for example, if there are different peer review processes by the members now, how you will pull that together and build a consensus to have one guideline or one way that you will go through the process to get certain steps done. I also wanted to clarify my comments before about the scientific advisory board. I was not saying that for my own benefit; we have been rotating off. Part of it was that if ICATM seems like it needs to have scientific advisers, I was asking that question based on the redundancy: would that scientific board be redundant with what each member country already has in place, and would it be more efficient to have the scientific advisory board be a combined board for ICATM? Those were just some thoughts that I had.

A response?

I appreciate your comments. I think, as we initially started out, we talked about very detailed processes that would be consistent with what we are doing here. Because those are not currently being done overseas and in other countries, [ indiscernible ]. As we continued to talk, and as I discussed this with the ICCVAM scientific advisory committee in May, there was a willingness and agreement by the [ indiscernible ]: they decided that in the future all documents that were going to the peer review panels would be made publicly available. That has never been done before. So, in the course of discussions about how you do business and the need for transparency and the opportunity for stakeholder involvement -- again, it is an iterative process -- they agreed to that. Making peer review panel reports publicly available has never been done by the [ indiscernible ]. At that meeting they agreed that that would be done in the future. These are two very significant procedural decisions by one of the four members that will help move things forward. So, it is a framework for discussion. It is not an autonomous unit that is making independent decisions. That is why I do not think there is a need for an independent international scientific advisory board. I think that would be another redundancy. Each of the three existing organizations has its own advisory committee, and having liaisons seems to work very well as far as gaining input from those other organizations. It is not an easy process. You said that it is voluntary and that decisions are by consensus with regard to processes that you agree to follow; it is very challenging to make that work. Anyone who has ever worked on international agreements knows that it is not an easy thing to do. There are a lot of challenges and barriers to overcome.

Dr. Diggs?

I think this is a good concept. I applaud you. Hopefully, you will move forward with it. I think that most of my questions have been stated, asked, and answered. Good luck. It is going to be a huge task, with a tremendous amount of effort ahead of you.

I would say, first off, that I certainly applaud the approach. I would say, as far as a purpose statement or objectives statement, it cannot be said much better than what ICCR already says. A framework like this would indeed promote much better free trade by identifying ways to remove regulatory obstacles among the regions and, ultimately, do that in a framework that maintains a high level of protection of consumer, animal, and environmental health. I say that in particular because, as I might have stated yesterday, when we address the issues of occupational, transportation, consumer, animal, and environmental exposure in the products that we develop, there is a variety of regulatory agencies that we interact with here in the U.S. When you multiply that by the countries that we market in, over 170 countries, it becomes a daunting task to begin to understand all of the complexity of trying to get countries to understand the development of new assays, the refinements to existing animal studies or, God forbid, a non-animal method that they do not understand. So, anything to lower the barriers to those communication pathways would be much welcomed. Anything that would streamline or make consistent the processes, the expectations, the reciprocity, the acceptance, the implementation of all of those kinds of methods -- those are the kinds of things that we would actively support. Having said all of that, I think the one distinction I would make between the ICCR purpose statement and the ICATM statement of purpose that Dr. Stokes put up is that there is one element I would say is missing. That is, when we talk about the ICCR statement about barriers to free trade, it is a different statement of purpose than one specifically related to the validation of methods coming before the committee.
If I had spoken up during the ocular [ indiscernible ], which I did not, or in the acute tox session: the acute tox workshop highlighted the fact that it was really important that we set a framework for the strategy for future work. We have talked in the last two days not just about the validation status of methods coming before the committee but, in the case of the acute tox, about addressing methodologies that do not exist today. In particular, I think on one of Dr. Stokes' slides he involved the stakeholders in that discussion process. I think the acute tox workshop was particularly effective in addressing the fact that there is a diversity of stakeholders that use that assay. It is not just the registrants and regulators as they look at the LD50 as a study or as an endpoint; in the case of emergency medical personnel or veterinary interpretations, there were a variety of stakeholders at that meeting, and it set the stage for how that number gets used or interpreted in the context of various exposures, or in an R&D setting, or even in a clinical setting treating acute symptoms. I guess I would just say that I support the value that Dr. Stokes placed upon what he called a bottom-up approach. I might call it a "begin with the end in mind" approach: as we look at the feedback from stakeholders in the strategy-setting stage, it is there that we have the opportunity to make a hard and fast definition of what the objectives and goals are for the development of a new assay. Then, we can set a strategy for execution against those goals for new methods. Finally, we are in a position to clearly define what our measures of success are for those methods. I think we could then have a streamlined validation process for the development of new methods with the path of least resistance. If that kind of activity is occurring at an international level, under an international framework, and can be adopted, it would make all of our lives that much easier. Thanks.

Thank you. Any final comments from SACATM? Marilyn? Dr. Fox?

I have two brief comments. One is that I certainly applaud the idea. I am looking even more to the future than ICATM. I see it as a possible venue to engage countries that are not participating at the current time in setting the agenda on alternatives, like China and India. Maybe they will join, maybe they will not join, but certainly they can be influenced in a global way by four leading countries or four leading groups, and the alternative methods could be promulgated. There is a group of toxicologists, certainly in India and China, that can be spoken to with a harmonized viewpoint. This body would carry a lot more weight than an individual country, especially considering politics with a big P, than a single country approaching other countries and developing countries. I see this in a more futuristic mode of having a positive impact on global alternative animal use. That is all.

Any others? Okay. Thank you very much.

I have a quick announcement. If folks have not made their travel arrangements to get to the airport, please see Debbie or Sally.

Let's take a 10 minute break and we will get started just before 3:30. Let's make it as quick as possible.

[Session on break until 3:30p ET].

First, a quick announcement. We will have shuttle buses leaving for the airport at 4:00 and 4:30. If that does not meet your needs, please talk to Sally or Debbie. Our next speaker will be Dr. Kojima, who will give us an update on the Japanese Center for the Validation of Alternative Methods.

Thank you very much for giving me this opportunity. My presentation is the JaCVAM update. I am sorry [ indiscernible ]. [ indiscernible ] applies to Japanese activity. I talked about Japanese activity one year ago. Since then, Japanese activity in this field has progressed. I will talk about this activity. Today's topics are the international validation studies that are ongoing, the new validation studies this year, and the independent scientific peer review and recommendations to regulatory agencies. The international validation study is ongoing. As you know, this is progressing -- [ Audio/Speaker not clear ]. This is a validation study.

The in vivo and in vitro work is organized. This January we submitted the SPSF to OECD. The SPSF has been accepted by the OECD secretariat. We have published to the OECD guidelines. We are progressing the validation study. This June we are starting the validation Phase III study. In vitro is where we will start. Next year we will start the [ Foreign speaker difficult to understand ]. This is a [ indiscernible ]. It is a United States, Japan, and EU collaboration. This is Phase IIa. The validation studies were going on this week [ Foreign speaker difficult to understand ].

So, this study was made in Japan. The study name is STTA. This study was finished a few years ago. An OECD guideline for this study was submitted last year. The peer review is finished. The members show -- this assay at this point can only be used for [ indiscernible ] testing. Positive responses will need to be tested.

For the new validation study this year, we are going to validate the antagonist STTA assay. This is a whole new validation study. JaCVAM and ECVAM are participating in this study. [ Foreign speaker difficult to understand ]. We start this month. The goal is by the end of this year. The next new validation study is a cell transformation assay using Bhas cells. It has been submitted to the OECD guidelines. [ Foreign speaker difficult to understand ]. This was developed by the Food and Drug Safety Center in Japan. We coordinated with ECVAM. NICEATM will join the validation study. We start this validation study next year. Then, the eye irritation test, a short-time exposure cytotoxicity test. This was developed in Japan by Kao. [ Foreign speaker difficult to understand ]. Five validation studies will be started this month. The skin irritation test is accepted [ Foreign speaker difficult to understand ]. This is a made-in-Japan skin model. It was developed by J-TEC, a Japanese company. It is supported by JaCVAM. There are seven [ indiscernible ] participating. This is [ Foreign speaker difficult to understand ]. This validation study will start in June or July, I think.

Japan is using the h-CLAT. This was developed by Kao and Shiseido. It is coordinated by JaCVAM and ECVAM [ Foreign speaker difficult to understand ]. It will be started next year. So, I will talk about the independent scientific peer review and recommendations to regulatory agencies. The independent scientific peer review was incorporated as an ICCVAM peer review. [ Foreign speaker difficult to understand ] acute toxicity working group is good, no? Okay. Thanks. These Japanese meetings will be starting at any moment. The LLNA-BrdU: it is a Japanese independent peer review. This peer review is conducted [ Foreign speaker difficult to understand ] to evaluate only in Japanese. The next is the battery system to predict phototoxicity. [ Foreign speaker difficult to understand ]. This peer review will start in Japan. The shadow peer review: this meeting is already in progress. These [ Foreign speaker difficult to understand ] to evaluate each test. This meeting will be starting at any moment: the reduced LLNA, the pyrogen screening, and acute systemic toxicity. It [ Foreign speaker difficult to understand ].

We have regulatory acceptance in progress: a review of alternatives to animal testing for safety evaluation of cosmetic ingredients used in quasi-drugs. For the LLNA-DA, the Japanese peer review is finished. The [ indiscernible ] peer review is ongoing for this assay. Only the Japanese peer review is finished. For the next step, we will go to the [ Foreign speaker difficult to understand ]. I talked about quasi-drugs. [ Foreign speaker difficult to understand ]. It is the same system as China and Korea. The United States has drugs and cosmetics, and Canada and the UK have the same system. There are a lot of cosmetics. So, the quasi-drugs in the Japanese market, any 1 [ Foreign speaker difficult to understand ]. Analyze data to prepare the safety data. [ Foreign speaker difficult to understand ] to harmonize the cosmetics. These are quasi-drugs. [ Foreign speaker difficult to understand ]. The United States has a lot of OTC drugs. In the EU it is all cosmetics. One is hair growth; that is also a cosmetic. It is difficult [ Foreign speaker difficult to understand ]. It is difficult to evaluate. In Japan, for the alternatives to animal testing for safety evaluation of cosmetic ingredients used in quasi-drugs [ Foreign speaker difficult to understand ], we have seven task forces: skin irritation, skin sensitization, penetration, eye irritation, phototoxicity [ Foreign speaker difficult to understand ].

So, my comment about ICATM and ICCR: [ Foreign speaker difficult to understand ] the Japanese cosmetic industry association members and the Japanese society members. It is a small organization. [ Foreign speaker difficult to understand ]. They help us to evaluate these methods for cosmetic [ indiscernible ].

Thank you. Thank you very much. Thank you for your attention.

Thank you very much, Dr. Kojima. Are there any questions? Okay. Thank you. Okay. Next up we have an update from the European Center for the Validation of Alternative Methods. Dr. Linge?


First of all, I would like to thank the Committee for the invitation to come here. My name is Jens Linge and I work for the European Center for the Validation of Alternative Methods. [ indiscernible ] changed jobs on the first of May. Our director is the acting head. She is the acting director of the institute for health [ indiscernible ], part of the Joint Research Center. I am going to structure the presentation in four parts. I will talk about the EU actions and the legislation in the area of the three Rs. I will talk about timelines. The biggest part of my presentation will be on the ECVAM key areas. Finally, I have a few slides on the international cooperation on alternative test methods; we have already had a long discussion on that. This is a short summary, not a comprehensive list of our actions, mentioning the most important laws in the EU. First of all, we have the cosmetics directive and its amendments. We have the REACH regulation that led to the European Chemicals Agency, ECHA. It has operated since the first of June. We are in the pre-registration period for all chemicals. We have the directive on the protection of animals used for laboratory purposes. This directive is currently under revision, and we expect a revised directive to pass the European Parliament over the next two years. ECVAM has been established since 1991; it has been 17 years. We have around 60 staff members working on the [ indiscernible ]. We have research and development funding projects. Over the last five years we have invested [ indiscernible ] $140 million in alternative methods. Furthermore, we have the Community action plan on the protection and welfare of animals, and the European Commission is involved in a partnership with industry, the European Partnership for Alternative Approaches to Animal Testing.

You have probably seen this a couple of times already. What struck me today is that the first paper that was mentioned today was published in 1986 and took many years to go through validation and regulatory acceptance. That outlines the whole process. ECVAM is involved in this part of the process. In that context, we have a two-part system: we do the validation, but that does not mean that there is regulatory acceptance. It was mentioned a couple of times already that we have the seventh amendment of the cosmetics directive, which imposes tough deadlines on the phasing out of animal testing of ingredients. We have testing and marketing bans enforced in 2009 and 2013, depending on the toxicological endpoint. There is an urgent need for us to validate and have regulatory acceptance of alternative methods for these toxicological endpoints before 2009 or 2013. This is our realistic expectation. The low-hanging fruit, skin corrosion and phototoxicity, can be found in the guidelines. For skin irritation, the regulatory [ indiscernible ] is waiting; we have not gotten regulatory acceptance. We have the more difficult endpoints: eye irritation, which we expect over the next few years, skin absorption/penetration, phototoxicity, skin sensitization, and genotoxicity. This includes the micronucleus assay and the COMET assay that is coordinated by our Japanese colleagues.

Coming to the more difficult endpoints: the deadline is still coming, and it will be really hard to meet the deadlines. Here is a list of the [indiscernible]. We are talking about repeated dose [indiscernible] and, last but not least, developmental toxicity. This is especially important because within the REACH regulation this results in an enormous amount of animal testing. It is projected that 60% of the animals for REACH are only for reproductive toxicity; this is mostly the [indiscernible] bioassay. These are the difficult cases. Within the EU, we have several research projects that cover this endpoint for us. The aim is to move these methods forward and to apply them in an integrated testing strategy. We do not expect a single method to replace these tests; we expect several methods to be used in the testing strategy.

Now I am going through the different endpoints on which we are working at ECVAM. I have tried to summarize it; there is an awful lot of text, as I got feedback from all of my colleagues and was happy to give them many details. First of all, the skin irritation validation study and the chemical selection have been published. Currently, we are drafting the OECD test guideline, which would be submitted to the OECD by the European Commission. We have to reply to our national coordinators at the [indiscernible] level. We also have a task force working on skin irritation, especially to deal with comments at the OECD level. We tried to work on the [indiscernible]. We are currently doing two peer reviews on similar methods: one is SkinEthic and one is the EpiDerm assay. In the area of eye irritation, we are conducting a retrospective validation of cytotoxicity and cell-function-based assays. We hope to prepare a dossier for the next meeting.

The ESAC is our scientific committee, which is similar in spirit to the SACATM Committee. For eye irritation, we are working on human reconstituted tissue models, [indiscernible]. There was a submission by COLIPA on SkinEthic and EpiOcular, and a plan is being submitted to industry. We have two assays. We are working on the OECD guidelines and the [indiscernible] test, trying to incorporate all of the suggestions and improvements required by our scientific committee. Together with [indiscernible], we are working on the evaluation of [indiscernible]. We are currently trying to get more statistical support; we have only one biostatistician.

In the key area of sensitization, there was the [indiscernible] report and the methodologies for screening skin sensitization. On the evaluation of performance there were [indiscernible] endpoints. Also, there is a manuscript coming up on the development of new approaches to the identification of respiratory allergens. We are planning a workshop in 2008 to evaluate integrated approaches for skin sensitization testing. We need to discuss the reference chemicals for method development and evaluation, and have presentations from the ECVAM task forces. Furthermore, in the area of sensitization, we are also working on the peptide-binding assay, [indiscernible], and a draft study plan has been prepared and submitted to the European cosmetics industry. In the area of the local lymph node assay, we are following up regulatory information. We have a retrospective evaluation in our new chemical database; we want to see whether people are using the assay, whether the guinea pig is still used, and so on. We are working on the performance standards. A revised version was distributed in October of 2007. We postponed the decision in order to wait for the European peer review to come to an end; this is currently on hold.

We are also [indiscernible]. In the key area of genotoxicity we have a study; the report was published in 2008. We are currently following up to get regulatory acceptance, and we are collaborating with studies done by our Japanese colleagues. We are working on genotoxicity assays. Currently we are waiting for the publication of the manuscript on chemical selection for genotoxicity testing; this is due in the next couple of weeks, in June 2008. For cell transformation we are running a prevalidation study, which was started in January. The complete results are expected in January of next year. [indiscernible]. Kinetics is another difficult endpoint. We are running a study on the stability of 72 substances with the goal of establishing a database. We are also working on models for predicting the absorption of substances into the brain, and on in vitro models for the prediction. Recently, a workshop report was published on physiologically based kinetic modelling, in November or December of last year. We are also collaborating with the [indiscernible] initiatives. This is going more into research and development. We also have some students working on new methods; this is further away from validation. We are running a robotic platform which is used for research and development. We also try to use it for validation pre-testing of chemicals, so the robotic platform is ideal. Furthermore, we had two articles in the area of profiling methods. We have been taking part in the meeting in 2008 on developmental toxicity testing. For biologicals and food, we are now strengthening our collaboration with the European Food Safety Authority. There has been a workshop report on the consistency of approach with quality control, and we have been involved in the quality control. There was a [indiscernible] in January 2008.
We are collaborating with the European Medicines Agency in London on the harmonization of target animal safety tests. [indiscernible] We have experimental animals. There has been a report giving an overview of the test requirements in the area of food safety. We are also taking part in a validation study organized by the European reference laboratory on marine biotoxins. The study aims to validate a test for the detection of toxins in shellfish. This is just about to be finished; we are currently waiting, and then we can start with the peer review. In ecotoxicology we are currently validating the assay; this validation study started in January of this year. We are also attending the fish toxicity expert group, which recently had a meeting. There is another workshop on the use of [indiscernible]. How am I doing with time? This is just to summarize our activities in the area of endocrine disruptors. We are also taking part in this European process. We are optimizing, and we want to get these into validation. There are a couple of validation studies; here I mention three validation studies which were outsourced. We have been working on the second draft of the test, which was submitted by Japan. We tried to get [indiscernible] into it, but eventually the decision was to take it out in June 2008. I cannot yet report on the outcome of this. This is actually my last slide on the different toxicological endpoints, and it is one of the most important. In the area of reproductive toxicity we are using roughly 60% of the animals. Currently there are no alternative methods. However, there is a reduction [indiscernible]; this is called the extended one-generation study. Instead of using two generations, we would have this extended one-generation study. We have been participating in the working groups, and there has also been involvement of the partnership. In March 2008 we had the technical documents for the extended one-generation reproduction toxicity study.
We have been co-organizing five workshops for this extended reproduction toxicity study; the meeting took place in April of this year. We are involved in the testing guidelines. So far, neither the extended one-generation study nor the [indiscernible] tests included in the first draft are part of this framework. As a consequence, the in vitro tests are not part of this testing guideline at the moment. We have submitted our comments. That finishes the part on the toxicological endpoints. This slide shows the progress we have made in our database and activities; it was started in November. In the fourth part of my presentation I want to come back to our discussion on the international cooperation on alternative test methods. Let us take one example that was recently discussed at length here, just to give you a bit of the European perspective. This includes the European performance standards from October 2007. We have agreed that harmonization is desirable. As for the differences between the ECVAM performance standards and the ICCVAM performance standards, Dr. Charles mentioned that in the European version we have concurrent positive controls; in the European standards the positive controls are mandatory. We are using a minimum of four animals, and we have five panels per group. Finally, we also have [indiscernible] which can be obtained from a pool of animals. These are actually the main differences between our proposals. I would like to finish with that and acknowledge the participation of all my colleagues; they have put together the big picture for this presentation. I want to thank you for your attention.

Thank you very much. Any questions? Okay.

Karen from EPA. Can you tell me what is or are the scientific question or questions that you are asking with regard to the carcinogenicity assays? In other words, what questions are those assays addressing?

You are talking about the --

In other words, you have an in vitro assay. What are you trying to address? What science question are you trying to address? Are you just trying to find out whether a chemical has carcinogenic potential, or what is the question?

I'm not an expert in the field. Of course you want to label the substance.

So it is a yes or no answer?

Yes.

Any other questions? Thank you very much. Well, that brings us to the end of our agenda. Do any SACATM members have any business? We could have probably spent some time on it, but we met about a year ago. The tentative schedule is -- they are looking at next May. Do you feel that is an appropriate --

Well, I would not say we should meet more often just for the sake of meeting. However, that said, we have some sense of urgency in trying to move the lymph node assay forward. If you needed to get us together again before anything was finalized on that, and if it can be done sooner, then I wouldn't want to wait before we met again.

We have taken that into consideration --

I might suggest -- and I can't remember the context anymore, but I know in the context of some of the workshops we have done these conferences that are in any kind of an open format. So there may be things like an update on the local lymph node assay or some other topic where you want to take feedback. There might be the opportunity to do something like that on an interim basis without actually getting us physically back together.

But say there was a situation where you wanted to get us together: one thing I would hate to see is a lot of your time and resources used up in getting us together, because I realize that is no small task. I think the idea of something like a webinar -- and I understand we have public meeting requirements; that complicates the factors.

I would like to concur with the other members of the panel. If there is a necessity for us to meet at an earlier time, that's fine. The two-day format is probably more functional; the one day we had previously was just not enough. If we need to set a meeting to discuss a couple of issues, that is potentially something that could be thought about.

Thank you. Any other business? I would just like to thank everyone, at least those remaining, for your comments. I think this has probably been the most helpful meeting as far as feedback and advice. We have had good attendance, and I want to thank the attendees for traveling to North Carolina for our meeting; we usually go to Washington. Thank you for taking the time to read a considerable amount of material and provide meaningful advice. It is very much appreciated. We certainly have a list of things we will be working on, taking Bill's comments into consideration.

I agree. Thank you very much. I hope you have a safe trip home. Very good meeting.

And the chair would like to thank everybody. I think we can declare the meeting adjourned.

Thank you. The final versions of the PowerPoint presentations will be posted on the Web; they have to be made compliant before they can be put up. They will be available as soon as they are ready. Please send in your reimbursement forms as soon as you can. That is all. Thank you. She already sent you a -- yeah. She put the form in the envelope. Thanks.

[Relay Event Concluded]

Back to top