                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
                                  ***
                  ADVISORY COMMITTEE ON NUCLEAR WASTE
                                  ***
                          121ST ACNW MEETING
     
                            PUBLIC MEETING
                                  ***
                              Ballroom B
                              Crowne Plaza Hotel
                              Las Vegas, Nevada
                              
                              Wednesday, September 20, 2000
               The Commission met in open session, pursuant to
     notice, at 8:00 a.m., B. John Garrick, Chairman, presiding.
     
     COMMITTEE MEMBERS PRESENT:
     DR. B. JOHN GARRICK, Chairman, ACNW
     DR. RAYMOND G. WYMER, Vice Chairman, ACNW
     MR. MILTON N. LEVENSON, ACNW Member
      DR. GEORGE HORNBERGER, ACNW Member
      STAFF AND PARTICIPANTS:
     DR. JOHN T. LARKINS, Executive Director, ACRS/ACNW
      MR. HOWARD LARSON, Acting Associate Director, ACRS/ACNW
      MR. RICHARD K. MAJOR, ACNW Staff
     MS. LYNN DEERING, ACNW Staff
     MR. AMARJIT SINGH, ACNW Staff
     DR. ANDREW C. CAMPBELL, ACNW Staff
     JAMES E. LYONS
     NEIL COLEMAN, NMSS
     WILLIAM REAMER, NMSS
      DR. JOHN TRAPP, NMSS
                          P R O C E E D I N G S
                                                      [8:00 a.m.]
               MR. GARRICK:  Good morning.  The meeting will now
     come to order.  We welcome Jim Lyons, joining us on the
     staff side of the table, the new Director for Technical
     Support for the ACRS and ACNW.
               MR. LYONS:  Thank you.  I'm glad to be here.
               MR. GARRICK:  Good.  This is the second day of the
     121st meeting of the Advisory Committee on Nuclear Waste.
               My name is John Garrick, Chairman of the ACNW. 
     Other members of the committee include George Hornberger,
     Ray Wymer, and Milt Levenson.
               The entire meeting will be open to the public.
                Today, the committee will hear a project overview
      from the Department of Energy; a Department of Energy
      representative on Yucca Mountain will give a status report
      on the site recommendation consideration report from DOE;
      discuss major aspects of the total system performance
      assessment, site recommendation version; hear an update on
      chlorine-36 from DOE and the M&O; discuss the results of
      ongoing studies on the fluid inclusion issue from a panel
      composed of representatives of the University of Nevada-Las
      Vegas, DOE, and the state; and review relevant activities
      and the status of the site tour scheduled for tomorrow.
               We'll also continue to work on preparation for the
     meeting that the committee has with the Commission,
     originally scheduled for October 17, now rescheduled for
     December 18.  At that meeting, we intend to discuss such
     issues as site sufficiency review, risk-informed regulation
     in NMSS, Part 71, ACNW action plan and priority topics, and
     we will also discuss the recent trip taken by ACNW to
     England and France.
               Andy Campbell is the Designated Federal Official
     for the initial portion of today's meeting.
               This meeting is being conducted in accordance with
     the provisions of the Federal Advisory Committee Act.
               We have received one request from a member of the
     public regarding today's session.  Mr. Corbin Harney, who
     spoke to us yesterday, wants a few more moments.  I'm
     suggesting that we maybe do that just before the break this
     morning.  And if others of you wish to make comments, please
     contact a member of the staff.
               Also, please use one of the microphones, identify
     yourself, and speak clearly.
               Okay.  With that, I think we'll jump right into
     the first presentation.  I'm going to ask each of the
     speakers to introduce themselves, tell us what their
     positions are and organizations and what have you.
               MR. DYER:  Good morning.  I'm Russ Dyer.  I'm
     DOE's Project Manager at Yucca Mountain and I can't tell you
     what a delight it is to be here this morning.  I just
     finished a three-week detail in Washington, got in last
     night.
               Next slide, please.
               I'm going to give a quick overview.  There are
     about six topics that we're going to go through, look at a
     little at the project accomplishments over the past year,
     design evolution, the role of the site recommendation
     consideration report, the progress that we're making toward
     the closure of the NRC key technical issues.
                My understanding is that was gone over in
      considerable detail yesterday, so I'm just going to hit the
      highlights.
               Integrated safety management system is an
     initiative that is going on across the Department of Energy
     throughout the complex.  We are finishing up our last phase
     validation this week on that.  That is something that has
     taken a lot of energies, just to tell you a little bit about
     that, and then a quick project outlook, what lies ahead of
     us here.
               Next slide, please.  Accomplishments.  Next slide.
               If we look at the general schedule of things here,
     running out through the site recommendation and its
     associated final environmental impact statement and then
     further on out to the license application and eventually
     operations.
               We have put the draft environmental impact
     statement on the street.  We've had public hearings on that. 
     The next major activity that's called for in the Nuclear
     Waste Policy Act is the site recommendation.
               We are putting out a site recommendation
     consideration report this fall/winter which lays out the
     technical basis at this point in time to support and
     underlie that site recommendation decision.
               Next slide, please.
               The things that we got done so far in 2000. 
     Underlying the site recommendation consideration report are
     a series of process level model reports and then below those
      lie what are called AMRs, analysis and model reports. 
     I'll show you a diagram in a little while which shows the
     cascading hierarchy of technical supporting documents, all
     of which get summarized up into a site recommendation
     consideration report and then finally built into the site
     recommendation.
               There are nine process model reports.  We have
     accepted seven of them unconditionally.  The remaining two
     PMRs have been accepted with conditions.  There were 121
     supporting lower level, more detailed analysis and model
     reports, 119 of those have been completed.
               On the design side, the pertinent document there
     is called an SDD, the system description document.  There
      are 24 of those that lie within our plans; all 24 that are
      needed to support the SRCR have been completed.  There are
      others that will be needed eventually to support the license
      application.  Those have a further, finer level of detail in
      the engineering analyses, and 34 of those have been
      completed.
               If you have been following the repository safety
     strategy, about a year ago, we determined that we had
     focused entirely on the post-closure.  We needed also to put
     in a pre-closure element of the repository safety strategy
     and we've completed a preliminary cut at the pre-closure
     safety assessment.
               The total system performance assessment for the
     site recommendation has been completed.  Bob Andrews will
     talk to you much more about that, and about the base case
     calculations for the TSPA-SR and the sensitivity
     calculations that have been completed and are also in
     progress.
               Next slide, please.
                In the testing arena, and Mark Peters will talk to
      you in considerably more depth about this and we'll see more
      of it tomorrow, we've started hydraulic testing in the
      alluvium; the single-well pump test has been initiated in
      one of the Nye County holes down in the south of Yucca
      Mountain.  We should start multi-well testing and tracer
      testing in '02.
                Within the cross drift, hydraulic properties and
      seepage testing has been initiated in the lower lithophysal
      unit.  That would be in niche five.  The drift-to-drift
      seepage test is underway in alcove eight.
               And yesterday, I believe, although I didn't hear
     confirmation, we turned on a test over at the Atlas
     facility, over on Locie Road, an engineered barrier system
     ventilation model test.  It's a scale model of surrogate
     waste packages that have heaters in them and we're looking
     to validate our models to look at how ventilation can remove
     heat from that system.
                And the geotechnical investigation of the north
      portal surface facility is underway.  This is to provide us
      more information in the seismic design arena.
               Next slide, please.
               Data and software qualification progress.  The
     commitment was that we would have 80 percent of our data and
     software qualified at the time of the SRCR.  We're well on
     the way there.  We're actually over the target with the
     software qualification.  We've got a little bit to go with
     the data qualification.
               Our draft rule for the replacement to 10 CFR 960,
     10 CFR 963, we went through the public hearing process on
     that.  There was a draft final rule that we've submitted to
     NRC for concurrence back in May and, of course, that's all
     tied into the 197 and 63 issue.
               Planning process.  Of course, like most -- well,
     every other DOE agency, we're anxiously awaiting Congress to
     work on the appropriation.  There is supposed to be a
     conference committee scheduled for today and I'm hopeful
     that by the end of today, we'll have an idea of what our
     budget for 2001 will be.
               Of course, we've done planning based on
     assumptions.  We have a lot of stuff that we would like to
      do.  It does not look like we will be able to bring in
     the entire 437.5 million dollars that the Administration
     requested for the project and the program for this year, but
     it will be reasonably close, I hope.
               Of course, that's the caveat at the bottom.  We
     have a lot of plans, but you can only do what you can afford
     to do.
               Next slide.
               Design evolution.  The design has evolved somewhat
     since the EDA-2 design of a couple of years ago.  One of the
     major changes is as we looked at the waste stream inventory
     coming in, the waste stream, in general, is somewhat hotter
     because there's higher burn-up fuel as part of the waste
     stream.
               Whenever we reevaluated the heat that was in the
     system, of course, one of the requirements that we have on
     the system is a maximum 350 degree Centigrade centerline
     temperature in the waste packages, so that the cladding is
     not subjected to unzipping.
               And when we looked at the effects of the blanket,
     the thermal blanket that would be put in if you put in
     backfill, it looked like we would violate that.
               So part of the exercise that we've done over the
     past year is looking at updating the design to remove the
     backfill, which, of course, gives you a system which then
     could be ventilated for a considerable period of time.
               This is just a schematic of what we're looking at
     now with, again, the horizontal cylinders emplaced on
     pylons.  We now have a drip shield that is part of the
     system.  The drip shield is of titanium, a titanium alloy.
               The material for the waste package is still a
     nickel alloy corrosion resistant material on the outside,
     with, I believe, a stainless steel, essentially a structural
     material on the inside.
               Other things, we've gone away from a concrete
     lining in the emplacement drifts which we had several years
     ago.  Right now, the ground support is just structural
     steel.
               Next slide, please.
                The design, of course, has not been static; it
      continues to evolve.  Looking at the
     underground, there are a couple of variables that we can
     play with reasonably simply.  Whenever we talk about design,
     it's really a combination of design and operational
     parameters that you can look at juggling.
               For instance, you can take the same design, that
     is, waste package design, the same emplacement drift
     specifications as far as diameter spacing, ground support
     that you use, and you can operate that system in different
     ways by either varying the spacing between the waste
     packages, so tailoring the line thermal load within the
     emplacement drifts, or by using combinations of active and
     passive ventilation for various periods of time, you can
     adjust and control how much heat the system will put into
     the rock mass.
               That's been, as you're undoubtedly aware, an issue
     that the Nuclear Waste Technical Review Board, for one, has
     been concerned about.  It's also one that the U.S.
     Geological Survey expressed concern about in the Director's
     review of the viability assessment a couple of years ago.
               So looking at different ways that one can approach
     the management of heat in the system is something that we're
     doing now.     
               The current reference design has moved the
     emplacement drifts somewhat further apart, about twice as
     far apart, about 81 meters apart, and would allow the rock
     mass, the drift walls to heat up above boiling for some
     distance away from the surface of the emplacement drifts,
     perhaps maybe ten to 15 meters.
               And looking at what it takes if we were to try to
     bring this whole system to a below boiling design, we're
     still evaluating what it would take to do that and what you
     would gain from such a change.
               Next slide, please.
               TSPA-SR, the TSPA to support the site
     recommendation, will incorporate the no backfill reference
     design in the base case analysis.  That's a change that
     we're moving to, and I believe that either Bob or Abe will
     talk about some of this, looking at the sensitivity studies
     that are associated with that.
                The current plan is to maintain the reference
      design -- an above-boiling operational mode and design -- as
      the base case, and look at sensitivity analysis
     from the TSPA to try to understand what are the pros and
     cons of various approaches.
               Next slide, please.
               The site recommendation consideration report. 
     This is not a pitch for pyramid power or anything, but it is
     an attempt to try to lay out, at least graphically, the
     hierarchical structure of the thing that sits at the top,
     which is eventually a national action, the Secretary's
     action, and then the supporting documents that make up part
     of the package that go to these higher levels, the Secretary
     and the President, to support a decision.
               The site recommendation consideration report
     currently consists of two volumes.  Volume I is an update of
     what we know and how TSPA works.  Volume II is a preliminary
     suitability evaluation.
               Eventually, the full site recommendation must have
     many other things in it that are called out by law,
     including comments from the state and affected counties and
     Native American Indian tribes.
               Sitting below these two volumes of the site
     recommendation consideration report are what we would call
     the technical basis report.  I talked earlier about the
     PMRs, the nine process model reports, and then below those
     nine PMRs, there are, I believe, it's 122 of the analysis
     and modeling reports, the engineering analysis.
               All of that makes up hundreds of thousands of
     pages of reference material that is referenced or pointed to
     by these higher level documents.
               Our intent has been to try to put this together in
     an integrated manner.  The way we chose to do this is we're
     going for a web-based approach on this, with hypertext
     links.  So one could go to a document and eventually be able
     to go from a citation in the document, directly plunge down
     into the reference citation.  It's going to take us a while,
     I think, to get to that.
               Next slide, please.
               The role of the site recommendation consideration
     report.  It is not a document that is called for by
     legislation.  But the Nuclear Waste Policy Act does call for
     the DOE to hold public hearings to receive residents'
     comments on DOE's consideration of a possible site
     recommendation action.
               And we wanted to put something reasonably recent,
     the last big document we had was the viability assessment,
     we wanted to put something reasonably recent on the street
     that could inform this dialogue and comments for the public
     hearings, and that's where the site -- the idea for the site
     recommendation consideration report came from.
               We are still on schedule for releasing it in late
     2000.  The intent is to summarize the technical basis and to
     facilitate the public review and comment process.
               The technical basis documents, all the bottom of
     the pyramid, almost all of that is going to also form the
     technical basis for the license application.  So we have
     encouraged certainly the Nuclear Regulatory Commission, as
     they are looking at the basis for sufficiency comments on
     the body of work that DOE has done, to not confine
     themselves to just looking at the SRCR, but to also look at
     the other documents, the PMRs, the AMRs, and down to the
     individual lower level reports.
               Next slide, please.
                And that was a proposal that we made to the NRC
      back in November of last year.  NRC agreed in principle to
     this proposed approach.  So far, we've provided seven of the
     nine PMRs to the NRC staff and 119 of the 121 supporting
     AMRs, and 11 of the SDDs have been provided.
               Some of the documents that we have currently in
     review within DOE, the TSPA Rev. 0 and the Yucca Mountain
     site description, an update of that.  So we will get those,
     I hope, on the street here within the next four to six weeks
     and those will also support this technical basis that we're
     building.
               Next slide, please.
               Progress toward closure of NRC KTIs.
               Next slide.
               We've had three meetings, each on one of the KTI
     areas, since August, got a couple more planned.  I know that
     we've got five more that we need to get planned.  Those have
     been -- I'm very encouraged by the progress on those.  We've
     been able to either close or close pending virtually every
     issue that has been put up, which means that between the
     information knowledge that's known now and the plans that
     are provided essentially as a promissory note for things
     that will be done over the next period of time before the
     license application, those have been acceptable to NRC staff
     as a reasonable path forward.
               Of course, there is no guarantee that the answers
     that you get and work that has not been done yet is going to
     turn out the way you think it will now.
               Next slide, please.
               The sub-issues.  As I said, this has allowed us to
     take almost all of the sub-issues that were examined in the
     three meetings to date and either close them or close them
     pending.  The agreements that have been reached help us
     define a path forward for issue closure and we're -- my
     understanding is that we proposed dates for the technical
     exchange on the remaining KTIs.  The intent, of course, is
     to get those done.
               I think the last meeting that we have scheduled is
     February on TSPA, if I remember right, and the intent is to
     get everything in hand well before that date.
               Next slide, please.
               Integrated safety management system, and this is a
     very large effort within the Department of Energy, and it is
     an attempt to bring what I would call disciplined process
     and integrated management across the entire spectrum of
     things that DOE is involved in.
               Instead of just having nuclear quality assurance
     and nuclear safety and having a different culture and a
     different program for other aspects of safety, the intent is
     to bring one safety system umbrella over all that goes on
     within the department.
               For years, within the department, we have been
     sort of victimized by a series of almost independent
     programs that have competed for resources, rather than
     putting a single program in place that really helps you
     define across the whole spectrum of your programs,
     everything that's going on; helps you prioritize it, helps
     you identify where the safety risks are, and dedicated
     appropriate resources to the most important things.
               So this has been a very good exercise for us and
     we are in the last phases of that and I would hope by the
     end of this week we'll be able to put our name on the wall
     and say that we are moving forward in this.  This is not a
     one-time endeavor, however.  It's one of these things that
     is based on the concept of continuous improvement.  So once
     you have essentially your basic system in place, the charge
     and challenge is to continue to improve that system year
     after year after year.
               Next slide, please.
               Phase one verification.  We had no discrepancies. 
     We had six opportunities for improvement.  Those are listed
     here.  One noteworthy practice, the worker involvement.  The
     workers, especially at the site, have really embraced this
     program and have made it the success that it is.
               When told that they could stop work if they
     thought that there was an unsafe environment or a lack of
     adequate planning prior to implementing some activity, this
     really energized them and they have assumed absolute
     ownership of this program.
               It's great and you will see that tomorrow, I
     think.  You will see the pride and the ownership of the
     safety program on the part of everybody who works on the
     program.
               The phase two verification, the way it breaks out,
     phase one is that you have the paper in place.  Phase two is
     that you can demonstrate that you are implementing the
     program that's on that paper.
                Phase one was completed back in July.  Phase two,
      we're currently in the middle of that activity right now and
      should get that report
     from an independent review team this week.
               Next slide, please.
               Well, the big thing looming on the horizon,
     funding uncertainty.  The appropriation, as I said, has not
      been set.  We are crossing our fingers to see what comes out
     of conference committee today.
               There is a considerable discrepancy between the
     marks that the House took into the conference and the
     Senate.  The Administration request was 437.5 million.  The
     House marked 413, the Senate marked 351.  So probably it's
     going to be -- the final mark will be somewhere between 351
     and 413, I would hope much closer to 413, but any of those
     marks is considerably below the mark that the Administration
     supported.
                As we prioritize the work for the next fiscal
      year and coming fiscal years, that work supporting the
     site recommendation schedule has the highest priority and
     what we told the appropriators and the Congressmen was that
     at lower funding levels, the site recommendation schedule
     may be at risk.
               The schedule for licensing milestones is
     uncertain.  Of course, our highest priority is for the site
     recommendation work, and what that does to the work that's
     needed to meet these commitments that we're making as part
     of the KTI resolution meetings remains to be seen.  Some of
     those may be stretched out in time somewhat.
               Next slide, please.
                The SRCR and the technical basis documents below
      it continue to absorb almost all of our energy.
     There's an enormous amount of work that has been done to put
     together this integrated effort.
               We learned an awful lot in the viability
     assessment effort and trying to put things together, put the
     defensible basis together, and make sure that there was
     consistency through all of the program.
               But translating those lessons learned into firm
     accomplishments has taken a lot of energy.  We're putting a
     high priority, obviously, on the interactions to close the
     KTIs and to support the NRC sufficiency review.
               Looking forward to better understand the
     sufficiency review process, of course, we're waiting, as
     most people are, for 10 CFR 63, 963 and 197 to hit the
     street.
               The same caveat about funding limitations applies
     here.  Actually, I think this is almost exactly the same
     thing we said on the last slide.  Depending on the funding
     level, the SR schedule may be at risk and, of course, it can
     impact everything downstream from that, including the LA.
               Next slide, please.  And I believe I've gone
     through my last slide here.
               So let me, if we have time, I'd be happy to answer
     any questions, Mr. Chairman.
               MR. GARRICK:  Thank you.  Would you be willing to
     comment on the things that would have to happen for the
     design to stop evolving?
               MR. DYER:  Stop evolving.
               MR. GARRICK:  Or to fix the design.
               MR. DYER:  Even if we were to select, say, a point
     design concept, design is an evolutionary thing and I can't
     imagine it stopping, per se.
               MR. GARRICK:  What I'm getting at is there is
     obviously, in the minds of the experts, some things that
     you're looking to achieve, some things that have to happen
     before you feel comfortable with the design.  We keep
     talking about evolving design.  As a matter of fact, the
     National Academy of Sciences recommended many years ago in
     their re-thinking report that you remain flexible, that we
     remain a little more flexible in the way in which we're
     going to manage the high level waste, up closer to the time
     that we have to really do something.
               But nevertheless, there comes a point beyond which
     you have to make decisions.  You have to make that decision
     about the design that you're going with.
               And I guess what I'm asking, from a design
     standpoint, not necessarily from a regulatory standpoint,
     but from your own requirements standpoint, what are some of
     the key things that you think have to happen in order for
     you to be happy with the design?
               MR. DYER:  If we look at the requirements that a
     design must satisfy, first, making clear what those
     requirements are, there are technical performance criteria
     that must be built in, but there are other criteria that may
     revolve around economics, they may revolve around I'll call
     it technical credibility.
                And for some of those, we're having a hard time
      putting our arms around how hard and fast they should be as
      requirements -- how do you build those into a requirements
      document, which you normally think of as pretty much
      performance specifications?
               But there are other considerations that may be
     just as important as those hard and fast performance
     specifications.  But that's an area that we're exploring
     right now.  A program that is technically immaculate, that
     costs hundreds of billions of dollars is probably not very
     useful for this country.
               What's the tradeoff between a program that
     adequately protects health and safety of the public and the
     workers and a program that the nation can afford and
     support?
               I suspect I didn't answer your question, because
      I'm not altogether sure that I know or that anybody knows
     what the right answer is.
               MR. GARRICK:  What I was getting at is where the
     focus of attention is.  Sometimes I think in the
     preoccupation with the regulatory milestones and the various
     reports that you have established as goals, it obscures some
     of the understanding of what's going on in fact with the
     design in terms of what really is important to safety.
               Now, we'll get into this more with the performance
     assessment, of course, but, also, I think that people, after
     you've spent three-plus billion dollars, are beginning to
     think that the time ought to be getting close to when a
     design ought to be surfacing that you're quite comfortable
     with or if you're not comfortable with it, you have a pretty
     darn good idea of what it is that it's going to take to meet
     these fundamental requirements of safety and performance.
               I was just pushing that a little bit in terms of
     what were the main issues from the perspective of the
     project manager.
               Any questions?  Milt, have you got any questions?
               MR. LEVENSON:  No.
               MR. GARRICK:  Ray?
               MR. HORNBERGER:  Yes.  Russ, if your budget does
     come in at, let's say, 413 million, will the SRCR be
     released in calendar year 2000?
               MR. DYER:  That's our current intent, yes.
               MR. HORNBERGER:  So when you mentioned it might be
     winter 2001, that was under a budget constraint scenario.
               MR. DYER:  Yes.  That would be a fairly strong
     budget, constrained budget.
               MR. HORNBERGER:  So for the NRC, you're still
     looking for May of '01 to have the comments back.
               MR. DYER:  I think that's right.  I believe --
     yes.  Right now, we haven't changed anything in the way of
     our interactions.  That's what the KTI meetings have been
     geared for and that is our preferred path.   
               MR. HORNBERGER:  Also, just one small
     clarification, if you can clarify for me on the technical
     exchanges.  Of course, you held the one on the unsaturated
     zone in Berkeley, I believe, and you have one coming up in
     Albuquerque on the saturated zone.
               And your little footnote said that unsaturated
     flow is also covered in the saturated zone flow.  Is this
     just a follow-up from the unsaturated KTI or is there some
     part of the unsaturated zone flow that gets lumped into the
     saturated zone?
               MR. DYER:  I'm going to get some help here.  I
     thought they were lumped together and we just treated them
     separately.
               MS. HANLON:  There are two aspects of that, Dr.
     Hornberger.  There are two aspects of that.  First of all,
     the unsaturated zone in Berkeley did not include the matrix
     flow for the saturated zone.  So that part will be moved
     forward to Albuquerque.
               The second thing is we will be discussing our
     evaluation of additional information on infiltration
     regarding the unsaturated zone in Albuquerque and how we're
     going to proceed forward to hopefully clothespin that item.
               So those are the two things that are put forward.
               MR. HORNBERGER:  Thank you.
               MR. GARRICK:  Any questions from the staff?
               [No response.]
               MR. GARRICK:  Thank you very much, Russ.  Very
     good.
               MR. DYER:  My pleasure.
               MR. GARRICK:  Our next presentation will be on the
     site recommendation consideration report.  It says here
     Steve Brokoum, but you don't look like him.
               MR. SULLIVAN:  No.  You have Tim Sullivan instead. 
     Good morning.
               MR. GARRICK:  Good morning.
               MR. SULLIVAN:  Actually, this will be a two-part
     presentation.  When I'm done, Carol Hanlon will give you a
     brief overview, from DOE's perspective, of the sufficiency
     interactions that have been conducted to date.
               I'm the Team Leader for the Site Regulatory
     Products, including the site recommendation.
               This is an outline of what I'm going to discuss
     here.  I'm going to focus here mostly on the contents of
     Volume II of the SRCR and I will explain why in a moment,
     and then some further schedule information.  Some of this is
     redundant with what Russ has presented, so I'll move over
     those parts quickly.
               Current status.  The report itself is in DOE
     review and it's on schedule for release in late 2000,
     calendar year 2000.
               An overview of the contents of the site
     recommendation consideration report.  Volume I of the two
     volume report was built around the requirements of the
     Nuclear Waste Policy Act, Section 114(a)(1), which requires
     three things; a description of the proposed repository, a
     description of the waste form and packaging, and a
     discussion of the data relating to the safety of the site.
               So on the next slide, the Volume I is organized as
     follows.  It includes, in addition to the information I just
     referred to, as a part of Section 4, the discussion of the
     data relating to the safety of the site.  It includes
     TSPA-SR results.  It also includes a section on pre-closure
     safety assessment.  Both of those support Volume II.
               And then Volume II itself -- Volume I will be
     similar in form and content to the viability assessment;
     that is, it is descriptive material and analytical results.
               Volume II is somewhat different in that here, DOE
     will make a preliminary evaluation of suitability in
     accordance with the proposed 963 siting guidelines.
               So Volume II is, in fact, organized around the
     proposed regulation.  Now, I'm going to spend a few minutes
     describing to you what the contents of that volume are.
               It's actually three parts.  The first is an
     introduction, the second is the preliminary pre-closure
     suitability evaluation, which includes a description of the
     assessment approach used to achieve safe operations before
     closure of the repository, and it will also include a
     discussion of the suitability criteria identified in the
     regulation.
               The focus here is to ensure that the repository
     systems limit releases and that we have adequate emergency
     planning systems in place or response systems in place.
               Then Section 3 is the preliminary post-closure
     suitability evaluation.  Here, the proposed rule includes
     requirements for the TSPA methodology that I will describe. 
     It calls for DOE to identify natural and engineered features
     important to isolating waste, and we will do so.
               There are a series of suitability criteria or
     characteristics that I will describe.  And, finally, it has
     release limits to which we will compare the TSPA results for
     the 10,000 year compliance period.
               So, first, I'm going to describe in a little more
     detail Section 2 and then Section 3.
               So in Volume II, Section 2, the pre-closure
     methodology, the approach that we're taking is to apply
     established nuclear technologies, proven technologies, and
     using those technologies and established methods for design
     and operations; that is, accepted codes and standards.
               The assessment approach itself, fundamentally, is
     to reduce releases to workers and to the environment.
               The approach starts with a systematic
     identification of events based on standard hazard
     evaluations.  It then follows with a screening or a
     determination of which events apply to the systems that will
     be built at Yucca Mountain and then, finally, these events
     are categorized into category one or category two, depending
     on their associated probabilities.
               When the consequence analysis is complete, the
     preliminary consequence analyses are reported.  These
     determine the dose for comparison with the limits specified
     in the regulation.
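[The hazard screening and probability-based categorization Mr. Sullivan outlines can be sketched in code. Everything in this sketch is illustrative: the event names, annual probabilities, 100-year pre-closure period, and probability cutoffs are assumptions for illustration, not DOE's actual values. (The proposed rule defines Category 1 sequences roughly as those expected to occur at least once before permanent closure, and Category 2 as less likely but still credible sequences.)]

```python
# Illustrative sketch only: screening events from a hazard evaluation,
# then binning the applicable ones into Category 1 or Category 2 by
# probability.  Thresholds and data below are assumed for illustration.

def categorize_events(events, preclosure_years=100):
    """Screen hazard-evaluation events, then categorize the applicable
    ones by their expected number of occurrences before closure."""
    categorized = {"category_1": [], "category_2": [], "screened_out": []}
    for name, annual_prob, applicable in events:
        expected = annual_prob * preclosure_years
        if not applicable:
            # Screening step: event does not apply to the systems
            # that will be built at the site.
            categorized["screened_out"].append(name)
        elif expected >= 1.0:
            # Expected to occur at least once before closure.
            categorized["category_1"].append(name)
        elif expected >= 1e-4:
            # Unlikely but credible before closure (assumed cutoff).
            categorized["category_2"].append(name)
        else:
            categorized["screened_out"].append(name)
    return categorized

# (event name, assumed annual probability, applies to this site?)
hazards = [
    ("crane load drop", 1e-2, True),
    ("design-basis seismic event", 1e-4, True),
    ("aircraft crash", 1e-9, True),
    ("coastal flooding", 1e-3, False),
]
result = categorize_events(hazards)
```

Consequence analyses would then be run for each categorized event, and the resulting doses compared with the limits in the regulation.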
               Finally, the repository design establishes
     criteria for the prevention and mitigation of repository --
     repository design establishes criteria for the use of
     features and controls important to safety.  These are, of
     course, to prevent and mitigate consequences.
               Now, the suitability criteria in the regulation,
     page eight, in these sections, DOE must demonstrate the
     ability to contain radioactive material and limit releases
     of radioactive materials.
               Here, the burden is on DOE to demonstrate that the
     radiation doses are below the limits.
               Secondly, to implement control and emergency
     systems to limit exposure to radiation.  Again, DOE will
     describe the preliminary program for the emergency systems
     and this program will rely on industry standards and proven
     technologies.
               Ability to maintain a system and components that
     perform their intended safety functions.  In this section of
     Volume II, we will record analyses of structures, systems
     and components to ensure that they are performing as
     intended.
               And, finally, the system design will ensure the
     option for retrieval up to 300 years.
               In Volume II, the preliminary post-closure
     suitability evaluation.  Part 963 has a series of
     requirements for the TSPA methodology that will be used to
     assess post-closure performance for the 10,000 year
     compliance period.  These requirements are that data related
     to the post-closure suitability criteria or characteristics
     are incorporated in the TSPA, and I will describe those
     characteristics in a moment.
               The methodology also must account for
     uncertainties both in information and in modeling.  It must
     demonstrate that alternative models have been considered and
     in Volume II, we will summarize the consideration of
     alternative models in the TSPA-SR, in the PMRs, and in
     Volume I.
               It must provide the technical bases for input
     parameters, for the FEPS analyses, the features, events and
     processes analyses, and for the models used or abstracted
     into the TSPA.
               Finally, the TSPA must conduct appropriate
     sensitivity studies.  So Volume II will address each of
     these requirements for the TSPA methodology, on page 10.
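[The TSPA methodology requirements just listed, propagating uncertainty in inputs and models and comparing results against release limits over the 10,000 year compliance period, amount to a probabilistic simulation. A minimal sketch follows; the dose model, parameter ranges, and dose limit are entirely made up for illustration and bear no relation to the actual TSPA-SR abstractions.]

```python
# Minimal sketch of a TSPA-style Monte Carlo calculation: sample
# uncertain inputs, propagate them through a (trivially simplified)
# dose model, and compare the results against a limit.  All names,
# distributions, and values here are assumptions for illustration.
import random

random.seed(42)

DOSE_LIMIT_MREM_PER_YR = 15.0   # assumed illustrative limit
N_REALIZATIONS = 1000

def toy_dose_model(seepage_frac, package_lifetime_yr, horizon_yr=10_000):
    """Trivial stand-in for a dose abstraction: no dose until the
    waste package fails, then dose scales with seepage."""
    if package_lifetime_yr >= horizon_yr:
        return 0.0
    return 100.0 * seepage_frac * (1 - package_lifetime_yr / horizon_yr)

doses = []
for _ in range(N_REALIZATIONS):
    seepage = random.uniform(0.0, 0.1)        # uncertain input
    lifetime = random.uniform(5_000, 20_000)  # uncertain input
    doses.append(toy_dose_model(seepage, lifetime))

peak = max(doses)
frac_below_limit = sum(d <= DOSE_LIMIT_MREM_PER_YR for d in doses) / N_REALIZATIONS
```

Sensitivity studies of the kind the rule calls for would vary one sampled input at a time to see which parameters drive the peak dose.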
               The rule also calls for DOE to identify the
     natural and engineered features that are important to
     isolating waste and we will do so.  Six natural and
     engineered features will be described.  These are summarized
     from the barrier importance analyses that are in the TSPA-SR
     that Bob and Abe will describe in a moment.
               First is surficial soils and topography, which
     reduce the amount of water entering the unsaturated zone; for
     surficial processes, infiltration is less than
     precipitation.
               Unsaturated rock layers overlying the repository
     and host rock unit reduce the amount of water reaching the
     repository.  Here, seepage is less than percolation.
               Drip shield and inverts surrounding the waste
     packages.  The drip shield that Russ described, the titanium
     drip shield protects the waste package from any seepage that
     may enter the drift and there is a ballast placed in the
     invert below the waste package in the bottom of the tunnel
     that serves to limit advective transport into the host rock.
               On page 11, the waste package prevents water from
     contacting the waste form for thousands of years based on
     the corrosion resistant material that's been selected and
     the corrosion testing that the department has completed to
     date.
               The spent nuclear fuel cladding delays or limits
     the water from contacting the actual fuel pellets.
               Finally, the waste form itself serves to limit the
     contact of water with the nuclear fuel itself, both in the
     commercial and the DOE high level waste glass form.
               Each of these will be described in Volume II of
     the SRCR.
               Okay.  Another requirement of the rule is for DOE
     to evaluate a series of suitability criteria or
     characteristics.  There are a total of nine here.  These
     closely parallel the process model reports that the
     department has prepared to support the TSPA-SR and Volume II
     of the SRCR.
               For each of these characteristics, Volume II will
     describe the technical basis -- that is, the data -- that's
     used in this evaluation.  It will describe the models and it
     will include a regulatory evaluation of how this information
     is represented and incorporated in the TSPA-SR and will do
     so for each of these post-closure suitability criteria.
               On page 13, similarly, disruptive events will be
     evaluated as suitability criteria, also.  Then, finally, in
     Volume II, the results of the total system performance will
     be evaluated.
               The methodology described and the criteria will be
     evaluated -- will be used for a preliminary evaluation of
     the suitability of the site.  This is done by comparing the
     release standards to the TSPA-SR dose results both for the
     pre-closure and the post-closure requirements.
               You've seen page 15 before.  The only point here
     is that I'm going to briefly describe the technical basis
     documents, some of which Russ has already described.
               I want to emphasize here that we have assembled
     the site recommendation consideration report such that it is
     fully traceable to the underlying technical basis documents.
     By that, I mean, specifically, that no new information, per
     se, is presented in the site recommendation consideration
     report.  Everything that is presented and summarized in that
     report is fully traceable to the technical basis
     documentation that you see in the pink and the purple areas
     of this pyramid.
               Page 16, Russ mentioned the analysis and modeling
     reports used to document the analyses and models of
     individual FEPS using site characterization data sets.  They
     cover both the natural and engineered features of the site.
               The PMRs then synthesize and integrate groups of
     AMRs to describe and model general categories of features
     important to post-closure repository performance.
               They serve as an intermediate level, the
     equivalent of the TSPA-VA technical basis document, in which
     the component models for the TSPA were described.
               The SDDs are the engineering analyses to document
     surface, sub-surface and waste package designs, and then the
     TSPA-SR uses abstracted results from AMRs to analyze the
     performance of the repository with a focus on a 10,000 year
     compliance period, but it presents results for longer time
     periods, as well.
               Page 17, I think we've been over this.  On page
     18, I provide the current status of the PMRs, the process
     model reports.  In fact, they've all been now accepted as
     Rev. 0 by DOE.  However, further updates and modifications
     to the process model reports are underway in the final
     column.
               There are two main drivers.  The first is that we
     are -- the M&O is now completing the revisions to AMRs to
     incorporate the no backfill design that Russ described and
     those final updates are underway now and they will result in
     updates to the PMRs between now and December.
               And number two, based on internal comments, the FEPS AMRs,
     the features, events and processes AMRs, and there is one
     for each of the PMRs except the integrated site model, based
     on internal comments, those are also being revised and
     updated and those results will also be incorporated in the
     PMRs that will be available at the time of the release of
     the SRCR.
               You will also note here, on the right, that full
     revisions of two of the PMRs are underway, the UZ PMR and
     the engineered barrier system PMR.
               We will have, though, at the time of the release
     of the SRCR, all of these PMRs in their final form
     available.
               Page 19, additional technical basis documents. 
     The preliminary pre-closure safety evaluation is a report
     that's been prepared to support our preliminary evaluation
     in Volume II of the SRCR.  It's an analysis of the
     radiological safety for a repository at Yucca Mountain.
               The site description is a comprehensive compendium
     of site information, including chapters on natural resources
     and natural analog studies.
               The repository safety strategy, which will soon be
     in Revision 4, is a general plan for the identification and
     prioritization of the factors important to repository system
     performance and it formulates a safety case where DOE will
     present the essential aspects of the performance of a
     repository system.
               On page 20, I just want to make a couple points
     here.  The first is that the process that we have used to
     assemble the TSPA-SR and the SRCR includes a foundation
     built on the left side of AMRs and PMRs, some of which, as I
     mentioned, are being updated, in the red box, to reflect the
     no backfill design and those support the TSPA-SR and the
     SRCR.
               We'll go through a similar process and we'll do another
     update to the TSPA in the spring and we'll go through the
     same process again of updating and revising AMRs and then
     PMRs to develop a subsequent iteration of the performance
     assessment for the license application.
               There are a couple of new boxes in here.  In light
     green, you'll note the FY-00 technical update.  This is a
     report that the department will also release at the time of
     the SRCR.  Its intent here is to provide an update on
     testing and design work that has occurred subsequent to the
     freezing of the inputs for the TSPA-SR to provide all
     interested parties the most current information that we can.
               And we are currently contemplating another such
     document for the time of the SR currently scheduled for July
     of '01 to, again, provide as current information as possible
     to support these documents.
               On the next page, page 21, DOE has developed the
     site recommendation consideration report to inform the
     public of the technical basis for the consideration of the
     repository at Yucca Mountain and to facilitate the public
     comments during the SR consideration hearings.  It's to
     promote the dialogue.
               The SR, the site recommendation, that will follow
     will provide a comprehensive statement of the basis of any
     recommendation that the Secretary will make to the
     President.  It will include additional information as
     required by the Nuclear Waste Policy Act.
               It will include the views and comments of the
     governor and legislature of any state or the governing body
     of any affected Indian tribe, together with the response of
     the Secretary to such views.
               It will include, on page 22, the preliminary
     sufficiency comments from the NRC.  Carol will discuss that
     a little further in a moment.  It will include a final
     environmental impact statement, any impact reports submitted
     to the department by the State of Nevada, and any other
     information that the Secretary considers appropriate.
               On page 23, the act does require that the
     Secretary hold public hearings in the vicinity of Yucca
     Mountain for the purpose of informing the residents of the
     area of such consideration and receiving their comments
     regarding the possible recommendation.
               Current planning calls for these hearings in early
     2001.  Location and number of the hearings has not yet been
     determined.
               Finally, here on the schedule chart, I won't go
     through each milestone, but I will identify several key
     ones.  The first star there in late 2000 represents the
     release of the SRCR.  The vertical red dashed lines and the
     horizontal arrow identify the comment period.  Our current
     planning is for 90 days.  And subsequent to that, the gray
     star would represent receipt of the NRC sufficiency
     comments, Secretarial decision on whether to proceed with
     the site recommendation in June, and then DOE will submit
     the SR to the President in July of '01, waiting 30 days
     after notification to the state of the Secretarial decision,
     a minimum of 30 days.
               On the bottom are some EIS milestones which
     culminate in the submittal of the FEIS to the President at
     the same time as the SR and its subparts.
               So that's all I had to present this morning.  I
     could entertain questions now or we could have Carol do her
     piece and have questions later.  I'll leave that to you.
               MR. GARRICK:  Let me ask the committee.  Are there
     any questions?
               MR. WYMER:  I have one sort of a general
     off-the-wall question.  This is going to be a large and
     comprehensive document with a lot of stuff.
               MR. SULLIVAN:  About 1,300 pages.
               MR. WYMER:  Who will review this thing?  It seems
     that most of the people in the country who are competent to
     review it have been involved in the preparation of it.
               Do you have any idea how this thing will be
     reviewed and where will they find the people to do this?
               MR. SULLIVAN:  Well, I can't speak to that
     specifically.  It will be accompanied by an overview, which
     is a much slimmer document, similar to the VA overview.  We
     have targeted the SRCR to a general audience.  We have tried
     to develop the document so that it will be
     understandable to people who are not expert in individual
     disciplines of engineering or science.
               MR. WYMER:  But you have no feeling on that, and I
     don't really know why you should have, but you have no
     feeling of who actually will do the review, where they will
     get the people from.
               MR. SULLIVAN:  DOE has received 11,000 comments on
     the DEIS.  So we expect people will review and comment on
     portions or the entire document.
               MR. WYMER:  So it's just people.
               MR. SULLIVAN:  Stakeholder organizations and
     individuals, interest groups of various kinds.
               MR. WYMER:  But the President, in quotes, is the
     person to actually say this looks like it's the real stuff
     and it will work, so let's go ahead.
               MR. SULLIVAN:  Right.  First the Secretary, then
     the President.
               MR. WYMER:  So somebody has to advise him.
               MR. SULLIVAN:  Right.
               MR. WYMER:  And it will not be the public.
               MR. SULLIVAN:  It will be the Secretary.
               MR. WYMER:  Okay.
               MR. SULLIVAN:  Taking into account all of the
     comments that have been received on the document and the
     views of the affected governors and legislatures and the
     preliminary sufficiency comments of the NRC.
               MR. WYMER:  So it's kind of a closed loop here. 
     Okay.
               MR. HORNBERGER:  Tim, in your pre-closure
     comments, you mentioned that retrievability would be assured
     for up to 300 years.
               MR. SULLIVAN:  Correct.
               MR. HORNBERGER:  So I take it you're still holding
     to, what, a 50 to 300 year possibility for pre-closure.
               MR. SULLIVAN:  Yes.  We identify the period up to
     300 years as the period in which the repository potentially
     could be monitored and we would have decommissioned the
     surface facilities and entered into a monitoring phase.
               But we would have retained the -- not the
     possibility -- we have retained the capability to retrieve
     the waste up to 300 years, meaning we would maintain the
     ground support within the repository and if and when needed,
     we would commission surface facilities to handle the waste
     retrieval.
               MR. HORNBERGER:  And then your analysis also then
     includes the possibility of a tunnel collapse and things
     like this and still have the capability to retrieve.
               MR. SULLIVAN:  Yes.  You will see degradation
     analyses in the SRCR and in the supporting documents.
               MR. GARRICK:  Any other comments?  Staff?
               MR. LARKINS:  A quick question.  Yesterday, we
     heard from a representative of the State of Nevada who
     discussed his views on the importance of the performance
     confirmation plan and DOE strategy, and we didn't hear any
     mention of it this morning.  Is this a part of your -- how
     does the PCP fit into this?
               MR. SULLIVAN:  We will describe, and I omitted
     that, the performance confirmation plan in the SRCR and
     there is a plan, a stand-alone report, that will be
     available also.  So that was an oversight on my part.
               MR. GARRICK:  Thank you.  Thank you very much. 
     Carol?
               MS. HANLON:  Perhaps if I just begin my
     presentation, and Andy will go ahead and he will adjust it
     and you can hold up your hand if you don't hear.
               You will notice that we spent a good portion of
     this meeting discussing different portions of the key
     technical issues, technical exchanges.  I think that that's
     an aspect that warrants a good deal of emphasis.
               I appreciate the presentations that we had
     yesterday from the Nuclear Regulatory Commission, Bill
     Reamer, John Trapp and Neil Coleman I think did an excellent
     job of setting the stage and identifying some of the things
     that have gone on to date and discussing for us the path
     forward that we've looked at.
               I would like to discuss a few of those things from
     the Department of Energy's perspective to clear a couple
     areas I think that remain perhaps not fully understood, to
     help you all with your understanding of our document and
     perhaps understand Dr. Wymer's question a bit more on review
     and how the reviews are going.
               One of the things I'm going to talk about is I'm
     going to talk about process for these meetings, what we're
     looking for, what we're going to do with upcoming meetings,
     and also how we're handling the agreements on items that
     have come out of the meetings, the technical exchanges that
     we've previously had and will continue to come out, because
     there will be agreement items and we're watching those very
     closely.
               So as Tim and Dr. Dyer have both said, we have a
     requirement from the Nuclear Waste Policy Act that we
     provide sufficiency comments.  We include sufficiency
     comments provided to us by the Nuclear Regulatory Commission
     as part of our site recommendation and those are in two
     important areas, at-depth site characterization and waste form
     proposal, and I think the words are that they appear to be
     sufficient for inclusion in any application to be submitted.
               That's an important concept.  That's a
     forward-looking concept and it helps us focus on where we're
     at with regard to the site recommendation consideration
     report, the SR and forward to the LA.
               So obviously to the Nuclear Regulatory Commission
     staff, a very important portion of their sufficiency review
     has been the resolution of their key technical issues,
     non-key technical issues, and those have been documented in
     the issue resolution status reports.
               Our purpose, our goal is that the technical
     exchanges will result in a clear understanding of the status
     of each of these and where there is not resolution, a path
     forward for reaching resolution.
               The first general kind of overarching technical
     exchange was held April 25 and 26 in Nevada.  That had a
     dual purpose; first, the purpose of discussing how
     sufficiency would be approached and, secondly, to discuss in
     full the status, both perceived by the Commission and as
     perceived by the department, with the new information that
     we had as we were preparing analysis and modeling reports
     and we were preparing the process model reports and going
     forward to the total system performance assessment, what our
     reflection of the status was.
               Based on that, we subsequently set up a series of
     technical exchanges on specific topics.  Now, this is where
     a bit of unclarity, I think, still remains for your
     committee and I would like to just take the opportunity to
     clarify that a bit.
               You will recall the November 24 letter from Dr.
     Brokoum to John Greeves and in that letter, we proposed an
     approach for providing additional information which would
     support the Commission's ability to review our technical
     basis documents and make sufficiency comments.
               Based on that, we had suggested a series of
     meetings, specifically nine meetings, focused on process
     model reports.  You've heard a good deal this morning on
     process model reports, analysis and modeling reports and
     other documents, both from Dr. Dyer and from Tim, and it
     perhaps is a bit daunting.
               Our approach in setting up these meetings was to
     discuss the particular category of a process model report,
     the analysis and modeling reports that contributed to that,
     the impact to the total system performance assessment, and,
     of course, very importantly, the contribution to the
     specific key technical issue that was addressed by that
     process model report.
               To make our series of meetings and our technical
     exchanges more effective, we adjusted that approach and I
     think that may be something that was a bit confusing and
     left you all a bit in a lurch on that.
               We are now focused specifically on the key
     technical issues and that's so that we can very specifically
     take that same information, the process model reports, the
     AMR reports, the portions of the TSPA, and address it very
     specifically to an individual key technical issue.
               So on this, you will see the set that you had
     previously seen, I think that we did this schedule in late
     July and you've seen it since then.
               I briefed you, I believe, in July on this.  And
     that includes our completed unsaturated zone technical
     exchange in Berkeley.  The igneous activity, also completed,
     was held in Las Vegas, and the container life and source
     term that were completed last week in Las Vegas.
               Two that you're still familiar with are the
     structural deformation and seismicity technical exchange
     that will be held the week of October 11, actually the 11th
     through the 13th, and saturated zone flow, as we mentioned
     this morning, will be held in Albuquerque.
               We've added an additional day to that.  It's
     October 31st, November 1st and 2nd.  I also might just make
     another comment, to make sure that we have ample and
     adequate information for full discussion, full vetting of
     the issues and full presentations, basically we have three
     days for all of these technical exchanges.
               In order to facilitate the Commission's ability to
     do their sufficiency review and have their comments ready,
     as we've asked them, in by the end of May, the Commission
     asked us to go forward in setting up the series of meetings
     on remaining key technical issues and to try and have those
     done by the end of January.
               We moved out in February a bit, but I think we're
     pretty close.  These have not been fully agreed to with the
     Commission staff yet, but I have put months down here so
     that you can see what remains before us and get an idea not
     only of our busy schedule, but consequently, I think, your
     busy schedule.
               The in-package criticality, which is subissue five
     of the container life and source term, will be held in October.
     It's actually, I think, October 24 to 26, as it is currently
     scheduled.
               In November, we will have the technical exchange
     on thermal effects on flow.  December we'll have the
     technical exchange on radionuclide transport.
               Also, in December, we have identified the
     possibility of a briefing on the total system performance
     assessment results, since we had previously had the meeting
     in June in San Antonio.  Now that the TSPA will be coming
     out, there is a potential that there will be a briefing on
     that, not solidified yet.
               In January, we will have two meetings; earlier
     January, evolution of near field environment; later January,
     the repository design, thermal mechanical effects.  And the
     last meeting, the 6th through 8th of February, is the total
     system performance assessment integration.
               So that's our new set, and when we have the
     formally agreed-to list of meetings, we'll make sure that
     you get a copy of that.
               You've heard quite a lot about the status of these
     meetings and I just want to recap for you a bit.  The three
     meetings we've had are unsaturated zone flow.  The number of
     subissues with regard to that was five.  The sixth one will
     be handled, as we've said, at the saturated zone in
     Albuquerque.  That's on matrix flow in the saturated zone.
               From that KTI technical exchange, we have four
     issues which were closed -- subissues, excuse me, which were
     closed or closed pending and one remains open.  That is on
     the infiltration and that came up in some of the discussions
     yesterday.
               There was some information, new information from
     the center that was presented during the technical exchange
     in Berkeley that our staff had not had the chance to fully
     evaluate.  So what we have come up with is, regarding that
     subissue, a proposal to evaluate that new information and
     identify what we do need to include and what would explain
     or justify what we may not think is relevant.
               So that's the approach that we are going to
     present to the saturated zone.  If you have any specific
     questions, we have Martha Pendleton here, who can talk to
     the specifics of the infiltration.
               Another issue that came out of that unsaturated
     zone flow meeting was the discussion of the importance of
     fully understanding the subsystems as you are also
     understanding the performance of the entire system.
               I think there's been a little bit of confusion
     here. 
     The point is not that because of a very strong component in
     this case, a long-lived waste package, you're unable to
     understand the subsystems.  The point is that in order to
     understand the full system, we must understand all the
     components, we must understand all the portions of the
     subsystem, the natural system, the engineered system.
               So a particularly strong portion should not
     obscure the issue, nor should it be a penalizing component,
     but we must understand fully all of those.  That's what we
     are committed to doing.
               Bob Andrews, in his discussion of total system
     performance assessment, is going to go into a bit more of
     exactly how we take those subsystems, fully understand them,
     as we move forward to an understanding of the total system.
               With regard to igneous activity, we had two issues
     there.  One is closed, one is open.  I included our score
     card, and so did John.  Actually, we're a little
     better than that; we're about 90 percent closed on the
     subissue for consequences and the reason we're not fully
     closed on that is that we have some AMRs that will be
     completed late this year, this calendar year, and provided
     to the Nuclear Regulatory Commission, addressing the no
     backfill issue, and it was believed that to fully be able to
     understand the consequences and close that issue, they
     needed those AMRs.
               We fully agree with them and we're making every
     effort to move forward to get those AMRs and they will be in
     their hands in January.
               Container life and source term, we had six issues,
     we've closed five.  The issue, subissue we did not close was
     not addressed in this particular technical exchange.  It's
     the issue of criticality and it will be addressed in a few
     weeks at the second CLST meeting in October.
               It's not in my handout, but one of the things I
     did want to mention to you is that as you can see, and I've
     tried to create a complete record here, you can move through
     a few of these, John, so that you can see what the title is,
     what the status is, and what the agreements are.
               Now, it's very important, as we make these
     agreements, to track them very clearly and make sure that we
     get the exact information back to the Commission on
     schedule, as we've promised.
               An example of that comes again from the
     unsaturated zone meeting in Berkeley, where the Commission
     staff asked to see a copy of the test plans for Alcove 8. 
     We committed to providing those within a week.
               We did provide them within a week.  Neil Coleman
     made sure that the review occurred and he's got our comments
     back.  So those agreements have been handled.  We are now in
     the process of evaluating those and seeing if there are
     modifications that we can make to our test plan to
     accommodate the comments we received from the NRC.
               To make sure these things are happening, we are
     tracking them very closely, both formally and less formally. 
     I won't say informally.  We have a condition identification
     reporting and resolution system, which is a formal system
     that's in place.  It's important for you
     to know that it is in place and that it will transition as
     we go through the period of having a new contractor.
               So these agreements on the items regarding the
     closed pending will not be lost.  They are in a formal
     tracking system and they will transition.
               Also, on an operating basis, to make sure that we
     are staying very much on top of this, we've instituted
     weekly or biweekly briefings on the status of items that are
     coming out of these and where they remain and where we are
     with regard to providing information to the Commission.
               So we are watching those very closely.  Another
     one that's much on our mind is the infiltration issue that's
     coming up for the saturated zone.
               I think we can probably move forward, leaving you
     with these things to review at your leisure.  I think we can
     move forward to 14.
               From our standpoint, the department has spent a
     great deal of effort on these.  We understand and appreciate
     the effort, also, that the Commission staff has dedicated to
     these technical exchanges.  We feel that they have been
     extremely productive.
               The three meetings only have resulted in four
     subissues changed from open to closed pending.  Four
     subissues I've discussed remain open, and two of those
     subissues were not addressed in the interactions.  They were
     the saturated zone and the criticality.  So we are moving
     forward.
               We believe that these technical exchanges have
     been extremely effective in establishing either the status
     or the path forward to a closure status.
               I feel that our teams are working closely and very
     well together.  There's a good exchange of information and
     that's very productive, I think, to working toward providing
     the Commission with the information that we believe they
     will need.  And the management commitment, both by the
     department and the Commission staff, has certainly been a
     definite asset.
               So we're continuing to hold and move forward with
     these technical exchanges, supporting the staff's approach. 
     We were a bit disappointed that we weren't able to hear the
     Yucca Mountain review plan approach and the formal
     sufficiency approach.  We want to be sure that we really
     are on target and that we're not missing something
     that the Commission will need.
               So the sooner we know those specific
     details and know that we are on target, the sooner we can
     move forward or adjust our path as we need to.
               So we'd like to confirm the path that we're
     currently engaged in and moving forward to both creating
     information that will be sufficient for the license
     application docketing and, also, the sufficiency comments
     that the NRC will provide.
               May I answer any questions for you?
               MR. GARRICK:  Any questions?  John, you have a
     question?
               MR. LARKINS:  I was just curious.  What happened
     to the biosphere PMR technical exchange?
               MS. HANLON:  That was included in the igneous
     exchange, John.  That was very much a part of the second day
     of the igneous activities exchange.
               MR. LARKINS:  Has that been combined now?
               MS. HANLON:  It's been completed.  It was the
     second one that was completed.
               I believe John Trapp, Dr. Trapp spoke about it
     yesterday.  It was believed that in order to fully
     understand those components of the igneous activity, the
     dose consequences and so forth, that the biosphere aspects
     had to be brought in.
               So I think we had either a half-day or a full day.
               MR. LARKINS:  Because there was a previously
     scheduled separate technical exchange and we had committed
     to have somebody participate, but I didn't realize it had
     been combined with the igneous activity.
               MS. HANLON:  Dr. Hines was there, so you did have
     support there.  And I hope that in this I've been able to
     partially answer Dr. Wymer's question about who is going to
     do this review and how it's going to be done.
               There is a great volume of information, and Dr.
     Dyer spoke a bit about our hyperlinked text system on
     the internet.
               So if you have a specific question, hopefully
     we'll be able to click on that and go down through that. 
     But these meetings are -- we are attempting to set them up
     so that we are explaining the technical basis that's
     supporting various aspects of our TSPA, of the site
     recommendation consideration report, and specifically
     focusing them on the KTIs.
               So hopefully that's assisting the review somewhat.
               MR. WYMER:  Well, I had naively assumed that there
     would be some sort of independent initial review, but that
     isn't going to happen.
               MR. GARRICK:  I guess just to extend that thought
     a little bit, the committee was quite impressed with the
     reports that were prepared by the peer review group that was
     put together by DOE on the TSPA and I just wondered if that
     particular model was going to be applied to any of these
     other key reports.
               MS. HANLON:  I don't think we've implemented that
     at this time, but we certainly can keep that in mind.
               MR. GARRICK:  George, you had a question.
               MR. HORNBERGER:  Carol, these technical exchanges,
     and now that you have some experience with them, obviously
     take an awful lot of effort, involve an awful lot of work. 
     I gather, however, even given your experience, you're
     confident that you're going to hold to the very ambitious
     schedule you have.
               That is, you haven't fallen behind in any of this
     as a result of having the ones that you've had.
               MS. HANLON:  We're doing pretty well so far and
     you are right, Dr. Hornberger, it takes a tremendous amount
     of effort, both on the part of the Commission staff and the
     center and the part of DOE and our supporters.
               So they are extraordinarily intensive.  However,
     we've done pretty well in staying on target.  We've looked
     at things carefully.  The Commission has been extremely good
     if something needs to be moved and we've also tried to
     accommodate needs that they may have had to move a meeting.
               But we're shooting for this target and I'm very
     optimistic about it.  We did leave ourselves Christmas and
     New Year and we managed to leave ourselves Thanksgiving.
               If there are no other questions.
               MR. GARRICK:  Any other questions?  I want to
     thank you for presenting an excellent scorecard for a very
     complex process.  That's very helpful.
               MS. HANLON:  You're certainly welcome.  Thank you.
               MR. GARRICK:  Okay.  I think that in spite of the
     fact that a break is not noted in the program and if this
     goes into the Federal Register, I'm going to accept the risk
     of violating that piece of information, and declare a
     15-minute break.
               [Recess.]
               MR. GARRICK:  Let's come to order.  We're now
     going to turn our attention to the often referred to total
     system performance assessment site recommendation.
               I think, Abe, you're going to lead this off, is
     that correct?
               MR. VAN LUIK:  Last week, I had a reminder that as
     we become more effective in life, we take on more risk.  My
     oldest grandchild got his driver's license last week and my
     youngest grandchild decided she could have a better life
     without diapers, and I'm happy to report that in the ensuing
     five days for both of them, neither one has had an accident.
               I'm Abe Van Luik.  I'm the Senior Policy Advisor
     for Performance Assessment and I'm going to give you a short
     introduction to the TSPA-SR, and Bob will give you the
     details under the heavy lifting.
               I wanted to talk a little bit about the regulatory
     requirements that we're addressing right at this moment, the
     objectives of the TSPA, summary of the major improvements
     since the viability assessment, mention a little bit about
     the barrier design and the basis for process models.
               If you look at the regulatory requirements, you
     know that we have proposed regulations on the street right
     now, and to make this talk very short, what we are doing is
     addressing all of the nuances of the proposed regulations
     from EPA, NRC and DOE.
               When these are finalized, they will become simpler
     because the NRC will incorporate the final provisions of the
     EPA and we will not have basically dual nuances on
     definitions of our MEIs and that kind of thing.
               If we look at the individual protection or dose,
     which is the primary performance measure that we are
     concerned with, we have to include probable behavior, as
     well as potentially disruptive events.
               If we look at the objectives, they have changed
     over the years.  TSPA-91, for example, was just to show that
     we could do one.  In '93, we began to get serious about
     using TSPA to align the project, to look at what's important
     and what's not.
               TSPA-95 and the viability assessment got stronger
     in that department and then TSPA-SR is to support the
     national decision-making process.
               TSPA integrates underlying models of individual
     process components.  We're looking at several performance
     measures, individual dose, ground water protection, the
     human intrusion standard, and peak dose for the final
     environmental impact statement.
               We are looking at the significance of the
     uncertainty in the process models and Bob will talk a little
     bit about some of that evaluation process.
               Major improvements.  We've had both technical and
     process improvements.  The process improvements, I think,
     are easily underestimated in terms of the effort that they
     have taken.  But everything is under quality assurance
     procedures at this point.
               We are using the analysis and model reports that
     Russ Dyer talked about as the thing from which we trace our
     data and the information flow.
               We have explicit evaluation, a comprehensive
     evaluation of features, events and processes.  We are using
     traceable data sets and the TSPA model itself can be used to
     move down into the data sets themselves, and we are tracking
     the quality status of all data, models and software.
               Technical improvements.  Someone mentioned a
     while ago, I think it was John, that we did have a
     review of the viability assessment.
               That review, of course, came after the completion
     of the viability assessment.  So the TSPA-SR and the TSPA-LA
     will be where we respond to that review in terms of
     improving our modeling.
               Models with major enhancements:  looking at those
     comments, and also comments from others, such as the NRC,
     in the exchanges.  As you've heard Carol talk about, of
     course, we have learned a lot from the NRC about what their
     expectations are, and some of these improvements also
     address those.
               But climate and seepage have been greatly
     improved.  A couple of thermal processes are a lot farther
     along than they were in the VA.
               Waste package degradation, we're looking at stress
     corrosion cracking and initial defects in the welds. 
     Saturated zone transport and volcanism, all of these models
     have seen major enhancements since VA.
               The engineered barrier.  TSPA-SR is based on the
     site recommendation design, no longer the VA design.  We're
     looking at an average thermal load of 62 metric tons of
     heavy metal per acre, which is lower than the viability
     assessment.  We're looking at at least 50 years of
     ventilation; it may be more.  These are some of the
     operational mode adjustments that Russ was talking about.
               We're looking at blending of fuel at the surface
     to levelize the thermal load.
               The engineered barrier design considers the
     titanium drip shield, no backfill, waste packages placed
     end to end, an average line load of 1.4 kilowatts per meter.
               The waste package itself, still 21 pressurized
     water reactor assemblies or 44 BWR assemblies, and
     co-disposal of Defense spent nuclear fuel and Defense high
     level waste.
               The outer layer, alloy-22, 20 millimeters of it;
     the inner layer, stainless steel, 100 millimeters.  The
     inner layer of stainless steel is not credited in
     the TSPA.  It is a member that gives structural support to
     the waste package.
               There is a dual alloy-22 lid closure weld.  The
     outer lid closure weld, the stress is mitigated by solution
     annealing.  The inner lid closure weld, the stress is
     mitigated by laser peening.  These turn out to be very
     important to long-term performance.
               This is just a listing of the process model
     categories and the process model report.  On the right are
     ones that were mentioned by Russ in his talk and, of course,
     some of these reports, like the near-field environment and
     the EBS degradation flow and transport reports come into one
     basket when it comes to the actual modeling, which is the
     engineered barrier system environment.
               I don't think I need to spend any time on this. 
     I'll just make the boast that this is the best
     integrated performance assessment we have ever done.
               In the past, you could find principal
     investigators that said, well, I handed the data over to PA,
     but I don't know what they did with it.
               This is no longer the case.  The principal
     investigators are intimately involved in taking their data,
     abstracting it and putting it into the total system
     performance assessment.
               So we've come a long way since the '93-'95 days.
               An issue that we have to be aware of is that we
     have to have some statement of how confident we are in
     whether or not these results that we come up with are useful
     in the decision-making process.  Demonstrating confidence
     requires a lot of things, but it requires showing a
     sufficient understanding of the processes determining
     system behavior.
               Carol mentioned this in her talk just a few
     minutes ago, that the NRC staff is very concerned that we
     show that we understand the processes that we're putting
     into our modeling.
               Systematic application of the features, events
     and processes screening.  It's a way to show completeness
     of the arguments, that there's not something big out there
     that you've just completely forgotten about.  This is
     another way of showing that you have a reason to have
     confidence.
               Systematic evaluation of component process models
     and their importance.  You can have a haphazard evaluation
     and do some neat calculations that say, oh, look how
     important this is, but a systematic evaluation is what's
     important to building confidence.
               And you have to show that you have properly
     incorporated the important uncertainties.  And, of course,
     the TSPA, as your documentation, has the challenge to make
     these points clearly and traceably, and in my review of that
     document, I find that it's a very good read.  It's a good
     document and it's well on its way to illustrating these
     points quite nicely.
               Happily, the NRC staff and we see things the same
     way when it comes to a risk-informed, performance-based
     approach, and I think some of the success that Carol was
     talking about comes from having issues closed pending the
     delivery of the final products, so they can verify that we
     actually did what we said we were doing.
               Some of that is based on them and us agreeing that
     some things are more important than others.  Risk-informed
     means that the entire uncertainty distribution, not just
     mean value lines, is being used to inform. 
     Performance-based means that whether a feature or a system
     or a process is important to swinging the outcome one way
     or the other is an important criterion for judging
     performance.
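The contrast drawn here, between a mean value line and the entire uncertainty distribution, can be sketched with a toy calculation; the dose model, the parameter, and all numbers below are invented for illustration and are not project data:

```python
# Toy sketch (invented numbers): why the full uncertainty
# distribution, not just a mean value line, informs a risk call.
import random
import statistics

random.seed(1)

def toy_dose(seepage):
    """Hypothetical dose response, nonlinear in the parameter."""
    return 0.01 * seepage ** 2

# Draw an uncertain parameter from a skewed distribution.
samples = [random.lognormvariate(0.0, 0.8) for _ in range(10_000)]
doses = sorted(toy_dose(s) for s in samples)

dose_at_mean_param = toy_dose(statistics.mean(samples))
mean_dose = statistics.mean(doses)
p95_dose = doses[int(0.95 * len(doses))]

# The response is convex, so the dose at the mean parameter value
# understates the mean dose, and the 95th percentile is larger
# still -- the tail of the distribution, not the mean line,
# carries the risk information.
print(dose_at_mean_param, mean_dose, p95_dose)
```

Because the response is nonlinear, a decision based only on the dose at the mean parameter value would miss the risk carried by the distribution's upper tail.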
               And, of course, something that -- as a corollary
     to that is something that we mentioned on the previous page.
     You also have to show that you understand what you're
     talking about, because if you don't understand it and model
     it wrong, then your conclusions here don't mean much.  
               So all of these things are together to give us
     confidence.
               We use these types of considerations to prioritize
     science, engineering, design and modeling, and if the budget
     comes in lower than we would like it to, we will invoke
     these types of results to say, well, this is more important
     than that and adjust the funding for science and engineering
     accordingly.
               And we also use these considerations to rank key
     technical issues and decide on the level of effort to be
     devoted to address them and I think on this, we are in synch
     with the NRC staff.  They agree that this is the right thing
     to do.
               The thing that we have to do is to make decisions,
     even in the modeling of this system, in the face of
     uncertainty.  The basic rule that we have applied so far is
     that where something is extremely complex or the quantity
     of data is just insufficient to develop a meaningful
     distribution, we take a conservative or a bounding
     approach.
               We have several coordinated activities underway
     now to evaluate how these decisions, which were made as part
     of the process of creating the TSPA-SR, affect the actual
     performance measure of dose.
               We are looking at unquantified uncertainties and
     these are the conservative values I talked about in the
     first bullet or, in some cases, maybe even optimistic
     values.  This is in the eye of the beholder or the reviewer,
     in some cases. 
               We are identifying those and then we will do a
     trial study, coming up with a distribution for that
     parameter and running sensitivity studies to see how
     important it was to have assumed that, or should we go to a
     more detailed approach.
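A trial study of that kind can be sketched as follows; the parameter, its fixed bounding value, and the trial distribution are hypothetical placeholders, not values taken from the TSPA:

```python
# Sketch of a sensitivity trial (hypothetical parameter and
# values): compare a fixed conservative assumption against a
# trial distribution for the same parameter.
import random
import statistics

random.seed(3)

CONSERVATIVE_SOLUBILITY = 1e-5  # fixed bounding value (invented)

def toy_release(solubility):
    """Hypothetical release metric, proportional to solubility."""
    return 4.0e6 * solubility

# Bounding case: every realization uses the conservative value.
bounding = toy_release(CONSERVATIVE_SOLUBILITY)

# Trial case: sample a distribution centered well below the bound.
trial = [toy_release(random.uniform(1e-7, 5e-6))
         for _ in range(5_000)]
mean_trial = statistics.mean(trial)

# If the bound dominates the sampled mean by a wide margin, the
# conservatism may be worth refining; if not, the bound is cheap.
print(bounding, mean_trial, bounding / mean_trial)
```

The size of the ratio between the bounding result and the sampled mean is what would motivate either keeping the bound or moving to the more detailed approach.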
               On the next page, we are also, at the same time,
     looking at the quantified uncertainties, the ones that are
     documented with data distributions, CDFs, et cetera, which
     are sampled into TSPA.  And we're looking at the
     uncertainties that were considered at each modeling level
     and to get a better feeling for just how uncertainty has
     been rolled up, how well or how poorly it has been rolled
     up, all the way from the data interpretation process level
     modeling into the TSPA.
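The mechanics of sampling a parameter that is documented as a CDF into a total-system calculation can be sketched like this; the tabulated values are invented for illustration:

```python
# Sketch (invented table): inverse-transform sampling from a
# tabulated empirical CDF, the way an uncertain parameter
# documented as a CDF can be sampled into each realization.
import bisect
import random

random.seed(7)

# Hypothetical (value, cumulative probability) pairs.
values = [1.0, 2.0, 5.0, 10.0, 20.0]
cum_probs = [0.05, 0.25, 0.60, 0.90, 1.00]

def sample_from_cdf():
    """Draw one value by inverting the tabulated CDF, with
    linear interpolation between tabulated points."""
    u = random.random()
    i = bisect.bisect_left(cum_probs, u)
    if i == 0:
        return values[0]
    p0, p1 = cum_probs[i - 1], cum_probs[i]
    v0, v1 = values[i - 1], values[i]
    return v0 + (v1 - v0) * (u - p0) / (p1 - p0)

# One draw per uncertain parameter per realization, so the spread
# in the inputs rolls up into a spread in the computed dose.
draws = [sample_from_cdf() for _ in range(20_000)]
print(min(draws), max(draws))
```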
               And depending on the outcome of these activities,
     a given activity may be expanded or a new activity defined. 
     This is work in progress, in other words.    
               The goal is to increase understanding of the basis
     for the TSPA results and improve the basis for judging if
     there is an appropriate level of confidence for the current
     stage of the societal decision-making process.
               The current status of TSPA-SR analyses:  some
     of you may not believe this, but some of the results
     presented today are preliminary and subject to change.  We
     are still in checking and this got a chuckle at the
     technical review board meeting from some people saying,
     yeah, sure, you're just saying that.
               But you will see in what Bob presents that some of
     the curves have changed since the TRB because of the
     checking process.  This is serious.  You are seeing draft
     material that is still being worked.
               And so they are certainly not suitable at this
     time for making regulatory compliance judgments.  They are
     intended to be used right here, right now, for general
     discussions of sensitivities.
               The calculations that you're seeing are going into
     Revision 00 of the technical report, the repository safety
     strategy Revision 04, which is also in draft right now, and
     the SRCR.  We expect to make minor updates, not major
     revampings, of all of these calculations for TSPA-SR
     Revision 01, which is coming in next spring, which will
     support the site recommendation and the final environmental
     impact statement.
               So you're seeing work that's close to being done
     for one stage, and then there will be another smaller stage
     for the SR, and we have not looked ahead in this
     presentation to the LA.
               And, of course, you'll want to save your questions
     for Bob Andrews, who is now going to show you the technical
     side of things.
               MR. GARRICK:  Committee?  Milt, go ahead.
               MR. LEVENSON:  I've got a couple of questions. 
     One has to do with what I suppose you could define as a
     decision-making process.  As you go through here, items are
     either included or you decided to leave them out, like not
     taking any credit for the stainless steel canister, et
     cetera.
               Whether it's to include something or to leave it
     out, at what level, who makes those decisions?  How much
     review is done of whether you should include something in
     the TSPA or not include it?
               This has nothing to do with the technical part of
     how you treat it, but who decides, in essence, the scope?
               MR. VAN LUIK:  That's a good question.  I think
     there is a process that's not as well defined as you might
     think it is, but it's in the process model TSPA abstraction
     interactions where it's actually decided that there's enough
     information to go forward with this, to bound this, or to
     say, for the sake of conservatism, we will not take credit
     for this, although it has a definite purpose, which is to
     maintain the integrity of the waste package.
               So these types of decisions are documented in the
     AMRs that describe the abstraction process, for example, and
     in some cases, they are documented in the analysis and model
     reports for the particular example you're talking about, for
     the waste package lifetime, where it was discussed that,
     yes, you can get credit for hundreds, maybe even thousands
     of years, but because of the way that the system is
     functioning now, this does not really add much except in
     terms of the doses 200,000 or 300,000 years out, when it
     would not perturb things at all.
               So it's all documented, but the decisions were
     made, in some cases, at a lower level, in some cases at the
     abstraction level, and in some cases, DOE might walk in and
     say do this differently.
               So it's a decision process we're all made aware
     of, and it happens at different levels, but it is our
     intent that all of these decisions be documented in the
     AMRs and then rolled up into the PMRs.
               MR. LEVENSON:  They may be documented, but are
     they carried forward in evaluations such as uncertainties,
     because it seems to me they could have a significant impact,
     particularly things that are left out.
               MR. VAN LUIK:  In the uncertainty evaluations
     that I was describing -- a task that I'm very well aware
     of, because I'm part of the DOE oversight of it -- these
     are some of the issues that we're looking at:  should we
     have done what we did, or should we bite the bullet and go
     into more detail in the modeling of this particular issue?
               So it's definitely one that we will address in
     that process, but that's still work in progress.
               MR. LEVENSON:  In the consideration of
     uncertainties, are they all treated equally?  I mean, some
     uncertainties are symmetrical or something is plus or minus. 
     In other cases, uncertainty is all plus or all minus.
               In looking at the overall uncertainties, are
     things being carried with a sign as well as a quantity?
               MR. VAN LUIK:  This is actually the topic being
     addressed by our uncertainty task, because what we did at
     the beginning is we put out some general guidelines on how
     to treat uncertainties.  What we're doing now is verifying
     whether those were followed or not, and we're finding, in
     some cases, that what you're describing -- basically a
     judgment on what the degree and sign of the uncertainty
     is -- was not done.  So we're going back to fix those
     types of things.
               But in every case, the analysts thought that they
     were making a conservative assumption, except in one or two
     cases where some other analysts disagreed with them.
               So we are getting to the bottom of those types of
     things.  But I think this should not be confused with the
     idea that we did not capture what we know are the major
     causes of uncertainty and I think those are very well
     wrapped up in this TSPA.
               So we're looking at something that's a second
     order correction, basically.
               MR. LEVENSON:  One other question, and I'm not
     sure you're the appropriate one to ask, but you're standing
     there.  One of the very, very useful potential outputs of
     the work you're doing here -- and I'm not sure it's being
     done -- is this:  the design has been evolving over the
     last, say, couple of years, and a fair amount of time and
     money has been spent on the design evolution.
               The TSPA could tell you, probably better than
     almost any other method, how effective those design
     improvements have been -- whether they really are
     improvements, whether they really reduce dose to the public
     in the end.
               Do you have any feel for how much change in the
     dose to the public has occurred because of so-called design
     improvements?
               MR. VAN LUIK:  We have done considerable work in
     sensitivity analyses, a few of which Bob will show in a few
     minutes, after I sit down.  But the basic point is my answer
     would be yes, we have a very intimate relationship with the
     designers and we evaluate what they do in those terms.
               However, there are other things besides dose. 
     There are defense-in-depth considerations.  There are
     considerations of confidence.  So because there is
     uncertainty in the modeling, we also do things like add a
     drip shield, even though the TSPA shows that for 10,000
     years, since the waste package is intact with or without a
     drip shield, the dose comes out about the same.  It's not
     exactly the same.
               But there is another consideration.  Do you have
     defense-in-depth?  Do you have reliance on only one barrier? 
     That drip shield gives us two barriers.  When you take away
     one or the other, as you will see when the repository safety
     strategy four is issued and also you will see in TSPA-SR
     documentation, you will get the feel that there is actually
     some backup in this system and that you can have confidence
     that even though we have uncertainty here, that 10,000 year
     dose number is a pretty good number.
               MR. GARRICK:  Ray?
               MR. WYMER:  No.
               MR. GARRICK:  George?
               MR. HORNBERGER:  Again, just a quick follow-up on
     Milt's question.  Have you done a catalog?  We're talking
     about these bounding values or conservative values or on-off
     switches.  Do you have a catalog?  Can you give us an idea
     of how many of them you have to deal with?  Is it ten, is it
     100, is it 1,000?
               MR. VAN LUIK:  I think it's over 100.  We have an
     initial catalog, but this is work in progress.  And so it
     may expand and it may decrease when we see that we have
     double-counted some items.
               MR. GARRICK:  Abe, this is a process question and
     it is an extension of Milt's question.  It's one I will lay
     out that we may come back to with the other presenters.  But
     I want to get it out on the table.
               We're seeing a lot of language now about
     quantifiable and non-quantifiable uncertainties.  It's
     language that's also popped up in the nuclear power plant
     risk assessment world.
               And I think that it's a situation where there is a
     great deal of opportunity, it seems to me, for a lot of
     mischief and I'd like to be enlightened a little bit more on
     the whole issue of non-quantifiable uncertainties.
               You say that the way you're handling these, for
     the most part, is to take point estimates and take and use
     conservative values or bounding values or whatever, and, of
     course, part of Milt's question and George's question is
     some sort of a taxonomy of the impact of these
     non-quantifiable uncertainties and how they, in the
     aggregate and through the propagation process, impact the
     overall credibility of the analysis.
               But the whole concept bothers me a little bit,
     because it's a little bit contrary to the notion of what we
     mean by quantitative risk assessment.  Not that you can
     quantify things that you can't quantify, but the whole
     thrust of doing a risk assessment is not the manipulation of
     statistics and information nearly so much as it is
     establishing the logic between what you're trying to
     calculate, which might be an event frequency, about which
     you may have no information, the logic between that and a
     level at which you do have information.
               Then the thrust of the evaluation, the review is
     on the credibility of that logic, not so much the
     credibility of an unquantifiable piece of information.
               So I'm just wondering, the way we got around this
     a lot in the nuclear power risk models was to spend a great
     deal of time establishing that logic and that, in essence,
     became the focus of the creativeness, if you would, of the
     risk assessment, as opposed to what often comes up in
     people's minds when they think of a risk assessment as being
     a game of statistics.
               I've said many times, statistics may be five
     percent of a risk assessment, but it's not a very big part
     of it.  The real effort is in establishing the answer to the
     question what can go wrong and what characterizes the logic
     of things going wrong.
               Are you doing anything specifically to get to these
     non-quantifiable contributions to uncertainty of that
     nature?  Are you really -- to me, that's what the
     breakthrough of risk assessment was all about.
               The rest of it is old technology, came about by
     way of reliability analysis and general modeling.  What's
     really creative and the aspect of risk assessment that
     constitutes a major step forward is this modeling process or
     this logic development process that you go through in going
     from what you want to learn about, about which you have
     nothing, down to something that you have good information on
     and you clearly understand the connection between that and
     what you're interested in.
               Isn't that the way to address non-quantifiable
     uncertainties and are you doing that?  You don't have to
     answer that completely now.
               MR. VAN LUIK:  I can give you a partial answer. 
     We're very well aware of this issue and that's why I
     mentioned we have a long list of candidates.  What we want
     to do is, with some outside expert help, pick out the most
     likely important items, perhaps half a dozen to start with
     out of that list, and say, well, if we have that uncertainty
     quantified instead of bounded at this point, what would
     that buy us.
               And what we'll do is, for those six or maybe eight
     items, we are talking about invoking an expert elicitation
     process, with people who would have a feel for this subject
     from the outside, as well as one or two from the inside and
     establishing a PDF for that particular parameter or modeling
     option, and then looking at the importance to the outcome
     from that and then depending on -- and that's why I said,
     you know, depending on the outcome of that, if that shows,
     whoa, this is a bigger thing than we originally thought it
     was, then we'll move on to the next five or six and by the
     time of licensing, we should have a much more solid story.
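The pilot described here, replacing a bounding value with an elicited PDF and asking what that buys, can be sketched very schematically in code.  Everything below is an illustrative assumption: the toy dose model, the distributions, and every number are stand-ins, not actual TSPA values.

```python
import random

def annual_dose(seepage_rate, solubility):
    # Hypothetical toy performance model: dose scales with both inputs.
    return 1.0e-3 * seepage_rate * solubility

def bounded_case(n=10_000):
    # Bounding approach: hold the parameter at its conservative upper limit.
    return [annual_dose(5.0, random.uniform(0.1, 1.0)) for _ in range(n)]

def elicited_case(n=10_000):
    # Elicited-PDF approach: sample the same parameter from a triangular
    # distribution (low, high, mode) representing the experts' judgment.
    return [annual_dose(random.triangular(0.5, 5.0, 2.0),
                        random.uniform(0.1, 1.0)) for _ in range(n)]

random.seed(1)
mean_bounded = sum(bounded_case()) / 10_000
mean_elicited = sum(elicited_case()) / 10_000
# If the two means differ substantially, quantifying the parameter "buys"
# a lot; if not, the bounding value was an adequate, cheaper treatment.
print(mean_bounded, mean_elicited)
```

The comparison of the two output means is the "what would that buy us" question in miniature: a large gap flags the parameter for the next round of elicitation.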
               But this is something that will take time and it's
     a sizeable investment.  We're aware of the issue and this is
     the approach that we're piecemeal putting into place for
     dealing with the issue.
               MR. GARRICK:  Thank you.  Any other comments? 
     Yes, Milt.
               MR. LEVENSON:  I have sort of a follow-on question
     to George's about this catalog of things that are left out
     or bounded.
               Is it possible to get a copy of that list,
     recognizing that it's still fragile and a work in progress? 
     At the moment, I find that I have no real feel for the scope
     of this issue, and it's a fairly important part of the
     credibility of the TSPA.
               MR. VAN LUIK:  The TSPA-SR document itself, when
     it comes out, will have a list in it of places where we use
     conservative assumptions and that's the starting list and
     that's a pretty short list.
               In the meantime, some of our other people have dug
     way into the hinterlands of the process models and the data
     interpretation reports and come up with a much longer list.
               The principal investigators looking at these
     things feel that that's -- the one approach may be a little
     bit short, the other approach is a little bit overboard.
               But you will see the very first version when the
     TSPA-SR document comes through the DOE review and becomes
     available to you.
               MR. LEVENSON:  That will include lists of all of
     the things left out, like the stainless steel in the
     container, et cetera.
               MR. VAN LUIK:  Those kinds of things are the more
     obvious ones and they will be discussed in the document,
     yes.  But I'm talking about where we use the bounding value
     rather than a PDF, because our internal expert judgment was
     that to go beyond that, since we already know that it makes
     very little difference to the performance measure of
     interest, would be money not well spent.
               So those kinds of analyses also will be reflected
     in that table in the TSPA-SR, and then you go to the AMRs
     and PMRs and see what the actual logic was.  
               So that is pretty well in there.  It's pulling all
     that together and we have several different people doing it,
     coming up with different lists, and our job, our task
     force's job is to come up with one list that we all agree
     on.
               But as soon as it's a little bit less flaky than
     it is now, you can have it.
               MR. GARRICK:  Okay.  Andy?
               MR. CAMPBELL:  It sounds like what you're focusing
     on, though, are parameters, where you're not sure what the
     range ought to be, so the analyst picks a value that they
     believe bounds what that range of values would be.
               And in that analysis that you're doing, are you
     going to also look at what the shape of the distribution
     might be on the results in terms of that parameter?  That's
     one question.
               But more importantly, how are you going to factor
     conceptual model uncertainty, because you're dealing with
     parameter uncertainty in that case, but you also have the
     whole issue of do you have the right conceptual model and
     then you get into type one and type two errors, and that
     kind of stuff.
               Do you have a plan for dealing with that?
               MR. VAN LUIK:  That is included in what we are
     supposed to be looking at.  Obviously, the easiest thing to
     do is to look at parameters.  That is another order of
     magnitude more difficult, but that goes to the
     interpretation of data, what interpretations does it allow.
               So it definitely is part of the plan, but whether
     we get that done in the first phase or the second phase, I
     would guess it would be the next phase.
               But we plan to do this to the point that at the
     time of license application, we have our ducks pretty well
     in a row.  But I have to re-emphasize that I think and I
     think most of the PIs in the program think that these are
     second order corrections and we have captured the major
     uncertainties and in several instances we have captured, by
     analyzing, different conceptual models, for example, and
     chosen either the more conservative one, which is another
     source of conservatism, or we have somehow combined them.
               So there is already conceptual model uncertainty
     addressed in the TSPA-SR itself and in the supporting
     documents.
               It's not like it's something that we say, oh, gee,
     we forgot that, but there are the major ones, like in the
     unsaturated zone, and then there may be some other minor
     ones, too, that we will want to address further in this
     exercise.
               MR. GARRICK:  Okay.  No further questions, we'll
     listen to Bob Andrews.
               MR. ANDREWS:  Thank you very much.  As Abe said,
     this is work in progress.  The documentation of the analyses
     and calculations, and the model and the report itself, is in
     internal M&O review and comment resolution as we speak.
               It's been an incredible team.  Most of you are
     aware of them from the VA.  It's been a fairly stable team,
     which I'm very thankful for, and that team, in Albuquerque
     and here in Las Vegas, is still very hard at work.
               I had the joy yesterday of looking at the cover
     page of the technical report.  On the cover page, we're
     putting all the contributors to the documentation of the
     TSPA-SR, and by the time you looked through the 20 or 30
     names on there, you were well aware of the hard and
     incredible work of pulling this thing together and
     documenting it, as Abe said, in as clear and concise and
     traceable and transparent a fashion as possible.
               And sometimes, some of those adjectives compete
     with each other, as those of you who have prepared large
     documents are aware.  You try to make it traceable, but in
     so doing, you might have lost some transparency, or you've
     tried to make it transparent, but in so doing, you might
     have lost some traceability.
               But I think we have a happy medium between those
     and the issues that were raised by the questioners in Abe's
     presentation I think have been addressed.
               My objective today is to kind of walk through the
     TSPA-SR as it stands right now:  the various attributes; a
     look through the system, how the system, or the components
     of the system, are connected; and then the results.
               The objective in the hour or hour and a half that
     we have here is not to go through each individual component
     part, starting with climate and going through to biosphere
     and to disruptive events, but to talk about it as an
     integrated system.
               If you have questions about an individual part,
     I'll do my best to explain how that part was implemented in the
     TSPA model.  As Abe and Russ and Tim and Carol told you, the
     building blocks of this TSPA are those 121 AMRs that
     provide the technical foundation and the data, in fact, that
     support those AMRs.  Those AMRs, those analysis model
     reports have used site-specific data, analog data, as
     appropriate, in situ data, laboratory data, literature data
     to develop their technical bases and the technical bases
     reside in those AMRs.
               So with that, let me go on, John, to the next
     slide and talk about process.
               The next two slides kind of go hand in hand.  I
     think this is a fairly well defined process.  This is a
     pictorial representation essentially of the requirements of
     TSPA as they are defined in Part 63, where the first step is
     to identify those features, events and processes that may
     significantly affect the performance of the repository
     system, both the engineered features, events and processes
     and the natural system features, events and processes.
               We've looked through, starting with an NEA
     database, which the department and NRC were both party to in
     the development of, as well as the international community
     at large.
               We added Yucca Mountain specific features, events
     and processes to that, to the point where it became
     something like 1,600 features, events and processes that
     then had to be either evaluated and screened into a model
     or, with a basis, screened out of the model.
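The screening step described here, each feature, event and process either carried into the model or excluded with a documented basis, amounts to a simple partition of the FEP list.  A minimal sketch, with hypothetical FEP records standing in for the actual NEA-derived database:

```python
# Hypothetical FEP records; names and bases are illustrative only.
feps = [
    {"name": "climate change",    "significant": True,  "basis": ""},
    {"name": "meteorite impact",  "significant": False,
     "basis": "probability below regulatory cutoff"},
    {"name": "igneous intrusion", "significant": True,  "basis": ""},
]

def screen(fep_list):
    screened_in, screened_out = [], []
    for fep in fep_list:
        if fep["significant"]:
            screened_in.append(fep["name"])
        else:
            # A screening-out decision is only valid with a stated basis.
            assert fep["basis"], f"no basis documented for {fep['name']}"
            screened_out.append((fep["name"], fep["basis"]))
    return screened_in, screened_out

kept, dropped = screen(feps)
print(kept)     # FEPs carried into the TSPA model
print(dropped)  # FEPs excluded, each paired with its documented basis
```

The assertion mirrors the regulatory point: an excluded FEP without a basis is not a valid screening decision.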
               Once we've done that and I think the NRC is
     reviewing the analysis and model reports that relate to the
     FEPS screening process, we developed two basic scenario
     classes using the definition in the TSPA-I IRSR of scenario
     class.
               Those scenario classes are what we call a nominal
     scenario class and, in this particular case, a volcanic or
     disruptive events scenario class.
               In the disruptive event or volcanic scenario
     class, there are actually two.  I think the parlance in the
     TSPA-I IRSR is an event class; two event classes, one an
     intrusive event class and one an extrusive event class.
               Given that I have those scenario classes, now I
     have to have the individual component models and the
     integration of those individual component models and the
     scientific technical underpinning for those individual
     component models.
               In the TSPA, if you come around the wheel of the
     figure, there are essentially nine of those component parts. 
     There are subcomponents, which I'll get to in a second, that
     are in a series of backup slides to this presentation, but
     we start first with the unsaturated zone flow, the things
     above the repository, climates, air infiltration, et cetera.
               We then get to the engineered barrier system
     environments.  The environments in the drift, around the
     drift, in the rock, that can affect the degradation
     characteristics of the engineered materials that are placed
     inside the drift, with the proposed design that Abe talked
     to you and Russ talked to you about.
               We then get to the degradation of package and drip
     shield, their performance over time.  Next, the waste form. 
     Once the packages are degraded, the waste form starts
     degrading, the cladding can start degrading, the internals
     of the package affected by the environments inside the
     package once they start degrading.
               We then have transport through the package,
     through the invert materials, into the rock, and then
     transported in the unsaturated zone to the water table,
     transport through the saturated zone, and, finally, we have
     a biosphere, where the nuclides that are released to the
     community of individuals -- let's not get involved whether
     it's a group or maximally exposed, but we'll talk about that
     a little later -- that group of individuals is exposed to
     those nuclides.
               So we end up with a volcanic dose, dose induced by
     the low probability volcanic scenario classes, and the
     nominal dose, those that are not impacted by these low
     probability volcanic scenario classes.
               As Abe pointed out, there are two other principal
     performance measures, regulated performance measures, the
     ground water protection, concentration, however that's
     finally implemented in Part 63, assuming that it still
     exists in 197.  It does exist in the proposed 197.  And the
     human intrusion dose, assuming, again, that it's implemented
     in the way Part 63 has it currently in the draft.
               As you're well aware, Part 197, in its proposed
     regulation, allows the applicant to potentially exclude that
     scenario from consideration if the applicant so decides.
               We have, for the purposes of the TSPA-SR, included
     that particular scenario class into the assessment.  It's
     not weighted.  It's a stylized calculation to evaluate the
     robustness of the system.
               The one that's not shown on there is the peak
     dose, the requirement for the final environmental impact
     statement.
               So that is the process of developing the TSPA-SR. 
     If I skip over the word slide, because that's in there more
     for completeness, and walk now through the various component
     parts that feed into the TSPA-SR.
               This set of attributes is very familiar to this
     board, I know, from the repository safety strategy Rev. 3
     and also the viability assessment.  The viability assessment
     volumes three and four talk about the major attributes that
     affect the long-term performance of a repository at Yucca
     Mountain.
               All we're trying to do here is to put it
     into a little more general construct and use some icons to
     point the reader through where they are in the system,
     starting with the natural system, with the water above the
     repository, as it gets into the drifts, then the package
     itself, then the mobilization and release; finally,
     transport and whatever the consequences and risks associated
     with disruptive events might be.
               As a general view, the next slide takes the TSPA
     wheel for the nominal scenario, a similar one for disruptive
     scenarios, but this is for the nominal, and shows you the
     individual component parts and the individual process model
     factors or subcomponent parts or process models, whatever
     you want to call it, that feed into that TSPA-SR.
               This is kind of shorthand notation.  The last, I
     think, nine slides of your handouts are in the backup; I
     didn't feel it was necessary to go through them in the
     presentation, but they're there to provide you a road map,
     if you will, from the feeds into the TSPA, which are shown
     here, and the individual component parts that are shown
     there, to show you which analysis model report is the final
     one providing the input parameters, the input parameter
     distributions, the discussion of alternative conceptual
     models, and the technical bases for those parameters.
               So it's a tabular mapping that takes the
     individual parts, piece parts shown on this wheel, tells you
     what parameters those piece parts are generating.  There's
     on the order of several hundred parameters that are being
     generated for input into the TSPA, and shows you where the
     supporting documentation is for that.
               It gives you the analysis model report title, the
     analysis model report number.
               So I believe the Commission has, I think, as Russ
     said, 119 of those.  So I believe all the ones that are
     indicated there, the Commission has.
               That's more of a traceability completeness kind of
     backup presentation than for you to necessarily do anything
     about, unless you want to review those AMRs.
               So the next set of slides just walk through those
     process model factors, the individual component parts that
     feed into the TSPA model, starting, first, with those that
     affect the attribute of water contacting the waste
     package.  So climate, infiltration, UZ flow, the effects of
     thermal hydrology on the in-rock processes and, finally,
     seepage into the drifts.
               The next slide shows those principal factors that
     are affecting the environment inside the drift, the
     environment that the engineered barriers are likely to see
     as they change with time after the wastes are emplaced.  In
     particular, the chemical environment and the thermal
     hydrologic environment.
               Also in that physical environment are the stress
     environments associated with the degradation of the drifts
     themselves.
               The next slide looks further into the drifts and
     this is essentially looking at those component parts that
     relate to the degradation characteristics of the drip shield
     and the degradation characteristics and projected
     performance of the waste packages themselves.
               Next slide gets into the internals of the package. 
     We have two basic types of packages, one being the
     commercial spent nuclear fuel packages and one being what
     have been termed the co-disposed packages.  It's co-disposed
     glass logs, with DOE spent nuclear fuel rods going down the
     center.
               In addition to these, though, there are other
     kinds of packages, special packages, for example, the Naval
     wastes have specialized packages because of their size and
     handling requirements.
               But these are the two principal ones and any other
     special type package, we've done a special off-line analysis
     of the consequences associated with that kind of inventory
     and that kind of waste, if it's different from these kinds
     of wastes.
               On the next slide, it should be pointed out that,
     for DOE-type wastes, there are on the order of several
     hundred different specific waste forms. 
     There are not data on every one of those specific waste
     forms, so the DOE spent nuclear fuel program, for our
     purposes, has lumped those waste forms into 13 individual
     types of waste forms, with similar types of characteristics
     and similar types of inventories and similar expectations
     about the degradation of the cladding associated with those
     waste forms.
               The next slide talks to the transport aspects,
     away from the engineered barriers.  In particular, transport
     through the unsaturated zone, transport then through the
     saturated zone, and ultimately the uptake of these
     contaminants by the biosphere group and whatever dose
     consequences are associated with the uptake and use of the
     water that may have been contaminated at some point in time
     by the other degradation processes and transport
     mobilization processes.
               The next slide shows what happens when we have a
     volcanic event, with the probabilities that are currently
     being estimated; I believe you talked about those yesterday,
     at the KTI meeting on igneous activity.  Right now, we're
     sampling that probability from the expert elicitation that
     was performed on the probability of an igneous event, but we
     then have two types of consequences.
               So, therefore, two types of event scenarios are
     being assessed.  Type one is the event occurs,
     intersects the repository, degrades the package and the
     event conduit continues to the surface and you have a cinder
     cone and ash associated with that.  The ash is redeposited
     by the wind over the member or members of the critical group
     and there's a potential dose associated with that release
     pathway.   
               That one we've called an extrusive volcanic event
     or an eruptive volcanic event scenario class.
               The other possibility is that the dike intersects
     the repository, degrades the packages sufficiently so
     they've lost their containment possibility, degraded the
     drip shields, degraded the cladding, and then the normal
     processes of the nominal scenario take place; i.e., all the
     slides that I had in there earlier about radionuclide
     mobilization, alteration of the waste form, release from the
     waste form, transport through the engineered barrier system,
     transport through the UZ and transport through the saturated
     zone, and then uptake in the well and biosphere dose
     consequences associated with that.
               So we have two very different pathways, all with
     the same initiating probability, but very different
     consequence models from that initiating probability on. 
     They are then, of course, combined at the end for the same
     event.
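The combination at the end, two very different consequence models weighted by the same initiating probability, could look schematically like this.  It assumes, purely for illustration, that the two pathway consequences for a given event add; every number below is made up, not a program value.

```python
def combined_igneous_dose(p_event, dose_extrusive, dose_intrusive):
    # Expected annual dose from the disruptive scenario class: one sampled
    # event probability weights both consequence pathways, which are then
    # combined for the same event (additive here, as a simplification).
    return p_event * (dose_extrusive + dose_intrusive)

p = 1.0e-8       # illustrative sampled annual event probability
d_ext = 2.0e3    # illustrative consequence, eruptive (ash) pathway
d_int = 5.0e1    # illustrative consequence, intrusive (dike) pathway
total = combined_igneous_dose(p, d_ext, d_int)
print(total)
```

The key structural point is that the initiating probability appears once, outside the parenthesis: both pathways inherit it from the same sampled event.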
               If I go to the next slide, the regulation
     currently requires a stylized human intrusion scenario.  This
     is that stylized human intrusion scenario.  Somebody drills
     inadvertently, goes through a package, goes through the
     waste form, goes back through the package, back through the
     EBS and down, continues down to the saturated zone, and then
     radionuclides can be mobilized, released from the package
     through that degraded -- what now is a degraded engineered
     barrier, through what now is a degraded unsaturated zone
     barrier, and through into the saturated zone and then the
     other processes take their normal course.
               The next slide tries to summarize, I think, what
     Russ has told you and what Abe has told you, and I'm going
     to go into a little more detail on some of the following
     slides.
               The technical bases -- turn back to the viability
     assessment.  The viability assessment, volume three of the
     viability assessment, which I know this panel reviewed, had
     a large technical basis document that went along with it,
     essentially nine chapters.  That provided the individual
     bases for the individual component models in the viability
     assessment.
               In the SR, or the SRCR TSPA that we're talking
     about here, those nine chapters got expanded into nine
     process model reports, which are very similar, with slight
     differences between the technical basis document and how
     the process model reports have been lumped.
               But the fundamental science is embodied in those
     121 AMRs, analysis model reports, developed by the labs and
     the GS and M&O participants to support the TSPA-SR.
               Of those 121, 40 of them are direct feeds into the
     TSPA.  So those 40 provide a direct data set or model or
     conceptual model or equation or something, are a direct feed
     into the TSPA.
               You say, well, what about the other 80.  Well, the
     other 80, probably 15 of them relate solely to screening
     arguments, features, events and processes screening
     arguments.  The other 65 are process models.  They are
     alternative models.  They are supporting models that feed
     into those 40 that ultimately support the TSPA model itself.
               So for all 120, we have several family trees of
     these 120 AMRs, and showing how the information flows in all
     of them takes about 15 figures.  We've put that in the
     appendix of the TSPA document, to show where all the
     information came from to support the final feed into the
     TSPA.
               So one can very, very quickly and very easily pull
     the chain, pull the string, and go back to the AMR that gave
     the technical basis for its inclusion in the TSPA.
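That "pull the string" traceability is, in effect, a walk back through a dependency graph from a TSPA feed to every report that supports it.  A toy sketch, with hypothetical report names in place of the actual AMR titles and numbers:

```python
# Hypothetical support map: each item lists the reports it rests on.
supports = {
    "TSPA feed: waste package degradation": ["AMR-A"],
    "AMR-A": ["AMR-B", "AMR-C"],
    "AMR-B": [],
    "AMR-C": ["AMR-D"],
    "AMR-D": [],
}

def pull_string(item, seen=None):
    # Depth-first walk back through the family tree of supporting reports,
    # collecting everything that provides a technical basis for `item`.
    seen = set() if seen is None else seen
    for dep in supports.get(item, []):
        if dep not in seen:
            seen.add(dep)
            pull_string(dep, seen)
    return seen

basis = pull_string("TSPA feed: waste package degradation")
print(sorted(basis))
```

The returned set is the full chain of supporting reports, which is what the family-tree figures in the TSPA appendix lay out graphically.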
               And I might add, based on the discussion that was
     going on before, that the technical bases for the
     assumptions involved in those analyses and models, including
     any assumptions on degree of conservatism or degree of
     complexity, on whether something is going to be treated as a
     PDF or not, and the reasons for that, are in those
     supporting analysis model reports.
               The 40 that feed directly into the TSPA-SR model
     are shown on this slide.  This is the one you need in
     addition to Russ' pyramid, the one you need the magnifying
     glass for.  Actually, the next one, too, you'll need a
     magnifying glass for.  In the actual document, they appear a
     little bit bigger.
               But the color coding is color coding the AMRs to
     the corresponding process model report in which that
     analysis model report is summarized and the arrows are
     showing where in the TSPA model are the individual component
     analysis model reports input and then where in the middle is
     the process of doing the TSPA, starting with the package
     degradation, going to waste form degradation, going to EBS
     transport, UZ transport, SZ transport, and, finally, the
     biosphere.
               That middle part is the guts of the TSPA model, if
     you will.  All of those inputs are what's being integrated
     within the context of the TSPA model.
               If you don't like boxes, then the next two slides
     try to walk it through from an information flow point of
     view; what information, what kind of, if you will,
     intermediate result is passing from component to component
     within the TSPA model.
               One of the objectives of the TSPA model itself, and
     how it's been constructed and documented, is to show, in as
     transparent a fashion as possible, given that I have 40 AMRs
     that are feeding it, how the information flows from
     component to component: how does climate information flow to
     unsaturated zone flow; how does unsaturated zone flow
     information flow to seepage; how does thermal hydrology
     information flow to the degradation of the package, the
     cladding, and the drip shield; how does thermal hydrology
     information flow to the thermal and hydrologic
     characteristics of the invert.
               So there are placeholders within the TSPA model --
     and we've done this in the technical report and in the model
     document -- where we go in and stop the results, if you
     will, and look at what information is passing from component
     to component; what flux -- generally energy kinds of things.
               It's heat that's passing from component to
     component, it's mass that's passing, it's volume of fluid
     that's passing from component to component, and ultimately
     it's -- and it's chemistry passing from component to
     component, and, ultimately, it's radionuclide activity
     that's passing from component to component.
               You can say that from barrier to barrier, you can
     say it from feature to feature, but it's traceably and
     transparently showing how that system evolves based on the
     models available through time.
               Some of that information I've included as backup,
     in the first part of your backup slides; I was not going to
     go over it here unless we get into that level of
     detail.  But as we go from flow to seepage to package to
     water mass movement, mass release, activity release, and
     ultimately activity release at the 20 kilometer point of
     compliance, that ultimately leads to a dose.  That's all in
     the backup and it's in greater detail in the technical
     report.
               So these two slides show how information flows
     from part to part.  Same thing on slide 17.  It shows
     once I get internal to the engineered barriers, that is
     internal to the drift, how the environments are propagated
     through time and what downstream processes impact, and then
     how the degradation characteristics of the engineered
     barriers are propagated through time, and then ultimately
     the release, mobilization and release of nuclides from those
     engineered barriers and back into the natural barriers.
               The next slide, Abe alluded to this.  Uncertainty
     and variability, both primarily quantified, have been
     directly incorporated in the TSPA-SR model.  The third
     bullet there is a bullet that Abe had on his slide and that
     is where the individual analysis model report originator,
     the reviewers, the checkers, felt there was either a large
     degree of complexity or a large degree of uncertainty.
               The goal was to be defensible and in being
     defensible, they were probably, in some cases, a little
     conservative.  They documented that in their analysis model
     report.  Are there alternatives they could have chosen? 
     Yes.  Did they document what those alternatives were?  Yes. 
     Did they give a rationale why they didn't choose that
     alternative?  Yes.
               Could we propagate those alternatives back through
     the rest of the system in sufficient time?  Maybe, because
     some of those alternatives require alternative
     representations.  They require alternative data.  They
     require alternative analyses and they require alternative
     abstractions into the performance assessment.
               It's not a simple flick of the switch.  It is in
     some cases.  If it's down at the parameter level, I think
     Andy's question is very well taken; it is a very simple
     aspect.  But if you don't have the process model -- take the
     example that was alluded to earlier: the stainless steel
     degrades, its degradation characteristics are sufficiently
     complex, and the amount of information available for the
     environments that we have is sufficiently uncertain, so it
     would have taken a big effort to incorporate stainless steel
     into the TSPA model.
               First, the process model, then the abstraction,
     and finally into the TSPA model.  So a conscious decision
     was made by all concerned, including the Department of
     Energy, in that particular case and others, of which things
     to include and which things to exclude, and to carry those
     exclusions.
               MR. GARRICK:  Bob, just to pick up on that a
     minute.  Obviously, you have to have some evidence in order
     to assign a number or a parameter that you think represents
     an upper bound or a conservative estimate.
               It just seems that it would be easier to hedge
     your bets with a discrete probability distribution, for
     example, than with a single number.  In other words,
     you talk about an unquantifiable event or number or
     parameter, on the one hand.  On the other hand, you say you
     assume conservative values.
               So you've got to have some basis for justifying it
     as a conservative value.
               MR. ANDREWS:  I think they do.  I think --
               MR. GARRICK:  And my point is this.  It's easier,
     as a matter of fact, to justify some sort of a distribution
     than it is a single value.
               MR. ANDREWS:  You're right.  To justify the
     distribution, in some cases, you say, I'm pretty sure
     it's within this range.  I have evidence to say it could be
     at low values -- pick the parameter, pick the model, it's
     the case all across the board.
               There's evidence to support it down here.  There's
     evidence to support it up here.  There's evidence to support
     it in the middle.  It might be analog evidence.  It might be
     direct field observations, whatever the evidence is.
               However, when faced with the requirement of being
     as defensible as possible and to not be optimistic with how
     performance may evolve in this system over time, the
     analysts, I believe, personally, correctly, went with the
     conservative approximation.
               There have been words in Part 63 or in the
     statement of considerations, perhaps, I'm not sure where
     they were for Part 63, that if the applicant -- it might
     have been 60, in fact, I forget where they were -- if the
     applicant has uncertainty in a particular aspect or
     alternative representations or alternative models, it is
     okay to include those alternative models and alternative
     representations in your performance assessment, but we still
     want to see the effects of that more deterministic, more
     conservative representation.
               MR. GARRICK:  That's not a particularly
     unreasonable approach.  My only point is you don't want to
     get yourself trapped into saying, on the one hand, that you
     have no information, and, on the other hand, you're using a
     number that's characteristic of information.
               MR. ANDREWS:  That's well taken.  That's a point
     well taken.
               The next three viewgraphs summarize some of these
     aspects in a kind of Consumer Reports sort of fashion of
     what uncertainty was directly incorporated in the TSPA-SR
     model, what variability, and this is generally spatial
     variability, was included in the TSPA-SR model, and a brief
     set of comments.
               The comment is, if I didn't check uncertainty or
     variability, those are generally the ones where a more
     conservative representation was taken within the analysis
     model reports in order to avoid some of the complexity
     associated with that particular process or coupled process.
               The very first one on there, on page 19, is the
     coupled effects on seepage.  This was one that also was
     noted in our review of the viability assessment, it's been
     noted several times by the Nuclear Waste Technical Review
     Board, noted by NRC staffers and center staffers on their
     review of the viability assessment, as well.
               And that is that the actual seepage into a drift
     following emplacement and the perturbations that are caused
     by emplacement, the mobilization of water, the changes in
     the chemistry, the changes in the mechanical stress around
     the drift, the degradation of the drift support system
     itself, all of the thermal mechanical hydro chemical coupled
     processes are -- I think everybody would acknowledge are
     very difficult to quantify with a high degree of
     defensibility.
               There are data, yes.  There are specific data,
     even, associated with seepage tests that have been conducted
     by Berkeley in various niches at the site, the actual
     thermal hydrologic drift scale test at the site, and smaller
     scale thermal hydrologic tests at the site, plus modeling
     and analyses of coupled thermal chemical processes.
               There are data to support a range of possible
     effects associated with thermal hydro chemical mechanical
     stress evolution at the site.
               But it's fairly broad what the possible outcomes
     are in terms of their effects on the rock properties, the
     permeability, the fracture aperture, the capillary suction
     in and around the drift.
               Therefore, because of that complexity and because
     of that uncertainty in what is the most reasonable, the most
     defensible model for seepage in the effects of all these
     coupled processes, a conservative assumption was taken:
     take the flux five meters above the drift, I think the
     bullet says there, and put that flux into the seepage model.
               Don't try to capture all the nuances of what
     happens in the first ten centimeters or the first millimeter
     of the drift wall as the drifts are degrading with time.
               So that's neither uncertainty nor variability in
     the model.
               Is the actual seepage into a drift uncertain? 
     Yes, because the seepage model itself is uncertain and the
     flux itself is very uncertain and highly variable.
               But the coupled processes and the complexity
     associated with those was, in this particular case,
     eliminated by making this conservative assumption.  I should
     point out that that was, going back to the TSPA-VA, that
     was, in fact, a recommendation of the peer review panel to
     say in order -- the complexities associated with this one
     are so large, you, department, may be better off simplifying
     it and that's what we've done.
               George is champing at the bit here.
               MR. HORNBERGER:  Just a quick question on that. 
     The TRB has been critical of this particular aspect and one
     of the contentions is that some of this uncertainty may, in
     fact, be diminished or go away if the temperature of the
     repository were lower.
               Can you give me the basis of that?  Do you agree
     with that contention and if so, what's the basis of reducing
     the uncertainty?  The coupled processes are still there, is
     that right?
               MR. ANDREWS:  Yes.  Even at 70-80 degrees C, you
     still have coupled processes.  You still have mechanical
     processes at zero degrees C.  As soon as you open that
     drift, you have mechanical degradation processes that will
     kick into gear and then you have the thermal hydrologic
     coupled processes and the thermal chemical coupled
     processes.
               I can't speak for the board and their beliefs
     about reducing uncertainty with cooler repositories.  I
     believe it's not totally tied to this particular one.  I
     mean, this is what gets the focus, but it's also on the
     degradation characteristics of the engineered barriers in a
     cooler environment -- where cooler now is 70, 80, 90 degrees
     C -- rather than 120, 130 degrees C.
               So it's not solely in the rock that they're after.
               MR. HORNBERGER:  I'm not asking you to speak for
     the board.  I was just asking, in your experience in doing
     TSPA, would the uncertainties be significantly reduced at
     these lower temperatures?
               MR. ANDREWS:  No.  We've got enough uncertainty.
               MR. GARRICK:  It just may be displaced.
               MR. ANDREWS:  It might be displaced in time.
               MR. GARRICK:  Right.
               MR. ANDREWS:  I don't know if you want to pick up
     any other of those that don't have checkmarks, but if I go
     to the second page, for example, I think it's a useful one
     on the waste form characteristics and the waste form
     degradation.
               There are a number of fairly complex processes
     that occur once water, in whatever form it exists, whether
     it's humid air or actual liquid water, comes into contact
     with the waste form.  There are very good data collected at
     PNL, at Argonne, at Lawrence Livermore Labs, and
     internationally about the degradation characteristics of
     the waste forms.
               Those data have been used, but they have a fairly
     broad range and the applicability of that range under the
     exact environments that we're expecting is uncertain.  We
     could have used that range or we could have used a more
     bounding value.
               What's there right now in Rev. 00 that we're
     talking about is that more bounding value for the waste form
     characteristic degradation.
               I don't know if there's any -- in-package
     transport, that's another nice one: what the internals of
     the package look like after the package has degraded, as a
     function of time.  The characteristics of the basket
     material, the degradation of the basket material, the
     stresses involved inside the package, the hydrology in the
     package, the chemistry inside the package are very
     uncertain, very uncertain.
               And for anybody to confidently predict the
     internals, with the exception of some basic fundamentals
     like temperature, would be -- well, it would be difficult.
               So a more bounded type of approach was taken:
     bring that mass, bring that activity, once the waste form is
     altered, to the edge of the package, the inner edge of the
     package.
               So transport inside the package, transport along
     fuel rods, transport through a degraded fuel rod assembly
     was not taken credit for in the Rev. 0 TSPA-SR.  And that
     would require, obviously, a different model, different
     representation, alternative conceptual model of how you
     think the internals of the package perform over time.
               It is much simpler, much more defensible to take
     the more conservative approximation in this case.
               By the way, most of these have a limited effect,
     where limited is small, on the post-closure
     performance.  We did not make the collective decision of
     where to add conservatism based on things that were highly
     important to performance.
               So it was generally focusing on those that were
     less significant to performance.  One exception to that is
     this secondary phase issue that's a bullet on the waste form
     degradation and solubility limits.
               There's no other -- well, there are other
     interesting ones in here, but I think those examples are
     probably sufficient to walk through the process, the
     collective decision-making that was done.
               As Abe told you, each one of these is being
     examined with this small group, evaluating: should we look
     at quantifying this uncertainty, and if we do, should we
     then run a calculation to see what the impact of that
     particular aspect on the system performance would be.
               I think, taking Dr. Garrick's point to heart, now
     that we have the tool, it's running, it works, we believe
     it's giving reasonable results, now it's kind of time to
     exercise the tool with alternative representations, gain
     additional understanding.
               MR. WYMER:  It's a little disturbing to me that
     most of the areas that are unchecked, or many of them, are
     chemistry related areas and to a chemist, those don't look
     any more difficult than some of the other complexities which
     have been taken into account.
               And in some of them, like second phase formation,
     you may have quite a profound effect on the release of
     radionuclides, where it would be significant in the dose to
     the person at 20 kilometers.
               So it seems to me that, from a prejudiced point of
     view, what I see is a reflection of the knowledge and
     backgrounds of the people doing the study rather than a
     reflection of the difficulty of doing the analysis.
               MR. ANDREWS:  Well, there are a lot of good
     geochemists working on this project who have, through their
     analysis and model reports, supported the development of
     these inputs.
               The one example you pointed to is a very near and
     dear example to many of our hearts.  It was an example that
     we used in the VA, in the viability assessment.  It's an
     example we've had extended discussions on with the folks at
     Argonne who are doing the detailed fuel testing and
     characterization of the alteration phases of the fuel.
               It's a point that I think has been made by NRC and
     center staff with respect to review of Pena Blanca and
     utility of Pena Blanca as a very valuable analog for waste
     form degradation and mobilization and transport of some of
     the actinides that we're talking about here.
               MR. WYMER:  There are other kinds of chemists than
     geochemists, you know.  There are inorganic chemists and
     physical chemists who could give a good deal of insight here.
     Geochemists have their own point of view.
               MR. ANDREWS:  Well, I'm not going to get into that
     debate.
               MR. WYMER:  Unfortunately, I have one sitting next
     to me.
               MR. ANDREWS:  But the point is well taken and I
     think there are complexities associated with the controlling
     phases as these materials degrade.  There is complexity
     associated with some fundamental thermodynamic information
     on these controlling phases.  Not so much on the uranium
     side, but when I put neptunium or plutonium in them -- I'm
     not a geochemist, so excuse my bias, but I don't think some
     of that fundamental thermodynamic information is available.
               I think there was a presentation to the technical
     review board in Reno by Dr. Glassly from Livermore and he
     pointed that uncertainty out as well, that some of the basic
     thermodynamic data is just not there, and I think the board
     -- I forget -- one of the board members asked him, well, are
     we pushing to get that kind of information, and his response
     was, well, I hope so, but it sounds like fundamental
     university type laboratory research to come up with those
     data.
               And in the absence of some of those thermodynamic
     constants and time stability constants, it was difficult for
     -- in the high defensibility role that we wanted to be in,
     to take credit for some of those aspects.
               Is there more that we could gain there?  Sure as
     heck there is.  And I think Dr. Ewing was a member of our peer
     review panel or DOE's peer review panel, I should say, and
     he, I think, shared your kind of comments on the VA.  I
     think they are throughout the VA comments.
               It's pretty complex.
               MR. WYMER:  Yes.  No more complex than some other
     aspects, though, that are dealt with, in my judgment.  But,
     okay.  I'm not going to beat a live horse.
               MR. ANDREWS:  Okay.
               MR. GARRICK:  By the way, Bob, these are excellent
     exhibits.  I must like matrices, because these are very
     helpful.
               MR. ANDREWS:  Good.  Okay.  Now, we've set the
     stage.  It's time to get to some results, preliminary
     results.
               The first result slide -- John, why don't you go
     ahead with the VA -- puts into context the TSPA-VA result
     that's most comparable to the results that I'm going to be
     showing you.
               In the VA, of course, we did not have Part 63 or
     197 on the street as proposed regulations.  I think they
     came very shortly after the summer of '98.
               However, there was enough discussion, I think,
     with ACNW, with NRC staff, that we had fairly good knowledge
     of what was going to be in the draft regulation when it came
     out, which was a slightly different way than we were doing
     most of our plots in the VA, quite frankly.
               How one does the expected, quote-unquote, dose --
     just the mathematics of doing that calculation -- was
     slightly different from the way we were proceeding in 95
     percent of the plots that we presented in the VA.
               But we had one set of plots, shown here, figure
     4-28, that most closely represents for the nominal -- this
     is nominal scenario class in the VA -- most closely
     represents the way NRC ended up writing the proposed Part
     63.
               What I've shown here is the 95th percentile, the
     mean, i.e., the expected value, and the median; the 5th
     percentile is actually off the curve.  The 5th percentile
     was zero out to 100,000 years.
               So it gives you an idea, backed with VA models, VA
     assumptions, whether they were good, bad or indifferent, VA
     design, this is the comparable VA, TSPA-VA result for the
     slides that are going to be following.  I did not change the
     time axis to be logarithmic like the subsequent ones are.
               There have been a lot of changes.  As Abe pointed
     out, there were a number of changes.  There was hardly a
     model that didn't change between the VA and the SR.  The
     design changed from the VA to the SR and in some cases, the
     approach changed between the VA and SR.
               So the following slides all relate to those 121
     AMRs and their incorporation in the TSPA.
               So let's skip to slide 24.  What I have up here
     are 300 -- the skinny little lines are 300 individual
     realizations of the total system performance based on those
     40 AMRs, cranking them through with their uncertainty in
     their individual parameters and models, et cetera.
               So each one of the 300 has an equal likelihood of
     occurring.  We then superimposed on top of those 300 four
     basic statistical measures, essentially done on a per-year
     basis, where per year really means, in this case, per time
     step, and a time step is about 25 years.
               So it's per time step slice and I've shown the
     95th percentile, the mean or the expected value, the median
     or the 50th percentile value, and the 5th percentile value.
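The per-time-step statistical measures described here can be sketched in a few lines.  This is illustrative only: the lognormal doses, the 400 time steps, and all numbers are assumptions standing in for actual TSPA output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for TSPA output: 300 equally likely realizations of dose
# rate, one value per ~25-year time step (400 steps is an assumption).
n_real, n_steps = 300, 400
doses = rng.lognormal(mean=0.0, sigma=2.0, size=(n_real, n_steps))

# Four statistical measures taken per time-step "slice", i.e. across
# the 300 realizations at each time step:
p95 = np.percentile(doses, 95, axis=0)      # 95th percentile
mean = doses.mean(axis=0)                   # the expected value
median = np.percentile(doses, 50, axis=0)   # 50th percentile
p5 = np.percentile(doses, 5, axis=0)        # 5th percentile
```

For a skewed distribution like this the mean typically sits well above the median, which is part of why the spread of the curves matters.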
               Several things to note on this slide.  One is
     there is quite a wide spread, variance, if you will, of the
     dose as a function of time.  If I look at it at any
     particular time, there is a wide variation of dose.  If I
     look at any particular dose, there's a wide variation of
     time.
               So it's quite a wide spread.  Understanding that
     spread, that distribution of the results is an important
     component of performance assessment, and we're going to
     spend a little more time talking about that, why the broad
     spread.
               If you look at 100,000 years, that spread is going
     over seven or eight orders of magnitude.  It's very broad.
               Another point of observation is in those 300
     realizations, for the data that are contained within the
     analysis model reports, for the models that are supported in
     those analysis model reports, none of them had a failure of
     the engineered barriers prior to 10,000 years in this
     nominal scenario class.
               We'll look at some examples in just a second where
     that's not the case.  It's not the case in the volcanic
     event and it's not the case in human intrusion, but for the
     nominal performance, the expected performance, with its
     uncertainty embedded in there, none of them failed before
     10,000 years.
               Your logical question is, is it impossible to have
     failures before 10,000 years, and the answer is clearly no. 
     There is a probability of having a degradation prior to
     10,000 years, that we'll get to here in a second.  But for
     the nominal performance over that many realizations, there
     were none.
               You might ask how did you pick 300, why didn't you
     use some other number.  Well, our goal was to get a stable
     mean, not a stable 5th percentile or a stable 95th
     percentile, but a stable mean and we ran 100, 300 and 500
     and 300 was stable enough, 500 was sitting right on top of
     300.  So we stopped at 300.  It was totally economics as to
     why 300 and not 500.  They're one on top of each other.
               The difference between 100 and 300 is not that
     much, it's probably 20 or 30 percent, and we have those
     plots in the technical report as backup.
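The stopping rule described (run 100, 300, and 500 realizations and keep the smallest count whose mean has stabilized) can be sketched as below; the lognormal stand-in and the 10 percent tolerance are assumptions, not the project's actual criterion.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_dose(n_realizations):
    # Stand-in for one Monte Carlo run of the TSPA model: each
    # realization yields a peak dose drawn from a skewed distribution.
    return rng.lognormal(mean=0.0, sigma=1.0, size=n_realizations).mean()

# Sample means at increasing realization counts, as in the talk.
means = {n: mean_dose(n) for n in (100, 300, 500)}

# Illustrative stability check: if the 300- and 500-realization means
# agree closely, 500 "sits right on top of" 300 and we can stop at 300.
rel_change = abs(means[500] - means[300]) / means[300]
stable_at_300 = rel_change < 0.10  # the 10% tolerance is an assumption
```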
               Other points of information are probably best
     explained by looking at the next slide.
               MR. LEVENSON:  Bob, before you leave that one.  Is
     it correct that the 95th percentile line crosses the mean?
               MR. ANDREWS:  Yes.  And you'll see it also when we
     get to volcanic, in a way, that the mean can exceed the 95th
     percentile; i.e., the mean is driven by the top two or three
     percentile of the distribution, which is what you're seeing
     right there.
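The point that the mean can ride above a high percentile when the top few percent of realizations dominate is easy to see with a toy sample (the numbers here are invented for illustration):

```python
import numpy as np

# Toy dose sample: 97 realizations with essentially zero dose and 3
# with a very large dose, mimicking a distribution whose expected
# value is driven by its top two or three percent.
doses = np.array([0.0] * 97 + [1000.0] * 3)

mean = doses.mean()             # 30.0
p95 = np.percentile(doses, 95)  # 0.0 -- the 95th percentile is still zero

# So the mean exceeds the 95th percentile, exactly the crossing
# noticed on the slide.
```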
               The next slide talks to the principal nuclides
     that are controlling this dose rate, and all I've done is
     taken the expected value case -- this is now expected, I
     have to be careful here; this is the expected value of the
     output.
               So this is based on -- still based on 300
     realizations of the probable performance of the repository
     system and I'm just looking at one expected value from that,
     and then what are the nuclides that are controlling that
     expected value.
               And what you see here, quickly, is that up till 40
     or 50,000 years, we are dominated by the very high
     solubility, unretarded, diffusing radionuclides, like iodine
     and technetium, same thing we had in the VA.
               At early times, those things that are unretarded,
     those things with very high solubility which can diffuse
     through thin films or along thin film boundaries, are
     released first.
               After a while -- in this case, it ends up being
     about 50,000 years -- it's the lower solubility, retarded,
     but not completely retarded nuclides.  They diffuse a little
     bit, but they're more controlled by how much water advects
     through the system, how much water advects through the
     package and through the invert: neptunium and the colloidal
     plutonium.
               This is -- we have two colloidal plutonium species
     that are being tracked in the TSPA-SR and those are the ones
     that are coming out at longer times.
               I should point out that a large fraction of the
     inventory, of the total inventory is still retained either
     in the waste form or in the waste package or in the EBS,
     simply because of the very low solubility or very high
     retardation characteristics of those other nuclides.
               So what we see here is things coming out and
     trying to understand why they come out and what is the order
     in which they come out, but there's a lot of other things,
     and I did not bring that plot, but we show the plot in the
     technical report, that are retained and that are essentially
     delayed significantly before they come out and provide any
     kind of dose consequence.
               There's a little bit of understanding that can go
     into these plots, but that understanding really resides in
     the backup slides.  Part of it's clear: until you have the
     primary containment barrier, i.e., the package, degraded and
     you get water into the package, you have no release.  Even
     when you have the package degraded -- and, generally, there
     are backup slides to point this out, for your benefit --
     once the package degrades, it's generally degrading at the
     closure welds.
               It's generally degraded by stress corrosion
     cracking at those closure welds.  Those cracks are micron
     size.  So liquid water is not getting into those cracks. 
     There's an analysis model report to support that.  But humid
     air can get into those cracks, and -- with humid air
     condensing on the waste form itself, and given that other
     conservative assumption that we talked about earlier, about
     the innards of the package and how they're being treated --
     nuclides can diffuse through those cracks in the stress
     corrosion cracked welds of the earlier package failures.
               So what you see here for iodine and technetium is
     diffusion through very thin cracks, while the drip shield is
     intact, but the packages have degraded at the closure welds.
               At later times, the drip shields are degraded. 
     Liquid water for that fraction of the repository that has
     seepage, which is depending on the timeframe and the climate
     state and the percolation flux, et cetera, the fraction of
     packages that see seepage changes with time.
               But for that fraction that sees seepage and for
     that fraction where the drip shields have degraded and the
     packages have degraded, liquid water can get into the
     package and liquid water can take away nuclides from the
     package, through the package, through the invert, and back
     into the rock, and that's when you see neptunium and
     plutonium taking over.
               MR. GARRICK:  Have you somewhere cast these into a
     dose exceedance CCDF form?
               MR. ANDREWS:  No.
               MR. GARRICK:  That would be very -- that
     communicates very well.  I just wondered if you had
     considered that, especially for the disruptive events which
     have a recurrent cycle to them.
               MR. ANDREWS:  I appreciate that, John.  We
     struggled internally on communication of this and, quite
     frankly, in polling people, obviously, we didn't poll you,
     we concluded that CCDFs, although explanatory for some
     fraction of the audience, were obfuscating for a large
     fraction of the audience and, therefore, we -- including
     people who were on our peer review panel before.
               So reasonable people.  So there was a transparent
     --
               MR. GARRICK:  These are important curves, but from
     a risk perspective, the CCDF is the risk curve, whereas
     these are not.
               MR. ANDREWS:  Right.
               MR. GARRICK:  These are just probabilistic dose
     curves.
               MR. ANDREWS:  These are risk when you multiply
     them by .999, which is what these are.
               MR. GARRICK:  Yes.  But I would think that
     especially for the disruptive events, it would be a useful,
     a very useful presentation to be able to --
               MR. ANDREWS:  That's a good suggestion.
               MR. GARRICK:  -- answer the question in one
     diagram what the risk is.
               MR. ANDREWS:  That's a good suggestion.  We had a
     little dialogue with some other review groups on how to
     present the disruptive event work, and that would be, I
     think, a good suggestion.  We have not implemented that yet.
               MR. GARRICK:  Okay.
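For readers unfamiliar with the term, the CCDF Dr. Garrick is asking about is the exceedance curve P(dose > d) built from the sampled results. A minimal sketch, with an invented lognormal sample standing in for TSPA realizations:

```python
import random

def ccdf(samples, thresholds):
    """Empirical complementary CDF: fraction of samples exceeding each threshold."""
    n = len(samples)
    return [sum(1 for x in samples if x > d) / n for d in thresholds]

random.seed(0)
# Invented stand-in for probability-weighted dose results (mrem/yr).
doses = [random.lognormvariate(0.0, 2.0) for _ in range(1000)]

thresholds = [0.01, 0.1, 1.0, 10.0]
exceedance = ccdf(doses, thresholds)
for d, p in zip(thresholds, exceedance):
    print(f"P(dose > {d}) = {p:.3f}")
```

The resulting curve answers "what is the probability of exceeding a given dose" in one diagram, which is the sense in which the CCDF is the risk curve.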
               MR. ANDREWS:  Let me go on to slide 26, the one
     looking at longer-term performance.  This particular slide
     and the one that follows, which talks about the key nuclides
     associated with this slide, is the case of extrapolating
     those models used in the 10,000, 100,000 year performance
     results, extrapolating those models on out to a million
     years.
               We understand, as we read proposed 197, that that
     maybe isn't exactly what they intended, but this shows what
     the impacts of doing that would be.
               We stopped at a million because of 197 saying that
     was the time period of geologic stability, which is also the
     time period the NAS thought geologically stable.  I
     think it reasonably captures the peak.  You can see the
     peaks are coming down as you go out past 200,000 or 300,000
     years, coming down primarily because things are slowly but
     surely decaying.
               Some of these nuclides have very long half-lives,
     so they're still in the system or they're being absorbed or
     they've come out of the system.
               One thing -- I think that's probably enough to say
     on that particular curve and the following curve.
               Let's go to the disruptive events, the igneous
     activity scenario class.  This one.  This is the one that I
     think Abe said things were in checking or in review.  This
     is the curve that's a little bit different from the curve
     that we presented in the beginning of August to the Nuclear
     Waste Technical Review Board, and I'll walk through that
     difference here in a second.
               But before I do that, let me talk to the form and
     structure of this particular set of curves.  At the
     left-hand side, you see some nice smooth responses.  The
     curves all more or less look the same.  You just have a
     distribution around a mean or around the median for those
     curves.
               Those curves are all the extrusive igneous event,
     whereby the pathway is -- the event goes through the
     repository, intersects the package, takes a fraction of the
     waste up, entrains it, and then the wind blows south and the
     waste is deposited over the landscape along with the ash and
     the primary pathway is an inhalation type pathway associated
     with that release.
               So there is a wide uncertainty on those, being
     driven by the number of packages that are hit, by the
     probability of occurrence.  These particular curves are
     still sampling the probability of occurrence over that
     distribution from the expert elicitation, uncertainty in the
     wind speed, et cetera.  So there is a distribution around
     that.
               Another thing I should point out on this is these
     are probability weighted dose rates.  These are not
     deterministic doses, given the event occurs, what's the
     dose.  The analyses, for comparing apples to apples and to
     make them comparable to the other slides, have already
     factored that probability into the dose rate.
               So this really is a risk.  So that 95th percentile
     is 95th percentile not of the total distribution, I think
     this is getting to your point where I think, John, it's a
     really good suggestion, this is 95th percentile given the
     event.  So it's conditional.
               And I think if you showed the complete, either as
     a PDF or probably better, as you point out, as a CCDF, the
     full distribution from probability one to probability
     ten-to-the-minus-eight of dose consequences, you probably
     would have a clear picture of the overall system, whereas by
     probability weighting it, which is the way Part 63 asked us
     to do it, but by probability weighting it, you've lost that
     consequence times probability factor, because it's already in
     there.
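Mr. Andrews' point about probability weighting can be made concrete: the plotted quantity is the conditional dose multiplied by the event probability. The numbers below are assumptions for illustration only, not the elicited values:

```python
# Sketch of probability-weighting a conditional dose, as Part 63 directs.
# Both inputs are assumed, illustrative values.
event_prob_per_year = 1.0e-8    # annual igneous event probability (assumed)
window_years = 10_000           # regulatory compliance period
# Probability of at least one event occurring in the window.
prob_event = 1.0 - (1.0 - event_prob_per_year) ** window_years

conditional_dose = 100.0        # mrem/yr given the event occurs (assumed)
weighted_dose = prob_event * conditional_dose
print(prob_event, weighted_dose)
```

This is why a percentile read off such a curve is conditional in character: the probability factor is already folded in, and the consequence-versus-probability structure a CCDF would display has been collapsed into a single weighted number.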
               The third thing to point out, which is the reason
     the slide changed from what we did before, is the right-hand
     portion of the slide.
               What I have shown here is actually 500
     realizations, individual realizations, but there's actually
     5,000 realizations that are behind this particular plot. 
     You say, well, my gosh, why did you go to 5,000 in this
     case?
               Well, the reason is in order to get a stable mean
     on the dose consequences associated with the indirect
     intrusive event, so the volcano goes -- well, it's not a
     volcano, now it's a dike, the dike goes, intrudes the
     repository, degrades the engineered barriers, and then the
     natural system takes over.
               The timing of when that event occurs, of course,
     is very uncertain.  So it's being stochastically sampled in
     here.  But the consequences are a function of the
     uncertainty in all the other aspects of the system --
     uncertainty in the unsaturated zone, uncertainty in the
     saturated zone, uncertainty in colloids, uncertainty in
     seepage -- all the other uncertainties take over to give a
     distribution of dose responses for the indirect intrusive
     volcanic event.
               And simply put, in order to get a stable mean, we
     had to go to 5,000 realizations over a 50,000 year time
     period.  That's why the plot stops at 50,000 years.  In
     order to get a more reasonable stable mean, that's the red
     line there, for the risks associated with the indirect
     intrusive volcanic event.
               So this curve is slightly different from the
     presentation that we gave to the TRB back in the beginning
     of August.  We had, I don't know, 500 realizations, not
     5,000 realizations, for this particular plot.  So the plot
     is a little different.
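The need for 5,000 realizations is the usual Monte Carlo story: the sampling error of a mean shrinks roughly as 1/sqrt(N), and heavy-tailed dose distributions make small-N means jumpy. A toy sketch, where the lognormal surrogate is an assumption standing in for the TSPA dose response:

```python
import random
import statistics

random.seed(42)

def mean_of_n(n):
    """Mean dose from n realizations of a heavy-tailed (lognormal) surrogate."""
    return statistics.fmean(random.lognormvariate(0.0, 2.0) for _ in range(n))

# Repeat the whole experiment 50 times at each sample size and compare
# how much the estimated mean wobbles from run to run.
means_500 = [mean_of_n(500) for _ in range(50)]
means_5000 = [mean_of_n(5000) for _ in range(50)]
spread_500 = statistics.stdev(means_500)
spread_5000 = statistics.stdev(means_5000)
print(spread_500, spread_5000)
```

The 5,000-realization means cluster far more tightly run to run, which is the "stable mean" Mr. Andrews describes.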
               Let's see.  The next plot essentially just
     combines the two expected cases.  So looking at the means of
     the distribution of the nominal performance and the means of
     the distribution of the igneous scenario classes, we
     essentially just add them.
               And these are now correctly weighted by the
     probability.  The probabilities, when I combined them, sum
     to one.  The probability of the nominal one is .999.
               And you see for this case the predominance in the
     first 10,000 years is driven by the low probability, but
     high consequence igneous event scenario classes, something I
     think the NRC has pointed out or alluded to in their review
     of the VA and I think in earlier documentation that they
     have produced.
               MR. WYMER:  Before you leave these curves, which
     are very interesting, it occurs to me that with the
     exception of the igneous activity curves, the other curves
     all start at post-10,000 years, and that's because, I
     presume, C-22 is considered to last that long before you get
     any significant failures.
               Ten thousand years is a long time and the database
     for the corrosion of C-22 is not too large and you tend to
     believe extrapolations like that when you have a good
     understanding of the basic processes involved as opposed to
     just measurements.  You understand mechanisms.
               How comfortable are you that you can believe that
     it will not corrode for 10,000 years?  How much do you know
     about the fundamental corrosion processes of C-22?
               MR. ANDREWS:  Well, I'm probably not the best
     person -- I'm not a C-22 expert.
               MR. WYMER:  This is fundamentally your whole
     analysis.
               MR. ANDREWS:  Well, in the first 10,000 years, for
     the nominal scenario class, yes, the degradation
     characteristics of that waste package and the welds and the
     stresses at the welds are what dominate the performance. 
     The analysis and model reports and the data and their
     justification incorporating the uncertainty, and there is
     uncertainty in the degradation characteristics, there is
     uncertainty in the stress state at the welds, there is
     uncertainty in other mechanisms, other process mechanisms
     that can lead to degradation of those engineered materials,
     all of those are in the analysis model reports, plus a
     description of their uncertainties and their basis.
               In order to evaluate kind of the what if sort of
     scenario that you're alluding to, we've done several
     things.  One, that I'm going to get to here in a
     second, is that we looked at this barrier, as a barrier, and
     looked at all of the component parts of that barrier that,
     first off, are included.
               So we took the things that we have included and
     their uncertainty and pushed them -- all of the key ones --
     to their 95th percentile in order to see what happens, if
     you will.  That's one thing that we've done.
               The other thing we've done, in support of the
     repository safety strategy and looking at this potential
     vulnerability, is we've looked at
     juvenile failures, quote-unquote, juvenile failure of the
     package, and then done the same analyses off of those
     juvenile failure packages that we've done for the nominal
     packages.
               MR. WYMER:  Where you assume a limited number of
     failures.
               MR. ANDREWS:  Right, assume a number have failed. 
     They fail with a patch kind of opening under a drip shield
     and then we've sometimes failed the drip shield and the
     package at the same time, just to gain an understanding for
     how the system performs in the absence of that barrier, if
     you will.
               The other thing, of course, that we've done is --
               MR. WYMER:  What results do you get when you do
     that?
               MR. ANDREWS:  Those are -- for a singular -- I
     don't have them off the top of my head.  They're in the
     document that's undergoing review right now.  My
     recollection is for a single package, it was, at 10,000
     years, it was on the order of ten-to-the-minus-two or
     ten-to-the-minus-three millirems for a package.
               Abe, do you remember what the numbers were?
               MR. VAN LUIK:  I don't, but it sounds about right.
               MR. ANDREWS:  It's in that ballpark, anyway. 
               MR. WYMER:  Okay.  Thanks.
               MR. GARRICK:  While we're on that curve, which is
     a very interesting one, given that the igneous event is now
     controlling the risk through the time of compliance, does
     that not bring up the whole issue of the design of the
     repository in terms of how much you should invest in trying
     to design a 12,000 year waste package?  In other words,
     aren't you in a situation here where a much more
     conventional waste package wouldn't change the risk?
               MR. ANDREWS:  Well, if I had a much more
     conventional waste package, I'm not sure that line on the
     right would start appearing after 10,000 years.
               MR. GARRICK:  No, it wouldn't.
               MR. ANDREWS:  It would be significantly before.
               MR. GARRICK:  That's the point.  It doesn't
     matter.
               MR. ANDREWS:  For this.
               MR. GARRICK:  If you moved everything -- yes.
               MR. ANDREWS:  If I moved it significantly to the
     left.
               MR. GARRICK:  It doesn't matter from a risk
     perspective.  You could get by with a lot less extravagant
     repository design.  Given that the risk is being driven by
     something that you can't design out.
               MR. ANDREWS:  It's possible.
               MR. GARRICK:  Just something to think about.  It
     seems to me, as far as my paying for this, from a risk
     standpoint, I'm paying a heck of a lot of money, perhaps,
     if the calculations haven't been done, to achieve a
     level of performance that, from a risk perspective, is kind
     of irrelevant.
               MR. ANDREWS:  I agree and that's very possible,
     and I think Abe alluded to some design related sensitivity
     analyses that are to be done.  Right now, we have close to a
     point design.  It's a flexible design, but it's close to a
     point design.
               We are, this fall, probably going to do some
     limited simplifications of that design, just to see what if,
     I think to address questions like that.
               Having done that, that's one step of performance
     assessment, generating the curve and a series of curves. 
     Another, and, in fact, as important, maybe even more
     important aspect of it is to evaluate why those curves are
     the way they are.
               Do the sensitivity analyses, do the statistical
     analyses, do the barrier importance analyses, to gain an
     understanding of what's going on and why it's going on.
               So there's been a wide suite of these done in
     support of the SRCR, first, and documented in the technical
     report and documented in the repository safety strategy.
               These start first off with basic statistical
     evaluations, what drove the variance, what drove the mean,
     what drove the top ten percentile, et cetera.  There's a
     wide variety of statistical techniques available and we've
     used the ones that are most appropriate for our intended
     purpose.
               We've done then a large number of sensitivity
     analyses, taking an individual factor, an individual
     parameter and doing 95th-5th percentile sensitivity analyses
     on that factor or parameter, trying to gain an
     understanding, by factor, what is its contribution to the
     system performance, always running out to the 100,000 year
     time period.
               Why 100,000 years?  Well, 100,000 years was to go
     sufficiently beyond 10,000 years to the point where the
     engineered barriers were degrading, go sufficiently beyond
     10,000 years to assure ourselves and the reviewers that
     there was no significant degradation of any aspect of the
     system beyond the 10,000 year regulatory time period.
               So all the sensitivity analyses and barrier
     importance analyses are also done out to 100,000 years, with
     the exception of that 50,000 year volcanic event one.
               The final set are what we've termed barrier
     importance analyses and those are going barrier by barrier
     and looking at, in two different ways, looking at either 5th
     or 95th percentiles of the key aspects of that barrier, 5th
     being on the good side of performance, 95th being on the bad
     side of performance.
               If that particular parameter, a low value, meant
     bad performance, then we flipped it.  So it's looking at
     good or bad within the distributions that we have.
               Finally, the last bullet there is in some cases,
     for particular barriers, they were neutralized; i.e., the
     function of that barrier was removed from the analysis to
     evaluate the contribution of that barrier and all the other
     barriers to the system performance.
               Those were mostly used in support of the
     defense-in-depth and multiple barrier determinations that
     are embodied in the repository safety strategy.
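The 95th/5th percentile pushes described here are one-at-a-time sensitivity runs: pin one uncertain input at a "bad" or "good" percentile while the other inputs keep their full distributions. A minimal sketch with an invented, purely illustrative dose model (none of the distributions or the functional form come from the project):

```python
import random
import statistics

random.seed(1)
N = 1000

def dose_model(stress, corrosion, flux):
    """Invented multiplicative stand-in for the TSPA response."""
    return stress * corrosion * (1.0 + flux)

def percentile(samples, q):
    s = sorted(samples)
    return s[int(q / 100 * (len(s) - 1))]

# Assumed input distributions.
stress = [random.uniform(0.5, 2.0) for _ in range(N)]
corrosion = [random.lognormvariate(-1.0, 0.5) for _ in range(N)]
flux = [random.uniform(0.0, 1.0) for _ in range(N)]

base = statistics.fmean(dose_model(s, c, f)
                        for s, c, f in zip(stress, corrosion, flux))

# Pin the weld stress factor at its 95th (bad) and 5th (good) percentiles
# while corrosion and flux keep their full sampled distributions.
hi = statistics.fmean(dose_model(percentile(stress, 95), c, f)
                      for c, f in zip(corrosion, flux))
lo = statistics.fmean(dose_model(percentile(stress, 5), c, f)
                      for c, f in zip(corrosion, flux))
print(lo, base, hi)
```

Neutralizing a barrier, the last bullet mentioned, corresponds to removing its term from the model entirely rather than pinning it at a percentile.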
               The next three -- I'm going to go a little faster
     here -- slides just talk through using the same orientation
     I did in the previous table, which is the same table that's
     in the back of the document, go through what's the barrier,
     what's the barrier function, and what was the importance
     analysis done on that barrier, where that performance
     analysis here was that 95th-5th percentile aspect.
     So you can read what aspects of the system were pushed to
     their limits in that particular table.
               The next slide, which is a results slide, walks
     through some regression analyses, and these are supported in
     a lot of different ways.  I'm just showing you the
     regression analyses of the nominal performance scenario to
     what drove the variance.
               So I'm trying to understand what drove that
     six to eight orders of magnitude of variance of system response
     for all of the uncertainty inputs that were put in there.
               I think Abe already alluded to this one, but the
     top four relate to the package.  They relate to the welds of
     the package and they relate to essentially stress corrosion
     cracking at the welds of the package.
               So that is a point of continued investigation,
     continued discussion.  I believe this issue came up,
     although I was not able to attend, at the container life and
     source term KTI meeting, about the importance of the stress
     state at the welds and degradation mechanisms at the welds,
     which includes the defects at the welds.
               The other one is the saturated zone flux.
               Having done that, it points to a few parameters of
     the several hundred that are in the total system model to do
     a barrier importance analysis on.
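The "what drove the variance" regression can be illustrated with squared correlation coefficients on a toy linear model. The inputs and weights below are assumptions, chosen so the weld stress term dominates, mimicking only the qualitative shape of the SR finding:

```python
import random
import statistics

random.seed(7)
N = 2000

# Assumed uncertain inputs; the weld stress factor is given the largest
# spread so it dominates the output variance in this toy model.
weld_stress = [random.gauss(0.0, 3.0) for _ in range(N)]
sz_flux = [random.gauss(0.0, 1.0) for _ in range(N)]
noise = [random.gauss(0.0, 0.5) for _ in range(N)]

log_dose = [w + f + e for w, f, e in zip(weld_stress, sz_flux, noise)]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# For a linear model with independent inputs, the squared correlation
# approximates each input's share of the output variance.
share_weld = corr(weld_stress, log_dose) ** 2
share_flux = corr(sz_flux, log_dose) ** 2
print(share_weld, share_flux)
```

Ranking these variance shares is what singles out a handful of parameters, of the several hundred in the total system model, for barrier importance analysis.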
               The next slide, the results slide, John, if you'd
     just flip to that one -- you know, I took 95th percentiles
     and 5th percentiles of those stress factors, those
     corrosion rates, the defect distribution, the defect
     orientation, the defect size, all of which are parameters
     in the TSPA model, and pushed them to their 95th
     percentile, and then ran 100 realizations -- we only did
     100 when we were doing barrier importance analyses -- and
     redid a mean or expected value off of those 100
     realizations.
               And you can see that if we sample some of those
     parameters at their close to extreme values, obviously not
     the 99th percentile, but at their 95th percentile, we
     generate early failures and when we generate early failures,
     we get early releases through those stress cracks at the
     welds.
               These are all driven by the weld and degradation
     at the weld and those are early and prior to 10,000 years.
               With that, I have a few slides to just summarize,
     but they more or less repeat what Abe had, first talking
     about the major technical improvements and the major process
     improvements and then, finally, summarize with where we are.
               As I've said, the results are done, but in
     checking and review, the document has not been delivered to
     the Department of Energy yet for their acceptance.  They
     will, of course, do an acceptance review and we'll make
     whatever changes in the document, in the analyses, in the
     report as required, as they did with all the process model
     reports.
               So with that, I'll entertain any questions you
     might have.
               MR. GARRICK:  Good.  Thank you.  Thank you very
     much.  That's a very interesting presentation.
               Committee, any comments?  Milt?    
               MR. LEVENSON:  I had one comment, or a couple, on
     the same issue.  You discussed the degradation of the waste
     container and it was very clear and I think probably is what
     happens.  You get a crack in a weld and et cetera.
               But unfortunately, in one of the pieces of paper
     we got a month or two ago, it said that the assumption was
     made that when the container lost the ability to be helium
     leak-tight, the assumption was made that 50 percent of the
     container disappeared.
               Now, that's quite in contrast to what you've said
     and I can't give you the specific reference, but I know the
     other committee members have read the same thing.  So I was
     a little -- what you're telling us is what's really in the
     TSPA.
               MR. ANDREWS:  I don't know this helium leak test
     issue.
               MR. LEVENSON:  Well, the problem is when -- that
     they assumed when failure occurred, when it cracked through,
     the 50 percent of the container disappeared, so the fuel was
     immediately immersed.    
               MR. ANDREWS:  No.  I do believe -- well, I don't
     know where that -- if it was a criticality -- that sounds
     like an extreme assumption for criticality analysis
     purposes, but I have no idea if that's the case.
               MR. LEVENSON:  It was supposed to be release.  The
     other question I have, which is somewhat related, is that
     even at 10,000 or 15,000 years or whatever time we want, the
     fuel is really the only heat source in the mountain and,
     therefore, if humid air gets through a crack, why would you
     expect condensation on what is the warmest thing in the
     mountain?
               MR. ANDREWS:  That was one of those -- I think I
     alluded to it, maybe not as directly as your question is
     pointing out.  The innards of the package and the hydrology,
     other than the temperature, the hydrology and chemistry of
     those innards of the package, that was simplified and
     conservatively so for this particular iteration of the PA.
               There have been analyses, supporting analyses, of
     the amount of heat generated by the waste form, in
     particular the commercial spent nuclear fuel waste forms.
               This is not so much an issue with the glass or the
     DOE spent nuclear fuel, which are much, much, much cooler. 
     The heat output is much smaller in those packages.
               But for the commercial fuel, your point is well
     taken and there are some analyses that talk about more or
     less an 8,000 to 15,000 year time period; the time period is
     a function of burn-up and age-out of reactor and storage and
     things like that.
               One would expect the system that you just
     described to occur, that the water, humid air, would never
     condense on the waste form itself during that kind of time
     period because of the heat that's being generated.
               After that time period, the temperatures are so
     evenly distributed across all aspects of the waste form, the
     package, the innards, the basket materials, et cetera, that
     taking any more credit beyond that time period would not be
     very feasible.
               MR. LEVENSON:  Because there is no time period
     that you can go to when the fuel is colder than the
     mountain.
               MR. ANDREWS:  That's true, that's true.
               MR. GARRICK:  Ray?
               MR. WYMER:  No, I've said my piece.
               MR. HORNBERGER:  Bob, Abe was -- we got into that
     discussion about the qualified -- I think you called it
     qualified uncertainties, the issues that you were just
     discussing.
               How will you work that in after you do these
     analyses?  Do you have any idea how you'll work that into
     the presentation?  I mean, the idea would be, on some of the
     issues, the kind that Milt was just referring to.  These are
     one-sided.  So they're not symmetric.
               MR. ANDREWS:  There's a large number of them that
     are not symmetric.  Given that we've -- in fact, most of
     them are probably not symmetric.  We made -- and not just PA
     doing the final crank turning, but we, the project, in these
     various areas where those assumptions were made, made that
     conscious decision to be on the conservative side of a
     distribution.
               Doing -- so that means we're on the -- whichever
     side is conservative, I don't know if that's right or left,
     but we're on the right side of the distribution and
     everything is to the left side of that distribution, in
     which case putting in the whole distribution, if it's a
     distribution, or alternatives, A/B kind of alternatives,
     means we're pushing it to the left, pushing it to lower
     doses, if you will.
               And they'll probably be treated as sensitivity
     analyses off of what we say here is a conservative
     representation of how we think this system will evolve.
               So there will probably be, in terms of the
     documentation, and we haven't discussed this with the
     department and they might have different ideas, and so Abe
     probably should say something, but we probably would put it
     into this alternative model representation.
               The other alternative, of course, is -- maybe I
     shouldn't go here, but we have EPA and NRC have slightly
     different definitions that go after the word reasonable.
               One talks about reasonable expectation, one talks
     about reasonable assurance.  It may very well be that some
     of these issues that we're talking about fall into the
     reasonable expectation kind of case, rather than a
     reasonable assurance case where you really want very high
     defensibility of your models and assumptions before you go
     forward with a license.
               And that kind of discussion is going on right now
     within the department, but whether it's a sensitivity
     analysis, whether you use it in this reasonable expectation,
     whether they are used for peak dose kind of considerations.
               You've been conservative during the time of
     regulatory concern and let's say appropriately conservative
     during the time period of regulatory concern, but after the
     time period of regulatory concern, when you're looking at
     geologic stability timeframes, perhaps you should put this
     reasonable expectation argument in.
               It's for the final environmental impact statement,
     which, of course, goes along with the license application,
     but it's serving a different purpose.  Those peak doses are
     serving a different purpose for a different audience.
               Maybe that's an idea.  I don't know.  Abe, do you
     want to -- it's kind of a policy question.
               MR. VAN LUIK:  It's a policy question, but I think
     your answer was correct.  We were discussing exactly that
     issue.
               MR. GARRICK:  Abe, can you go to the mic and
     repeat that?
               MR. VAN LUIK:  Your discussion is essentially, in
     a nutshell, what we are trying to determine within the
     department: what should be our approach between the case
     that will actually evolve into the licensing case and the
     larger case for the peak dose, which is to satisfy NEPA and
     satisfy 197's reasonable expectation.
               And they go out of their way to say that what
     we're interested in is what you expect, the expected value
     case, that they're not interested in details of our
     distributions for that particular purpose.
               So that's some of the internal discussions that
     we're having and we'll make a decision on how that goes
     forward before the time of SR.
               But what you will see in the SRCR is basically
     what Bob has shown you.
               MR. HORNBERGER:  Second question, Bob.  Can you
     summarize for me how information, let's say, derived from
     analog studies makes its way into the TSPA analysis or is
     that considered separate from the TSPA?
               MR. ANDREWS:  There's a couple of ways.  That's a
     good question.  A lot of it, and most of the time, it's
     confirmatory type information to lend support to the process
     models that are the underpinning analysis model reports.
               Most of the time, it's that, to help defend, if
     you will, the quote-unquote, validity of those process
     models.
               So in that sense, it's not a direct parameter or
     direct data set that's fed into PA.
               There are examples where those analogs, in
     addition to providing support, actually are used to support
     some of the data.  It's non-site data, not DOE data, but
     literature, available information that's relevant and
     applicable to be applied.
               So it's incorporated within the process model
     itself as a -- if you will, down at the process model, a
     direct feed.  By the time we see it in PA, by the time it's
     gone through an abstraction and incorporation into the TSPA,
     you probably have lost a little bit of that direct
     one-to-one traceability from that data set, that analog data
     set into the process model and through the process model
     into the TSPA, but it's there.
               So it's confirmatory for most of the cases, kind
     of add confidence, confidence-builder.
               MR. GARRICK:  The committee has a lot of questions
     that they would like to explore one day on the TSPA and I
     think we're going to have to do that at a later time and
     when we get down to maybe a lower level of the analysis.
               I think this has been an excellent overview. 
     There's a great deal of interest here in how some of these
     algorithms are actually applied and how some of the analysis
     is done, and we'll look forward to covering those in other
     meetings as we get more involved in the PMRs and the AMRs
     and the input documents into the TSPA.
               So we'll look -- and I have a lot of questions,
     but we'll look to another time to do that.
               I've had requests from two people.  Andy, you have
     a question?  Go ahead, I'm sorry.
               MR. CAMPBELL:  Just a quick one, Bob.  Do you
     feel, does the program feel that in the transition from
     TSPA-VA to TSPA-SR, the modeling and the approaches and what
     gets incorporated into the models in terms of parameters has
     become more conservative or do you think it's become a
     little more realistic?
               MR. ANDREWS:  I think, in toto, it's become more
     realistic.  There are a few examples where, if a
     conservative assumption was made in the SR, it was probably
     also being made in the VA.
               MR. CAMPBELL:  The reason I ask is it looks like
     the dose curves have increased.  At any one time, you get
     higher doses in the TSPA-SR by, in some cases, almost a
     factor of ten, if you look at the mean or the 95th
     percentile.
               If you're becoming less conservative, shouldn't
     the doses be going down?
               MR. ANDREWS:  No, more realistic.
               MR. CAMPBELL:  If you're becoming more realistic.
               MR. ANDREWS:  I think the one aspect of the system
     that's driving -- or a significant difference that drives
     that is the solubility distribution, in particular, for
     neptunium, because neptunium is what's driving the 100,000
     year and, in fact, the peak dose.
               In the VA, we tried to broaden that uncertainty to
     incorporate both the waste form degradation characteristics
     and the data from Argonne and PNL and Livermore, but mostly
     from Argonne, as well as the direct observations of
     solubility from Los Alamos and Berkeley.
               So we had all the labs doing varying parts of
     neptunium solubility work, which made it a fairly broad
     distribution, had some lower values, had a few higher, but
     mostly, in toto, was lower than what we have in the SR.
               There were comments on the VA saying make the
     solubility more chemistry dependent and we made the
     solubility more chemistry dependent.
               And in those analyses and model reports that
     relate to the chemistry dependence of neptunium solubility,
     the currently available data and fundamental thermodynamic
     data pointed to a very strong pH dependency, which is
     incorporated in the model, which meant, by doing so, the
     solubilities ultimately were increased from what we had in
     the VA.
               So I think it's reasonable, defensible, but higher
     value.
               MR. LEVENSON:  As a follow-up question, I
     understand the technetium solubility is from real data. 
     What about technetium retention as it moves through the
     ground?  There is an assumption that is not related to, I
     think, experimental data.
               MR. ANDREWS:  Now, for both neptunium and
     technetium, there are retardation data from Los Alamos,
     with site-specific kinds of materials.
     The technetium retention in the alluvium is being
     considered in the TSPA-SR, as is neptunium retention
     through the whole system, based on the best available data
     on those retention characteristics.
     So if they're retarded and there are data that support it,
     they are in there.
               MR. GARRICK:  Okay.  Speaking of intrusion, we're
     getting into our lunch period.  But as I was about to say,
     we've had two people ask for an opportunity to make a few
     comments and I would like to get those in and I would like
     to limit the comments to three to five minutes.
               Is Mr. Harney here?  He's not here.  All right. 
     Then Judy.  Judy Treichel, of the Nevada Nuclear Waste Task
     Force, would like to make a comment.
               MS. TREICHEL:  Judy Treichel, Nevada Nuclear Waste
     Task Force.
               You said that you were coming here so that you
     could hear from us and I want to find out how that works,
     because you are hearing.
               I've got questions on the pyramid that I guess you
     have in color and the rest of us got in black and white,
     that shows the -- where we're at and where we're going and
     all of that sort of thing.
               I want to know, because the SRCR is supposed to be
     a featured event in our lives very soon, how big is that? 
     Are we talking telephone book, something like this, are we
     talking site characterization plan?  How tall is that thing
     expected to be?  It's two volumes.
               MR. SCOTT:  I believe it's about 1,300 to 1,400
     pages.
               MR. GARRICK:  You're going to have to use a
     microphone and announce who you are.
               MR. SCOTT:  Mike Scott, M&O.  I believe Tim
     Sullivan, with DOE, said this morning it would be about
     1,400 pages.
               MS. TREICHEL:  Okay.  So probably about like the
     draft EIS.  Then am I correct that when the site
     characterization -- when the site recommendation report
     itself comes out, you just slap two additional volumes on
     there?  Those first two go as they are and you put two more,
     or are you going to rewrite the first two?
               Here's one and two, and then three and four come
     in.  Does all of this in this hunk stay the same or are you
     making changes before you go to the Secretary and the
     President, after we've had our chance to comment?
               MR. VAN LUIK:  The idea is that we would actually
     listen to the comments, consider them, and make changes in
     the documents before they go to the Secretary.
               MS. TREICHEL:  So it's up to us to get this site
     recommendation all ready to go, because you were asked if it
     was going to be reviewed and it appeared that was kind of
     iffy and you weren't too sure about that.
               MR. VAN LUIK:  Ninety days.
               MS. TREICHEL:  For us.
               MR. VAN LUIK:  Uh-huh.
               MS. TREICHEL:  And that's interesting, too,
     because during those 90 days, which, of course, we're pretty
     used to, you have all of the major holidays and you still
     got -- and we -- and in addition, we have no EPA rule, we
     have no NRC rule, we have no DOE guideline.
               So I'm not sure why, if we were doing a bona fide
     job of reviewing this thing, we wouldn't review it against
     the existing rules, that we wouldn't just take 60, we
     wouldn't take 960 and see if it passes muster, because
     that's what's there.  We won't have anything new.
               At the same time, while we're doing this exercise,
     we're going to have eight technical exchanges that are going
     on that are trying to bring about sort of resolution or
     clarification or whatever you want to call it, changing the
     status of your KTIs and the questions that people have.
     So those are going on and we do those all at the same
     time, in addition to the myriad of other meetings that
     happen in which everything changes.
               There was information you got today from DOE
     that's a little bit different than what we've seen before
     and the design is always different in one degree or another. 
     It was a little different today because of a difference in
     the stainless steel part of the package.  And we know that
     that's not going to get frozen.
               So I guess what I'm trying to figure out is
     whether or not I'm spinning my wheels.  The public doesn't
     get paid.  So we have to kind of save on what we spend, and
     that includes our energy.  And I just don't want to wind up
     out there with something that means nothing, and maybe
     there's other more important things going on.
               So thank you for listening.
               MR. GARRICK:  Thank you.  Thank you very much. 
     All right.  Unless there's comments from anybody, I think we
     will adjourn for lunch.  Try to be back here as close to
     1:00 as possible.
               [Whereupon, at 12:05 p.m., the meeting was
     recessed, to reconvene this same day at 1:00 p.m.]
                          AFTERNOON SESSION
                                                      [1:05 p.m.]
               MR. GARRICK:  We'd like our meeting to come to
     order.  For the benefit of those on the panel or on the
     committee that are chemically inclined, this is their time
     to shine.
               We're going to, this afternoon, spend a good deal
     of time on chemical related issues and to actually lead the
     discussion and questioning on this will be Dr. Wymer.
               Our first speaker is going to be Dr. William
     Boyle, and I think, unless there's anybody that wants to
     make any preliminary comments, we'll get right into it.
               MR. BOYLE:  Good afternoon and thank you for this
     opportunity.  You can see the names on the sheet here.  I'll
     be very brief and just mainly provide an introduction as to
     why DOE wanted to make these other measurements of
     chlorine-36.
               I will be followed by Dr. Mark Peters, of Los
     Alamos, who will provide the technical details, and the
     actual work is being done by the principal investigators who
     are listed at the bottom, most of whom are here, Mark Caffee
     from Livermore is here, June Fabryka-Martin from Los Alamos
     is here, Mel Gascogne from AECL is not, and Zell Peterman of
     the USGS is here, and Robert Roback of Los Alamos is here.
               So if there's questions on the details, the
     investigators are here.
               For those of you who were present in Pahrump in
     May for the NWTRB meeting, I've got the same sheets, so it
     will be repetitive for you.
               I assume most of the audience knows why the
     project has measured chlorine-36, but just in case, I will
     give a non-expert synopsis.
               Chlorine-36 is one of many naturally occurring
     radioisotopes used for age dating.  Its abundance was
     changed by nuclear weapons testing in the South Pacific in
     the 1950s, creating a bomb pulse.
               Measurements of chlorine-36 at Yucca Mountain have
     been interpreted to have this bomb pulse.  These bomb pulse
     data, at depth, are then used as evidence that there are
     fast flow paths in the unsaturated zone at Yucca Mountain.
               That's the synopsis, and now I'll go on to why we
     did the validation measurements.
               The project's original measurements for
     chlorine-36 were done by Los Alamos National Lab.  As you
     can see, there are other organizations involved now in the
     validation measurements.
               And why were these validation measurements made? 
     Well, about two years ago or even a little bit longer, a
     series of reports were written by the United States
     Geological Survey that seemed to describe a comprehensive
     history over geologic time about the unsaturated zone at
     Yucca Mountain.
               This history was based upon the integration of
     many independent data sets.  Not surprisingly, not every
     data set that was used to develop the integrated history
     flanged up perfectly.
               One of the data sets that did not flange up as
     well as some of the others is the chlorine-36 results from
     Los Alamos National Lab.
               In discussions about why there might be this
     difference between the chlorine-36 data set and the USGS
     history for Yucca Mountain, it was decided initially to
     follow a standard scientific practice and have an
     independent group make measurements of the chlorine-36, and
     those independent measurements are those by Mark Caffee and
     the others listed on the sheet, not from Los Alamos.
               So we went ahead and made the measurements and now
     you will get to hear the results of those.  At this point, I
     will turn it over to Mark Peters.
               MR. PETERS:  Thanks for having me.  I do work for
     Los Alamos, but I'm really up here as a representative for
     the team that Bill mentioned is out there in the audience. 
     So I'll go through a lot of the technical details, but I'm
     hoping to get through it, leave a lot of time for questions,
     and then the PIs in the audience I've already told to feel
     free to step up and help me answer some of the details,
     clarify, however you all want to work that.  That's up to
     you.
               Bill already mentioned the participants.  They're
     all sitting in the audience today, which is real nice.  What
     I want to do today is walk you through, first, a lot of you
     are familiar with the chlorine-36 studies that we've done
     over the past three to four years.  I want to bring you back
     up to speed on what that data set looks like, then go into
     some of the results from the validation study, where we're
     doing analyses of samples from some of the locations in the
     ESF, where we thought we saw bomb pulse, and then bring in
     some cross drift results and then wrap up with the path
     forward, because as I'm going to show you, there are some
     differences in the analyses for chlorine-36 chloride ratios
     for some of the samples in the ESF set of samples from the
     Sundance fault and we have a path forward that we're
     following to try to address those differences.
               As I'm going through, I want to clarify, so we
     don't get lost in semantics.  I'm going to be talking a lot
     about sampling, processing, and analysis.
               When I mean sampling, I mean physically taking the
     sample from the rock and then processing, I mean what we
     have to go through in the laboratory to process the sample,
     fairly labor intensive, and then the analysis is done by
     accelerator; in the case of Livermore, at Livermore; in the
     case of Los Alamos, it's sent to Purdue University.
               So, first -- and I should also say that there's a
     lot of organization-specific references.  You'll see a lot
     of Los Alamos and Livermore.  We are working as a team, but
     it helps me distinguish between the data sets to point that
     out.  So this is an integrated study that we're carrying
     forward with.
               In terms of the overall objectives of the
     chlorine-36 chloride program over the past three to four
     years, the objectives are shown here.  We were looking to
     test alternative conceptual models for unsaturated zone flow
     and transport.
     Specifically, to look at flow and transport through the
     Paintbrush tuff, the Paintbrush non-welded (PTn), which
     sits stratigraphically above the repository horizon.
               Then also to look at the significance of differing
     temporal and spatial scales.  And what I mean by that is the
     effects of episodic infiltration and how well the PTN might
     dampen that flow into the repository horizon.
               The program focused on systematic samples.  Parts
     of the ESF, we took samples every 100 meters, and then in
     some cases, every 200 meters.  We also looked at features,
     meaning fault zone fracture sets and took samples, and there
     we were taking block samples, large samples, with a pick
     hammer or jackhammer, taking those back to the laboratory.
               Bill mentioned the validation study.  In the
     validation study, we were focused -- and I'll show you the
     data from the ESF in a minute, but one of the locations in
     the ESF that we saw apparent bomb pulse was in the Sundance
     fault zone.  I'll show that on a map, but it sits down in
     the main run of the ESF, down by Alcove 6, for those of you
     who are familiar with the ESF.
               We took -- and also at the drill hole wash fault,
     but I'll talk mainly about the data from the Sundance today.
               Those samples were taken by drill.  We did bore
     holes.  So there was a difference in sampling approach for
     the validation study than in the studies that we've been
     doing over the program in the past couple of years.
               Again, it was led by the USGS, with involvement of
     Los Alamos, Livermore and AECL.
               We were also looking at some of the samples, in
     addition to looking at chlorine-36 chloride measurements. 
     We also have some tritium analyses that I will talk about
     briefly.
               But what we did is we did systematic bore holes
     across those two fault zones, two-inch cores, up to four
     meters, and, again, to emphasize, it was in contrast to the
     sampling approaches that we took in the ESF and in the cross
     drift.
               So we took samples as depth slices and splits were
     taken.  So in some cases, we have Livermore and Los Alamos
     looking at splits from the same core and the same hole.
               I mentioned I want to talk about the cross drift. 
     I call this validation, maybe I should put it in quotes, but
     nonetheless, in the cross drift, we also did feature-based
     sampling, but here we were able to do predictions using the
     UZ flow and transport model for what we thought we would see
     in terms of chlorine-36 chloride systematics and we compared
     those predictions.
               So background for chlorine-36, Bill alluded to it
     to some extent.  Chlorine -- this is basically lifted out
     of the table of nuclides -- chlorine-35 and chlorine-37,
     both stable isotopes, make up the full abundance.  Chlorine-36
     has a half-life of 301,000 years.
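The age-dating arithmetic behind a 301,000-year half-life can be sketched as follows. This is a minimal illustration with hypothetical ratios, not project code; `apparent_age` and the closed-system (no mixing, no subsurface production) assumption are the sketch's own, and real interpretations must account for those effects.

```python
import math

# Chlorine-36 decays with a 301,000-year half-life, so the 36Cl/Cl ratio
# of isolated water evolves as R(t) = R0 * exp(-lam * t).
HALF_LIFE = 301_000.0  # years (from the talk)
LAM = math.log(2) / HALF_LIFE

def apparent_age(r0, r):
    """Years for an initial ratio r0 to decay to r, assuming a closed system."""
    return math.log(r0 / r) / LAM

# Hypothetical numbers: a measured ratio of 250 against an initial
# (meteoric) value of 500 corresponds to exactly one half-life.
print(round(apparent_age(500.0, 250.0)))  # -> 301000
```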
               In terms of sources in the subsurface, you've
     really got three primary sources.  You have A, the
     pre-bomb ratio; the modern ratio is about
     500-times-ten-to-the-minus-15.
     That has varied through time due to variation in the field
     strength, the magnetic field strength on the earth, you get
     more or less cosmic ray bombardment, therefore, more or less
     chlorine-36 production in the atmosphere that rains down.
               B, bomb pulse, has a much higher ratio.  Then C,
     there's a contribution from production of it due to
     reactions, nuclear reactions with uranium and thorium in the
     subsurface.
               That ratio is much lower, 20 to
     50-times-ten-to-the-minus-15, and that tends to be a
     negligible contribution.  All in all, those three sources,
     there are some other minor sources, but they add up to what
     you see in the subsurface rocks and water and what we then
     measure.
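The statement that these sources "add up to what you see" can be illustrated with a chloride-weighted mixing sketch. The numbers below are hypothetical and `mixed_ratio` is not project code; the point is only that a small amount of very high-ratio bomb-pulse chloride can push a sample above the 1,200 threshold.

```python
# Measured 36Cl/Cl is a chloride-weighted mixture of the sources.
# Ratios are in units of 10^-15; chloride amounts are relative masses.

def mixed_ratio(components):
    """components: list of (chloride_amount, cl36_to_cl_ratio) pairs."""
    total_cl = sum(cl for cl, _ in components)
    total_cl36 = sum(cl * ratio for cl, ratio in components)
    return total_cl36 / total_cl

# Hypothetical sample: mostly meteoric chloride at the modern ratio (~500),
# a trace of bomb-pulse chloride (ratio orders of magnitude higher), and
# some in-situ rock chloride at the low subsurface-production ratio.
sample = [(1.0, 500.0), (0.02, 50000.0), (0.2, 35.0)]
print(round(mixed_ratio(sample), 1))  # -> 1235.2, i.e. above the 1,200 threshold
```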
               Now, to go back to what we've seen in the ESF,
     the exploratory studies facility.  This plot has, on the
     Y, the chlorine-36 to chloride ratio,
     times-ten-to-the-minus-15, as a function of construction
     station, so that's thousands of meters through the ESF. 
     The north ramp, the main drift and the south ramp are
     shown along the top.
               There are several different kinds of samples shown
     here.  This is the Los Alamos results.  So you have, in the
     black squares, you have the systematic samples.  The open
     squares are the feature-based samples, then there's also
     core water samples, where we've done -- we've extracted
     water using centrifuge and done analyses of that water.
               Then there is also plotted on here, in the red are
     the Los Alamos validation samples, meaning the splits of the
     core from the validation work at the Sundance.
               But you can see, in general, where we see
     apparent bomb pulse ratios, which are above
     1,200-times-ten-to-the-minus-15, it tends to be associated
     with structural features.
               So we chose the Sundance and the Drill Hole Wash
     to do our systematic validation study across there.
               So just to come back to the validation work.  The
     goal here was to verify the presence of bomb pulse in
     samples taken from the ESF and the sample preparation method
     that was developed, that Livermore is using, is designed to
     detect the presence of bomb pulse.  So we weren't looking to
     delineate the relative contributions from those three
     sources, but we were mainly looking for any evidence of bomb
     pulse chlorine-36 in ESF.
               This is the data for the Livermore results. 
     Again, we did a series of bore holes across the Sundance
     fault.  This is chlorine-36 to chloride
     times-ten-to-the-minus-15 on the Y; again, plotted against
     ESF station.
               We did a series -- we did on the order of over 40
     bore holes across the structure and you can see the data
     there.  So the present day ratio is shown at about
     500-times-ten-to-the-minus-15, with ratios greater than
     1,200 indicative of bomb pulse.
               You can see the Livermore analyses are all
     relatively low, in the 50 to 200-times-ten-to-the-minus-15
     range.  No evidence of bomb pulse.
               Now, for the Los Alamos results from the
     validation study.  This is, again, a similar plot, same
     ratio plotted on the Y against ESF station, with the
     Sundance fault drawn in the -- the Sundance is right here. 
     And here is the sampling interval.  We sampled across the
     fault with the bore holes.
               What's plotted here in the blue diamonds is data
     from the -- what I'll call the standard ESF study and then
     in the red squares are the validation results from Los
     Alamos.
               We haven't seen evidence -- we haven't seen ratios
     greater than 1,200, but notice there is a difference.  All
     the samples are greater than 500-times-ten-to-the-minus-15, so
     you've got a significant difference in ratio there between
     what Livermore was getting for similar samples.
               You have to ask yourself, okay, what -- well,
     that's really the crux of why we're here.  We've got two
     data sets now that are showing some differences.  How robust
     are the Livermore data?  When you correct for blanks, you
     don't really affect the final ratio.  In general, when you
     correct the ratios for things, you tend to lower rather than
     raise the ratio.    
               However, and I will talk more about this, if you
     correct for rock chloride, which might be related to sample
     processing, processing in the laboratory, it may be possible
     that you could cause the correction to go in the other
     direction, but that's what we're about in our path forward
     trying to figure out what's really going on, particularly in
     the processing techniques.  That may be part of the
     differences.
               But I should also mention the Livermore --
     Livermore runs samples from all over the world for a lot
     of different reasons, and the samples that were run in the
     same timeframe yielded results that were consistent with
     what they expected for the geologic setting.
               So we have no reason to believe that the Livermore
     data is wrong or the Los Alamos data is wrong.  We need to
     look into the details of how we process the samples.
               MR. GARRICK:  Were the sampling procedures exactly
     the same?
               MR. PETERS:  Do you mean sampling in the field or
     processing in the laboratory?
               MR. GARRICK:  I mean in the field.
               MR. PETERS:  Yes.  It was taken from the same bore
     hole.  So they were dry drill bore holes and then the core
     was simply split for the validation study.
               MR. GARRICK:  And were they governed by any kind
     of a national or international standard for taking such
     samples?
               MR. PETERS:  They were collected, according to our
     QA program, like we sampled all the samples underground.  So
     it's proceduralized.
               MR. GARRICK:  They both used the same QA program.
               MR. PETERS:  The collection in the field was the
     same program, because it was the same driller, same drill
     rig, same sample handlers.
               So where we're headed here is how one -- what's
     going on downstream of that.
               MR. GARRICK:  And those same questions we'll want
     to talk about downstream.
               MR. PETERS:  Right.  And the details of that are
     better spoken to by the PIs.  I'm not going to say a whole
     lot more than what I've already said, actually.  But we do
     have a path forward that focuses on that part of the process
     to see if that's -- that therein lies the difference.
               I mentioned at the beginning that we're looking --
     there's some related work associated with the validation
     study.  For example, we're also looking at doing tritium
     analysis in some of the same samples, and here we're
     extracting the water and doing tritium analyses.
               This is a slightly out-of-date slide.  The USGS
     has done additional analyses, but suffice it to say it
     doesn't change the distribution.
               So right now we've seen really only one sample
     that's even above detection limit for tritium.
               So for the validation samples, no evidence of bomb
     pulse tritium.  But tritium and chlorine-36 are going to act
     very differently in the unsaturated zone hydrologic system.
               MR. HORNBERGER:  Is the processing for tritium
     similar to the processing for chlorine-36?
               MR. PETERS:  The tritium was all through
     centrifuge. You extracted the water with the centrifuge --
     distillation, excuse me.  So vacuum distillation.  So
     basically putting -- moving it around in a cold trap in a
     vacuum line, basically, whereas chlorine-36, as you know, is
     basically running DI through some variation on that theme.
               AECL's participation is mainly focused on the
     U-series disequilibria work that they're doing.  This
     doesn't -- it speaks a lot to the -- basically, the
     residence time of pore water in the unsaturated zone.  It
     doesn't speak as much to bomb pulse, but it combines real
     nice in with geochemical indicators of long-term percolation
     flux, et cetera.
               But we are seeing that thorium, uranium and radium
     are in secular equilibrium.  Therefore, they haven't been
     moving around over the past 100,000 years.
               The 234-238 uranium ratios are depleted, showing a
     five percent depletion in uranium-234, which suggests that
     uranium-234 may be lost to pore fluids and that's probably
     by alpha recoil.
               And therefore, we would expect the pore fluids to
     be enriched in 234, and they, in fact, are.  So that can
     actually be modeled to give us an idea of residence time of
     the pore water in the unsaturated zone.
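The kind of residence-time modeling referred to here can be sketched under a simple closed-system assumption: an alpha-recoil excess of uranium-234 relaxes back toward secular equilibrium with the uranium-234 half-life (about 245,500 years). The function, the initial activity ratio, and the measured value below are hypothetical illustrations, not the project's model.

```python
import math

# In a closed system, a 234U/238U activity ratio AR relaxes toward
# secular equilibrium (AR = 1) as AR(t) = 1 + (AR0 - 1) * exp(-lam * t).
T_HALF_U234 = 245_500.0  # years, half-life of uranium-234
LAM = math.log(2) / T_HALF_U234

def residence_time(ar0, ar):
    """Years for an initial excess (ar0 - 1) to decay to (ar - 1)."""
    return math.log((ar0 - 1.0) / (ar - 1.0)) / LAM

# Hypothetical values: recoil-enriched water starting at AR0 = 1.30,
# measured today at AR = 1.15, implies about one 234U half-life.
print(round(residence_time(1.30, 1.15)))  # -> 245500
```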
               Now, getting back to the differences.  We haven't
     finished.  As Bill mentioned, it's a work in progress.  It
     is a work in progress.  It's going to continue next year. 
     Livermore may yet see bomb pulse in some of the samples.
               Mark made this point in May, and it's a good one. 
     Work to date hasn't demonstrated its absence.  We just
     simply haven't found it.  It could be that we may find it.
               The Livermore sample processing may have selected
     phases that don't contain bomb pulse chlorine-36 or maybe
     the samples may not, in fact, have bomb pulse chlorine-36 in
     them.
               But at any rate, the chlorine-36 concentrations
     seem to be comparable, but the difference in ratios may be
     due to -- and this is a "may," this is what we're going
     after -- elevated chloride concentrations, because
     obviously we're reporting chlorine-36 to chloride ratios, so
     you can -- there's a lot of leverage with chloride
     concentration there to change that ratio pretty
     dramatically.
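That leverage of chloride concentration on the reported ratio can be shown numerically. The values are hypothetical and `reported_ratio` is not project code; it only illustrates how extra rock chloride released during processing dilutes a fixed bomb-pulse signal.

```python
# If processing releases extra low-ratio rock chloride into the leachate,
# the chlorine-36 content barely changes but the reported ratio drops.

def reported_ratio(meteoric_cl, meteoric_ratio, rock_cl, rock_ratio=35.0):
    """Ratio (in 10^-15 units) after mixing meteoric and rock chloride."""
    cl36 = meteoric_cl * meteoric_ratio + rock_cl * rock_ratio
    return cl36 / (meteoric_cl + rock_cl)

# Same bomb-pulse signal (ratio 1,500) with increasing rock chloride:
for rock in (0.0, 1.0, 4.0):
    print(round(reported_ratio(1.0, 1500.0, rock), 1))
# -> 1500.0, then 767.5, then 328.0: the bomb-pulse signature is masked.
```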
               So the differences, again, may be related to
     processing method.  We're focusing on that, not sampling or
     analytical methods.  So sample crushing, how you extract the
     chloride for analysis, and those kinds of things.
               I've mentioned, we've talked mostly about the
     ESF, but I don't think you've seen this data in a meeting
     before.  This is data from the cross drift; again,
     same ratio along the Y for the cross drift station in 100
     meter increments, and the same familiar dotted line bounds
     the range of current day values for chlorine-36 to chloride
     ratios.
               There's the faults noted in the cross drift, as
     mapped in the cross drift, and you can see there, again, and
     these are back to taking feature-based block samples through
     the cross drift, so back to the standard ESF program, and
     it's consistent with what we've seen throughout the ESF,
     that Los Alamos has seen throughout the ESF.
               So really what we're focusing in on is why are we
     getting the differences in the validation samples.
               So I've already said this, but we did do a set of
     predictions for the cross drift and, in general, the results
     were consistent with the predictions for where we would see
     background levels and where we would see bomb pulse.
               And I said the second bullet already, but the
     sampling protocol that we use in the cross drift seems to --
     it does yield similar results to what we see in the ESF
     investigations, except for the validation study, as we've
     talked about a lot.
               So what about a path forward?  There was a lot of
     discussion amongst the PIs in the April-May timeframe,
     particularly after the interactions with the TRB about how
     to go forward and address these differences.
               So the path forward is the USGS has prepared a
     reference sample.  We've taken some muck from the cross
     drift excavations that we were doing back in that timeframe. 
     That's been crushed, homogenized, and then aliquots have
     been distributed to both Livermore and Los Alamos.
               There's going to be a set of experiments done by
     both laboratories to test for the effects of different
     leaching procedures on the release of rock chloride, again,
     the chloride could be the key.  That's ongoing.  So this is
     really, again, a progress report.
               We will then do the laboratory work, do the
     accelerator analyses and see where we're at, compare notes.
               Once we've sort of agreed on a standard processing
     method, we hope there will be exchange of samples, exchange
     of information, apply that to the reference sample and the
     validation samples, and then the results will be
     synthesized.
               Now, I haven't talked too much about the
     conceptual model, but the ESF and cross drift data, except
     for the validation data, has been incorporated into our
     thinking on conceptual models for unsaturated zone flow and
     transport.
               So depending upon which way things go with the
     validation samples, that could have implications for our
     understanding of the conceptual model.
               The three bullets are meant to address the aspects
     of the conceptual model that will be under discussion,
     depending upon the results of that study, frequency of fast
     flow paths, roles of fault and fractures, particularly ones
     that cut through the PTN, the non-welded unit above the
     repository horizon, and then, finally, pore water ages and
     implications for infiltration and percolation.  The chloride
     data is also very key.  That's used heavily as a calibration
     tool for the flow field in the unsaturated zone.
               So this is all tied together.  It's a little
     premature for us to really say much about what it means for
     conceptual model until we get at what the differences are in
     the data sets.
               So maybe this time next year we can tell you, give
     you a final answer on that.   
               That's all I have.
               MR. WYMER:  Milt, questions?  Okay.  I'll start
     out, then.  Obviously, it is extremely important to
     understand these data, because they do bear on one of the
     most important parts of the whole repository; namely, how
     fast is the water moving and by what mechanisms, and you've
     known about the disparity quite a while.
               What is it about the analyses that takes so long
     to cross-check and confirm?  It seems to me that could have
     been done by now.
               MR. PETERS:  Well, the differences really were
     just exposed in the spring.  We just really started to
     understand that there were some significant differences in
     the spring timeframe.
               MR. WYMER:  Three months.
               MR. PETERS:  I understand.
               MR. WYMER:  Four months.
               MR. PETERS:  There's been some constraints.  There
     hasn't been as much work done at the laboratories because of
     other competing priorities associated with AMRs and stuff. 
     I mean, it's project priorities, to some extent, that have
     caused not as much laboratory work to be done in this area.
               That may not be what you want to hear, but that's
     the reality.
               MR. WYMER:  You're right.  Okay.  Well, that's my
     question.  Milt?
               MR. LEVENSON:  Yes.  I've got a couple of
     questions.  The numbers that you have for the ratios, one
     for the pre-bomb and the other for the bomb pulse, are those
     average for the entire atmosphere, the 500 and the 1,200?
               MR. PETERS:  Let me be clear.  The actual ratio --
     that wasn't clear on the slide.  The actual ratio for bomb
     pulse water is much higher.  That 1,200 times ten-to-the-minus-15
     is what we see when we take a sample of rock in the ESF.  We
     think anything above that is bomb pulse.
               But the ratio for the actual bomb pulse chloride
     is two orders of magnitude greater.
               MR. LEVENSON:  So the atmosphere concentration, if
     you sample the air anyplace, it's going to be two orders of
     magnitude higher than that.
               MR. PETERS:  For the bomb -- June, help me out
     here maybe.
               MS. FABRYKA-MARTIN:  The ratio of 500 for
     background is based on samples from the Yucca Mountain area,
     actually the Yucca Mountain region.  It's based on three
     types of samples; soil profiles, for one, where once we get
     below the bomb pulse in the soil profile, then we use those
     -- the numbers that are clearly below where the pulse is as
     part of the database.
               The second source of samples are packrat samples
     from packrat urine, that's fossilized.  We carbon date the
     sticks that are stuck in the midden and then use the
     chlorine-36, the ones that clearly aren't recent, to show us
     what the background has been.
               And then the third type of sample is ground water
     itself that doesn't have any tritium or C-14 indicating
     absence of bomb pulse there, as well.
               All three sources have been consistent in defining
     that background.
               Now, the 1,200 is a statistically derived
     threshold.  So we're saying that once it's above 1,200, then
     we're confident that it's unambiguous evidence that it's
     clearly elevated above natural background, and that's based
     on a database of about 300 results from the tunnel and it's
     also supported by our packrat midden samples where we never
     saw any ratios as we went back to as far as 35,000 years.
               None of those carbon dated samples were ever
     higher than about a thousand or 1,100 or so
     times ten-to-the-minus-15.
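[The statistically derived threshold described above can be sketched numerically.  This is a hypothetical illustration only, not the project's actual derivation: the background values, the three-sigma rule, and the function name are all invented for the sketch; ratios are in units of ten-to-the-minus-15.]

```python
import statistics

# Hypothetical illustration of a statistically derived bomb-pulse
# screening threshold (NOT the project's actual method).  Ratios are
# 36Cl/Cl in units of 10^-15; the background values are invented.
background_ratios = [430, 480, 510, 520, 495, 560, 610, 455, 530, 590]

mean = statistics.mean(background_ratios)
stdev = statistics.stdev(background_ratios)

# One simple rule: anything above mean + 3 sigma is treated as
# unambiguously elevated above natural background.
threshold = mean + 3 * stdev

def is_bomb_pulse(ratio):
    """Flag a measured 36Cl/Cl ratio (x 10^-15) as a bomb-pulse candidate."""
    return ratio > threshold
```

[Whatever rule is actually used, the point of the testimony is that the 1,200 figure is a screening cutoff derived from the background distribution of some 300 tunnel results, not the ratio of the bomb-pulse source itself.]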
               MR. LEVENSON:  But I still don't have an answer to
     my question as to what the source term is.  What is the
     ratio in the atmosphere from bomb debris?
               MS. FABRYKA-MARTIN:  Why do you care?  I mean,
     because --
               MR. LEVENSON:  Why do I care?
               MS. FABRYKA-MARTIN:  Because once it hits -- as
     soon as it hits --
               MR. LEVENSON:  Because how do I know how credible
     your 1,200 is if I don't know the source term?
               MS. FABRYKA-MARTIN:  The thought I had -- well,
     first of all, it would be hard to collect a sample from the
     atmosphere that did not have bomb pulse in it.
               MR. LEVENSON:  No, no.  I don't want the
     background from the atmosphere.  I want the present
     concentration of bomb pulse in the world, the source term
     for this.
               MS. FABRYKA-MARTIN:  Okay.  We do have samples of
     runoff water, would you accept that?  Runoff caught in
     channels by USGS investigators?
               MR. LEVENSON:  If that's the best you can do.  Is
     that what you consider representative of the atmosphere?
               MS. FABRYKA-MARTIN:  No.  No.
               MR. LEVENSON:  The source of the bomb --
               MS. FABRYKA-MARTIN:  The reason I say that is the
     moment that a drop of water reaches the surface, it's
     already being diluted by what's already there, what's
     accumulated there over the thousand -- or hundreds of years
     or tens of years.
               So I'm not sure what --
               MR. CAFFEE:  Not to disagree with anything June
     said, but one of the --
               MR. GARRICK:  Identify yourself, please.
               MR. CAFFEE:  My name is Mark Caffee, from Lawrence
     Livermore National Lab.  We can recreate what the bomb pulse
     input is from looking at ice core data.  So there's been
     extensive studies of chlorine-36 deposition at the Vostok
     ice core and in Greenland ice core samples.
               So we can go -- we have gone back, these studies
     have been done by a number of groups, and reconstruct that
     profile of the bomb pulse chlorine-36 in the atmosphere.
               I'm positive that I didn't bring a viewgraph of
     that with me, but the chlorine-36 to chloride ratios are
     very, very high and what you see is kind of what you expect,
     that a few years after the atmospheric testing of nuclear
     weapons, you had a peak in the chlorine-36 and then it has
     fallen off since.
               I don't know if that answers your question, but
     that's kind of the --
               MR. LEVENSON:  What's the current day, current day
     ratio?
               MR. CAFFEE:  If you just went out and got
     precipitation, it would be back to what it was before the
     bomb pulse, before the bombs were exploded.
               MR. GARRICK:  If you want to comment, you have to
     do it in a mic.
               MR. CAFFEE:  What June was going to say is someone
     has looked at precipitation in the environs of Purdue
     University, and it has gotten back to what we believe the
     pre-bomb chlorine-36 to chloride ratio was.
               But it's not the case that there have been
     thousands of measurements of chlorine-36 to chloride ratio
     in rain waters all across the country or all across Africa
     or anything.  These contours of input are drawn based on not
     a great deal of data points.
               MR. LEVENSON:  What's the precision and accuracy
     to one significant figure of these analytical methods for
     this ratio?
               MR. CAFFEE:  Okay.  I'll answer for Livermore.  I
     won't try to answer for anybody else.  The measurement
     itself is much better than five percent, based on counting
     statistics.  By the time we fold in blank corrections, and I
     can address the Livermore data here, the uncertainties may
     go a little above five percent, but they're probably still
     less than ten percent.
               MR. LEVENSON:  Five percent is the precision or
     the accuracy?
               MR. CAFFEE:  Precision.
               MR. LEVENSON:  What's the accuracy?
               MR. CAFFEE:  I don't know how to answer that
     question.
               MR. LEVENSON:  That's the most important question
     here.
               MR. CAFFEE:  Of course it is, but what we're
     trying to do, I can tell you that other samples, not Yucca
     Mountain samples, but other samples, for example, in situ
     chlorine-36 produced in carbonates in Greece and Italy and
     all across the world have reproducibilities that are
     comparable to our precision.
               So for samples where we have a long history of
     running things and our researchers have sent us duplicates
     and triplicates, we believe that our accuracy is comparable
     to the precision.
               But for me to get up here and tell you that our
     numbers at Livermore are absolutely accurate and there can
     be no other circumstances that can change those ratios, I
     can't answer that -- I can't do that.
               MR. LEVENSON:  Let me ask one other question,
     which is a completely different type, because I was at the
     meeting in Pahrump four months ago when this first came up
     and there was discussion then that the simplest thing to do
     was to take some aliquots from a processed sample and run it
     both places and rather quickly eliminate whether the
     analytical method was responsible or not, and it doesn't sound
     like that has been done yet.  Is that correct?
               MR. CAFFEE:  No, we have not done that.  I don't
     recall -- that was quite a while ago.  I don't recall it
     being mentioned.  I don't doubt that it was mentioned.  I
     would, again, in answer to that, say that over a course of
     six years of running, we have run duplicates of aliquots
     many, many times, from meteorite samples, lunar samples,
     rock samples from all over the world, and secondary
     standards, NIST standards, a whole variety of things and we
     have comparisons with samples run between ourselves and
     Zurich, between ourselves and the old lab at Rochester, and
     I even believe between ourselves and Purdue.
               MR. LEVENSON:  But that's not really the issue. 
     The issue is to try to resolve why the difference and it
     potentially --
               MR. CAFFEE:  I'm not disagreeing.
               MR. LEVENSON:  You've attempted to get rid of the
     sampling difference by splitting a sample.  I think that's
     an acceptable method of doing it, and you could fairly
     quickly resolve whether it's processing or analysis by the
     two labs running the same sample.  I just don't understand
     why --
               MR. CAFFEE:  This is one of the things that will
     certainly be done with this standard reference material,
     because here we have enough sample that we can precipitate
     enough silver chloride that we can make splits.
               The chemistry that's done on the core samples is
     difficult enough that after we have precipitated silver
     chloride, we generally don't have enough to make splits.  We
     make enough silver chloride for one, perhaps two analyses.
               So we don't have in our laboratory a supply of
     silver chloride from the ESF samples.  To do that would
     require processing much more sample than we've got.
               And let me make one further point.  When we got
     into this, what I assumed was that we would measure these
     samples and we would see bomb pulse chlorine-36 in all of
     those samples.  That was my assumption.
               Now, if we had made those measurements and
     observed those elevated ratios and everything, we wouldn't
     be here.  No one would be asking questions about precision,
     no one would be asking questions about accuracy, no one
     would be asking questions about did you shake your samples a
     certain way and do you shake your samples a different way.
               We have seen a big difference, though, and I think
     it's important, but it's going to take time to go through
     and resolve those.
               So the initial work was not done anticipating this
     kind of problem.  Had it been done anticipating this kind of
     problem, we would have had to spend a whole lot more time to
     do the work.
               MR. LEVENSON:  We're not talking about prior to
     May, just the concern.  I must say -- let me record a
     personal comment, that whoever sets the priorities so that
     the paperwork has higher priorities than resolving a problem
     as important as this really ought to reassess what they're
     doing.
               MR. HORNBERGER:  I have just a quick follow-up. 
     You were rudely interrupted when you were talking about your
     lab comparisons and I just wanted to know --
               MR. CAFFEE:  Did I interrupt myself?    
               MR. HORNBERGER:  That was it.  I just wanted to
     know if Livermore -- have you done inter-lab comparisons
     with Purdue?
               MR. CAFFEE:  You know, not in recent years, but I
     think we have.
               MS. FABRYKA-MARTIN:  One of the first things I did
     when I became PI on this project back in 1990 was to do an
     inter-lab comparison with results from Rochester, which was
     still running then, Livermore, and Purdue.  And I think I
     sent out a total of four blind samples that all four labs
     had done and they did pretty well.
               I don't remember the exact numbers, but they were
     within two sigma, for sure, of one another and I was
     satisfied.
               MR. WYMER:  How much variation do you find in
     total chloride content concentration from various samples
     that you've taken?
               MR. CAFFEE:  In general, or for the validation
     samples specifically?
               MR. WYMER:  I want to know whether there's a
     factor of two, three or five of chlorine concentration from
     one spot to another in the repository.
               MR. PETERS:  How much does the chloride
     concentration vary, say, June, in your samples, from north
     to south ramp?
               MR. WYMER:  Is one saltier than the other?
               MS. FABRYKA-MARTIN:  Okay.  Do you want to
     rephrase that so you're talking -- whether or not you're
     talking about pore water or total amount of chloride leached
     from the sample or a function of --
               MR. PETERS:  I would answer both.
               MR. WYMER:  Yes.
               MR. PETERS:  Answer pore water and then --
               MS. FABRYKA-MARTIN:  Pore water, it's about -- in
     the pore water, in the south ramp, it's about 80 milligrams
     per liter, on the average.  North ramp, we saw much lower
     values, I'd say averaging maybe 20 milligrams per liter,
     and, also, along the cross drift, about 20 milligrams per
     liter.
               As far as chloride leached from the rocks, a very
     general trend that I saw with a lot of scatter was that we
     generally saw bomb pulse -- the samples that we saw bomb
     pulse in were the samples that had the lowest quantities of
     chloride leached from the rock, whereas the ones that had
     the lowest ratios tend to have higher chloride, and that
     would be in terms of we did a one-to-one leach, one kilogram
     of rock to one kilogram of water and saw concentrations
     anywhere from maybe 0.3 milligrams per liter, which means
     0.3 milligrams leached per kilogram of rock, on up towards
     maybe an order of magnitude or more higher.
               Does that answer the question?
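[The one-to-one leach arithmetic described above means that with one kilogram of rock leached in one kilogram (about one liter) of water, the chloride concentration measured in the leachate in milligrams per liter maps directly to milligrams leached per kilogram of rock.  A minimal sketch; the function name and parameters are illustrative only.]

```python
# Sketch of the 1:1 leach arithmetic: 1 kg of rock leached with 1 kg
# (about 1 liter) of water, so mg/L measured in the leachate equals
# mg of chloride leached per kg of rock.  Names are illustrative.
def leached_cl_mg_per_kg_rock(leachate_mg_per_l, water_l=1.0, rock_kg=1.0):
    """mg of chloride leached per kg of rock, from leachate concentration."""
    return leachate_mg_per_l * water_l / rock_kg

low = leached_cl_mg_per_kg_rock(0.3)    # 0.3 mg/L leachate -> 0.3 mg/kg rock
high = leached_cl_mg_per_kg_rock(3.0)   # an order of magnitude higher
```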
               MR. WYMER:  Sort of.  In your analyses, there was
     no normalization with respect to the amount of total
     chloride, or was there?
               MS. FABRYKA-MARTIN:  No.  We measure the ratio on
     what gets leached out and it's an important point to realize
     that we're not trying to maximize amount of chloride leached
     from the rock.
               In fact, it's just the opposite.  We're trying to
     minimize how much chloride we leach from the rock, because
     we don't --
               MR. WYMER:  I understand that.  It just seemed to
     me that if there was four times as much chloride to start
     with and you had the same amount of chlorine-36, you might
     change your ratio.
               MR. PETERS:  I think that's what I tried to
     convey.  June, the concentration in the validation samples
     for chloride, what was yours, and then, Mark, I guess, what
     was yours?
               MS. FABRYKA-MARTIN:  It was about 0.3 milligrams
     per liter that was leached from the validation samples.
               MR. PETERS:  I don't mean to put you on the spot.
               MR. CAFFEE:  Sure.  But my recollection, without
     having my raw data here, is that it's .8 to 1.5, maybe even
     two for a couple -- ppm, so one ppm.  We're definitely a
     little higher than June's.
               MR. GARRICK:  I was just curious.  I'm not a
     chemist, so I can't ask good questions like you've been
     getting.
               But this technique of measuring for bomb pulse,
     that chlorine ratio tracer technique or actual measurement
     technique has been around for a long time, has it not?  And
     there's bore holes all over the Nevada test site, are there
     not?
               So there must be a lot of experience in doing
     this.
               MS. FABRYKA-MARTIN:  No.  What's new here, what's
     really unique is the fact that we're leaching -- we're
     trying to measure chlorine-36, where there essentially is
     not much water, so you're leaching rock.
               And here, unlike those other types of sample
     types, you really have to watch out for diluting your sample
     with salt that's trapped in the fluid inclusions or grain
     boundaries or wherever it is.  That's what is unique.
               I don't know of anyone else who is doing that. 
     They do it in soils, tons of studies using soils, looking at
     in situ chloride in different mineral species, ground
     waters, but unsaturated rock, I can't think of anybody else.
               MR. GARRICK:  So you think that the unsaturated
     rock and the uniqueness associated with that makes this a
     pretty much one of a kind sampling operation.
               What I'm getting at is that with all the
     experience that you've had at the Nevada test site and the
     bore holes, you must have -- have you -- you must have had
     either agreements or disagreements in the past and the
     necessity for confirmation or reconfirmation.
               I'm just trying to get at if you've had similar
     experiences in the past, why can't we attack them the same
     way.
               MR. CAFFEE:  I'll answer that differently, actually. 
     First of all, you mentioned that the technique has been
     around for a long time, but while it's been around for a
     while, it's not the case that this is a routine technique,
     like argon-argon dating, for example.
               If you think about how long argon-argon dating
     has been around, in the first couple decades, it was
     probably a pretty painful start, too.  So that's the first
     point I'd like to make.  It's still a difficult measurement. 
     We're measuring a million atoms of something.  So that's not
     easy.
               The second point I guess I would like to make is
     that I think, to the best of my knowledge, the only person
     that has leached this kind of rock to get chlorine-36 out
     for AMS analysis is June.
               It's not the case that the literature is just
     totally, totally filled with this kind of thing.  There's a
     lot of data on in situ produced or cosmic ray produced
     chlorine-36.  There's a lot of data on chlorine-36 in ground
     water.  Those are all laboratory techniques that we all know
     how to do and we can all do that.
               But you won't go open Geochimica and find a whole
     lot of articles on this kind of application.
               MR. GARRICK:  I see.
               MR. CAFFEE:  So I find this disparity in results
     not that surprising, to be honest with you.  I mean, I think
     it's just the normal process of --
               MR. GARRICK:  Is this the kind of thing where a
     sample could be sent to an independent lab outside the
     weapons complex, for example, and --
               MS. FABRYKA-MARTIN:  I have sent samples to Purdue
     to process in the past, with mixed results, because I didn't
     define the protocol very well.  Some samples came in right
     where I expected them.  Others, they crushed them too
     finely, so they came down a little, not as low as Mark's,
     but down lower than expected.
               But I would like to point out that I'm not the
     only one who applied this technique.  What would be a fair
     statement is to say Yucca Mountain project PIs have been the
     only ones to apply this technique.  But I wasn't the first
     PI on this and, furthermore, my technicians don't allow me
     in the lab really, with good reason probably, and there's
     probably, over the last -- this project has been going on --
     the chlorine-36 project has probably been going on more than
     15 years or about 15, let's say, maybe slightly more and
     it's had probably five different technicians on it over the
     years that have processed samples and the samples have
     always been consistent from one technician to another,
     and that's under two different PIs.
               At first, they used to be done at a lab in Tucson,
     Hydro Geochem, and then it got moved to where I am now.
               MR. GARRICK:  That was the basis of my original
     question, the process that is employed in the lab to make
     the measurements, governed by the same more or less
     prescriptive process.
               MS. FABRYKA-MARTIN:  Yes.  That part is true.
               MR. GARRICK:  And was that true for Purdue?
               MS. FABRYKA-MARTIN:  I had them write the
     procedure that they used and send it to me along with the
     results, when I really should have gotten in at the very
     beginning and looked over what they had proposed to do.
               I didn't bring those data with me, but it's still
     up in the hundreds, the ratio was up in the hundreds, not
     down in the 100 or less.
               MR. WYMER:  Are you comfortable with the fact that
     fines might give you a different ratio than stuff not so
     finely crushed?
               MS. FABRYKA-MARTIN:  Yes, I am.
               MR. WYMER:  Tell me why.
               MS. FABRYKA-MARTIN:  Because the finer it gets,
     the more likely it is you've broken along grain boundaries
     or opened fluid inclusions or made it --
               MR. WYMER:  That changes the ratio?
               MS. FABRYKA-MARTIN:  The chlorine-36 to chloride
     ratio, simply because then it releases that in situ produced
     chlorine-36 that's made by neutron capture on chloride in an
     inclusion, at a grain boundary, or wherever it resides, as
     opposed to what's flowing along the micro fractures between
     the grains.
               MR. WYMER:  I understand.
               MR. PETERS:  Let me follow up with a question
     here.  Have you guys plotted the chlorine-36 to chloride
     ratio on the Y axis against total chloride on the X axis?
               MS. FABRYKA-MARTIN:  We did for the Los Alamos
     results.
               MR. PETERS:  Do you get just basically a straight
     line?
               MR. CAFFEE:  Yes, sir.  I do.
               MR. PETERS:  Then what you may, in fact, be
     looking at, these high ratios, could very well be simply how
     much non-radiogenic chloride you release from the rock and
     then the question becomes are you, in fact, measuring any
     bomb pulse anywhere, because if all your ratio fluctuations
     have to do with how much chloride is there total, which is
     15 orders of magnitude more stable chloride than there is
     chlorine-36, very small changes in your stable chloride
     amount released from the rock could very well be driving
     this whole ratio.
               So how do you resolve that problem?
               MR. CAFFEE:  Let me make two comments about that. 
     First of all, you asked me if we had plotted the chlorine-36
     ratio versus the chloride, and I said there was no definable
     trend.  It was just a straight array.  That's plotting
     chlorine-36 over chloride versus one over chloride.  So
     trying to come up with a mixing diagram.  That's the first
     thing.
               The second thing is I think the thing you need to
     bear in mind is that there may well be two issues here.  The
     first issue is that in the Livermore measurements, there's
     no bomb pulse chlorine-36 in any of our measurements, and
     the bomb pulse chlorine-36 ratio is substantially higher. 
     It's higher by more than a factor of ten than what we're
     measuring.
               Then there is also a difference in the validation
     samples, systematic difference between the Livermore ratios
     and the Los Alamos ratios.  The Los Alamos ratios are higher
     by a factor of two to five, something like that.  That
     difference, the latter difference that I mentioned could
     well be some sort of an effect of diluting the signal with
     chloride.
               That's possible.  We haven't proven that that's
     the case or proven it's not the case, but I agree with your
     line of reasoning and that's part of our path forward is to
     look into that.
               The fact that we don't see any bomb pulse, I don't
     think, can be explained by chloride dilution, because you
     would have to dilute the entire chloride inventory by a lot
     of chloride, and we would pick that up. 
     We would know that.  We would be measuring 20 to 50 or 100
     ppm of chloride in the leached sample, and we're not seeing
     that.
               We're seeing a ppm to two ppm, June is seeing a
     couple of ppm.  So I think there's two different things that
     you need to bear in mind.  There is the difference -- there
     is the fact that our results are systematically different
     and this might have something to do with laboratory
     protocols.     
               But then there is also the fact that we don't see
     any chlorine-36 or bomb pulse chlorine-36, and I don't
     believe that that can be explained by dilution with dead
     chloride.
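[The dilution argument above can be checked with simple mass-balance arithmetic.  This is a hedged sketch with invented numbers, not either laboratory's data: mixing two chloride pools combines their chlorine-36 to chloride ratios weighted by chloride mass, so pulling a bomb-pulse ratio down by a large factor requires a correspondingly large amount of dead chloride.]

```python
# Hedged sketch of the dilution argument, with invented numbers (not
# either laboratory's data).  Mixing two chloride pools combines their
# 36Cl/Cl ratios weighted by chloride mass, so "dead" rock chloride
# (ratio ~0) dilutes a bomb-pulse signal in proportion to its amount.
def mixed_ratio(pore_cl_ppm, pore_ratio, rock_cl_ppm, rock_ratio=0.0):
    """36Cl/Cl ratio (x 10^-15) of a mixture, weighted by chloride mass."""
    total_cl = pore_cl_ppm + rock_cl_ppm
    return (pore_cl_ppm * pore_ratio + rock_cl_ppm * rock_ratio) / total_cl

# Pulling a 1,200 x 10^-15 signal down to 150 takes about seven parts
# dead chloride to one part pore-water chloride (an eightfold dilution):
diluted = mixed_ratio(pore_cl_ppm=2.0, pore_ratio=1200.0, rock_cl_ppm=14.0)
# Total chloride is then 16 ppm -- far above the 1 to 2 ppm actually
# measured in the leachates, so that much dilution would be noticed.
```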
               MR. CAMPBELL:  But you're also not seeing modern
     chlorine-36.
               MR. CAFFEE:  Bomb pulse, modern chlorine-36.
               MR. CAMPBELL:  I mean pre-bomb pulse.  If the
     ratio has been somewhere between 450 and, say, 1150 for the
     last 50,000 years, maybe the last million years, and you're
     looking at ratios of 50 to, say, about 150, you're looking
     at very old chloride.
               So it's not just that you don't see bomb pulse
     chlorine-36.  What you're seeing is very old chloride
     relative to any of the samples that Los Alamos has.
               MR. CAFFEE:  That's right.  If there were no other
     data in this world and we only had the Livermore data and I
     couldn't do any more experiments, I couldn't look and see at
     the stepped release of chloride and I couldn't investigate
     these processes, and someone forced me to interpret this, I
     would have to say that based on the fact that the modern
     input is about 500 and our ratios are less than 200, that
     we're seeing decay of chlorine-36.
               So we would interpret our data as indicating very,
     very old pore water, older than -- comparable to a half-life
     of chlorine-36.
               Now, I think it's way too early to try to
     interpret that data like that, but if that's all I had,
     that's the way I would interpret it.
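[The decay reading sketched above can be made concrete with a back-of-envelope calculation, assuming a steady background input near 500 times ten-to-the-minus-15 and the chlorine-36 half-life of roughly 301,000 years; the numbers and function are illustrative, and the speaker himself cautions against interpreting the data this way yet.]

```python
import math

# Back-of-envelope age implied by reading the low ratios as pure
# radioactive decay from a steady background input (a hypothetical
# interpretation, per the testimony; numbers are illustrative).
HALF_LIFE_YR = 301_000   # half-life of chlorine-36, about 3 x 10^5 years

def decay_age(initial_ratio, measured_ratio):
    """Years for a 36Cl/Cl ratio to decay from initial to measured."""
    return HALF_LIFE_YR * math.log(initial_ratio / measured_ratio) / math.log(2)

# A background input near 500 x 10^-15 decaying to under 200 x 10^-15
# implies roughly 400,000 years -- on the order of one half-life,
# consistent with "very, very old pore water."
age = decay_age(500, 200)
```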
               MR. PETERS:  But as you kind of alluded to, if you
     look at the Los Alamos data on the validation samples, it's
     comparable to what you would expect to see for background.
               If you go back -- right, June?  I mean, the
     chlorine-36 to chloride ratios for the Los Alamos validation
     samples are in the range of five, six,
     700 times ten-to-the-minus-15, which is very similar to what she
     -- what you were seeing in background throughout the ESF.
               Now, granted, they haven't seen anything above
     1,200 in those Sundance fault cores yet, and I say yet,
     that's key, but --
               MS. FABRYKA-MARTIN:  I don't think we will.
               MR. PETERS:  Okay.  Fair.
               MS. FABRYKA-MARTIN:  There's not that much core
     left for us to process at Los Alamos.  Most of the core has
     already been spoken for.
               So we may not have a whole bunch more samples
     coming from validation bore holes, but the ones we do have,
     it's true, they're background.  I wouldn't call it bomb
     pulse in what we've had come back.
               On the other hand, when we did find a bomb pulse,
     we had a structural geologist pick our location, saying this
     looks like a likely flow path, sample here.
               We were very careful to try to sample right along
     the fractures, so we would maximize the amount of fracture
     surface in our samples, and that's not the case for the bore
     holes.
               MR. HORNBERGER:  Can you -- well, you certainly
     have looked to see whether or not the processing is
     different at some coarse level.  Can you summarize if there
     are any coarse differences?  Are people looking at different
     size fragments, leaching times, are the leaching times
     different, anything different sort of at that macroscopic
     level?
               MS. FABRYKA-MARTIN:  He shook his samples for
     seven hours and we just let ours stew in the soup pot at
     room temperature for a minimum of two days, stirring it once
     a day.
               So it's passive versus an active extraction
     procedure, and that's what we're both investigating now,
     what is the effect of that on extracting both total chloride
     and then, even more importantly, on the chloride to bromide
     ratio, which is the indicator of how much of it is coming
     from the rock.
               MR. PETERS:  So have you both looked at the
     chloride to bromide ratios?
               MS. FABRYKA-MARTIN:  We're trying.  The problem is
     that the leachates are so extremely dilute, that we can both
     get good numbers for chloride, but when it comes to bromide,
     it's really dicey.
               So right now, it looks like what we need to do is
     to up the scale of the experiments large enough that we can
     take a large enough volume of leachate out at each time
     step to evaporate down and concentrate it.
               But the problem we have is, at least at Los
     Alamos, is my chemist is about 100 percent Busted Butte
     project, which means she only does analyses pretty much as a
     favor, because there's not really a lot of funding and this
     is a much lower priority than Busted Butte.  And so she's
     processed -- measured a lot of samples, but it takes a
     couple months to get the analyses out.
               MR. CAFFEE:  Let me agree with pretty much that in
     its entirety.  One of the things that we do want to do is
     measure the bromine in these fractions and one of the things
     that we want to do, and we've started, but we haven't
     completed the analyses, is leach for two hours, leach for
     four hours, leach for six hours, eight hours, maybe go back
     and look at fines, I think that's a good idea, too, do all
     of these kinds of things.
               It takes time to do these kinds of things and
     we're working on it and we don't have the results in any
     state where we could talk about them today.  
               Regarding the bromide measurements, I think that's
     really important.  It's just a hard measurement.  It's a
     very hard measurement and so we have a new chromatograph and
     we're trying to make those measurements, but we just have to
     get to the point where we feel confident that they're right.
               MR. HORNBERGER:  Mark, you mentioned in your
     presentation that you had a chlorine-36 peer review panel. 
     I guess that's new to me.
               Can you tell me how many are on it and how many
     people are from inside the program and how many people from
     outside?
               MR. PETERS:  I wouldn't be able to recount the
     names of the folks.  It was four folks.  It was a DOE
     initiated external peer review committee.  DOE went through
     a whole series of these back in that same timeframe, the
     pre-VA release timeframe, and this was one of those.
               In fact, they -- one of their big pushes was to go
     to more of a different sampling approach with a cross drift,
     and that was actually implemented.
               So, as part of their comments, we did different
     ways of sampling for the features in the cross drift than we
     did in the ESF, but they did a full-blown peer review and
     basically agreed with where we were with the program at that
     time, anyway.
               MR. WYMER:  Any other questions?  Bill?  Well,
     we're pretty much right on schedule.  Thank you very much. 
     It's a very important issue.  I hope that it gets resolved
     shortly.
               MR. GARRICK:  The next presentation is on fluid
     inclusions.
               MR. WYMER:  Let's get started here.  In light of
     the confusion, I think I would just as soon you would
     introduce yourselves as you speak.
               As I understand it, what we're basically trying to
     decide is whether the Yucca Mountain repository is a shower
     or a bathtub.  Please, proceed.  Introduce yourself and your
     affiliation.
               MR. DUBLYANSKY:  I would say Jacuzzi rather than
     bathtub.
               Gentlemen, my name is Yuri Dublyansky.  I
     represent the State of Nevada here and, also, I am an
     employee of the Russian Academy of Sciences.  I am a senior
     scientist in the Institute of Geology, Geophysics
     and Mineralogy.
               I am very glad that I have this opportunity to
     introduce the fluid inclusion issue for you and the reason I
     am so happy, and I even flew all the way from Siberia
     yesterday to attend this meeting, the reason is that this
     issue, which I believe is very important, exceptionally
     important for the Yucca Mountain safety and performance, has
     been neglected for quite a few years.  That's my personal
     opinion.
               Also, I want to say the name of fluid inclusion
     issue is a little bit misleading.  The real issue is not
     the fluid inclusions themselves.  The real issue is the
     origin of the secondary minerals at Yucca Mountain.
               Were they formed by hot water passing through the
     mountain or were they formed by rain water percolating down
     through Yucca Mountain?  That is the real issue.
               And fluid inclusions are just a tool, a very
     convenient, very robust tool, which allows you to determine
     temperatures at which the minerals were formed.
               This information, the temperature of formation, is
     very important when you try to identify the origin of the
     minerals.
               Now, I also want to mention that the fluid
     inclusion method is not something new.  It's a very well
     established technique.  It's been around for almost a
     century.  It's widely used in many fields of
     geology, like oil exploration, mineral exploration, and
     many, many different applications.
               As I understand, the ACNW and NRC have not been
     exposed to this issue for some reason.  So I was asked to
     prepare a short overview of the fluid inclusion work done at
     Yucca Mountain.
               First, I want to make it clear, it's not something
     new.  This fluid inclusion issue has been around for almost
     a decade.
               In 1992, a panel of the National Academy of
     Sciences/National Research Council reviewed the hydrothermal
     upwelling concept, which had been identified by a DOE
     scientist as a potential site suitability issue.
               This review was requested by DOE, because DOE
     needed to have input to make a decision whether to proceed
     with characterization or not.
               So even though this Academy of Sciences
     panel discarded this issue, the panel did recommend that
     fluid inclusion research needed to be done.  It was an
     official Academy of Sciences report on the issue.
               As far as I know, the first fluid inclusion data
     were published in 1993 by Los Alamos researchers Bish and
     Aronson, who studied several calcite samples from depths of
     30 and 130 meters from bore holes, and they measured
     fluid inclusion temperatures well in excess of
     100 degrees Centigrade.
               I personally have reason to believe that
     those data are technically deficient and the temperatures
     are probably not correct, they're unreasonably high, but I
     cannot check it, of course.
               In 1994, the USGS and Los Alamos researchers
     published a more detailed paper on fluid inclusions from the
     unsaturated zone at Yucca Mountain.  Since the ESF tunnel
     had not been built by that time, they studied samples from
     drill cores.
               Well, this paper leaves a little bit of a mixed
     impression.  On one hand, it does report fluid inclusions
     suitable for thermometry, suitable for determination of
     temperatures, in calcite from Yucca Mountain, but on the
     other hand, this paper does not report any numeric data.
               They just made a statement that this calcite
     was formed at a temperature of less than 100 degrees,
     which doesn't really help, because 20 degrees to 100 degrees
     Centigrade is basically the whole range of water, from cold
     water to thermal water, which you can expect close to the
     earth's surface, as understood in hydrogeology.
               In 1995, I had the opportunity to collect my first
     samples, working for the State of Nevada, from the tunnel,
     from the first 300 meters of the ESF which were excavated by
     that time, and I did fluid inclusion, preliminary fluid
     inclusion research on these samples and reported my finding
     to the State of Nevada.  
               Later in 1996, part of this work was published in
     abstracts for two national conferences, Geological Society
     of America conference and the American fluid inclusion
     conference.
               Unfortunately, late in 1995, the State of Nevada
     -- the funding for the State of Nevada oversight activity
     was cut and during the next two and a half years, no work
     was done by the State of Nevada scientists, because just we
     didn't have money.
               In 1998, several events occurred.  First, at the
     Pan-American conference on fluid inclusions, here in Las
     Vegas, the USGS researchers Roedder and Whelan went on
     record by stating that the calcite samples which they
     studied from the ESF do not contain fluid inclusions
     suitable for thermometry, suitable for determination of the
     temperature, and, therefore, those calcites were formed by
     rain water percolating through the mountain.
               At the same conference, I presented my data,
     which I obtained from basically samples from the same
     tunnel, and those data indicated elevated temperatures.  I
     did measure fluid inclusion temperatures, I found inclusions
     which are suitable, and the temperature was up to 85 degrees
     Centigrade.
               So there was quite a disagreement between two
     groups of researchers.
               Later, in 1998, the U.S. Nuclear Waste Technical
     Review Board reviewed several reports by the State of
     Nevada scientists, and one of those reports was a report on
     fluid inclusions.
               A consultant to the NWTRB, Professor Bob Bodnar,
     from Virginia Tech, evaluated this report, and he not
     only evaluated this report, but he also invited me to
     his lab and we spent some two days running samples on his
     equipment, and finally he was convinced that the data which
     I reported in my '95 report are real and they are not
     artifacts.
               So after that, the Nuclear Waste Technical Review
     Board concluded that the fluid inclusions found in mineral
     deposits at Yucca Mountain do provide direct evidence of the
     past presence of fluids at elevated temperatures in the
     vicinity of the proposed repository.  So that was a letter
     report written by the NWTRB.
               In September-October of 1998, I did more detailed
     research on fluid inclusions and since, by that time, the
     state still did not have money for this research, this part
     of my work was funded by the Institute for Energy and
     Environmental Research.
               So the results were released through a press
     conference in Washington, D.C., and after that, 30 copies of
     this report were requested by one of the U.S. Congress
     committees.
               I also want to mention that before releasing this
     report, we sent it for external evaluation to three well
     known, well established experts on fluid inclusions in
     Europe, specifically in Austria, in France and in the U.K.,
     and all three experts agreed that the research was done
     properly and the quality of work is fine and the conclusions
     are probably fine, too.
               So the disagreement between the scientists
     representing DOE and the State of Nevada eventually led the
     DOE to initiate, in April 1999, a verification project on
     the fluid inclusion issue, which is currently underway.
     Dr. Jean Cline, who leads this project, will probably tell
     you much more about it in a few moments.
               I want to emphasize, and I'll probably make a
     correction here:  this project was not initiated in response
     to the hydrothermal theory proposed by Jerry Szymanski, as
     stated in one of the memos of this committee.  That
     statement is not correct.
               Actually, I started my fluid inclusion research
     viewing it simply as a means of testing the hypothesis of
     Jerry Szymanski about hydrothermal activity.  So I wasn't
     the one who proposed this concept.
               And when I reported my data and my
     interpretation, both the data and the interpretation were
     questioned and disputed by the USGS researchers, so we had a
     disagreement there.
               So DOE must be given credit for initiating the
     project which must resolve this controversy and must resolve
     the situation where two groups of scientists, which study
     samples collected from the same location, report very
     conflicting observations and interpretations.
               So that's my take on this project.
               Now, USGS researchers and State of Nevada
     researchers and UNLV people are doing a parallel study,
     basically on the same collected samples, and as far as I
     understand, they're getting basically the same elevated
     fluid inclusion temperatures from our samples.  But we still
     do have disagreement on the interpretation of these
     temperatures, and this probably will be the focus of the
     forthcoming discussion.
               In closing remarks, I want to say that, as you
     can see, this fluid inclusion issue was started probably in
     1995, when data became available.  Nevertheless, the data
     have not been used in the major DOE documents, in the
     viability assessment of 1998 and in the draft environmental
     impact statement of 1999.
               This leads me to suspect that these data will not
     be used in the site recommendation consideration report,
     either, which is due later this year.
               While it's fairly clear that fluid inclusion
     research is not complete and much more needs to be learned
     about temperatures, about ages of the minerals involved, and
     their distribution, it is my strong opinion that at this
     point, DOE has no evidence which would justify elimination
     of this issue from consideration in the TSPA.
               Thank you very much.
               MR. WYMER:  Thank you very much.  Are there any
     questions on this?  That's a very fine introduction to the
     problem.
               MR. DUBLYANSKY:  Just one moment.  I have compiled
     a list of publications and they are attached to the handouts
     which I prepared and if you need them, I can try to compile
     the original publications and send them to you.
               MR. WYMER:  Okay.  Thank you.  John, do you have
     any comments or questions?  Thank you very much for the fine
     introduction.  I think Jean Cline is next, is that right?
               Would you give us your affiliation?
               MS. CLINE:  Sure.  I'm Jean Cline.  I'm an
     Associate Professor at University of Nevada-Las Vegas.
               Does anyone have a pointer?  Okay.  We'll wing it. 
     That's okay.  Could we dim the lights a little bit?  I've
     got one dark slide.  What I'd like to do -- can you hear me,
     before I get started?  Okay.
               What I'd like to do this afternoon is to tell you
     where we are in this project that we are now conducting to
     constrain the movement of fluid with elevated temperatures
     through the Yucca Mountain proposed site.
               When I put this proposal together, there were four
     questions that I posed and the project is constructed or
     designed to answer the following four questions.
               First of all, do fluid inclusion assemblages
     record the passage of fluids with elevated temperatures into
     the Yucca Mountain site?  Secondly, if the inclusions are
     present, what fluid temperatures do these inclusions
     indicate?
               Third, if present, when did these fluid incursions
     occur?  And then, finally, how widespread throughout the
     proposed site were these fluid incursions?
               The scientific plan that we put together to answer
     these questions involves four related studies and I'm going
     to quickly tell you what they are and the methods we are
     using in these four studies and then I'm going to go through
     each of these four a little bit more slowly and tell you
     what the results are and where we're at now.
               First of all, sampling.  The first thing that we
     did was collect 155 samples from throughout the ESF and the
     ECRB.  If you're looking for a handout, there isn't any.  I
     didn't know there was supposed to be one.  I apologize.
               Paragenesis study next.  This is a critical part
     of this study.  The idea was to put together a growth
     history of each of the samples that we collected and the
     idea was that if we could understand how each sample formed,
     we could then compare site to site and put together an
     overall history for the growth of minerals, secondary
     minerals throughout the repository site.
               The two main tools that we used to do this were
     petrography, mostly using the microscope, and then,
     secondly, chemical analyses using the electron microprobe.
               Additional tools that we are using include
     cathodoluminescence; also, oxygen and carbon isotopes, and
     we will probably also do some laser ICP-MS analyses to
     refine the chemistry a bit.
               Third, the fluid inclusion study.  Again, the
     first critical part was to place the fluid inclusions
     observed in each of the samples in the appropriate
     paragenetic context or growth history of each of the
     samples.  We also looked at the fluid inclusion petrography
     and then we did heating and freezing studies.  The heating
     studies will tell us about the temperature of the fluids
     that were trapped.  The freezing studies will tell us about
     the composition of the fluids.
               The final study is dating.  We'll be doing uranium
     lead, we're in the process now of doing uranium lead and
     uranium series dating, to put some absolute times on these
     fluid incursions, and, again, the really critical thing is
     to place the samples that we are dating in the appropriate
     paragenetic context with respect to the sample mineralogy
     and growth history and also the fluid inclusions.
               I want to tell you a little bit about the
     sampling, first of all.  Again, 155 samples from throughout
     the ESF and ECRB.  The goals were twofold on sampling; first
     of all, to collect samples of all of the different types of
     secondary minerals that were present and then to get a good
     spatial distribution.
               We tried to collect samples every 50 or 75 meters. 
     In some places, the gaps are a little bit larger.  That's
     because there's no secondary mineralization there.
               Moving on, I want to tell you about the
     paragenesis study.  This is a view of one of our thin
     sections.  It's about an inch and a half across length-wise
     here and what we're looking at is a very thin section of the
     rock as we look at it under the microscope.
               At the base, what we see in black is some tuff and this
     sample grew essentially layer by layer upward or outward
     from the tuff.  The mineralogy is mostly calcite and
     silicate minerals.  The blue is epoxy.  We needed to use
     epoxy to stabilize most of the samples.
               So where you see blue within the sample, that
     reflects zones of porosity and permeability, and there's a
     lot of it.
               What we see in this sample is tuff, which is
     overgrown by some of the early blockier calcite.  There are
     some discontinuous layers of silicate minerals, quartz,
     opal, additional tuff pieces that fell in and were
     encapsulated, bladed calcite that overgrows the earlier
     blocky calcite and silicates, and is then overgrown by outer
     sparry calcite, which also contains some layers of opal.
               This is one of the more complex samples from this
     site.
               Another section, just to show you there is some
     variability.  Here we see, again, at the base, some tuff
     layers, but in this sample, all we see is bladed calcite. 
     So this particular sample site did not see all of the events
     that produced all of the different minerals at some of the
     other sample sites.
               A third sample, again, at the base, some of the
     blockier, more massive calcite, a discontinuous layer of
     quartz overgrown by bladed calcite, which is overgrown in
     places by, again, the sparry, more equant calcite and opal.
               Here is some selective dissolution of some of the
     bladed calcite.
               What I'm trying to show you here is that the
     samples are in detail very heterogeneous, but that when we
     look from sample site to sample site, we see repetition of
     patterns and the goal here, again, is to put together a
     growth history for each of these samples and then to link
     these samples to compare them from site to site so that we
     know how the secondary mineralization formed throughout the
     repository.
               The second tool that we used, I said,
     significantly was the electron microprobe.  What I want to
     show you next is a probe map of this area right here.
               On the left, we have a back scatter electron
     image.  It reflects atomic number or atomic weight and you
     can see there is mostly one color of gray, so we have one
     mineral here, it's all calcite.
               The black is micro porosity or permeability and
     what the permeability or porosity outlines is some of our
     bladed calcite.  So we have bladed calcite overgrown by
     blocky sparry calcite, and on the right we have a magnesium
     map.  The bladed calcite is black, indicating very low or no
     magnesium.  However, it is overgrown by sparry calcite,
     which shows very fine, detailed oscillatory zoning of
     magnesium, interspersed bands of calcite with varying
     magnesium content.
               It turns out that this magnesium bearing calcite
     is one of the more important discoveries that we have made
     in terms of helping us put together these growth histories
     for these samples.
               This magnesium bearing calcite is present in more
     than 70 percent of the samples across the repository site. 
     It is always the outer-most and youngest calcite.
               So it's critically important in helping us link
     the mineralogy from site to site.
               I will come back to this calcite, because it also
     is critically important in understanding the age of the
     fluid inclusions.
               Cathodoluminescence is another tool we used.  This
     figure here is about two millimeters from top to bottom.  It
     just shows you some detailed oscillatory zoning of calcite
     that luminesces.  There are bands that luminesce
     interspersed with bands that do not luminesce.  Luminescence
     is probably caused by small amounts of manganese and the
     absence of iron.
               The important thing is that we look for the
     location or the presence of this in individual samples and
     then its presence allows us to link, again, samples from
     different sites, one to another, and, again, put together
     the big picture for the repository site.
               This picture is important in helping us place the
     fluid inclusions in context.
               Summarizing the paragenesis, we have 155 samples
     that are heterogeneous, yet they show consistent textural
     and mineralogical patterns.
               Importantly, the outer mineral zones, which
     reflect the youngest geologic events, are especially
     consistent; in particular, the bladed calcite and then the
     overgrown sparry magnesium enriched calcite.
               These patterns allow samples from different sites
     to be related to one another.
               What are fluid inclusions?  A little bit of
     background, for people that aren't familiar with these.
               Most minerals precipitate from some sort of fluid
     and as they precipitate, there are commonly defects that
     occur in the atomic structure of these minerals and these
     defects may result in the formation of small holes or
     cavities in the minerals.
               As the minerals precipitate, they are bathed in
     this fluid.  So this fluid will fill these cavities or holes
     and the mineral may overgrow these holes and seal off these
     small packages of fluid, thus creating fluid inclusions.
               They are important because they are samples of
     some ancient geologic fluid.
               Okay.  Now, if these fluid inclusions or these
     systems are forming at elevated temperatures, then as the
     whole system cools down, the fluid is going to contract. 
     However, the cavity does not change in size.
               So if the fluid contracts enough, eventually a
     vapor bubble is nucleated and this vapor bubble is
     essentially a vacuum.
               Now, what we can do to examine these inclusions is
     to reverse this process, to try to get at the temperature of
     the fluids that were trapped.
               What we do is we heat these inclusions up, we
     monitor their temperature.  As they heat up, the vapor
     bubble, if present, gets smaller and smaller.  The
     temperature at which it disappears is the temperature at
     which the inclusions homogenize, and that temperature
     approximates the temperature of the fluids that were
     trapped.
               In lower temperature systems, the cooling may not
     be sufficient to make the fluid contract enough to
     generate a vapor bubble.  So lower temperature systems may
     contain liquid-only inclusions.
               These are the two features that we'll be looking
     at.
               Here are some inclusions from Yucca Mountain. 
     What we see here, the dark line is the wall of the
     inclusion.  This is calcite, the mineral calcite.  Here is
     the wall of the inclusion.  The majority of it is filled
     with liquid and there's a small vapor bubble.
               Here is another inclusion, again, mostly filled
     with liquid and a small vapor bubble.
               This is what they typically look like, fairly
     small vapor bubbles.  However, these two phase inclusions,
     which really, again, reflect the trapping of higher
     temperature fluids are far outnumbered by liquid only
     inclusions, which are also shown on this slide.
               It's hard for me to see them at this angle.  These
     would reflect the trapping of lower temperature fluids.
               This is a typical data set.  We have completed collecting
     data from all of our samples.  Approximately half of the 155
     samples that we collected contained assemblages of two-phase
     fluid inclusions.
               What we see here are three different assemblages
     located in different places in a sample.  However, almost
     all of the inclusions homogenized over a four degree range
     from about 49 to 53 degrees C.
               This is extremely tight data.  It's probably the
     best I've ever seen and it's quite representative of what
     we've been able to collect at Yucca Mountain.
               By far, the majority of the two phase fluid
     inclusions at Yucca Mountain homogenize between 45 and 60
     degrees C.  There are a couple sample sites in the north
     ramp where we obtained higher temperatures.  Our
     temperatures are as high as 75 degrees.
               Others, Yuri Dublyansky and some of the USGS folks
     have obtained temperatures as high as 85 degrees, but most
     of our data are between 45 and 60 degrees.
               Where in the samples are these inclusions?  Here,
     again, we see a typical crust with earlier calcite
     overgrowing the tuff and in this case, the earlier calcite
     is overgrown only by the magnesium enriched sparry calcite. 
     So this sample site really saw two calcite forming events.
               This gray line here sort of is the dividing marker
     between these two types of calcite.  The dark squares here
     show the location of the two phase fluid inclusions, again,
     that reflect the higher temperature fluids.
               Outboard of this line, the only inclusions which
     we observed in this magnesium enriched calcite were one
     phase liquid only inclusions reflecting passage of lower
     temperature fluids.
               This pattern is consistent throughout the ESF and
     the ECRB.  Two phase fluid inclusions are trapped in early
     blocky or massive calcite and at the very base are in the
     cores of the earliest bladed calcite.
               One phase fluid inclusions are trapped in outer
     bladed calcite and outer magnesium enriched calcite.
               This slide really summarizes those facts.  Again,
     two phase FIAs, higher temperature fluids are recorded in
     the earlier calcite, two phase FIAs are not recorded in the
     outer or younger bladed calcite or magnesium bearing
     calcite.
               The two phase FIAs are very consistently either
     present or absent in different mineralogical zones and these
     patterns allow the relative timing of the elevated fluid
     temperatures to be constrained.
               Now, what we really want to do is to absolutely
     constrain the timing of these elevated fluids, and this is
     where we're at right now.  Here we're looking at the same
     sample again.  The two phase FIAs located in this area here
     and at the boundary or sort of outboard of this zone of two
     phase FIAs, there are, fortunately for us, intergrown,
     intermittent or discontinuous bands of opal, which contain
     enough uranium for uranium-lead dating.
               So what we are now in the process of doing is
     dating some of these opal samples to constrain the timing of
     this event.
               Within the magnesium enriched band, there is a
     second continuous layer of opal and, by dating that, we can
     determine the timing of the changeover from precipitation of
     the underlying calcite to the overlying calcite, and then
     there is an outermost opal, shown here in yellow, which was
     deposited synchronous with this outer band of magnesium
     enriched calcite.
               So right now we're in the process of dating these
     opal bands.  Again, the samples need to be carefully
     constrained paragenetically in the context of the sample
     mineralogy.
               To summarize, our goals for dating are to
     constrain the age of the latest magnesium enriched calcite,
     which is free of two phase FIAs.  In other words, which did
     not see the passage of elevated temperature fluids, to
     constrain the age of the earlier calcite that does contain
     the two phase FIAs, in other words, which saw the passage of
     fluids with elevated temperatures, and to determine the most
     recent timing of fluid passage with elevated temperatures.
               Concerning where we are with respect to project
     completion, we are on target for completion and distribution
     of reports at the end, about the end of March.  We are
     currently finalizing the petrography and the paragenesis. 
     We have completed the fluid inclusion analyses.  We are in
     the process of doing the dating and some of the other
     analyses, and we anticipate that they will be completed by
     the end of the year.
               We anticipate reporting significant results,
     including data, at the Geological Society of America meeting
     in mid-November.
               The final comment that I would like to make is
     that as we have been conducting this study, we have been
     meeting regularly with scientists from the USGS and the
     State of Nevada to discuss our procedures, our data, what
     we're doing, how we're doing it.
               We have also been meeting with a larger group that
     includes people from the NRC, Department of Energy, Nye
     County, the Nuclear Waste Technical Review Board and keeping
     them apprised of where we are at and what our results are
     and the goal here is really to inform anyone who is
     interested what we've done, how we've done it, with the hope
     that when we are through this project, there will be a broad
     understanding of what we've done and consensus on the data,
     if not on all the conclusions.
               Thank you.  I would be happy to answer any
     questions.
               MR. WYMER:  Thank you very much.  The former
     director of the Oak Ridge National Laboratory, Alvin
     Weinberg, used to say there was big science and small
     science.  This is certainly small science here.
               MS. CLINE:  Yes.
               MR. WYMER:  Any questions, John?
               MR. GARRICK:  No.
               MR. WYMER:  Do you have a comment over there? 
     Yes, please.
               MR. DUBLYANSKY:  Jean, by kind of making this
     distinction between elevated temperature and non-elevated,
     ambient temperature, can you put a number on that?
               MS. CLINE:  No.  It would be desirable to do so,
     but we can't.  We have homogenization temperatures as low as
     35 degrees C.  So we can have two phase fluid inclusions
     that were trapped as low as 35 degrees C.
               The one phase fluid inclusions, there is nothing
     that we can do to get at the temperature of trapping. 
     People have hypothesized a range of temperatures.  I've
     heard below 90 degrees, I've heard below 50 degrees, I've
     heard below 75 degrees.
               It's not -- I don't know how to test that.  So
     it's hard to say.  I guess I'll leave it at that.
               In general, I will review the process here.  The
     idea, again, is that as the system cools down, the fluid
     shrinks, and if it shrinks sufficiently, it generates a
     vapor bubble.
               So those fluids that cool down a lot will generate
     a big vapor bubble.  Those fluids that cool down a little
     bit or a lesser amount will generate a smaller vapor bubble. 
     Those fluids that cool down and shrink minimally will not
     generate a vapor bubble.
               But putting a number on those brackets, you can't
     do it, because in nature, it's going to vary from location
     to location, fluid composition to fluid composition and so
     on.
               I will leave it at that.
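     [The contraction argument above can be put in rough numbers.
     A minimal sketch, assuming pure water, an isochoric
     (constant-volume) inclusion, and standard liquid-water
     densities; it neglects the vapor mass and any pressure
     correction, so it is only an order-of-magnitude guide, not
     the study's method:]

```python
# Back-of-envelope sketch: volume fraction of an isochoric fluid
# inclusion occupied by the shrinkage bubble after cooling to an
# observation temperature.  Assumes pure water; neglects vapor mass
# and pressure corrections.  Densities in kg/m^3 (standard values).
RHO = {25: 997.0, 35: 994.0, 50: 988.0, 80: 971.8, 100: 958.4}

def bubble_fraction(t_trap_c, t_obs_c=25):
    """Vapor-bubble volume fraction seen at t_obs_c for a fluid that
    completely filled the inclusion when trapped at t_trap_c."""
    return 1.0 - RHO[t_trap_c] / RHO[t_obs_c]

for t in (35, 50, 80, 100):
    print(f"trapped at {t:3d} C -> bubble ~ {100*bubble_fraction(t):.1f} vol%")
```

     [Fluids trapped near 35 degrees C would leave a bubble well
     under one volume percent, which is why low-temperature
     inclusions often show no visible bubble at all.]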
               MR. DUBLYANSKY:  I just wanted to comment on that,
     because from your presentation, seemed to be kind of very
     clear.  We have two phase fluid inclusions, elevated
     temperature, we have one phase fluid inclusion, it's low
     temperature.
               It's not correct, because if you have a two phase
     fluid inclusion, you have an advantage, you can measure the
     temperature.  If you have a one phase inclusion, you cannot
     actually tell the temperature.  It may be 30 degrees
     Centigrade, it may be 50, it might be 60, but for some
     reason, the shrinkage bubble did not form.
               So you should not probably put an equal sign
     between one phase fluid inclusion and low temperature fluid.
               MS. CLINE:  That I would argue with.  I think we
     can't put a number on it, but in general, we can say that
     the lower temperature inclusions are those that don't
     generate a vapor bubble and the higher temperature
     inclusions do generate vapor bubbles, and we see very, very
     clear patterns.
               We see certain mineralogical bands which contain
     two phase and one phase inclusions and then we see other
     mineralogical bands that very consistently contain only one
     phase inclusions, and I'm real comfortable saying that there
     is a temperature differential that's reflected by the
     presence of those, but I can't put a number on it.
               MR. DUBLYANSKY:  That means there could be this
     difference, but it does not necessarily mean that the
     temperature was ambient.
               MS. CLINE:  Absolutely.
               MR. DUBLYANSKY:  That was the clarification that I
     wanted to make.
               MR. WYMER:  Is it clear that all the gas in the
     bubbles is water vapor?
               MS. CLINE:  Good question.  It depends on your
     hypothesis, on what was trapped.  If you believe it was
     meteoric water, then it probably is just air, or essentially
     a vacuum created by the contraction of the meteoric water.
               If you believe that the fluids came from somewhere
     else, from a source in which those fluids might be
     transporting dissolved gases of some sort, and in other
     systems, it's most definitely been shown that that can
     happen, then that vapor bubble could be something else.  It
     could be carbon dioxide, it could be a mixture of carbon
     dioxide and other gases.
               There are some tests that can be done.  They're
     difficult to do and the tests that have been done to try to
     distinguish that have not definitively shown things one way
     or another.
               MR. WYMER:  What is the biggest ratio of gas to
     liquid that you've seen?
               MS. CLINE:  We usually -- the call that we've made
     is ten percent, ten volume percent or less gas, 90 volume
     percent or more liquid.
               MR. WYMER:  That's the upper limit.
               MS. CLINE:  Yes.
               MR. WYMER:  That's a lot of contraction.
               MS. CLINE:  I should back up.  What I talked about
     are two types of inclusions, the two phase liquid-vapor
     inclusions, the one phase liquid only inclusions.  There are
     also some vapor only inclusions that are not well
     understood.  There are not very many of them, but they are
     present.
               Then there are also some other liquid-plus-vapor
     inclusions that have larger vapor bubbles, but these
     inclusions tend not to have consistent liquid-vapor ratios
     and that makes you suspicious that they formed under some
     other circumstance than the ambient conditions at the time.
               So we throw those out because they are
     inconsistent.  They probably formed as a result of leakage
     or perhaps what we call necking down when the inclusions
     were formed.
               We have textures we can use to sort those out.
               MR. WYMER:  Can you say what the age of the most
     recently formed bubbles is?
               MS. CLINE:  No.  That's the data we're waiting on
     right now.  We have a number of samples that have been
     submitted to a lab in Ontario to date the opal.  They've
     been there for a long time.  We thought we'd have more
     numbers by now.  We hope to have them at any time and we
     hope to be reporting those at GSA.  That's a big question.
               MR. WYMER:  Yes.
               MR. DUBLYANSKY:  Can I just add a little bit?  To
     your previous question about gas inclusions.  Yucca Mountain
     does contain all-gas inclusions.  It basically does not
     contain visible water.  And I tried to study these
     inclusions by using Raman spectrometry, but all I got is a
     huge -- which indicates luminescence, which is normally
     interpreted as the presence of aromatic hydrocarbons there.
               Enough indication that that's probably the case,
     but this luminescence decays with time, as the hydrocarbons
     decompose.
               But still I was not able to identify any
     particular gases.  It's a very interesting subject and I
     have never seen such inclusions in any other environments. 
     It's very unique, I would say.
               MS. CLINE:  One thing that I could add.  The
     freezing point depression gives us some information on the
     composition of the fluids and that freezing point depression
     is very small.  So it indicates that there's very minimal
     salts or minimal gases dissolved in the fluid.  Pretty close
     to pure water.
               MR. DUBLYANSKY:  Can we put a number on that?
               MS. CLINE:  The freezing point depressions were
     about half -- I don't know, Nick, can you help me out?  Do
     you remember what either the freezing -- minus .6 was the
     freezing point depression.  So about a weight percent NaCl
     equivalent.
               MR. DUBLYANSKY:  So it's not quite -- one weight
     percent of NaCl is not quite fresh water.
               MS. CLINE:  It's not pure water, but it's close to
     pure water.
               MR. DUBLYANSKY:  It's brackish, I would say, in
     terms of hydrogeology.
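     [The conversion from freezing point depression to salinity
     discussed above is commonly done with an empirical fit such
     as Bodnar's (1993) equation for the H2O-NaCl system; the
     choice of that equation here is an editorial assumption, not
     something stated in the discussion:]

```python
def wt_pct_nacl_equiv(fp_depression_c):
    """Salinity (wt% NaCl equivalent) from the freezing-point
    depression theta (degrees C, positive), using Bodnar's (1993)
    empirical fit for low-salinity aqueous inclusions."""
    th = fp_depression_c
    return 1.78 * th - 0.0442 * th**2 + 0.000557 * th**3

# The quoted -0.6 C depression gives roughly one weight percent.
print(round(wt_pct_nacl_equiv(0.6), 2))
```

     [For a 0.6 degree depression this returns about 1.05 wt%
     NaCl equivalent, matching the "about a weight percent" call
     in the exchange.]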
               MR. GARRICK:  Do you have an opinion about the
     results you might get in terms of whether or not there's a
     real safety issue associated with the repository? 
               What kind of results would give you some concern
     about the safety of the repository?
               MS. CLINE:  I guess I'm hesitant to answer that. 
     What we see now -- I think everybody agrees who has been
     working on this that the fluid temperatures are somewhere in
     the neighborhood of 50 degrees C and in one area they may
     get as high as 75 or 80 or maybe even 85 degrees C.
               What does that mean in terms of engineering the
     repository site?  I don't know that.
               MR. GARRICK:  That's a question we have to answer. 
     We have to answer the so what question and you haven't
     answered that.
               MS. CLINE:  I think part of the answer -- well,
     part of what we're doing will help you answer that question,
     it's going to be the timing of those 50 degree plus fluids. 
     We're waiting for these dates on these opals.  They're going
     to tell us if that happened in the last half a million
     years, million years, five million years ago, nine million
     years ago.  I think that's going to be a big part of the
     answer.
               MR. GARRICK:  What if it's ten million years ago?
               MS. CLINE:  Well, it's a blink of an eye in
     geologic time.
               MR. GARRICK:  I'm talking about the safety of the
     repository.  What's it mean?
               MS. CLINE:  We would have to take the data that we
     get from this study and then put together our hypothesis of
     where that water came from and until I get the dates, we
     can't do that.
               If the hot water happened ten million years ago,
     there were hot volcanic rocks there ten million years ago. 
     That's the obvious answer.  If the hot water was there a
     million years ago, something else was responsible for hot
     water being there, and I don't know what that is at this
     moment.
               I'm willing to wait for the dates before I try to
     answer that.
               MR. GARRICK:  Thank you.
               MR. WYMER:  Any other questions or comments?
               MR. GARRICK:  Do we have another presentation?
               MR. WYMER:  Who is next?  Would you introduce
     yourself and your affiliation, please?  Thank you, Jean.
               MR. GARRICK:  Thank you.
               MR. PETERMAN:  My name is Zell Peterman and I'm
     with the USGS.  I'm currently the Team Chief for the Yucca
     Mountain Project Branch Environmental Science Team.  We
     certainly agree with Yuri's earlier statement that this is
     not a fluid inclusion issue, it's a calcite multiple
     fracture filling issue, but I have to add to that that fluid
     inclusions aren't the only evidence that's going to resolve
     this issue.
               In fact, we think it's already resolved.
               What I would like to do is step back in time a bit
     and if somebody will show me how to work this projector, the
     computer thing.
               The USGS has been studying these fracture
     minerals, first, in drill core and then later in ESF since
     about 1989 and prior to that, there were some early studies
     in the mid '80s of calcite fracture fillings in drill core.
               Listed at the bottom here are the people actually
     doing the work.  Joe Whelan is the person principally in
     charge of our fluid inclusion work and the stable isotope
     work.  Leonid Neymark and Jim Paces are doing the isotopic
     dating and Brian Marshall and, I guess to some extent,
     myself are concerned more with the heavy isotope signatures
     in these materials.
               So let's have a quick look here.  Can I move
     around here?  Is that coming through?  What do these things
     look like?  The fracture minerals occur in fractures and
     lithophysal cavities.  Calcite and opal are the main
     minerals, although there are other minerals present, clay
     minerals, zeolites, manganese oxides and probably some
     others.
               The coatings, as Jean said, range in thickness
     from several millimeters to several centimeters.  These are
     just two examples.  The one on the left is an example of a
     fracture surface and there is a scale there for reference. 
     The small divisions are one millimeter.
               The one on the right is the floor of a lithophysal
     cavity.  Lithophysal cavities are gas bubbles that form at
     the top of the ash flow sheets as the ash flow degasses, and
     they're commonly lined with high temperature minerals all
     the way around.
               Typically, silica polymorphs and alkali feldspar
     are the predominant minerals.  There are trace amounts of
     hematite, some garnet has been found, things like that, but
     they encrust the whole inside of the lithophysal cavities.
               In contrast, the low temperature minerals, calcite
     and opal, are typically on the bottoms of these cavities.
               Now, the green coloration is opal, which is
     fluorescing under ultraviolet light, and it's fluorescing
     because it has a rather large uranium content, up to 500 ppm
     uranium, typically around 100 to 300 ppm uranium, and this
     is the key to the dating work.
               This is what we can date.  We can't date the
     calcite directly because of its very low uranium content.
               Why study these fracture minerals?  As far as
     we're concerned -- the USGS is concerned -- they are the
     physical records of long-term infiltration through the UZ. 
     This is -- I think what Jerry Szymanski refers to as the
     USGS rain water hypothesis, and actually I kind of like
     that.
               Our conclusion after looking at all sorts of data
     is that these were formed by downward percolating water.
               We can date these things, as I mentioned, by
     uranium series, uranium lead and carbon-14, and provide a
     history of deposition and, therefore, a history of the
     fluids involved in depositing these materials.
               The calcite especially also contains an isotopic
     record of the source fluids and we look at oxygen, carbon,
     strontium and U-234-238 ratios and they contain fluid
     inclusions that may yield information on the thermal history
     of the rock mass.
               From 1990 to 1995, the only thing we had to look
     at were core samples and this is an exceptional sample here. 
     And we tried to do some dating, some mineralogy, some
     isotopes.  Our eyes were really opened when the ESF was
     constructed and we found deposits like this, which never
     would have survived a coring process.
               So we were really misled by what was available in
     only the cores.  And the ESF materials are far superior.  We
     can collect good samples and, again, this green coloration
     is fluorescing opal on the tops of these very delicate
     calcite crystals that are growing up from the base of a
     lithophysal cavity.
               So this is the history, that was the core.  Core
     studies were up to 1995 and in '95 we ramped up our efforts
     because of ESF, did a lot of isotopes, mineralogy, fluid
     inclusions.
               Our early focus was on the history of the
     outer-most surfaces, because we were testing a model at that
     time, in 1995, that, in post-glacial times, the PTN was
     acting as a unit that moved the flow of water laterally
     rather than allowing it to come down.
               So we were tasked with finding the youngest age on
     the outer-most surfaces that you can find, the idea being
     that we probably wouldn't find anything less than 10,000
     years.
               A more recent focus has been the long-term
     deposition record and the thermal history of the UZ, the
     compositional evolution of fracture water, and turning our
     age information and abundance information into some estimate
     of seepage flux.
               So what do these things look like in the ESF?  And
     you will see some of these tomorrow.  There are primarily
     two major occurrences.  There are other minor occurrences,
     as cementation in fracture zones, things like that.  The
     depiction on the left shows a fracture that opens up, and
     you will see a lot of these in ESF and you will see that the
     deposits are typically on the foot wall side of these
     moderately to steeply dipping fractures.
               The one on the right is a depiction of a
     lithophysal cavity and there you will see that typically the
     deposits are on the bottom.
               Our interpretation here, and this is not an
     interpretation that's agreed to universally, is that this
     indicates some sort of film flow moving down these fractures
     and it's moving under the force of gravity.  So it's staying
     on the low sides of the opening.
               In contrast with what we see in the UZ, and what
     you will see in the ESF, is the calcite below the water
     table, and, again, we're looking at drill core.  That
     calcite coats all the surfaces of fractures and commonly
     fills the fractures.
               As I say, we interpret the occurrences as
     indicating downward flow along fracture foot walls and
     cavity floors.  We view this as pretty strong evidence that
     those cavities have not been repeatedly hydrologically
     saturated.
               Jean made these same observations.  Calcite and
     opal are intimately associated in microstratigraphic
     relationships.  In other words, you can develop the micro
     stratigraphy starting with the base, lying on the tuff,
     going to the outer surfaces, and, in many cases, you can
     convince yourself that this is an age progression.
               We have data from WT-24 which says that the
     average calcite abundance in the rock mass in the
     crystal-poor member of the Topopah Spring Tuff is .24 weight
     percent, and I will show you that data in just a minute.
               Calcite dominates, as you saw in Jean's slide,
     with probably ten percent or less opal and other minerals. 
     The deposits aren't homogeneously distributed in the ESF. 
     Another way we've measured the abundance is to conduct line
     surveys, where we actually stretch a 30 meter line every 100
     meters and measure the thickness of the calcite deposits in
     fractures and in lithophysal cavities.
               And this is an example of the data from WT-24,
     which we thought was another way to get an idea, and WT-24
     was drilled by the LM-300.  So there were a lot of cuttings,
     a lot of ream cuttings, if you've seen the holes drilled by
     that rig, and these cuttings were captured for us as
     integrated five foot samples.  From those five foot samples,
     we ground them up, prepared a sub-sample and chemically
     analyzed the samples for CO2, and then we assumed that all
     the CO2 was in calcite and that's the way we get the .24
     weight percent.
               And that's the mean and, to me, this is analogous
     to, say, determining the grade of an ore body.  So in this
     case, the arithmetic mean is the appropriate measure, even
     though the distribution of values is highly skewed and looks
     almost maybe fractal in nature.
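     [The CO2-to-calcite conversion just described is a single
     stoichiometric ratio; a sketch, where the 0.106 wt% CO2
     input is a hypothetical value chosen only to reproduce the
     quoted mean:]

```python
# Sketch of the CO2 -> calcite conversion described above, assuming
# all measured CO2 resides in calcite (CaCO3).  Molar masses in g/mol.
M_CACO3 = 100.09
M_CO2 = 44.01

def calcite_wt_pct(co2_wt_pct):
    """Weight percent calcite implied by a whole-rock CO2 measurement,
    assuming every mole of CO2 came from a mole of CaCO3."""
    return co2_wt_pct * M_CACO3 / M_CO2

# A mean CO2 content near 0.106 wt% reproduces the quoted 0.24 wt%.
print(round(calcite_wt_pct(0.106), 2))
```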
               There are many, many openings in the ESF that
     don't have secondary minerals, and this is just an example
     of one.  This is a photo mosaic of the tunnel wall over ten
     or so meters, 20 meters maybe, and the red coloration
     depicts cavities that have secondary minerals and the white
     depicts those that don't have secondary minerals.
               So, again, if these were the result of upwelling
     water, I think we have to ask the question why aren't all
     the possible depositional sites occupied by calcite.
               In terms of the isotopic evidence that favors
     descending water, we have carbon isotopes and the youngest
     calcites overlap those of the surficial calcrete, and that's
     the ultimate source of the calcium.
               If you've been out to Yucca Mountain or anywhere
     in the desert southwest of the U.S., you will see thick
     deposits of calcrete that are ubiquitous.  And rain water
     comes down and periodically dissolves this and carries it
     down fractures.
               You can see this virtually anywhere you go.  The
     oxygen isotopes in the youngest calcites are consistent with
     meteoric water heated as it moves down the geothermal
     gradient.
               The strontium ratios in the youngest calcite
     overlap the values with calcrete, which you'll see in a
     minute.
               Work at Los Alamos has shown that the UZ calcite
     has pronounced negative cerium anomalies.  This is not
     observed in the saturated zone calcite and cerium anomalies
     are small or nonexistent in ground water.
               The 234-238 ratios are identical to the values in
     calcrete and runoff, and are much smaller than values
     observed in the tertiary volcanics or ground water.
               And the conclusion is obvious.
               MR. HORNBERGER:  Can you give me just a quick
     tutorial on the cerium anomaly?
               MR. PETERMAN:  This is Los Alamos' work and I
     think it has to do with the oxidation state, the multiple
     oxidation states and the solubility.  This is just an
     example of the oxygen and carbon isotope analyses that Joe
     Whelan has conducted.  I should have put the number up
     there.  There's a lot of measurements and Joe has also
     placed the deposits or determined the paragenetic sequence
     for the deposits, and, again, these are relative things, but
     he can classify things as early, intermediate and late, and
     in general, putting the isotopes in that context forms three
     discrete groups, but with a lot of scatter.
               But if you get enough analyses, you know that
     there are clearly three discrete groups there.  There is an
     early calcite that occurs below the cal layer that I think
     Jean referred to, and it has these low delta O-18 values,
     which Joe interprets as having formed from water with
     elevated temperatures, and he's taken water of minus 12.5
     and then, just using the fractionation factor, he would
     guess that those could have formed from water between 50 and
     80 degrees centigrade.
               Here is strontium data on calcite from drill core
     and at the top there is a histogram.  Unfortunately, there
     is not an indication of the number of samples, but the
     shortest box there would be one analysis and these are --
     this is strontium-87-86 values shown as delta 87, which is
     just the deviation of the 87-86 from that value for modern
     sea water.
               So the calcretes have a skewed distribution, but
     they peak at a delta value of between four and five, and you
     can see the shallow calcites pretty much mimic that
     distribution, and then as you go deeper into the UZ, the
     numbers go down, and then in the SZ the strontium values are
     quite different than the strontium values in the UZ.
               We have data from core water salts, from, I
     believe, SD-6, and we see a very systematic change with
     depth and what we see is the beginning of a certain amount
     of water-rock interaction right around the PTN and then
     continuing down.
               So we've got a combination of source, plus
     water-rock interaction here.
               Geochronology.  We've done carbon-14, uranium
     series and uranium lead dating, and as I said, our first
     emphasis was on the outer-most mineral surfaces and these
     three boxes on the left are nested here or correlated.  You
     can see they each represent different spans of time.
               We get carbon-14 ages as young as 16,000 years. 
     We get uranium series ages that span from very young to the
     limit of the method, which is 500,000 years, and then we get
     the uranium lead ages that go to 1.8 million years.
               It was this relationship that led to the concept
     that we're dealing with, very slowly depositing material and
     that a given thickness of these things represents a long
     amount of time.
               So that the challenge here is sampling for dating. 
     You will see in the next slide that we're looking at one to
     four millimeters per million years.  Now, we have to
     physically sample these with a demo drill, so we're
     integrating -- our samples will integrate over a finite
     period of time and the results then will be skewed as a
     function of the decay constant of the method being used.
               And an example would be if you had two layers of
     calcite, one that was modern and a subjacent layer that was
     a million years old and your sample included both of those
     layers, that composite sample would be 50 percent modern
     carbon and your age would be, what, 6,000 years, your
     determined age, but the real average age would be a half a
     million years.
               So that's the kind of bias you can get in the age
     work in dealing with the shorter half-life systems.
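     [The sampling-bias example above can be checked directly.  A
     sketch assuming the million-year-old layer is
     radiocarbon-dead and using the 5,730-year half-life; the
     equal mix of modern and dead carbon dates to roughly 5,700
     years (the "what, 6,000 years" figure), while the true mean
     age of the composite is about half a million years:]

```python
import math

T_HALF = 5730.0  # C-14 half-life in years

def apparent_c14_age(fraction_modern):
    """Radiocarbon age implied by a measured fraction of modern carbon."""
    return -T_HALF / math.log(2) * math.log(fraction_modern)

# Equal parts modern calcite (F = 1) and radiocarbon-dead
# million-year-old calcite (F ~ 0) give F = 0.5 for the composite.
mixed_f = 0.5 * 1.0 + 0.5 * 0.0
print(round(apparent_c14_age(mixed_f)))  # ~5730 years apparent age
```

     [The short half-life weights the determined age heavily
     toward the youngest carbon, which is exactly the bias being
     described.]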
               Now, when we go to the uranium lead system, it's a
     much longer half-life, so we're getting much closer to the
     real mean age of the material sample and the histogram on
     the left gives the current distribution of ages we have.
               There is this older calcite layer that's
     associated with this limpid quartz and some of that has
     enough uranium to date and the oldest age we've gotten so
     far is ten million years.
               Then it progresses up and I think the histogram is
     more of a function of how we've sampled what we've
     emphasized than it is a statistical distribution of ages.
               And this shows the data that have gone into the
     interpretation of the growth rate.  In the upper right-hand
     corner is a cross-section of a sample and, again, the green
     fluorescence is opal.
               This has opal embedded in it and opal on the
     outside.  You'll see an age around seven million years at
     the base and then 4.5 and 4.3 and I can't even read them all
     myself, .14 and .09 at the outer surfaces.
               So here is an example of what we call the micro
     stratigraphy and in this case, we seem to have it calibrated
     with ages.
               Right below that, you will see a series of lines
     associated with data and these represent different specimens
     for which we have the same type of data, and what we've
     done is just taken the thickness and normalized the
     thickness to unity.
               So plotted on the Y axis is just relative position
     in a crust or deposit, and then from the slope of that line
     you can calculate back and get some estimates of average
     growth, and those go from 1.3 to 4.1 millimeters per million
     years.  So we think they're very slow growing.
               This was verified by some work that was recently
     done on what's called the SHRIMP, the USGS-Stanford SHRIMP
     at Stanford, the sensitive high resolution ion microprobe,
     where you can actually zap a sample with, say, a 10 to 20
     micron diameter beam, a beam of oxygen ions, and get ages
     directly without having to go through sampling or chemistry.
               So in that other diagram, up at the top there, you
     will see a deposit of opal, one of these hemispheric opal
     deposits, which is on the outer-most surface, and that
     traverse has ages ranging from 5.1000 at the outer-most
     zapped point to 530,000, but with a huge uncertainty. 
     Anyway, the graph
     below that plots that data and, again, the growth rate there
     for that opal using those numbers is .72 millimeters per
     million years.
               So this substantiates our contention that these
     are very slow growing deposits.
               Fluid inclusions, Jean and Yuri have already
     covered this.  Joe sees the same thing that everybody else
     does, single phase liquid-filled inclusions, single phase
     gas-filled inclusions, two phase inclusions with variable
     liquid-gas ratios, and then two phase inclusions with small
     but consistent vapor-liquid ratios.
               And with certain assumptions, these may provide
     estimates of depositional temperatures.
               I think it's appropriate to comment here that when
     the major fluid inclusion thrust started, and I think Jean
     would agree that there was a lot of discussion on whether
     calcite was a suitable host for these fluid inclusions,
     whether calcite could be relied upon, because it was known
     from other studies that you look at the mineral crosswise
     and you're going to do something to the fluid inclusion.
               So there was a lot of discussion and I guess being
     a skeptic, I'm still somewhat skeptical, but I'm willing to
     be proven wrong.
               Anyway, as Jean said, we also find that about 50
     percent of the deposits contain fluid inclusions at 40 to 80
     degrees C.  Many of these are in the earliest calcite.  A
     few appear to be in the intermediate stage.  None have been
     found in the latest calcite.
               We think the fluid inclusion assemblage is
     consistent with calcite formation under vadose conditions,
     but at slightly elevated temperatures.  In other words,
     unsaturated conditions.  Yuri would say that these all
     formed in the sat -- that there was saturation, that these
     fractures were filled with water.  We don't agree with that.
               MR. HORNBERGER:  But you would still have to have,
     as Jean said, a model, a conceptual model for getting the
     elevated temperatures.
               MR. PETERMAN:  That's true, right.  Absolutely. 
     If we can believe the elevated temperatures.  I think one
     has to ask that question first and foremost, and then we
     have to ask how could you get those elevated temperatures.
               We know that at 12.7 million years, and probably a
     few decades after that, things were very hot.  We find
     evidence of fumarolic deposits at the top of the Topopah
     Spring that are 12.7 million years old, no doubt
     about it.  Just like the Valley of 10,000 Smokes, the water
     was getting very hot as it penetrated a little bit and the
     tuffs cooled and crystallized from the top down and there
     was water circulating and coming back out as very vigorous
     hot springs and steam vents and all that.
               There is no doubt about that whatsoever.  Then the
     question is how long did it take to cool the tuffs, say, to
     50 degrees.  I don't think we have a good handle on that.
               There is the Timber Mountain Caldera, which you
     referred to -- it's usually quoted at about 10.5 million
     years, the Timber Mountain Event.  Brian Marshall has done
     some thermal modeling and shows that at Yucca Mountain, 20
     kilometers away from a possible buried pluton, there is a
     thermal pulse that could hit Yucca Mountain about eight
     million years ago and then that has to decay down.
               Now, it's a very simple model, it's conduction
     only, there's no advection and all that.  It's something
     that needs to be pursued.
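     [As a rough check on the conduction-only idea, the peak of a
     conductive pulse from an instantaneous point source in an
     infinite medium arrives at t = x^2/(6*kappa).  This is an
     editorial back-of-envelope sketch, not Marshall's model, and
     the diffusivity value is an assumed typical crustal figure:]

```python
# Order-of-magnitude arrival time of a conductive thermal pulse from
# an instantaneous point heat source at distance x.  The peak of the
# point-source solution T ~ t**-1.5 * exp(-x**2/(4*kappa*t)) occurs
# at t = x**2 / (6*kappa).
KAPPA = 1e-6            # thermal diffusivity, m^2/s (assumed)
SECONDS_PER_MA = 3.156e13

def peak_arrival_ma(distance_km):
    """Time (in Ma) after emplacement at which the conductive pulse
    peaks at the given distance."""
    x = distance_km * 1000.0
    return x ** 2 / (6.0 * KAPPA) / SECONDS_PER_MA

print(round(peak_arrival_ma(20.0), 1))  # ~2.1 Ma for the 20 km distance
```

     [A pulse peaking a couple of million years after a ~10.5 Ma
     event is at least the right order to warm Yucca Mountain
     around eight million years ago, as the modeling suggests.]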
               MR. WYMER:  We are starting to push our time a
     little.
               MR. PETERMAN:  Okay.  I'm going to finish real
     quick.  This is just another example of distribution of
     fluid inclusions that Joe Whelan did.  This is from one
     little teeny chip, 320 measurements.  He's got the patience
     of Job.  The mean is 54 degrees.  There are some flyers out
     there, up to 80 degrees; excluding those, it's 54 degrees
     with a standard deviation of three.
               This bothers me a bit.  Two standard deviations is
     six, a total range of 12 degrees.  If we can measure these
     things to better than a degree, why do we get this
     dispersion?  It's telling me there are some other variables
     in here we don't fully understand.
               This is the USGS data now plotted against distance
     from the north portal.  Those higher temperature values that
     Jean referred to are in the north ramp, around 80 to 90
     degrees.
               Again, that's just a map showing the USGS data. 
     As for our attempt to constrain ages -- this was asked about
     -- so far, all we've done is try to sample opal that's
     immediately above a zone that contains fluid inclusions.
               We have ten ages now; our youngest age is 1.9
     million.  All of the others fall between eight and nine
     million -- here is one that's seven, okay, 6.5 or seven. 
     Again, they only provide a minimum age, and the maximum age
     is controlled by the tuff.
               So I would say on the right, the fluid inclusions
     are between 6.5 and 12.7 million years old.  On the left,
     they're between eight and 12.7 million years old, and this
     one sample has this bounding opal at 1.9.
               So as I say, I think we have ten numbers now and
     this is something that both UNLV and the USGS are pursuing
     vigorously.
               These are our conclusions.  We have a large and
     comprehensive data set that shows that low temperature
     fracture minerals formed from meteoric water percolating
     downward through the rock mass during the last ten million
     years or longer.
               I think the fluid inclusions may provide very
     interesting information on the thermal history of the rock
     mass and I think that's something that should be pursued as
     best we can.
               So that's it.
               MR. WYMER:  Thank you very much.  Are there
     questions?  Surely our geochemist must have a question.
               MR. DUBLYANSKY:  I have a question.  Do you have a
     conceptual model for how the high temperature calcite formed
     in the vadose zone -- how could it happen?  I didn't quite
     understand your answer.
               MR. PETERMAN:  Our contention is that those
     temperatures are real -- I still think we have to be a
     little bit skeptical, because of uncertainties, assumptions
     and all that -- but if those temperatures are real, then
     that means the rock had to be that hot to heat the downward
     percolating water.
               Now, this doesn't just have to be done by the
     cooling of the tuffs.  As I say, there is certainly a
     potential of an eight million year thermal pulse coming from
     the buried pluton in the Timber Mountain Caldera.
               There are other ways to move heat into the upper
     crust.  One way is detachment faulting, which is well known. 
     You bring hot rocks up along shallow dipping normal faults
     below cool rocks and you have a perturbation of the
     geotherm.
               And we know that detachment faults are not at all
     uncommon in southern Nevada.  There's a whopper of one just
     across Crater Flat over at Bear Mountain that did exactly
     that, and in that area, based on geochronology, there was
     activity as young as eight million years.
               So I think there are certainly ways to get heat
     into the unsaturated zone without pumping up hot water.
               MR. WYMER:  Well, John Garrick always likes to
     drive to the very practical end of things, so what does this
     all mean with respect to Yucca Mountain?
               I think I understand what each of you concluded,
     but do any of the three of you believe that there is a
     chance that hydrothermal water is going to come up and fill
     the cavities?  That's your position, that that's a
     possibility.
               So of the three of you, there's -- you want to
     vote?
               MR. PETERMAN:  I think we have to leave it up to
     the scientific community to evaluate these results and make
     their own conclusions.  We've already made ours.
               MR. WYMER:  How far off in time are we from having
     these answers?
               MR. PETERMAN:  I would say less than a year.
               MR. WYMER:  Less than a year.  You have another
     presentation?  I'm sorry, Yuri.
               MR. DUBLYANSKY:  I will just try to summarize what
     can be hypothesized about the sources of those fluids and
     the origin of the minerals from the data which are available
     now.
               So if you have more data or some new data that
     will show up, we will probably incorporate them into the
     model.
               But I base my presentation on what was done by
     USGS, at least what was available to me, and on my new fluid
     inclusion research, which was done just recently during this
     year.
               So I'm not sure about this idea of heating the
     rock due to faulting or some other thermal process.  That
     calculation has not been done for Yucca Mountain, and as
     soon as I have a more formalized model, I will be happy to
     make an assessment of that.
               We did, however, estimate the time required for
     cooling of the tuffs.  The age of the bedrock is about 12.7
     million years, and our very conservative estimates show that
     the cooling would require a maximum of about 100,000 years.
               So if we are to attribute the formation of this
     elevated temperature to the cooling of the tuff, we have to
     have the age of this calcite close to 12.7 or 12.8 million
     years, because the elevated temperature cannot last for
     millions of years.
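The 100,000-year cooling bound can be sanity checked with the standard conductive timescale t ~ L^2/kappa; the thermal diffusivity here is an assumed generic rock value, not a figure from the presentation:

```python
# Characteristic conductive cooling timescale, t ~ L^2 / kappa.
KAPPA = 1e-6              # thermal diffusivity of rock, m^2/s (assumed typical value)
SECONDS_PER_YEAR = 3.15e7

def cooling_time_years(thickness_m):
    """Order-of-magnitude conductive cooling time for a layer thickness_m thick."""
    return thickness_m ** 2 / KAPPA / SECONDS_PER_YEAR

# A few hundred meters of tuff cools in thousands of years; even a
# kilometer-scale pile cools within roughly 100,000 years, which is
# consistent with the "maximum of about 100,000 years" estimate above.
print(round(cooling_time_years(500)))
print(round(cooling_time_years(1800)))
```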
               Then we have this very well known Timber Mountain
     Caldera hydrothermal event, which is dated from 11.5 to ten
     million years ago; this event was extensively studied and
     dated.  So we have another time marker here.  And
     then if we see that our sample or our elevated temperature
     calcite was formed after that time, I cannot see any other
     explanation than to attribute it to thermal activity,
     hydrothermal activity, because, again, I repeat, I cannot
     see how the rock can be conductively heated to such a high
     temperature.
               Also, as I will discuss further, we not only have
     to heat it, but, if we believe all the data, we also have to
     maintain this temperature for very long periods of time, and
     I will come to this later.
               From the data which I had -- I didn't have this
     new data in which they measured ages of the quartz at ten
     million years -- these preliminary data indicate the oldest
     ages measured from the Yucca Mountain samples were about
     nine million years old.
               So they're still somewhat younger than the Timber
     Mountain Caldera hydrothermal event, and this new age is
     also between ten and nine million years.
               So I think this shows that the minerals that we're
     talking about are probably younger than the Timber Mountain
     Caldera, but also they are definitely younger than the
     bedrock tuff, and this is the quote from one of the USGS
     reports, so they seem to be aware of that.
               MR. HORNBERGER:  Yuri, you disagree that those
     ages represent a minimum age then?
               MR. DUBLYANSKY:  Well, minimum age, as far as I
     understand, applies to uranium series dating, which uses
     more short-lived isotopes.
               Would you say that uranium-lead ages also provide
     minimum estimates of ages?
               MR. PETERMAN:  The ten million years is on the
     chalcedony.
               MR. DUBLYANSKY:  Right.  The question was --
               MR. HORNBERGER:  But I understood Zell's point to
     be that you're sampling an interval and --
               MR. PETERMAN:  We're sampling an interval, but
     because of the long half-life of the uranium, we're still
     getting an average age, but it's closer to the real age of
     that material.
               MR. DUBLYANSKY:  On these graphs, I plot the
     thermal reconstruction of the Timber Mountain Event done by
     Bish and Aronson, these red arrows, and this work was done
     based on the transition between clay species.
               And the red rectangle shows the temperature, which
     was obtained from the ESF samples.
               You can see it's quite different: 85 to 90 degrees
     Centigrade at Yucca Mountain, just ten degrees short of
     boiling at this altitude.  If you just assume a normal
     conductive thermal gradient, which should be operational in
     the vadose zone, you will have here a gradient of about 200
     degrees Centigrade per kilometer, which I don't think is
     reasonable.
               So I don't think conduction can raise the
     temperature to 90 degrees at depths of about 30, 50 or 100
     meters from the surface and keep this temperature for a long
     time, and I don't see how we can do that without melting the
     rock.
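The roughly 200 degrees-per-kilometer figure cited above can be reproduced with simple arithmetic; the 20 degree surface temperature and ~350 meter depth are assumed illustrative values, not numbers from the transcript:

```python
# Implied geothermal gradient for ~90 C rock at shallow depth.
surface_t_c = 20.0   # assumed mean surface temperature, degrees C
rock_t_c = 90.0      # fluid inclusion temperature in the ESF samples, degrees C
depth_km = 0.35      # assumed sample depth below the surface, km

gradient_c_per_km = (rock_t_c - surface_t_c) / depth_km
# A normal conductive gradient is roughly 25 to 30 C/km, so ~200 C/km
# is far outside the steady-state conductive range.
print(round(gradient_c_per_km))
```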
               Now, let's discuss the ages a little bit.  First,
     we have this time marker -- well, mostly, the mineral which
     can be dated at Yucca Mountain is silica.  We can only date
     the latest calcites by uranium series ages, but most calcite
     cannot be dated because it doesn't seem to be young enough.
               So we have to use some indirect methods,
     particularly in those samples that do not contain silica
     phases.
               However, it is well known or established at Yucca
     Mountain that opal is always younger than ten million years,
     or probably eight million years, and the diagram which Zell
     was showing clearly demonstrated that.
               When you talk about opal, you are talking about
     something between eight million years and probably 100,000
     years, and, again, I refer to the work done by USGS.
               So if we see fluid inclusions associated with this
     late opal, we have to assume that this calcite is also
     young.  This is one example of the situation.
               This is calcite, and this is opal -- typical young
     opal, which normally occurs in the upper part of the samples
     -- and this is one of the fluid inclusions, with a gas
     bubble here.
               This is only one inclusion from a group of
     inclusions which can provide quite reliable temperatures.
               So in this case, we have to accept that this
     calcite cannot be older than eight million years.
               Here is another example, and it's a very
     interesting sample.  Again, we have opal on a crust of
     calcite which is about one and a half centimeters thick. 
     There was extensive dating of the opal from the samples
     collected from those cavities in this area, and the ages
     obtained range from six million down to 300,000 years or
     something like that.
               Again, if we accept this rate of deposition, which
     was determined by USGS to be from one to four millimeters
     per million years, we have to assume that the time required
     for generating this crust would have been at least between
     four and ten million years.
               Each millimeter of this calcite requires about a
     million years to grow.  It's quite a long time to form that.
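The growth-time arithmetic can be checked directly from the quoted rates; this is a sketch only, using the figures quoted above, and note the slow-rate bound actually works out to about 15 million years for a 1.5 centimeter crust:

```python
# Time to grow a ~1.5 cm calcite crust at the USGS deposition rates.
crust_mm = 15.0      # crust thickness, mm (about one and a half centimeters)
rate_fast = 4.0      # mm per million years (upper USGS rate)
rate_slow = 1.0      # mm per million years (lower USGS rate)

t_min_myr = crust_mm / rate_fast   # fastest plausible growth, Myr
t_max_myr = crust_mm / rate_slow   # slowest plausible growth, Myr
print(t_min_myr, t_max_myr)        # 3.75 15.0
```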
               So what I did was analyze the calcite layer by
     layer, above the opal and from the outside layers, and here
     are the temperatures.
               This part probably contains secondary inclusions,
     but those are very consistent temperatures, 50 to 52 degrees
     Centigrade, and these temperatures just persist through all
     of the crust.
               Again, it's a little bit in contradiction with
     what Jean was showing, that fluid inclusions occur only in
     the bottom.  In these particular samples, you can see fluid
     inclusions throughout the crust.
               MS. CLINE:  You can't equate position necessarily
     to age.  I guess what I'm saying is that I don't agree 100
     percent.  You can't just measure some thickness, attribute
     some number of years.  You can have a whole crust that grew
     ten million years ago.  You can have a whole crust that grew
     a million years ago.
               MR. DUBLYANSKY:  Exactly, that is my point, but I
     am not very comfortable with this rate of one millimeter or
     four millimeters per million years.  I just kind of
     exaggerate, but still my point is that here in this crust we
     have consistent temperatures throughout, and if these rates
     are correct, we have to assume something like an eight
     million year timeframe for this growth.
               I'm not saying that that's what was happening. 
     I'm just showing some kinds of problems with that.  But,
     again, these temperatures are very consistent and present in
     all the calcite.  This calcite looks to me like very typical
     bladed calcite.
               The next method we can use is stable isotopes,
     like Zell was suggesting, and this is a compilation of data. 
     Well, it's a kind of illustration of the data which is
     presented in one of the USGS reports.
               Indeed, this report says that early calcite,
     probably old calcite, almost always has this heavy carbon
     and is light in oxygen, and late calcite almost always has
     the reverse, light carbon and heavy oxygen.
               Late calcite falls in a well-defined band between
     minus five and minus eight per mil in carbon, and this
     calcite generally represents the age distribution over the
     last several hundred thousand years.  That is according to
     the work by USGS.
               So if we have calcite which has these values,
     then, again, if we accept this idea of USGS, we probably
     have to accept that we are talking about calcite with an age
     of a few hundred thousand years.
               So here is one of my samples which I studied. 
     This calcite does not contain opal, so it cannot be dated,
     but it does contain fluorite, not a very typical mineral in
     water, and it consists of three zones: a basal zone, a
     granular zone, and then blocky calcite.
               So I did a very detailed and very careful isotopic
     study on this calcite, actually the same thing just turned
     90 degrees clockwise.
               I used in situ laser ablation stable isotopic
     analysis, with a spatial resolution of about 300 microns for
     each spot.
               The delta C-13 stays positive for most of the
     crust, and in the outer part it drops dramatically to values
     of minus six to minus eight, which are the typical values
     reported as being representative of calcite with ages on the
     order of a few hundred thousand years.
               So the carbon-oxygen distribution from this
     particular sample, one sample, essentially mimics what was
     shown by USGS.  Again, we have this late calcite with
     consistently light carbon and heavy oxygen.
               So again, if we accept this USGS interpretation,
     we have to assume that this calcite was formed less than
     probably 200,000 years ago.
               Here are the results.  Again, we have fluid
     inclusions through all the crusts from top to bottom, and
     this early, heavy carbon calcite shows a little bit higher
     temperature and this late calcite has a little bit cooler
     temperature, but still the temperatures are 40 to 46 degrees
     versus 50 to 52 degrees.
               So in this situation, we have isotopically light
     calcite, which probably formed late -- probably young -- and
     it does contain fluid inclusions, and in this case, again,
     we have fluid inclusions present throughout the crust.
               So my point is that, at this stage, we cannot
     claim that all fluid inclusions or all calcite recording
     elevated temperatures are related to the cooling of the
     tuffs -- the age data do not allow us to say that -- and I
     think the only interpretation left is that this calcite was
     formed by fluids with elevated temperature within the
     mountain.
               Thanks.
               MR. WYMER:  Thank you.  Questions?
               MR. PETERMAN:  I just have a comment.  If you look
     at page 12 of my handout, you will see many, many hundreds
     of carbon and oxygen isotope analyses; they form a very
     broad trend, and within each group there is -- maybe it's
     not page 12.  It's the one that has the carbon versus
     oxygen.  There it is.
               There's a huge amount of scatter there, and other
     than saying there is a real trend, I don't see how you can
     use that data, except in the very earliest calcite, which is
     below the basal chalcedony, which has those anomalous, very
     light oxygen values.
               I don't see any way you can use that cluster of
     data as a chronometer at all.  It's just way too much
     scatter.
               MR. DUBLYANSKY:  My answer will be, first, that
     this graph which you present here in your handout -- I just
     remembered the graph from your presentation, which brackets
     the temperature of 50 to 80 degrees Centigrade, and those
     brackets are around the red dots, which are heavy in carbon
     and light in oxygen.
               That's just not correct, because I have just shown
     you that temperatures of 50 degrees Centigrade and 55
     degrees Centigrade can be associated with calcite which
     should be in this blue field.
               MR. PETERMAN:  But you had no age information to
     constrain that statement whatsoever.
               MR. DUBLYANSKY:  That's exactly right, but the
     statements which you made in your report do bear on age. 
     You just relate signature and age: always, when you report
     stable isotopic values like this, the carbon is minus five
     to minus eight.
               MR. PETERMAN:  Joe's depiction there is based upon
     petrographic delineation of the paragenetic sequence. 
     That's his categorization.
               In any one specimen, there may be age controls,
     but he just put together the paragenetic sequence and then
     put the carbon and oxygen isotopes in that framework.
               MR. DUBLYANSKY:  Absolutely, but I also have a
     paragenetic sequence in my sample and I have just shown it
     to you.
               MR. PETERMAN:  But you had no age information.
               MR. DUBLYANSKY:  Indeed, I don't have age
     information, but I do have indication that -- at least I am
     using your information and your work, actually I quote this
     in my report, that -- let me just quote you.
               MR. PETERMAN:  It's not going to do me any good. 
     I certainly can't remember the paper quotes at this point in
     time.  But there are typically a lot of things that are
     taken out of context.
               MR. DUBLYANSKY:  Well, this work is based only on
     stable isotopes, and it summarizes stable isotope work and
     age dating work.
               So I'm just using your information.
               MR. PETERMAN:  The same diagram is published
     somewhere in another report, and I would suggest maybe you
     use that distribution of values.
               MR. DUBLYANSKY:  I didn't understand your comment.
               MR. PETERMAN:  I don't know what report that is. 
     It may have been 1992?
               MR. DUBLYANSKY:  No.  This is the report, ages and
     origin of subsurface calcite, and what I did is I just took
     data from your table and quoted them.  That's all I did.
               MR. PETERMAN:  That's fine.  That's a general
     trend.  But if you put the uncertainty on that trend, it's
     huge.
               MR. HORNBERGER:  I think Zell was just saying that
     you can't invert that trend, because if you look at his
     dots, the carbon-13 on the early calcite goes everywhere
     from plus ten to minus six.  It's not very well constrained. 
     It's a trend, but you can't use it as a geothermometer.
               MR. DUBLYANSKY:  I am not intending to use it as a
     geothermometer.
               MR. HORNBERGER:  Or a geochronometer, sorry.
               MR. DUBLYANSKY:  Or a geochronometer, too.  What I
     want to show is that we have to have some handle on the --
     well, I'll put it a different way.  We probably cannot date
     this calcite because it cannot be dated with the
     uranium-lead methods.  So should we just throw away the
     fluid inclusion data which we obtained on this calcite?  I
     don't think we should do that.
               So I am using this stable isotope data as a proxy
     for the ages, again, based on the work done elsewhere by
     USGS.
               I understand that it's not a perfect method and I
     will not claim that it is.  But at least we cannot claim, by
     the same token, that this calcite is old.
               MS. CLINE:  Do you have paragenetic data that show
     that that particular calcite that forms the outer band in
     that sample is consistently present and is consistently a
     young calcite throughout the repository site?
               MR. DUBLYANSKY:  No, I don't, because it's blocky
     calcite.  It is present -- well, it could be present in many
     samples.  There's a granular calcite in the middle, which,
     as I was showing, is not very common Yucca Mountain calcite
     at all; also, it contains fluorite, which not all samples
     contain.
               So I think the paragenesis also should be variable
     from place to place in Yucca Mountain.  For instance, we
     have found that fluorite is mostly associated with samples
     that are collected close to the fault lines.
               MS. CLINE:  Close to the fault lines?
               MR. DUBLYANSKY:  Right, close to the major fault
     lines, and we have found fluorite in many, many samples, in
     many more samples than we expected to find it.
               No, I cannot place this particular calcite in the
     paragenesis, and that's why I'm using -- I am trying to use
     the stable isotope report to get a handle on where I can
     place this calcite.
               MS. CLINE:  I would just say that in the work we
     have done, in all the samples where we see the fluorite, it
     is in an older part of the sample.  We have not seen
     fluorite anywhere we can constrain it as being part of the
     younger event, either the late bladed event or the magnesium
     enriched calcite.  It's just not there.
               MR. DUBLYANSKY:  Yes, but I don't think you have
     an age constraint on the calcite either.
               MS. CLINE:  This is relative age.
               MR. DUBLYANSKY:  Relative age.  In terms of
     relative age, well, this fluorite definitely is present in
     this zone, the outer zone of calcite, and here you can see
     fluorite just protruding into the surface of the calcite. 
     They occur together, the calcite and fluorite.  Again, one
     way of addressing this issue is to calculate the
     thermodynamics of the calcite-fluorite system, and that's
     what we are doing right now.
               MR. WYMER:  According to my schedule, we're due
     for our break now.  Is there one last burning question or
     comment?  It's been a very interesting discussion.  Is there
     any final thing that one of you feels compelled to say?
               If not, well, thank you very much.  It's been an
     interesting discussion.  Let's take a break.
               MR. GARRICK:  Yes.  Thank you.  Before you break,
     I want to advise you of what we're going to do.  There are
     two things remaining to be done.
               One is to prepare the committee for the tour
     tomorrow and the other is simply to do some homework in
     preparation for future meetings.  I don't think we need the
     court reporter for the rest of the day.  So I'm getting a
     favorable head nod from the staff, so you are excused, as we
     adjourn for the break.
               [Whereupon, the meeting was concluded.]