                       UNITED STATES OF AMERICA
                     NUCLEAR REGULATORY COMMISSION
               ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
                                  ***
                        MEETING:  HUMAN FACTORS
     
                        U.S. NRC
                        Conference Room 28-1
                        Two White Flint North
                        11545 Rockville Pike
                        Rockville, Maryland
     
                        Friday, November 19, 1999
     
         The committee met, pursuant to notice, at 8:30 a.m.
     MEMBERS PRESENT:
         GEORGE APOSTOLAKIS, Chairman, ACRS
         DANA A. POWERS, Member, ACRS 
         THOMAS S. KRESS, Member, ACRS
         JOHN J. BARTON, Member, ACRS
         JOHN D. SIEBER, Member, ACRS
         MARIO V. BONACA, Member, ACRS
         ROBERT E. UHRIG, Member, ACRS
          ROBERT L. SEALE, Member, ACRS
                             P R O C E E D I N G S
                                                      [8:30 a.m.]
         DR. APOSTOLAKIS:  The meeting will now come to order.
         This is a meeting of the ACRS Subcommittee on Human Factors. 
     I'm George Apostolakis, chairman of the subcommittee.  The ACRS members
     in attendance are Mario Bonaca, John Barton, Robert Seale, Dana Powers,
     Jack Sieber and Tom Kress.
         The purpose of this meeting is to review a proposed revision
      to NUREG-1624, Technical Basis and Implementation Guidelines for a
      Technique for Human Event Analysis (ATHEANA), and to assist staff research
      activities related to human reliability analysis, the pilot application of
      ATHEANA to assess design basis accidents, and associated matters.  The
     subcommittee will gather information, analyze relevant issues and facts
     and formulate proposed positions and actions as appropriate for
     deliberation by the full committee.
         Mr. Juan Piralta is the cognizant ACRS staff engineer for
     this meeting.  The rules for participation in today's meeting have been
     announced as part of the notice of this meeting previously published in
     the Federal Register on October 14, 1999.  The transcript of this
     meeting is being kept and will be made available as stated in the
     Federal Register notice.  It is requested that speakers first identify
     themselves and speak with sufficient clarity and volume so that they can
     be readily heard.
         We have received no written comments or requests for time to
     make oral statements from members of the public.  We have to recess at
     11:45, because I have to go to another meeting, and then, we will
     reconvene again at maybe 12:45, okay?  So, if you can plan your
     presentation around that schedule, that will be good.
         We will now proceed with the meeting, and I call upon Mr.
     Mark Cunningham, for a change, to begin.
         [Laughter.]
         DR. APOSTOLAKIS:  Was there ever a meeting where Mr.
     Cunningham was not the first speaker?
         [Laughter.]
         DR. APOSTOLAKIS:  We all ask.
         MR. CUNNINGHAM:  Probably one or two in the last 20 years;
     not much beyond that, it seems.
         Good morning.
         DR. APOSTOLAKIS:  And Dr. Uhrig just joined us, for the
     record.
         MR. CUNNINGHAM:  All right; on the agenda, I've got a couple
     of items to begin with this morning.  First is just an overview of what
     we're doing.  The second is topics related to international efforts. 
     I'd like to put the international efforts, to delay that a little bit
     and discuss it after the ATHEANA presentation, because I think the
     context is much better after you've heard more about ATHEANA and the way
     we're treating human errors and things, unsafe acts, I'm sorry, that
     sort of thing.
         But anyway, by introduction, we have I guess by and large
     one big topic and a couple of smaller topics to discuss this morning. 
     The big topic is the work we've been doing over the last year or so to
     the ATHEANA project to respond to the peer review that we had in Seattle
     awhile back, June, okay?  That's the main topic for the day, so we'll
     talk about that; we'll talk about the structure of ATHEANA, what the
     objectives of the project are and then have an example.
         One of the things we've been doing over the last year is
     demonstrating the model in an analysis of a fire accident scenario in a
     plant that gets involved with this self-induced station blackout, a
     SISBO plant, if you will.  After that, we'll come back and talk about
      two smaller topics.  One is basically our international efforts in
      human reliability analysis.  We have had some work underway for the last
      couple of years with CSNI's PWG-5, Principal Working Group 5, on errors
      of commission; we also had a CUPRA
     program related to trying to relate risk -- bring into risk analysis the
     impact of organizational influences.  So, I'll talk briefly about those
     later on in the morning or right after lunch or something like that.
         DR. APOSTOLAKIS:  Oh, I forgot to mention that Mr. Sorenson,
     a fellow of the ACRS, will make a presentation on safety culture after
     lunch, and we would appreciate it if some of you guys stay around and
     express comments and views.  This is an initiative of the ACRS, and
     certainly, your views and input would be greatly appreciated.  So don't
     disappear after the ATHEANA presentation.
         MR. CUNNINGHAM:  We won't.  Most of us won't.
         DR. APOSTOLAKIS:  Good.
         MR. CUNNINGHAM:  With that, I'll turn it over to Katharine
      Thompson.  Katharine is the project manager of the ATHEANA project in
      the office, aided by two support people, John Forester from Sandia and
     Alan Kolaczkowski from SAIC.  We've got some others in the audience,
     too, but we'll get back to that in a minute.
         DR. THOMPSON:  Good morning, and it's my pleasure to be here
     this morning to discuss ATHEANA with you for the first time, I guess.  I
     know you've heard a lot about it.
         DR. APOSTOLAKIS:  We should invite you more often,
     Katharine.
         [Laughter.]
         DR. POWERS:  Well, George, I will point out that the first
     speaker before the committee usually gets asked a fairly similar
     question.
         DR. APOSTOLAKIS:  Yes; go ahead, Dana.
         DR. POWERS:  What in the world qualifies you to speak before
     this august body?
         [Laughter.]
         DR. THOMPSON:  I have orders from my manager.
         [Laughter.]
         DR. POWERS:  No, I'm serious; could you give us a little bit
     of your background?
         DR. THOMPSON:  Oh, sorry; I have a Ph.D. in industrial and
     organizational psychology.  I've been at the NRC for about 10 years.  I
     was in NRR and human factors for a few years, and then, I went as a
     project manager for the Palo Verde plant.  I've been over here in the
     research and assessment branch for about 5 or 6 years, and I've been
     working on ATHEANA for the past about 5 years.
         DR. POWERS:  What in the world makes you think that this
     body will understand anything you have to say?
         [Laughter.]
         SPEAKER:  We'll be slow in delivery.
         DR. THOMPSON:  Okay; just a brief outline of the
     presentation.  I'm going to be discussing the overview and a brief
     introduction.  Dr. John Forester will be going through the structure of
     ATHEANA and how it's done.  Alan Kolaczkowski will be talking about the
     fire application, and then, I'll be back to talk about some conclusions
     and some follow-up activities.
         We're not going to talk about the peer review in the
     interests of time, but in the back of your handout, you have all of the
     slides and discussion of the peer review, so you can look at that in
     your own time.
         DR. APOSTOLAKIS:  Unless we raise some issues.
         DR. THOMPSON:  Unless you raise some issues.
         I guess the first question that always comes is why do we
      need a new HRA method?  And so, we've talked about this and looked at
     accidents that happen in the industry and other industries, events that
     have happened, and certain patterns and things come to the surface. 
     What we're finding is that a lot of problems involve situation
     assessment; that scenarios and the events deviate from the operator's
     expectation.  Perhaps they were trained in one way on how to approach a
      situation, and the scenario that happened wasn't the one they were trained on.
         We've seen that plant behavior is often not understood, that
     multiple failures happen that are outside the expectations of the
     operators, and they don't know how to respond to this or how to handle
     it properly.  They weren't trained on how to follow these scenarios. 
     And we also know that plant conditions are not addressed by procedures. 
     A lot of times, these things don't match.  The procedures tell them how
     to go through a scenario, but yet, the scenario isn't matched with the
     procedures at hand, so that they may do something that's not in the
     procedures; that could, in fact, worsen the conditions.
         And these types of things aren't handled appropriately in
      current PRAs and HRAs, and so, we need to address these problems with
      situation assessment and how the plant is understood by the operators.
         DR. APOSTOLAKIS:  Now, this thing about the procedures is
     interesting.  Isn't it true that this agency requires verbatim
     compliance with procedures, unlike the French, for example, who consider
     them general guidelines?
         DR. POWERS:  Guidance.
         DR. APOSTOLAKIS:  Yes; it's like traffic lights somewhere
     else.
         So how -- what are we going to do with this?  I mean, should
     the agency change its policy?
         MR. CUNNINGHAM:  We are probably not the best people to say,
     but I don't think that's the policy of the agency, to follow -- require
     verbatim compliance with the procedures.
          DR. BARTON:  George, the agency requires that you have
      procedures to conduct operations, to handle emergencies, et cetera. 
     Some procedures are categorized in different categories:  continuous
     use, reference, stuff like that.  But there really isn't --
         DR. APOSTOLAKIS:  There is no --
         DR. BARTON:  -- a requirement that you do verbatim
     compliance.
         DR. BONACA:  Although the utilities --
         DR. BARTON:  Utilities have placed compliance, strict
      compliance, on certain groups of procedures, and they also have policies
     that say if you can't comply with the procedure, what you do:  stop and
     ask your supervisor, change the procedure, et cetera.  But I don't think
     there are any regulations that say you have to follow procedures
     verbatim.
         DR. APOSTOLAKIS:  Although we have been told otherwise,
     though.  What's that?
          DR. BONACA:  Control room procedures, however, in an
      emergency -- EOPs, for example -- are followed verbatim, line by
      line.
         DR. APOSTOLAKIS:  But these are the ones that Katie is
     talking about, right?  EOPs?
         DR. BONACA:  Yes.
         DR. APOSTOLAKIS:  Not procedures for maintenance.  I mean,
     you're talking about --
         MR. CUNNINGHAM:  Again, I don't know that it's a requirement
     of the agency that they follow line-by-line the procedures.  It's my
     understanding that it's not.
          DR. UHRIG:  It was 20 years ago at one point, or at least that
      was what was believed.
         MR. CUNNINGHAM:  Okay.
         DR. BONACA:  Well, the order in which you step through an
     emergency procedure is very strict.  I mean, at least -- I don't know if
     it is coming from a regulation, but it is extremely strict.  You cannot
     -- I mean, the order of the steps you have to take; that's why you have
     the approach in the control room with three people, and one reads the
     procedure; the others follow the steps.
         DR. APOSTOLAKIS:  Yes, that's true.
         MR. CUNNINGHAM:  Again, I think all of that is very true.  I
     just don't think it's a requirement -- it's not in the regulations that
     they do that is my understanding.
         DR. APOSTOLAKIS:  You say you are not the appropriate
     people.  Who are the appropriate people who should be notified?
         MR. CUNNINGHAM:  I'm sorry?
         DR. APOSTOLAKIS:  Maybe that will do something about it.
         MR. CUNNINGHAM:  Who --
         DR. APOSTOLAKIS:  Who in the agency is in charge of the
     procedures and compliance?
         MR. CUNNINGHAM:  It's our colleagues in NRR, obviously, and
     where exactly in the last reorganization this ended up, I'm not quite
     sure.
         DR. APOSTOLAKIS:  Okay.
         MR. CUNNINGHAM:  But the issue of whether or not there is
     verbatim compliance is an NRR issue that --
         DR. SEALE:  It might be interesting to discuss this with
     some inspectors in the plant.
         DR. KRESS:  Whenever we've heard -- one of these things that
     always seems to show up.  I'm sorry; I can't talk to them and listen at
     the same time, but it seems to me like there was almost an implied -- on
     these procedures.
         DR. APOSTOLAKIS:  Yes.
         DR. KRESS:  Whether it's real or within the regulations or
     not.
         DR. BONACA:  But that certainly has been interpreted by now
     by the licensees.  I mean, for the past 10 years, especially -- even the
     severe accident guidelines, in some cases, where you look at the
     procedures, they are very strictly proceduralized, I mean.  And you
      check to see that people, even in the simulator room, do not invert
      the order of the stuff.
          DR. THOMPSON:  Yes, but a lot of that came from the
      analysis, because in following the procedure, the requirement is that
      it's the next step that you must deal with.  I don't ever recall a
      regulation requiring verbatim compliance.  We had company policy about
      certain procedures.
         DR. APOSTOLAKIS:  Okay.
         DR. THOMPSON:  Okay; so what we know from all of these
      reviews of accidents and events is that the situation and the context
      create the appearance that a certain operator action is needed when, in
      fact, it may not be, and that operators act rationally; they want to do
     the right thing; they try to do the right thing, and sometimes, the
     action is not the appropriate action to take.  The purpose for ATHEANA,
     then, is to provide a workable, realistic way to identify and quantify
     errors of commission and errors of omission.
         There are three objectives of ATHEANA.  First is to enhance
     how human behavior is represented in accidents and near miss events.  We
     do this by looking at the decision process involved, how people -- their
     information processing abilities and how they assess a situation, and we
     also integrate knowledge from different disciplines.  We look -- we have
     technology factors, engineering risk assessments.  We try to incorporate
     many different areas of knowledge there.
          DR. POWERS:  I guess I'm struck by how this view graph could
      have been written by somebody who developed the human error analysis
      methodologies they use now.  They probably would use this view graph and
      just change the title, right?  Everybody that advances a human
      reliability analysis program says he's going to make it realistic; he's
      going to integrate perspectives of HRA with plant engineering,
      operations, training, psychology, and risk-informed insights.  I
      mean, this is true of any conceivable human error analysis.
         MR. CUNNINGHAM:  In theory.  Now, we could go back perhaps
     in another session and talk about how much did other methods really
      accomplish this, and I think what you see, and what you hear stories of,
      is how, in the poorer quality HRAs, if you will, this is implemented in a
      way that, in fact, the issues such as psychology and operations and
      training and things like that are handled in a rather -- one way to put
      it is a crude way, and another way would be just a mechanical way or
      something like that.
         DR. POWERS:  You know, I mean when you look for things like
     your hallowed Navier Stokes equations, people come up with --
         DR. KRESS:  Hallowed, not hollowed.
         DR. POWERS:  That's right, hollowed.
         [Laughter.]
         DR. POWERS:  The fount of all wisdom, and you call it the
     big bang; everything else was just thermohydraulics.
         [Laughter.]
         DR. SEALE:  And a little chaos thrown in.
         [Laughter.]
         DR. POWERS:  You know, in your equations, you say, well,
      we'll make an approximation.  We may have zeroth order ones, and you can
      see that there is no dimensionality in the zeroth approximation, and then,
     you have first order ones and second order ones and third order ones,
     and it's very clear when somebody is getting more realistic and
     incorporating more terms.  How am I going to look and see that this
     ATHEANA program is more realistic?  You know, what is it that says
     clearly that this is more realistic than what was done many, many years
     ago for the weapons programs?
         MR. CUNNINGHAM:  I guess in my mind, there would be a couple
     of clues.  I guess one would be how well we can mimic, if you will, or
     reproduce the real world accidents that Katharine started talking about,
     and again, those are the accidents that are, if you will, I think of
     them as the more catastrophic accidents.  If you look back and see,
     investigate human performance in catastrophic accidents, how well does
     this model -- I don't want to say predict but work with those types of
     events?
          DR. KRESS:  You're not talking about nuclear.
         MR. CUNNINGHAM:  No, I'm talking about in general.  I can
     think of --
          DR. KRESS:  Can you transfer that from technology to technology?
         MR. CUNNINGHAM:  Yes, I think you can, and that's kind of
     one of the subtle, underlying presumptions is that the human performance
     in catastrophic accidents can be translated across different industries,
     highly complex, high-tech industries, if you will:  aircraft, chemical
     facilities and that sort of thing.
         DR. APOSTOLAKIS:  I think there is a message here,
     Katharine:  use your judgment as you go along, and skip the view graphs
     that are sort of general and focus on ATHEANA only.  Do not raise
     anything until you come to the specifics.  Otherwise, you're going to
     get discussions like this.
         [Laughter.]
         DR. APOSTOLAKIS:  So can you go on, and we'll come back to
     these questions?
         DR. THOMPSON:  Skip the next one, John.
         This is just to show you the basic framework of ATHEANA and
      to underscore again -- well, we use different disciplines here; that's
      the left part.  Psychology, engineering -- this is something we've been
      working on.  The left-hand side shows you the elements of psychology and
      human factors engineering that are folded into the framework.
         DR. APOSTOLAKIS:  Go ahead.
         DR. THOMPSON:  And then, it flows into the PRA logic models
     and the rest.  You've seen this before.  John is going to talk more
     about this in the future, so I don't want to spend too much time on this
     right now.
         DR. APOSTOLAKIS:  I have a couple of comments.
         DR. THOMPSON:  Okay.
         DR. APOSTOLAKIS:  I have complained in the past that
     error-forcing context is a misnomer, and then, I read your chapter 10,
     which tells me that there may be situations where the error-forcing
     context really doesn't do anything.  So I don't know why it's forcing. 
     I notice that some of the reviewers also said that it's probably better
     to call it error-producing, error -- I don't know, some other word than
     forcing, because you, yourselves say in chapter 10 that the probability
     of error, given an error-forcing context, is not one, may not be one.
         DR. THOMPSON:  Right.
         DR. APOSTOLAKIS:  Second, I don't understand why you call
     them unsafe actions.  I fully agree that the human failure event makes
     sense, but until you go to the human failure event, you don't know that
      the action is unsafe.  I mean, you insist -- in fact, you just told us
      -- that people are rational, and I'm willing to accept that.  So the
     poor guy there took action according to the context, which led to a
     human failure event.  So I don't think you should call it unsafe.  I
     mean, human actions -- don't you think that that would be a better
     terminology?
         And then, finally, coming back to Dr. Powers' question, I
     give you my overall impression of the report.  I think the greatest
      contribution that ATHEANA has is the extreme attention it paid to the
      plant, to the plant conditions; that there is an awfully good discussion
     of how the plant conditions shape the context.  But I must say that
     chapter 10 was a disappointment.  The quantification part, I didn't see
     anything there that really built on the beautiful stuff that was in the
     previous chapters.  In fact, it just tells you go find a method and use
     it.
         It's a little harsh, but, I mean, in essence, that's what it
     says.  I mean, I have this context.  I spent all this effort to find the
     error-forcing context.  And then, all you are telling me is now, you can
      use half.  You can use, you know, the SLIM model if you like.  I thought I
     was going to see much more.  I mean, this thing of error mechanisms has
     always intrigued me, why you bother to use it.  And then, in chapter 10,
     you don't use it, which is sort of what I expected.  I mean, I can't
     imagine anybody quantifying error mechanisms.
         So I don't know if this is the proper place to discuss this,
     because it's jumping way ahead, but I'm just letting you know that
     chapter 10, I thought, was a let-down after the wonderful stuff that was
     in the previous chapters.
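          [For orientation on the quantification being discussed:  a
      chapter-10-style quantification of a human failure event (HFE) is
      conventionally of the following general form.  This is a hedged sketch of
      the generic decomposition, not a quotation from NUREG-1624, and the
      symbols are illustrative.]

```latex
P(\mathrm{HFE}) \;=\; \sum_{i} P(\mathrm{EFC}_i)\,
                      P(\mathrm{UA} \mid \mathrm{EFC}_i)\,
                      P(\text{no recovery} \mid \mathrm{UA}, \mathrm{EFC}_i)
```

          [Dr. Apostolakis' terminology point is that the middle factor, the
      probability of the unsafe action given the error-forcing context, is in
      general less than one.]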
         MR. CUNNINGHAM:  Yes, I think we are getting a little ahead
     of --
         DR. APOSTOLAKIS:  Yes, okay.
         MR. CUNNINGHAM:  I mean, after John and Alan talk for
     awhile, we can come back to this.
         DR. APOSTOLAKIS:  But, I mean, one part of the answer to Dr.
     Powers is that this is really the first HRA approach that really paid
     serious attention to the plant conditions, and I think that is very,
     very good, very good, but we are really -- we are not just speculating
     now.  You guys went out of your way to see how this circle there, plant
     design, operations and maintenance and plant conditions shape the
     context.  I've always had reservations about the error mechanisms, but I
     deferred to people more knowledgeable than I.
         But chapter 10 now makes me wonder again.  So, but the
     terminology, I think, is very important.  I'm not sure that you should
      insist on calling it error-forcing context when you say in chapter 10
      that -- I don't remember the exact words but, you know, sometimes, you
      know, it doesn't really matter.  How can it be forcing?
         Yes, John?
         MR. FORESTER:  Do you want me to comment on it?
         DR. APOSTOLAKIS:  I want you to comment on this.
         MR. FORESTER:  I suggest we come back and --
         DR. APOSTOLAKIS:  Great.
         MR. FORESTER:  -- the natural progression of the talk will
     get us to chapter 10.
         DR. APOSTOLAKIS:  Okay; fine, fine.
         MR. FORESTER:  Sometime today so --
         DR. APOSTOLAKIS:  Do you have any reaction to the comments
     on the terminology?  I mean, last time, you dismissed me.  Are you still
     dismissing me?
         [Laughter.]
         DR. THOMPSON:  We'll come back to it.
         MR. FORESTER:  We will come back to it.
         MR. KOLACZKOWSKI:  The answer is yes.
         DR. APOSTOLAKIS:  Well, then, that gives me time to find
     your exact words in chapter 10.
         [Laughter.]
         DR. APOSTOLAKIS:  Okay.
          DR. THOMPSON:  I'm going through this slide real fast.  I wanted to just
     briefly recognize the team, because they all did a wonderful, wonderful
     job, and it, again, underscores the different disciplines we've brought
     to this program.  We've got psychologists, the first three,
     specifically.
         DR. APOSTOLAKIS:  Always pleased to see names that are more
     difficult to pronounce than my own.
         [Laughter.]
         MR. KOLACZKOWSKI:  I don't see any such names here.
         [Laughter.]
         DR. THOMPSON:  He's referred to as Alan K., because I can't
     pronounce it either.
         [Laughter.]
         DR. THOMPSON:  Engineers, risk assessment experts,
     psychologists, human factors, so we've brought all of the disciplines to
     this project that we need.
         DR. APOSTOLAKIS:  By the way, I hope you don't misunderstand
     my comments.  I really want this project to succeed, okay?  So I think,
     you know, being frank and up front is the best policy.  So I must tell
     you that it was not a happy time for me when I read chapter 10.
         MR. CUNNINGHAM:  We appreciate that over the years, we've
     gotten a lot of good advice from the various subcommittees and
     committees here, and we appreciate that and take it in that vein, even
     though we may take your name in vain occasionally.
         [Laughter.]
         DR. POWERS:  We are probably in good company.
         DR. APOSTOLAKIS:  Now, you know why Mr. Cunningham is always
     there --
         [Laughter.]
         DR. APOSTOLAKIS:  -- every time we meet.  He knows how to
     handle situations like this.
         [Laughter.]
         MR. FORESTER:  Yes; I am John Forester with Sandia National
     Laboratories, and I'm, I guess, the project manager, the program
     manager.  I work for Katharine, and I'm the project leader for the team.
         DR. APOSTOLAKIS:  She's not Kitty anymore?  Is it Katharine
     now?
         MR. FORESTER:  Katharine, yes.
         DR. APOSTOLAKIS:  Okay.
         [Laughter.]
         MR. FORESTER:  For this part of the presentation, I'm going
     to discuss the structure of ATHEANA, and what I'd like to do is focus on
     the critical aspects and processes that make up the ATHEANA method.
         DR. APOSTOLAKIS:  So, you skipped the project studies.
         DR. THOMPSON:  I'm sorry; I'll get back to that at the end
     when we talk about the completion.
         DR. APOSTOLAKIS:  Okay.
         MR. FORESTER:  Okay; ATHEANA includes both a process for
     doing retrospective analysis of existing events and a process for doing
     prospective analysis of events.
         DR. KRESS:  A retrospective?  Is that an attempt to find out
     the cause?
         MR. FORESTER:  Right, an analysis of the event to find out
     what the causes were and, you know, ATHEANA has had a process or a
     structure, at least, for doing that for quite awhile, to be able to
     analyze and represent events from the ATHEANA perspective so that you
     can understand what the causes were and also, by doing that in this kind
     of formal way, you'd have a way to maybe identify how to, you know, fix
     the problems in a better way.
         DR. KRESS:  And you can use that retrospective iteratively
     to improve some of the models in the ATHEANA process?
         MR. FORESTER:  Yes; you know, the idea was that by doing
     these retrospective analyses, we learn a lot about the nature of events
     that had occurred and then can take that forward and use it in the
     prospective analysis.
         DR. APOSTOLAKIS:  But today, you will focus on prospective
     analysis.
         MR. FORESTER:  That is correct; yes, I just want to note
     that one of the recommendations from the peer review in June of 1998 was
     that we had the structure for doing the retrospective, but we did not
     have an explicitly documented process for doing the retrospective, and
     we have included that now, okay?  And we do see that as an important
     part of the ATHEANA process in the sense that, you know, when plants or
     individuals go to apply the process, they can look at events that have
     occurred in their own plant and get an understanding of what the kinds
     of things ATHEANA is looking for, sort of the objectives of it, and that
     way, it will help them be able to use the method, in addition to just
     learning about events in the plant and maybe ways to improve the process
     or improve the problem, fix the problem.
         Okay; now, we do see in terms of the prospective analysis,
     as George said, we're going to focus on that mostly today.  We do see
     the process as being a tool for addressing and resolving issues.  Now,
     those issues can be fairly broadly defined in the sense of we're going
     to do an HRA to support a new PRA, but we also see it as a tool to use
     more specifically in the sense -- for example, you might want to extend
     an existing PRA or HRA to address a new issue of concern; for example,
     maybe, you know, the impact of cable aging or operator contributions to
     pressurized thermal shock kind of scenarios or fire scenarios.  So it
     can be used in a very effective manner, I think, to address specific
     issues.
         Also, maybe, to enhance an existing HRA or, you know,
     upgrade an existing HRA to be able to -- for purposes of risk-informed
     regulation submittals and things like that.  So it can be a very
     issue-driven kind of process.
         The four items there on the bottom are essentially sort of
     the major aspects of the tool, and I'm going to talk about each one of
     those in detail, but in general, the process involves identifying base
     case scenarios; sort of what's the expected scenario given a particular
     initiator and then trying to identify deviations from that base case
     that could cause problems for the operators.
         Another major aspect of the --
         DR. KRESS:  Are those the usual scenarios in a PRA that
     you're talking about?
         MR. FORESTER:  The -- well, no, the base case is sort of --
     I'll go into more detail about what the base case scenario actually is,
     but it is what the operators expect to occur, and it's also based on
     detailed plant engineering models, okay?  So maybe you'll lift something
     from the plant FSAR, but I'll talk about that a little bit more.
         And again, another major aspect of the revised method is
     that we try to clarify the relationship between the deviations, the
     plant conditions and the impact on human error mechanisms and
     performance shaping factors.  So we tried to tie that together a little
     better, and I think we've created at least a useful tool to do that
     with.  And then, finally, the other major aspect is the integrated
      recovery analysis and quantification, and, as Kitty has already pointed
      out, I'll kind of go through the general aspects of
     the process, and then, Alan is going to give us an illustration of that
     process, okay?
         [Pause.]
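          [To help keep the four aspects just listed straight, a minimal
      sketch of the prospective search loop is shown below.  This is an
      illustrative outline only; the function and field names are assumptions
      made for this sketch, not ATHEANA nomenclature from the NUREG.]

```python
from dataclasses import dataclass

@dataclass
class BaseCaseScenario:
    # Expected evolution of a given initiator: operator and trainer
    # expectations combined with a reference engineering analysis
    # (e.g., something lifted from the plant FSAR).
    initiator: str
    expected_timeline: list

@dataclass
class Deviation:
    # A plausible departure from the base case that could get the
    # operators in trouble.
    description: str
    exploited_vulnerabilities: list

def define_base_case(initiator):
    # Placeholder: in practice this comes from plant engineering models
    # plus interviews with operators and trainers.
    return BaseCaseScenario(initiator, ["reactor trip", "expected recovery path"])

def find_vulnerabilities(base):
    # Placeholder: e.g., operator expectations, procedure ambiguity,
    # timing that differs from what was trained on.
    return ["timing differs from training", "ambiguous procedure step"]

def propose_deviations(base, vulnerabilities):
    # Placeholder: deviations are chosen to capitalize on the vulnerabilities.
    return [Deviation(base.initiator + " with slower-than-expected progression",
                      vulnerabilities)]

def prospective_analysis(initiators):
    # Base case -> vulnerabilities -> deviations.  The fourth aspect,
    # integrated recovery analysis and quantification, would follow and
    # is not shown here.
    deviations = []
    for initiator in initiators:
        base = define_base_case(initiator)
        vulnerabilities = find_vulnerabilities(base)
        deviations += propose_deviations(base, vulnerabilities)
    return deviations
```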
         MR. FORESTER:  Okay; I think as we mentioned earlier, sort
     of the underlying basis for the prospective analysis is that most
     serious accidents occur when the crew sort of gets into a situation
     where they don't understand what's going on in the plant.
         DR. APOSTOLAKIS:  Is this Rasmussen's knowledge-based
     failure?
         MR. FORESTER:  Yes, I guess it would be.  It's where the
     procedures don't maybe fit exactly; they may be technically correct, but
     they may not fit exactly, and, well, even in the aviation industry or
     any other kind of industry, what you see in these kind of serious
     accidents was that they just didn't understand what was going on. 
     Either they couldn't interpret the information correctly.  I mean, in
     principle, I guess it could have been responded to in a rule-based kind
     of way, but they didn't recognize that, so it did put them into a
     knowledge based kind of situation.
         DR. KRESS:  When I read that first bullet, I'm thinking of
     nuclear plants because it comes from the broad plan.
         MR. FORESTER:  Yes; that's true, but there have been some
     events.  I mean, they haven't led to serious events, necessarily, and
     even beyond TMI and --
         DR. KRESS:  Yes, but that's one data point.
         MR. FORESTER:  I mean, there are other events, though, that
     haven't gone to core damage or, I mean, that haven't really led to any
     serious effects.
         DR. KRESS:  But you're getting this information from --
         MR. FORESTER:  Yes, yes; okay.
         DR. KRESS:  Because in designing nuclear plants, we talk
     about conditions not understood.  We've gone to great pains to get that
     out. I'm sorry; I'll just quit talking.
         [Laughter.]
         MR. FORESTER:  It does seem, even in the nuclear industry,
     you know, there are times where people do things wrong.  I mean, it
     doesn't lead to serious problems, but people do, you know, they bypass
     SPASS --
         DR. SEALE:  You know, it really goes back to George's
     comment about human error.  Human error is a slippery slope.  It's not a
     cliff.  And, in fact, when human error occurs, the angle of that slope
     will vary from error to error, and while you may talk about TMI as a
     case where you led to an accident, I bet you you could find a dozen
     where people did something, recognized that they were on a slippery
     slope, and recovered, and that seems to me, that should be just as
     useful an analysis, an identification to do in your ATHEANA process as
     was the TMI event, because it's the process you're trying to understand.
         MR. CUNNINGHAM:  No, I think that's right; you learn from
     your mistakes.  You also learn from the mistakes you avoid.
         DR. SEALE:  And the ability to recover is important
     knowledge.
         MR. CUNNINGHAM:  Yes; there's a lot of work that's been done
      since TMI on operator response to initiating events, and as you said,
     there is still the residual that they don't understand, and that's where
     we can get into very severe accidents, even after all that training.
         DR. POWERS:  It seems to me that a double-ended guillotine
     pipe break, that's a severe accident that a crew would understand
     absolutely what it was doing in a double-ended guillotine pipe break.
         DR. KRESS:  So we are never going to have one.
         [Laughter.]
         DR. POWERS:  If we had one, you would damn well know what
     happened.
         [Laughter.]
         DR. POWERS:  You wouldn't be able to mistake it for much. 
     It seems like what you're saying may be true for accidents that are of
     real concern to us, but it's going to run counter to the DBAs.  The
     DBAs, you know what's going on, and it doesn't seem like it applies to
     the DBAs.
          MR. CUNNINGHAM:  DBAs are obviously very stylized accidents,
      and the training, you know, 25 years ago was fairly stylized to go with
      those accidents.  We've made
     a lot of progress since then in taking a step back from the very
     stylized type of approach, but you can still have accidents or events. 
     The one that comes to mind for me is the Rancho Seco event of -- I don't
     know -- the early eighties or something like that, where they lost a
     great deal of their indication; another indication was confusing and
     that sort of thing.  It's not a design-basis accident, but it was a
     serious challenge to the core, if you will.
          DR. APOSTOLAKIS:  John, I don't see anything about
      the values of operators, their preferences; again, the classic example is
      Davis-Besse, you know, where the guy was very reluctant to go to bleed
     and feed and waited until that pump was fixed, and the NRC staff, in its
     augmented inspection team report, blamed the operators that they put
     economics ahead of safety.  The operators, of course, denied it.  The
     guy said, no, I knew that the pump was going to be fixed, but isn't that
     really an issue of values, of limits?  It's a decision making problem.
         MR. CUNNINGHAM:  Right.
         DR. APOSTOLAKIS:  Where in this structure would these things
     -- are these things accounted for?  Is it in the performance shaping
     factors, or is it something else?
         MR. FORESTER:  Well, one place it comes through is with the
     informal rules.  We try to evaluate informal rules.  And if there's sort
     of a rule of, you know, we've got to watch out for economics, I mean, in
     their minds, it may not be an explicit rule, but in their minds, they're
     not going to do anything that's going to cost the utility a lot of
     money.  That's one way we try to capture it.
         There's also -- we try and look at their action tendencies. 
      We have some basic tables in there that address both the BWR and PWR
     operator action tendencies, what they're likely to do in given
     scenarios.
         DR. APOSTOLAKIS:  But if I look at your multidisciplinary
     framework picture that you showed earlier, I don't see anything about
     rules.  So the question is where, in which box, you put things like
     that.
         MR. FORESTER:  Well, I guess it would probably be sort of
     part of the performance shaping factors.
         DR. APOSTOLAKIS:  I'm sorry, what?
         MR. FORESTER:  Well, overall, the impact of rules would sort
     of be -- or of what you're describing here, and I used informal rules as
     how we get at that in terms of the framework, it would certainly be
     covered under part of the error forcing context, essentially.
         DR. APOSTOLAKIS:  But this is the performance shaping
     factor, part of the performance shaping factor?
         MR. FORESTER:  I think it -- I guess it would also -- I'm
     not sure we'd directly consider it as a performance shaping factor.
         DR. APOSTOLAKIS:  What is a performance shaping factor in
     this context?  Give us a definition.
         MR. FORESTER:  Well, procedures, training, all of those
     things would be -- the man-machine interface, all those would be --
         DR. APOSTOLAKIS:  Technological conditions?  Is that
     performance-shaping factors?
         MR. FORESTER:  Stress and --
         DR. APOSTOLAKIS:  So the error forcing context is the union
     of the performance shaping factors and the plant conditions.  Is that
     the correct interpretation of this?
         MR. FORESTER:  That's a correct interpretation.
         DR. APOSTOLAKIS:  So clearly, values cannot be part of the
     plant conditions, so they must be performance-shaping factors.  I mean,
     if it's the union --
         MR. KOLACZKOWSKI:  I'm Alan Kolaczkowski with SAIC.  Yes, if
     you want to parcel it out, if you want to actually put tendencies of
      operators or rules into a box, it would best fit in the performance
     shaping factors, yes, but the reason why I think we're struggling is
     that we recognize that to really define the error-forcing context, you
     have to think about the plant conditions and all the influences on the
     operator in an integrated fashion, and it's hard to parcel it out, but
     if you want to put it in a box, I would say yes, it's affecting the
     performance shaping factors.
         DR. APOSTOLAKIS:  That's what the box says:  all of these
     influences --
         MR. KOLACZKOWSKI:  I understand.
         DR. APOSTOLAKIS:  -- are the PSFs, because there's nothing
     else.
         MR. FORESTER:  Well, it could be more specified, I would
     say, in the sense that part of what you're bringing up is augmented in
     the organizational factors, maybe even team issues, things like that,
     which are going to be -- which are certainly going to contribute to the
     potential for error.  Those are not explicitly captured.  In some sense,
     they could be looked at as part of the plant conditions, and they could
     also be looked at as performance shaping factors.
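          [To make the "union" reading just discussed concrete, a minimal
      sketch might represent the error-forcing context simply as the pairing of
      plant conditions with performance shaping factors.  The field names and
      example entries below are illustrative assumptions, not terms taken from
      the report.]

```python
from dataclasses import dataclass

@dataclass
class ErrorForcingContext:
    # Per the exchange above: the error-forcing context is the combination
    # of the plant conditions and the performance shaping factors.
    # Influences such as informal rules or operator values would be folded
    # into the performance shaping factors if one insists on a single box.
    plant_conditions: dict
    performance_shaping_factors: dict

example_efc = ErrorForcingContext(
    plant_conditions={"failures": ["auxiliary feedwater pump"],
                      "indications": "partially degraded"},
    performance_shaping_factors={"procedures": "ambiguous step",
                                 "training": "scenario not practiced",
                                 "informal_rules": "avoid costly actions"},
)
```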
         DR. APOSTOLAKIS:  Now, this sector on the left, what do you
     mean by operations?
         MR. FORESTER:  Just the way they do things there, the
     procedures, their modus operandi, I guess, as to the way they run the
     plant.
         DR. APOSTOLAKIS:  Is what other people call safety culture
     there?
         MR. FORESTER:  I think that's more --
          DR. APOSTOLAKIS:  No, but that's part of it.  There's an
      area there on the left, plant design, operations and maintenance.  I
      remember the figure from Jim Reason's book, where he talks about line
      management deficiencies and fallible decisions.  Are you lumping those
     into that circle, or are you ignoring them?  I mean, the issue of
     culture --
         MR. FORESTER:  We have not explicitly tried to represent
     those yet.
         DR. APOSTOLAKIS:  But this is a generic figure, so that's
     where it would belong, right?
         MR. FORESTER:  I'm not sure I would normally necessarily
      pigeonhole it there.  It's all part of that whole -- the whole
      error-forcing context and what feeds into the error-forcing context.
          DR. APOSTOLAKIS:  But the error-forcing context is shaped
     by these outside influences.  It does not exist by itself.  You have
     these arrows there.
         MR. FORESTER:  Right.
         DR. APOSTOLAKIS:  So this is an outside influence, so, for
     example, if I wanted to study the impact of electricity market
     deregulation, that would be an external input --
         MR. FORESTER:  Yes.
         DR. APOSTOLAKIS:  -- that would affect the performance
     shaping factors and possibly the plant condition.
         MR. FORESTER:  Yes; that is correct.
         DR. APOSTOLAKIS:  Okay.
         MR. CUNNINGHAM:  That is correct.
         DR. APOSTOLAKIS:  So all of these are external influences
      that shape what you call the error-forcing context.
         MR. CUNNINGHAM:  That's right.  This is a very conceptual
     description of the process.
         DR. APOSTOLAKIS:  Yes.
         MR. CUNNINGHAM:  And it's probably a little broader than
     ATHEANA is today, but again, if we could go back and get into ATHEANA as
     it is today, it might help --
         DR. APOSTOLAKIS:  Okay.
         MR. CUNNINGHAM:  -- some of the others understand what we're
     going through here.
         MR. FORESTER:  Well, given what we've identified as the
     nature of serious accidents, we think a good HRA method should identify
     these conditions prospectively, and we have several processes that we
     use to do that.  Mr. Chairman, I'm going to talk about these in more
     detail, to identify the base case scenarios, and again, these are
     conditions that are expected by the operators and trainers given a
     particular initiating event.
          Then we want to identify potential operational
      vulnerabilities, and these might include operators' expectations about
      how they think the event is going to evolve.  It could include
      vulnerabilities in procedures; for example, where the timing of the
     event is a little bit different than what they expect.  The procedure
     could be technically correct, but there could be some areas of ambiguity
     or confusion possibly.
         And then, based on those vulnerabilities, at least part of
     what we use is those vulnerabilities, then try and identify reasonable
     deviations from these base case conditions, to sort of see if there are
     kinds of scenarios that could capitalize on those vulnerabilities and
     then get the operators in trouble.
         DR. APOSTOLAKIS:  So I think it's important to ask at this
     point:  what were the objectives of the thing?  It's clear to me from
     the way the report is structured and the way you are making the
     presentation that the objective was not just to support PRA.
         MR. FORESTER:  Not just to support PRA, no; I guess that's
     maybe how we started out, but I think the method itself can be used more
     generally than in PRA.  I think it needs to be tied to PRA because of
     some of the ways we do things, but no, certainly, it could be used more
     generally.
         DR. APOSTOLAKIS:  What other uses do you see?
         MR. FORESTER:  You can do qualitative kind of analysis, so
     if you're not doing a PRA, you don't need explicit quantitative
      analysis.  So, for example, in the aviation industry, there is not
      a whole lot of risk assessment done, as far as I know, on what goes on in
      the airplane cockpits, but that doesn't mean that you couldn't use this
      kind of approach to develop interesting scenarios, potentially dangerous
      scenarios, that you could then run in simulators, for example; or in the
      nuclear industry, you can run these things in simulators and give
      operators experience with them and see how they handle the situation.
         DR. APOSTOLAKIS:  So this would help with operator training?
         MR. FORESTER:  I believe it would, yes, because there is a
     very explicit process.
          DR. BONACA:  I think it has the fundamental elements of
      root cause analysis, for example, and so, that would help with that.
         MR. SIEBER:  I think it also helps in revising procedures,
     because you have a confusing procedure, and it doesn't really give you
     the -- but this technique helps you pinpoint --
         DR. APOSTOLAKIS:  This is an important point that I think
      you should be making whenever you make presentations like this, because
      if the sole objective is to support the PRA, then I think a legitimate
      question would be:  are you sure you can quantify that?  Maybe you can't,
      but if your objective is also to help with operator training and
      other things, then, I think it's perfectly all right.
         DR. BONACA:  I think the value of this, you know, when I
     looked at this stuff is that -- was in part, I mean, some of the issues
     are based on the mindset that the operators have.  Here, you have a
     boundary where they believe they have the leeway not to follow
     procedures; for example, the issue of not going to bleed and feed was
     very debated in the eighties, because it seemed like an option was that
     severe accidents, something, and if you look at the procedure, before
      1988 or so, there was no procedure to do bleed and feed.  I mean, it
      simply said, if you have a dry steam generator, do something.  One thing you
     could do was bleed and feed.
          Well, then, it was left to the judgment of the operator to do
      so.  Well, today, you do go into it.  We learned that that was a mistake. 
      So we said the only thing you can do is bleed and feed, so do it, and
      you put it in the procedure now, and they follow it now, but it took a
      long time to convince the operators to go into it.  I mean,
     they didn't like that.
         So I'm saying that in a model like this, it would help to
     talk about some of the shortcomings.
         MR. SIEBER:  I'm pretty well convinced that even if you
     didn't have a PRA, you could profit from looking at how --
         DR. APOSTOLAKIS:  And all I'm saying is that those
     statements should have been made up front, because the review, then,
     doesn't say what you are presenting, and I would agree.  I agree, by the
     way, that this is a very valuable result.
         DR. SEALE:  It's interesting, because the utility of this
     method actually begins in terms of influencing procedures and so forth
     before it gets terribly quantitative, and yet, it's the ultimate
     objective, presumably, or let's say the most sophisticated use of it is
     when it gets quantitative so that you can use it in the PRA, but it
     strikes me that it might be when you talk about these other uses to
     actually identify the fact that in its less quantitative form, it's
     still useful --
         MR. CUNNINGHAM:  Yes.
         DR. SEALE:  -- in doing these other things, and that
     supports the idea, then, that you can evolve to your ultimate objective,
     but you have something that's useful before it ever becomes the final
     product.
         MR. CUNNINGHAM:  That's very useful.  We've talked about
     that and those types of benefits, but we could make it clearer.
         DR. APOSTOLAKIS:  Okay; can we move on?
         DR. KRESS:  Before you take that slide off --
         DR. APOSTOLAKIS:  We have two members who have comments.
         Dr. Kress?
         DR. KRESS:  The three sub-bullets under two, if I could
     rephrase what I think they mean, you start out with some sort of set of
     base case scenarios, and you look at that scenario and look at places
     where the scenario could be described wrong, and it could go a different
      way somehow all through it, so those are the vulnerabilities or places
      where it could go differently than you think or might even go different. 
      And then, the deviations are the possible choices of these different
     directions a scenario might go; it looks a whole lot to me like an
     uncertainty analysis on scenarios, which I've never actually seen done. 
     So it looks to me like a continuum.  I don't know how you would make
     this a set of integers.
         MR. CUNNINGHAM:  We'll talk about that later.
         DR. KRESS:  You'll talk about that later?
         MR. FORESTER:  Yes.
         DR. KRESS:  Okay.
         MR. CUNNINGHAM:  We want to get to that later.
         DR. KRESS:  Okay, so I'll wait until you do.
         DR. APOSTOLAKIS:  Mr. Sieber?
         MR. SIEBER:  I have a question.  When I read through this, I
     had a sort of an understanding of what the performance shaping factors
     were.  It's all the things that go into the operator, like training, the
     culture of the organization, mission of the crew, formal and informal
     rules, et cetera.  That to me makes this whole process unique to each
     utility, because the performance shaping factors are specific to a unit. 
     And this stuff is not transferable from one plant to another; is that
     correct?
         MR. FORESTER:  That is absolutely correct.
         MR. CUNNINGHAM:  The process would be transferable but not
     the results.  That is correct.
         MR. SIEBER:  So you just couldn't take some catalog of all
     of these potential possibilities for error and move them into your PRA,
     and anything that had any relevance to anything --
         MR. CUNNINGHAM:  The potentials and the experience base are
     useful inputs, but they are not substitutes for the analysis of an
     individual plant.
         MR. SIEBER:  Well, when you're doing, then, a retrospective
     analysis, you have to do it with the crew who was actually on the shift,
     and you will reach a conclusion based on that crew, not necessarily that
     plant; certainly not some other plant; is that correct?
         MR. KOLACZKOWSKI:  That would be the best track, correct.
         MR. SIEBER:  Thank you.
         MR. FORESTER:  So, sort of the next critical step after the
     issue has been defined and the scope of the analysis is laid out is to
     identify the base case scenario.  So we've got to go into a little bit
     more detail about exactly what we mean by base case scenario.
         Usually, the base case scenario is going to be a combination
     of the expectations of the operators as to how the scenario should play
     out given a particular initiating event.
         DR. APOSTOLAKIS:  So these are key words.  You're analyzing
     response to something that has happened.
         MR. FORESTER:  Yes.
         DR. APOSTOLAKIS:  You have a nice description in chapter 10
     of the various places where human errors may occur.  Essentially,
     they're also saying there that we recognize that the crew may create an
     initiating event, but that's not really the main purpose of ATHEANA.
         MR. FORESTER:  Right; that's -- yes, the crew could
     certainly create an initiating event, but they still have to respond to
     it once they create it.
          DR. APOSTOLAKIS:  Right; so, the understanding is that an
      initiating event has happened now, in the traditional sense, and the
      operators have to do something.
         MR. FORESTER:  Right.
         DR. APOSTOLAKIS:  Okay.
         MR. FORESTER:  Okay; so, we're looking at that kind of
      scenario, and it is the expectations of operators and trainers as to
      how that scenario should evolve, what, sort of, their expectations are,
      combined with some sort of reference analysis.  Again, that could be
      some sort of detailed engineering analysis of how this scenario is
      expected to proceed, and again, that could be something from the FSAR.
         DR. KRESS:  Would the structure of ATHEANA allow you to do
     essentially what George says it doesn't do, and that is go into how an
      initiating event is created in the first place, if it's created by an
      operator action of some kind?
         MR. FORESTER:  Well, certainly, we could --
         DR. KRESS:  Because you're starting out with normal
     operating conditions.
         MR. FORESTER:  Right; well, in terms of what the process
     does right now, it doesn't really matter whether the initiating event
     was caused by an operator or someone working out in the plant or some
     sort of hardware failure.
         DR. KRESS:  I know, but I was trying to extend it to where
     we could do some control over initiating events by looking at the --
         MR. FORESTER:  Well, we didn't explicitly consider that, but
     certainly, you could, you know, begin to examine activities that take
     place in the plant and sort of map out how those things could occur and
     then sort of use the process to identify potential problems with those
     processes that take place in the plant that could cause an initiating
     event, so it certainly could be generalized in that way.
         DR. SEALE:  That's an interesting point, because we always
     worry about completeness of the PRA, and this is another way to cut into
     the question of what are the possible scenarios that can be initiated
     and do my intervention mechanisms, cross-cut those scenarios to give me
     relief.
         DR. KRESS:  Well, my concern was initiating event
     frequencies are kind of standardized across the industry, and they're
     not plant specific.  They probably ought to be.
          DR. APOSTOLAKIS:  I think this operator-induced initiator is
      more important for low-power and shutdown conditions.
         DR. KRESS:  Yes, that's where I had -- that's what I was
     thinking of.
         DR. APOSTOLAKIS:  But anyway, if they do a good job here,
     that's a major advance, so let's not --
         DR. KRESS:  Let's don't push it yet.
         MR. KOLACZKOWSKI:  I was just going to comment that, for
     instance, if you could have as the base case scenario how an operator
     normally does a surveillance procedure, and then, you could look at the
     vulnerabilities associated with that in terms of how well is he trained? 
     How well is the procedure written?  Et cetera.  And then, the deviations
     would be how could the surveillance be carried out slightly different,
     such that the end result is he causes a plant trip, so we still think
     the process could apply.  It is true that in the examples right now
     provided in the NUREG, we don't have such an example, but we don't see
     why the process would not work for that as well.
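          [Mr. Kolaczkowski's surveillance example could be written out in
      the same illustrative form used in the earlier sketch.  The specifics
      below are assumptions made up for illustration, not an example taken from
      the NUREG.]

```python
# Illustrative only: an operator-induced initiator treated with the same
# base case / vulnerabilities / deviations structure.
surveillance_example = {
    "base_case": "operator performs a routine surveillance per procedure",
    "vulnerabilities": [
        "how well the operator is trained on the surveillance",
        "how well the surveillance procedure is written",
    ],
    "deviations": [
        "surveillance carried out slightly differently, such that the"
        " end result is a plant trip",
    ],
}
```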
         DR. APOSTOLAKIS:  Because in those cases, the fact that you
     have different people doing different things is much more important, and
     ATHEANA has not really focused on that.  Dr. Hollnagel observed that,
     too.  So, I mean, the principles would apply, but it would take much
     more work, which brings me to the question:  what is the consensus
     operator model?  Are you talking about everybody having the same mental
     model of the plant?
         MR. FORESTER:  Yes; well, and the same sort of mental model
     of how the scenario is going to evolve.  So, if you ask a set of
     operators and trainers how they would expect a particular scenario to
     evolve in their plant, you would get some sort of consensus.  We try and
     derive -- the analysts would try to derive what that consensus was.
         DR. APOSTOLAKIS:  Now, again, one of the criticisms of the
     peer reviewers was that you really did not consider explicitly the fact
     that you have more than one operator, that you sort of lumped everybody
     together as though they were one entity.  So in some instances, you go
     beyond that, and you ask yourselves do they think, do they have the same
     mental model of a facility, but the so-called social elements or factors
     that may affect the function of the group are not really explicitly
     stated; is that correct?
         MR. FORESTER:  It is in some ways, in the sense that when
     you watch a crew perform, you can identify characteristics of how
     crews tend to perform at plants.
         DR. SEALE:  You can find the alpha male, huh?
         DR. APOSTOLAKIS:  By the way, John, you don't have to have
     done everything.
         MR. FORESTER:  And I was going to say, we have not
     explicitly considered --
         DR. APOSTOLAKIS:  Okay; good; let's go on.
         MR. FORESTER:  -- the crew dynamics, okay?
         [Laughter.]
         MR. FORESTER:  But it's not totally out of it is what I'm --
     the point I was --
         DR. APOSTOLAKIS:  I agree.
         MR. FORESTER:  Okay.
         DR. APOSTOLAKIS:  Because you're talking about consensus
     over the model.
         MR. FORESTER:  That is correct.
         DR. APOSTOLAKIS:  So it's not totally --
         MR. FORESTER:  Right.
         DR. KRESS:  I am still interested in the consensus operator
     model.  Excuse me for talking at the table but --
         MR. KOLACZKOWSKI:  That's okay; we understand why.
         DR. KRESS:  But, you know, I envision you've got two or
     three sets of operators, so you have maybe -- I don't know -- 10 people
     you're dealing with, and they each have some notion of how a given
     scenario might progress.  My question is really, do you have a technique
     for combining different opinions on how things progress into a consensus
     model?  Do you have some sort of a process or technique for doing that
     that you can defend, or a maximum entropy process or something?
         MR. FORESTER:  We don't have an explicit process for that. 
     I think the analysts were going to base their development of the base
     case scenario on what they understand from what the operators are
     saying; from what trainers are saying; what they see done in the
     simulators when they run this kind of initiator in the simulator, how
     does it evolve?  Again, you have reference case.
         DR. KRESS:  It's a judgment.
         MR. FORESTER:  It is a judgment.
         DR. KRESS:  Of who is putting together --
         MR. FORESTER:  Yes, it is.
         DR. KRESS:  -- your model.
         MR. FORESTER:  Yes.
         DR. KRESS:  Okay.
         MR. FORESTER:  Okay; well, these are what we see as the
     critical characteristics of the base case scenario:  the ideal base case
     scenario is going to be well-defined operationally; the procedures
     explicitly address it; those procedures are in line with the consensus
     operator model; well-defined physics; well-documented.  It's not
     conservative, and it's realistic.  Again, we're striving for a realistic
     description of expected plant behavior, so that then, we can try and
     identify deviations from those expectations.
         One thing I do want to note is that part of what is done
     usually in developing the base case scenario is to develop parameter
     plots, so that if a given initiating event occurs, we try and map out
     how the different parameters are going to be behaving, but the
     expectations of the parameter behavior will be over the length of the
     scenario, because that's what the operators deal with.  They have
     parameters; they have plant characteristics that they're responding to. 
     So we try and represent that with the base case.  And not every issue
     allows that, but in general, that's the approach we want to take.
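         [Illustrative note, not part of the spoken record:  the
     parameter plots described above amount to laying the expected base case
     trajectory of a key parameter next to a candidate deviation over the
     time window the operators would be watching.  A minimal sketch in
     Python, with made-up numbers and hypothetical variable names:]

# Minimal sketch (made-up numbers) of the parameter-plot idea: tabulate the
# expected base case trajectory of one parameter next to a candidate deviation
# over the time window the operators would actually be watching.
times_min = [0, 5, 10, 15, 20]
base_case_pressure = [2250, 2100, 2000, 1950, 1925]   # psig, expected behavior
deviation_pressure = [2250, 2050, 1850, 1700, 1600]   # psig, falls faster than expected

print(f"{'t (min)':>8} {'base case':>10} {'deviation':>10}")
for t, base, dev in zip(times_min, base_case_pressure, deviation_pressure):
    print(f"{t:>8} {base:>10} {dev:>10}")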
         DR. POWERS:  You have based those ideal scenarios on the
     FSAR; have you looked at how they deviate from the FSAR?
         MR. FORESTER:  That's right; okay; the next step, then, is
     to see if we can identify potential operational vulnerabilities in the
     base case.  The idea is to try and find sort of areas in the base case
     where things are not perfect, and there could be some potential for
     human error to develop.  We look for biases in operator expectations, so
     if operators have particular biases, maybe they train a particular way a
     lot, and they've been doing that particular training a lot; the idea is
     to look at and try to identify what it is they expect and see if those
     expectations could possibly get them into trouble if the scenario changed
     in some ways, if things didn't evolve exactly like they expect them to.
         DR. APOSTOLAKIS:  So you are not really trying to model
     situations like the Brown's Ferry, where they did something that was not
     expected of them with the control rod drive pumps to cool the core?  You
     are looking for things that they can do wrong, but you're not looking
     for things that they can do right to create -- because I don't know that
     that was -- what was the base case scenario in that case, and what is
     it that made them take this action that would save the core?
         MR. FORESTER:  I'm not sure I understand the -- no, no, yes,
     the Brown's Ferry fire scenario.
         DR. APOSTOLAKIS:  Yes, the fire.  They were very creative
     using an alternative source of water.
         MR. KOLACZKOWSKI:  George, like PRA, this is basically, yes,
     we're trying to learn from things that the operator might do wrong. 
     This is in PRA; we try to -- we treat things in failure space and then
     try to learn from that.  But we certainly consider the things that the
     operator could do right, and particularly when we get to the recovery
     step, which we'll get to in the process, in the case of the Brown's
     Ferry fire, one of the things that the -- if we had now -- were doing an
     ATHEANA analysis, if you will, of that event, a retrospective analysis,
     one of the things you would recognize is that there was still a way out,
     and that was to use the CRD control system as an injection source, and
     that would be a recognized part of the process.
         But, yes, just like PRA, we are basically trying to find
     ways that the scenario characteristics can be somewhat different from
     the operator's expectations, such that the operator then makes a mistake
     or, if you will, unsafe act, as we call it, unsafe in the context of the
     scenario, and ends up making things worse as opposed to better, and
     then, we hope to learn from that by then improving procedures or
     training or whatever, based on what the analysis shows us the
     vulnerabilities are.  So --
         DR. APOSTOLAKIS:  The emphasis here is on unsafe acts.
         MR. KOLACZKOWSKI:  That's right; what we end up trying to
     figure out is what could be the unsafe acts?  What could be the errors of
     commission or omission?  How might they come about, and then, what can
     we learn from that to make things better in the future?
         DR. SEALE:  But it still would be useful to understand what
     it takes to be a hero.
         MR. KOLACZKOWSKI:  I agree; it's still part of the recovery.
         DR. BONACA:  In all of the power plants, that's what people
     refer to as tribal knowledge, especially discussions of the operators in
     the crews and among themselves:  what would you do if this happens and
     so on?  That would demonstrate the ways to get there, and in some cases,
     they lead you to success, like, for example, the example you made here:
     it wasn't proceduralized, and yet, they succeeded.
         In the other cases, I've noticed things that they were
     talking about that would never lead to success; for example, the
     assumption that, you know, you dry out your steam generator, and now,
     you do something to put some water in it; well, it doesn't cool that
     way.  You've got to recover some levels before you can do that.  So the
     question I'm having is, is there any time to -- or is there any
     possibility?  I guess you can incorporate that type of information into
     this knowledge, right?  You would look for it.  Is there any extended
     process to look for it that you would model with ATHEANA?
         MR. CUNNINGHAM:  I think we'll come back to that.
         DR. BONACA:  The reason that I mentioned it is that that is
     -- you know, if you look at a lot of scenarios we have in accidents, it
     has a lot of that stuff going on.  As soon as you get out of your
     procedures, it comes in, and people do what they believe that --
         DR. APOSTOLAKIS:  In other terms, this is called informal
     culture.
         MR. FORESTER:  That's right, and we are taking steps to
     address those things; we certainly do.
         DR. KRESS:  I'm sorry to be asking so many questions, but
     I'm still trying to figure out exactly what you're doing.  If I'm
     looking at, say, a design basis accident scenario, what I have before me
     is a bunch of signals of things like temperatures, pressures, water
     levels, maybe activity levels in the various parts of the plant as a
     function of time.  This is my description of the progression of events. 
     Now, when you say you're looking for deviations that might cause the
     operator to do something different than what -- are you looking for
     differences that might exist in those parameters?  The temperature might
     be this at this time, or the water level might be this?
         MR. FORESTER:  It might change at a faster rate than this.
         DR. KRESS:  It might change at a faster rate than you
     expect.  So, those are the indicators you are looking at.
         MR. FORESTER:  Exactly.
         DR. KRESS:  And you're looking at how those might possibly
     be different from what he expects and what he might do based on this
     difference.
         MR. FORESTER:  Right.
         DR. KRESS:  Okay; thank you.
         MR. FORESTER:  Okay; so, there are essentially several
     different approaches for identifying the vulnerabilities, which is what
     we have up there.  Again, we want to look for vulnerabilities due to
     their expectations.  We also want to look at a time line or the timing
     of how the scenario should evolve to see if there are any particular
     places in there where time may be very short, so if the scenarios are a
     little bit
     different than expected, then, there should be some potential for
     problems there, again, focusing on the timing of events and how the
     operators might respond to it.
         We also then tried to identify operator action tendencies,
     so this is based on what we call standardized responses to indications
     of plant conditions.  Generally, for PWRs and BWRs, you can look at
     particular parameters or particular initiators, and there are operator
     tendencies given these things.  We try and examine places where those
     tendencies could get them in trouble if things aren't exactly right.
         And then, finally, there is a search for vulnerabilities
     related to formal rules and emergency operating procedures.  Again, if
     the scenario evolves in a little bit different way, the timing is a
     little bit different than they would expect, there is some chance that,
     again, even though the procedures may be technically correct, there may
     be some ambiguities at critical decision points.  Again, we try and
     identify where these vulnerabilities might be.
         And once we've identified those vulnerabilities, we go to
     the process of identifying potential deviation scenarios.  And again, by
     deviations, we're looking for reasonable plant conditions or behaviors
     that set up unsafe actions by creating mismatches.  So again, we're
     looking for deviations that might capitalize on those vulnerabilities,
     and we're looking for physical deviations, okay, actual changes in the
     plant that could cause the parameters to behave in unusual ways or not
     as they expect, at least.
         In this step of the process, we're also developing what we
     call the error-forcing context.  We're going to identify what the plant
     conditions are.  We want to look at how those plant conditions may
     trigger or activate certain human error mechanisms that
     could lead them to take unsafe actions and also begin to identify
     performance shaping factors like the human-machine interface, recent
     kinds of training they had that could have created biases that could
     lead them, again, to take an unsafe action.  So part of the deviation
     analysis is to begin to identify what we call the error-forcing context,
     and ATHEANA has search schemes to guide the analysts to find these real
     deviations in plant behavior, and again, we are trying to focus on
     realistic representations.
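         [Illustrative note, not part of the spoken record:  the
     error-forcing context described here is the combination of deviant
     plant conditions, the error mechanisms those conditions could trigger,
     and the performance shaping factors that add to the risk.  A minimal
     sketch, with hypothetical names and entries, of how one such deviation
     scenario might be recorded for later review:]

# Minimal sketch (hypothetical names and entries, not the ATHEANA tool itself)
# of recording a deviation scenario and its error-forcing context.
from dataclasses import dataclass
from typing import List

@dataclass
class ErrorForcingContext:
    plant_conditions: List[str]     # physical deviations from the base case
    error_mechanisms: List[str]     # mechanisms those conditions could trigger
    shaping_factors: List[str]      # performance shaping factors that add to the risk

@dataclass
class DeviationScenario:
    name: str
    unsafe_action: str              # the unsafe act the context could set up
    efc: ErrorForcingContext

scenario = DeviationScenario(
    name="Loss of main feedwater with pressurizer spray valve stuck open",
    unsafe_action="Operator throttles high-pressure injection too early",
    efc=ErrorForcingContext(
        plant_conditions=["pressure falls faster than the trained-on case"],
        error_mechanisms=["confirmation bias toward the expected scenario"],
        shaping_factors=["ambiguous step at a critical EOP decision point"],
    ),
)
print(scenario.unsafe_action)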
         Part of the deviation analysis does involve, also, again,
     developing parameter plots that try and represent what it is the
     operators are going to be seeing and what is going to be different about
     the way this scenario would evolve, the deviation scenario would evolve
     relative to what they would expect.  So for these four basic search
     schemes that we use to identify potential characteristics for a deviation
     scenario, there are similarities between the searches; there is overlap.  They
     use similar tools and resources.  There are a lot of tables and
     information in the document to guide this process, but in general, we
     recommend that each step is done sequentially, and by doing that, some
     new information could come out of each step.
         DR. APOSTOLAKIS:  John, this is a fairly elaborate process,
     and shouldn't there be a screening process before this to decide which
     possible human actions deserve this treatment?  This is too much.  Am I
     supposed to do it at every node of the event tree?  If I look at an
     event tree, for example, and at some point it has, you know, go to bleed
     and feed, I know that's a major decision, major human action.  I can see
     how it deserves this full treatment, but there are so many other places
     where the operators may do things here or there.
         Surely, you don't expect the analysts to do this for every
     possibility of human action.  So shouldn't there be some sort of a
     guideline as to when this full treatment must be applied and when other,
     simpler schemes perhaps would be sufficient?  Because as you know very
     well, one of the criticisms of ATHEANA is its complexity.  So some
     guidelines, right before you go to the four search schemes, as to which
     human actions deserve this treatment --
         MR. FORESTER:  Correct.
         DR. APOSTOLAKIS:  -- would be very helpful.
         MR. FORESTER:  Well, just a couple things.  One is there is
     an -- you know, if you identify a particular issue that you're concerned
     with, then, you can identify what particular human failure events you
     might be interested in, okay, or unsafe actions, so the issue may help
     you resolve some of that in terms of what you would like to respond to. 
     If that's not the case, if you are dealing more with a full PRA, you're
     trying to narrow down what it is you want to look at, then, we do
     provide some general guidance in there for how to focus on what might be
     important scenarios to initially focus your resources on.
         DR. APOSTOLAKIS:  But you're not going to talk about it.
         MR. FORESTER:  No, I hadn't planned on talking about that
     explicitly.  It's -- you know, I mean, you can say that, you know, it's
     the usual kind of things, I guess, in terms of looking for -- trying to
     prioritize things, you know, do you have some short time frame kinds of
     scenarios?  We have a set of characteristics; they're not coming to mind
     right at this second, but a set of characteristics that were used to
     prioritize those scenarios to focus on.
         On the other hand, I think that the process itself, the
     search for the deviation scenarios, you are reducing the problem,
     because you're trying -- you're narrowing down to the problem kind of
     scenarios.  Okay; once you've identified, you know, an initiator, for
     example, and maybe you're going to focus on several critical functions
     that the operators have to achieve to respond to that initiator, then,
     what the process does is it focuses the analyst in on the problem
     scenario.  So the process itself reduces what has to be dealt with. 
     We're not trying to deal with every possible scenario; we're trying to
     deal with the scenarios that are going to cause the operators problems.
         MR. KOLACZKOWSKI:  Let me also add, George, though, I think
     if you were going to apply this to an entire PRA, if your issue was I
     want to redo the HRA and the PRA, I would say that no matter what HRA
     method you used, that's a major undertaking.
         DR. APOSTOLAKIS:  Yes, but you are being criticized as
     producing something that only you can apply.
         MR. KOLACZKOWSKI:  I was going to say -- thanks, Ann -- I
     think you'll see, as we go through some more of the presentation and
     show you the example, the method now has become much more -- excuse me,
     methodical, and the old method that you saw in Seattle, it has changed
     actually quite a bit from that method now.  It's far quicker to use as
     long as you don't want to get caught up in all of the little minute
     documentation.  You can actually do an entire scenario, set of
     sequences, probably in a matter of hours to a day kind of thing.
         MR. FORESTER:  Once you've done a little bit of front end
     work on this.
         [Laughter.]
         MR. FORESTER:  So again, though, I do think the process
     itself -- you're looking for the deviation scenarios; I think that
     narrows the problem solving.  Is that -- you know, the prioritizations
     -- okay; okay, we have four basic searches.  The first search involves
     using HAZOP guide words to try and discover troublesome ways that the
     scenario may differ from the base case.  So again, we try and use these
     kinds of words to ask questions like, well, is there any way the
     scenario might move quicker than we expect it to or faster?  Could it
     move slower?  Could it be more, in some sense, than what they expect,
     given a particular initiator?  For example, maybe given one initiator,
     you also have a loss of instrument air.  So now, it's more than it
     was.
         Another example, in one of our examples in the document, is
     one that's close to a small LOCA, but it's actually more than a small
     LOCA; yet, it's not really a large LOCA either.  So,
     again, we begin to look -- one way is to use these HAZOP guide words
     simply as ways to investigate, you know, potential ways that the
     scenario might deviate from what is expected, and the -- we're
     interested in the behavior of the parameters, once again:  are the
     parameters moving faster than we expected in things like that?  So
     that's one way we do the search.
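         [Illustrative note, not part of the spoken record:  a minimal
     sketch, with hypothetical guide words and parameters, of the guide-word
     style search just described -- each guide word is asked of each key
     parameter to generate candidate deviations from the base case, which the
     analyst then screens for plausibility and impact:]

# Minimal sketch (hypothetical guide words and parameters, not the NUREG's own
# tables) of a HAZOP-style guide-word search: ask each guide word of each key
# parameter to generate candidate deviations from the base case for screening.
guide_words = ["faster", "slower", "more", "less", "earlier", "later", "also"]
key_parameters = ["RCS pressure", "steam generator level", "containment pressure"]

candidate_deviations = [
    f"{param} behaves '{word}' than the base case expectation"
    for param in key_parameters
    for word in guide_words
]

for line in candidate_deviations[:5]:
    print(line)   # the analyst screens each candidate for plausibility and impact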
         Another search scheme is then to identify that given the
     vulnerabilities we already identified, maybe with procedures and
     informal rules, are there particular ways that the scenario might behave
     that could capitalize on those vulnerabilities?  Could the timing
     change in some way that makes the procedures a little bit ambiguous in
     some ways?  That type of thing.
         Third, we look for deviations that might be caused by subtle
     failures in support systems, so this is sort of the way the event occurs
     and the way something else happens might cause the scenario to behave a
     little bit differently.  They might not be aware that there is a problem
     with the support system.  So again, a subtle failure there could cause
     them problems in terms of identifying what's happening.
         DR. APOSTOLAKIS:  Are you also identifying deviations that
     may be created by the operators themselves, by slips?
         MR. FORESTER:  Yes, I don't see why we couldn't do that.  I
     mean, to arbitrarily examine what kinds of slips are possible at this
     point in time, I'm not sure we've done that explicitly, but that's
     certainly an option in terms of doing the deviation search.
         DR. APOSTOLAKIS:  Because it has happened.
         MR. FORESTER:  That's going to get pretty complex but --
         DR. APOSTOLAKIS:  It has happened.
         MR. FORESTER:  It has happened; that's true.
         DR. APOSTOLAKIS:  Operators have isolated systems simply by
     their own slips, and then, it takes about half an hour to recover.
         MR. FORESTER:  Yes; I guess, you know, if we found some
     vulnerabilities or we found some inclinations or some situations where
     they might be focusing on particular parts of the control room or
     something or on the panel, part of what we do examine are performance
     shaping factors like the human-machine interface that could contribute
     to the potential for an unsafe action, and in examining those things, if
     we determined that there is some poor labeling or something that
     creates the potential for a slip, that would certainly be figured into
     the analysis.
         DR. APOSTOLAKIS:  So it could be, but it's not right now.
         MR. FORESTER:  No; I guess I shouldn't have said it that
     way.  I think it is.  As I'm saying, once you've identified potential
     deviations, part of the process is involved in looking at the
     human-machine interface; looking at other performance shaping factors
     that could contribute to the potential of the unsafe action.  So, and
     that is part of the process.  That is explicitly part of the process, to
     examine those things.  So you might, then, identify, you know, it would
     take someone knowledgeable about the way the control room panels and so
     forth should be designed to maybe identify those problems, but
     presumably, you'll have a human factors person on the team.
         DR. KRESS:  I'd like to go back to my question about the
     continuous nature of deviations.  Let's say you have a base case
     scenario, and you've identified in there a place along the time line
     that's a vulnerability and that the operator might do something, and
     then, when he does that something, it places you in another scenario
     that's different than your base case.
         MR. FORESTER:  Right.
         DR. KRESS:  And then, there are things going on after that,
     and there may be different vulnerabilities in that line than there were
     in the base case.
         MR. FORESTER:  That's true.
         DR. KRESS:  And there's an infinite number of these.  I just
     wonder how you deal with that kind of --
         MR. FORESTER:  Well, we try and deal with it during the
     recovery analysis, when we move to quantification, when we try and
     determine whether -- what the likelihood of the unsafe act might be. 
     Once they've taken that action, we then try and look at what kind of
     cues would they get, what kind of feedback would they get about the
     impact that that action has had on the plant; you know, what other
     things; how much time would be available; what other input could they
     receive in order to try and recover that action.
         DR. KRESS:  So you do tend to follow the new scenario out
     --
         MR. FORESTER:  Right.
         DR. KRESS:  -- to see what he might be doing.
         MR. FORESTER:  Exactly.
         Okay; and before I go to the last search scheme, I'd like to
     go to the next slide.  Actually, we sort of cover it on the next slide
     anyway so --
         DR. KRESS:  When you say search, what I'm envisioning is a
     person sitting down and looking at event trees and things and doing this
     by hand.  This is not automated.
         MR. FORESTER:  It's not automated at this point, no.
         DR. KRESS:  You're actually setting --
         MR. FORESTER:  It could be automated, yes, and we hope to be
     able to automate it, provide a lot of support for the process.
         DR. BONACA:  You know, I had just a question.  You know, it
     took a number of years to develop the symptom-oriented procedures, and
     they really went through a lot of steps like what you're describing
     here.  In fact, it was a time-consuming effort that lasted years, and
     they had operators involved.  Have you looked at them at all to try to
     verify, for example, the process you are outlining here?  Because they
     did a lot of that work that could be useful.
         DR. KRESS:  It sounds very similar to that.
         DR. BONACA:  Yes; I mean, they have to go through so many
     painstaking steps; you know, is this action or recommendation in the
     procedure confusing?  I wonder if you had the opportunity --
         MR. FORESTER:  Well, part of our process involves doing flow
     charts of the procedures, to specifically investigate where the ambiguities
     could occur.  So we go through that process.
         Now, in terms of have we actively tried to look at, you
     know, validating the existing procedures?  No, we haven't taken that
     step.  But I think the general consensus is that there are -- the
     procedures are not perfect; that things don't evolve exactly -- I mean,
     there can be timing kinds of issues, and there can be combinations of
     different kinds of parameters that can be confusing.
         DR. BONACA:  So I think that probably an exercise at one
     point with one set of procedures would be a good foundation for a tool
     like this and furthermore would give you some indication of the
     strengths you may have in the process here of identifying things --
     for example, the key points that were then central to the discussions
     of an owners' group -- so that you can see whether this process
     identifies what they were, since they actually went through the same
     situations.  So there is a lot that can be learned to verify the
     adequacy of a tool like this.
         MR. CUNNINGHAM:  No, that's a good point.  We'll follow up
     with that somewhere along the line here.
         MR. FORESTER:  Okay; on the next slide, one thing I wanted
     to emphasize is that in a major part of the first three searches, while
     we're looking for the deviations from expectations and using the guide
     words to sort of characterize the way the scenarios could develop, we're
     also trying to evaluate what the effect of those deviations could be on
     the operators.  What we want to determine is whether the way particular
     parameters behave or the way the scenario is unfolding could trigger
     particular human error mechanisms that could contribute to the
     likelihood of an unsafe act.
         Also, are there other performance shaping factors that could
     then, based on the characteristics of the scenario and potential human
     error mechanisms, are there performance shaping factors that could also
     contribute to that potential for an unsafe act?  So we're doing that at
     the same time we're developing the actual deviations, and one thing
     we've done, which I'll talk about here somewhere, I think -- maybe not
     -- is to try and tie particular characteristics of the scenario:  are
     the parameters changing faster than expected?  Or are two of them
     changing in different ways?  And try to identify how the characteristics
     of the scenario could elicit particular types of error mechanisms: 
     could it cause the operators to get into a place where they're in kind
     of a tunnel vision kind of state?  They're focused on the particular
     aspects of the scenario, or do they have some kind of confirmation bias
     developed, or based on their expectancies, they have, you know, a
     frequency bias of some sort.
         And then, we try and tie the behavior of the scenario, the
     characteristics of the scenario, to potential error mechanisms and then
     relate specific performance shaping factors to the potential for the
     error.  We have tried to provide some tables that make that process a
     little easier, so we have -- essentially, we have made an effort to try
     and tie those factors together much more explicitly.
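         [Illustrative note, not part of the spoken record:  the tables
     mentioned here tie scenario characteristics to candidate error
     mechanisms, and those mechanisms to performance shaping factors worth
     checking.  A minimal sketch of that lookup idea, with invented entries:]

# Minimal sketch (invented entries, not the report's actual tables) of the
# lookup idea: scenario characteristics point to candidate error mechanisms,
# which in turn point to performance shaping factors worth checking.
characteristic_to_mechanisms = {
    "parameter changing faster than expected": ["tunnel vision", "frequency bias"],
    "two parameters diverging unexpectedly": ["confirmation bias"],
}
mechanism_to_psfs = {
    "tunnel vision": ["time pressure", "alarm overload"],
    "confirmation bias": ["recent training emphasis", "procedure ambiguity"],
    "frequency bias": ["similarity to a frequently trained scenario"],
}

def candidate_psfs(characteristic: str) -> set:
    """Collect the shaping factors suggested by a given scenario characteristic."""
    psfs = set()
    for mechanism in characteristic_to_mechanisms.get(characteristic, []):
        psfs.update(mechanism_to_psfs.get(mechanism, []))
    return psfs

print(candidate_psfs("parameter changing faster than expected"))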
         So in that process, then, the fourth scheme, the fourth
     search, is to sort of do a reverse process.  Once you identify
     potential error types and tendencies or operator tendencies that could
     cause the human failure events or unsafe acts of interest, then, you
     simply use conjecture to try and ask are there any kinds of deviations,
     with the right characteristics, that could make these things occur.  So
     it's sort of coming from the
     other direction rather than starting with the physical characteristics;
     you just kind of start with the human tendencies and see if there are
     deviations that could cause that.
         So with those four searches, we think we do a pretty good
     job of identifying a lot of potential deviation kinds of
     characteristics.  Then, once that's --
         DR. APOSTOLAKIS:  Does everyone around the table understand
     what an error mechanism and an error type is?
         MR. FORESTER:  Well, error types are fairly straightforward,
     in the sense that it's just things that they could do that could lead to
     the unsafe act, like make a wrong response; skip a step in a procedure;
     normal kinds of -- it's not a real sophisticated kind of concept there;
     it's just things that they could do.
         Error mechanisms, we're referring to, you know, essentially
     things within the human, general processing, human information
     processing characteristics, what their tendencies are, maybe some
     processing heuristics that they might use; not everything is going to be
     a very carefully analyzed, completely systematic kind of analysis. 
     They'll use bounded rationality, so people have sort of general
     strategies for how they deal with situations.  Now, most of the time,
     those kinds of strategies can be very effective, but in some situations,
     the characteristics of the scenario where those particular strategies
     are applied may lead to an error, because they're misapplied.  So that's
     how we're characterizing error
     mechanisms.
         DR. APOSTOLAKIS:  Is the inclusion of error mechanisms in
     the model what makes it, perhaps, a cognitive model?  I've always
     wondered about these things.  Because you have included these error
     mechanisms, you can claim that now, you have something from cognitive
     psychology in there?
         MR. FORESTER:  Well, we have the error mechanisms.  We also
     have the information processing model, you know, the monitoring and
     detection process; the situation assessment.  The human error
     mechanisms, to some extent, are tied to those particular stages of
     processing, so, you know, we try and include all of that.  In fact, the
     use of the tables that address the error mechanisms is broken down by
     situation assessment and monitoring.
         DR. APOSTOLAKIS:  We are going very slowly.
         MR. FORESTER:  Okay; well, I'm just about done.
         Once you have identified all of the deviation
     characteristics, basically, you've got to put them all together and
     identify the ones that you think are going to be the most relevant,
     okay?
         We can look to that.  And the final slide is, again, we just
     want to emphasize that once we have identified what we consider a valid,
     a viable deviation scenario that has a lot of potential to cause
     problems for the operator, and we analyze that, we want to quantify the
     potential for the human failure event to occur or the unsafe actions. 
     We can directly address the frequency of plant conditions; we use
     standard systems analysis to calculate that.  We can get the probability
     of the unsafe act and the probability of nonrecovery at the same time,
     given the plant conditions and the performance shaping factors.
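         [Illustrative note, not part of the spoken record:  the
     quantification just described is essentially a decomposition -- the
     frequency or probability of the error-forcing plant conditions, times
     the probability of the unsafe act given that context, times the
     probability of nonrecovery -- summed over the deviation scenarios.  A
     minimal sketch with made-up numbers:]

# Minimal sketch (made-up numbers) of the decomposition described above: each
# deviation scenario contributes (frequency of the error-forcing plant
# conditions) x P(unsafe act | context) x P(nonrecovery | context).
efc_contributions = [
    # (frequency of plant conditions, P(unsafe act | EFC), P(nonrecovery | EFC))
    (0.05, 0.5, 0.3),    # strong, compelling context: unsafe act likely but recoverable
    (0.20, 0.05, 0.1),   # weaker context: unsafe act less likely, recovery easier
]

p_hfe = sum(freq * p_ua * p_nr for freq, p_ua, p_nr in efc_contributions)
print(f"Human failure event probability (illustrative): {p_hfe:.3e}")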
         We look at this thing in an integrated way, and we do want
     to emphasize that, that we carry out the scenario all the way out to the
     very end, in a sense, to the last moment, when they can have a chance to
     do something.  We consider everything that's going on, and then,
     ideally, in my mind, in terms of quantifying that, we have the input of
     operators and trainers.  Once you -- for example, if you can set up the
     scenario on a simulator, you can run a couple of crews through that. 
     You may not necessarily -- you're not using that to estimate the
     probability, but what I like to look for is what the operators and
     trainers think will happen when their crews in the plant are sent
     through that scenario.
         If everyone pretty much agrees, oh, yes, you know, if that
     happened like that, we would probably do the wrong thing, then, you have
     a very strong error-forcing context, and quantification is simple.  For
     situations where that is not the case, where there are disagreements
     about whether it would happen or not, or there is not a high expectation
     that the actual unsafe actions would take place, then, we do not have a
     new or a special approach for dealing with that problem, for a couple of
     reasons:  one, none of the existing approaches are completely adequate
     as they are.
     For one thing, we have no empirical basis from psychology to support
     those kinds of quantifications, those kinds of estimates.  It just
     doesn't exist.
         Nor do we have an adequate existing database of events that
     we can base it on.  So, given that situation, our suggestion for now
     is to try and use existing methods.  However, I think there are some
     things that we could do to improve our existing quantification process. 
     You know, part of what we're recommending is maybe use SLIM.  Well, the
     problem with SLIM, of course, is you don't have adequate anchors.  It's
     hard to determine what the anchors might be so you can actually use a
     SLIM kind of process.
         So one thing we'd like to investigate, I think, is how we
     could identify some maybe anchor kinds of events; we could characterize
     the events that we could pretty substantially determine what the
     probability of that event was; characterize that event in some way, at
     least maybe a couple of events on the continuum, so that then, when we
     characterize events using the ATHEANA methodology, we would know roughly
     where they fit along that continuum.  Okay; so, that's one improvement
     that we could make that we haven't made right now.
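         [Illustrative note, not part of the spoken record:  the anchor
     idea mentioned here is essentially the SLIM calibration problem.  If a
     couple of well-characterized anchor events with known probabilities can
     be placed on the same scale as the event being analyzed, the unknown
     probability can be interpolated.  A minimal sketch with invented
     numbers, assuming the usual SLIM form in which the logarithm of the
     error probability is linear in a success likelihood index:]

# Minimal sketch (invented numbers) of anchor-based interpolation in the usual
# SLIM form: log10(HEP) is assumed linear in a success likelihood index (SLI).
# Two anchor events with known probabilities calibrate the line; the event
# characterized by the deviation analysis is then placed on the same scale.
import math

anchors = [
    (0.2, 0.5),     # (SLI, known HEP) for a very error-prone anchor event
    (0.9, 1e-3),    # (SLI, known HEP) for a benign anchor event
]

(s1, p1), (s2, p2) = anchors
a = (math.log10(p2) - math.log10(p1)) / (s2 - s1)    # slope of log10(HEP) vs SLI
b = math.log10(p1) - a * s1                          # intercept

sli_of_new_event = 0.55    # from weighted PSF ratings; a judgment, not data
hep = 10 ** (a * sli_of_new_event + b)
print(f"Interpolated HEP (illustrative): {hep:.2e}")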
         DR. BONACA:  One question I have is that in your
     presentation, you are discussing the operator, but there are operators
     and operators.  One thing is to talk about the operators in the control
     room who have been trained on symptom-oriented procedures, and there,
     it's pretty clear how you can define the problem.  The problem is that
     they're following a procedure to the letter, and then, if there is some
     area where we have misdesigned the procedures, then, we mislead them,
     and they may have to initiate something that they're not used to, and
     that's all kind of stuff.
     But it is pretty clear that for the operators in the plant, who
     follow procedures to do maintenance, for example, it seems to me
     that the way you would train those kinds of operators would be very
     different from the ones in the control room, because there, they have
     their own options on the procedures, on how you use them and so on and
     so forth.  Also, those operators are at the mercy of other operators
     doing other things with other systems.  I think even if you talked
     about how they would --
         DR. APOSTOLAKIS:  They haven't done that.
         MR. FORESTER:  No.
         DR. BONACA:  So when you're talking about operators, you're
     talking about the ones --
         DR. APOSTOLAKIS:  A single entity.  A single entity.
         DR. BONACA:  Yes.
         DR. APOSTOLAKIS:  In the control room.
         MR. FORESTER:  In the control room, that is correct.
         DR. APOSTOLAKIS:  I have a few comments.  This is the only
     slide on quantification?
         MR. FORESTER:  Yes.
         DR. APOSTOLAKIS:  So I will give you a few comments.
         MR. KOLACZKOWSKI:  Except for the example.
         MR. FORESTER:  Yes, we do have the example.
         DR. APOSTOLAKIS:  Okay; on page 10-7, coming back to my
     favorite theme, item two says the error-forcing context is so
     non-compelling that there is no increased likelihood of the unsafe act. 
     If it really were an error-forcing context, how can the error-forcing
     context be so non-compelling that there is no increased likelihood?  I
     really don't understand your insistence on calling it forcing.
         MR. FORESTER:  Well I guess --
         DR. APOSTOLAKIS:  You don't have to comment.
         MR. FORESTER:  We've also been criticized for using the term
     error at all, okay?  But the point we want to make is operators are led
     to take these unsafe actions.
         DR. APOSTOLAKIS:  Forcing -- and later on, you say that the
     probability, even if it's very relevant, will be something like 0.5.
         MR. FORESTER:  Yes; I know Strater uses error-prone
     conditions or error-prone situations, so there are other terms.
         DR. APOSTOLAKIS:  You use here the HEART methodology.  Have
     you scrutinized it?  I'll give you some things that bother me.  On Table
     10-1, there are generic task failure probabilities, so the first one is
     totally unfamiliar, performed at speed with no real idea of likely
     consequences, and there is a distribution between 0.35 and 0.97.
         Then, it says that in Table 10-2, HEART uses performance
     shaping factors to modify these things, and the first one in Table 10-2
     is unfamiliarity.  So now, I have a generic description of a totally
     unfamiliar situation that I have to modify because I'm unfamiliar with
     it, and the factor is 17.  It's the highest on the table.  So I don't
     know what that means.  Either I was unfamiliar to begin with, and
     second, there is a distribution in Table 10-1.  Am I supposed to
     multiply everything by 17?  What am I doing?  Am I multiplying the 95th
     percentile by 17?  Am I multiplying the mean by 17?
         MR. FORESTER:  It's just the action.  It's just the
     probability for the action.
         DR. APOSTOLAKIS:  It's not explained.
         MR. FORESTER:  We didn't really claim to completely explain
     HEART in there.  We're trying to provide some guidance.
         DR. APOSTOLAKIS:  You need to scrutinize it, I think, a
     little better.
         MR. FORESTER:  I think you're right, and a lot of the
     categories are not always easily used.  It's not a perfect method.
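         [Illustrative note, not part of the spoken record:  as HEART
     is usually described in the literature, the generic task value is a
     nominal point estimate, and each applicable error-producing condition
     multiplies it by ((maximum multiplier - 1) x assessed proportion of
     affect + 1), with the result capped at 1.  A minimal sketch of that
     arithmetic only, using the multipliers quoted in the discussion and an
     illustrative nominal value between the quoted bounds; this is a reading
     of the method, not an endorsement of it:]

# Minimal sketch of the HEART arithmetic as usually described: start from the
# generic task's nominal value and, for each applicable error-producing
# condition, multiply by ((max multiplier - 1) * assessed proportion + 1).
generic_task_hep = 0.55      # illustrative nominal value, between the quoted 0.35-0.97 bounds

epcs = [
    # (maximum multiplier, assessed proportion of affect, judged by the analyst)
    (17.0, 0.4),             # unfamiliarity, judged to apply partially
    (5.0, 0.2),              # mismatch between perceived and real risk
]

hep = generic_task_hep
for max_mult, proportion in epcs:
    hep *= (max_mult - 1.0) * proportion + 1.0

hep = min(hep, 1.0)          # a probability cannot exceed 1
print(f"Adjusted HEP (illustrative): {hep:.2f}")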
         DR. APOSTOLAKIS:  And then you say that one of the modifiers
     is a need to unlearn a technique and apply another that requires the
     application of an opposing philosophy.  I'm at a loss to understand how
     you make that decision, that somebody has to unlearn something and apply
     something else.
         And then, there is a modifying factor of five if there is a
     mismatch between the perceived and the real risk.  I don't know what
     that means, risk.  If I were you, I would throw this out of the window. 
     You don't have to take all this great stuff you presented in the first
     18 view graphs and then present this thing.  You should do your own work
     here, in my view.
         As I said earlier, I thought that the quantification part is
     not at the same level of quality as the rest of the report.
         MR. FORESTER:  Agreed.
         MR. KOLACZKOWSKI:  Agreed.
         DR. APOSTOLAKIS:  You are throwing away a lot of the details
     that you took pains to explain to us.  There are no error mechanisms
     here anywhere.  And I fully agree, by the way, with what you said about
     the difficulty and, you know, there has to be some sort of a judgment
     here.  There is no question about it, and this committee will be very
     sympathetic to that, but not this kind of thing.  And this is old,
     right?  The reference is from 1988, way before ATHEANA came into
     existence.
         The thing that is really startling is that it is not very
     clear how the error-forcing context is to be used.  They mention SLIM. 
     I thought I was going to see here an application of SLIM with the
     problems that you mentioned -- everybody has those problems -- but where
     you would remedy one of the difficulties or weaknesses of SLIM, namely,
     which performance shaping factors one has to consider.  And I think your
     error-forcing context or whatever you call it in the future is ideal for
     producing those.  I mean, you have done such a detailed analysis.  Now,
     you can say, well, a rational application of SLIM would require perhaps
     a set of independent PSFs or mutually exclusive -- I don't know what the
     right term is -- and these are derived from the error-forcing context we
     just defined in this systematic way, and no one will blame you for that,
     because, I mean, if you've worked in this field for a month, you can
     realize that the numbers will never be, you know, like failure rates,
     where you can have data and all of that, and the anchors, I think you
     pointed out, is an extremely important point, and perhaps you can do
     something about it to give some idea.
         But this guy who developed HEART had no heart.
         [Laughter.]
         DR. APOSTOLAKIS:  His task is unfamiliar, and then, they
     modify it because I'm unfamiliar with the situation?  I mean, what is
     this?  And a factor of 17, right?  You multiply the probabilities by
     approximately 17.
         [Laughter.]
         MR. FORESTER:  The only advantage to that method is this guy
     did claim that a lot of these numbers were based on empirical data.
         DR. APOSTOLAKIS:  And you know very well --
         MR. FORESTER:  Yes, well, okay --
         DR. APOSTOLAKIS:  -- what that means.
         [Laughter.]
         DR. APOSTOLAKIS:  Now, another thing -- so I'm very glad
     that you are not willing to really defend chapter 10 to the end.
         MR. FORESTER:  No.
         DR. APOSTOLAKIS:  It's probably something you wouldn't be
     working on.  Okay; I'm very happy to hear that, I must say, because I
     was very surprised when I saw that.
         Now, the -- actually, some of the discussion is really
     great.  The figures there -- there is some typo; figure 10-1 is
     repeated twice.  Well, that's okay.  There was one other point that I
     wanted to make
     which now escapes me -- oh, this -- all the information processing
     paradigm is not here, right?  You are not really using that.
         MR. FORESTER:  Well, we're using --
         DR. APOSTOLAKIS:  All of this stuff, I didn't see it playing
     any role, at least the way it is now.
         MR. FORESTER:  It's not explicitly represented; you're
     right.
         DR. APOSTOLAKIS:  The way it is now; okay.
         MR. FORESTER:  In our minds, it's represented.
         DR. APOSTOLAKIS:  Oh, I know that the mind is a much broader
     term.
         Okay; I'm very glad for that.
         Okay; the dynamic element, and I believe Hollnagel commented
     on that, too.  We were doing in a different context some retrospective
     analysis recently at MIT of two incidents.  One was at Davis-Besse; the
     other was at Catawba.  And what you find there is that there are some
     times, critical times, when the operators have to make a lot of
     decisions.  There's no question about it.  That's why you ask about the
     training, and, I mean, you don't really want to attack each one with a
     full-blown analysis.
         MR. FORESTER:  Right.
         DR. APOSTOLAKIS:  But in one of the incidents, I think it
     was the Catawba, there were two critical points.  One was 6 minutes into
     the accident; the other, 9 minutes, where they had to make some critical
     decisions, and the contexts were different; there was a dynamic
     evolution.  In other words, at 9 minutes, they had more information;
     they were informed that something was going on, so now, they had to make
     an additional decision.  This specific element, the dynamic nature of
     the EFC, is not something that I see here, and perhaps it's too much to
     ask for at this stage of development, but it appears to be important,
     unless I'm mistaken.
         In other words, is the error-forcing context defined as a
     deviation from what's expected?  And for this sequence, it's once and
     for all?
         MS. RAMEY-SMITH:  No.
         MR. CUNNINGHAM:  No.
         DR. APOSTOLAKIS:  No, so you are following the evolution and
     the information that is in the control room, and you may have to do this
     maybe two or three times at two or three different --
         MR. KOLACZKOWSKI:  Exactly, George.  We present this as a
     very serial type of process.  Your point is well taken.  You really have
     to iterate and iterate.  I think in one of the examples that we have for
     the loss of main feed water event, one of our deviation scenarios is X
     minutes into the event, all of a sudden, the spray valve on the
     pressurizer is called for, and it sticks.
         DR. APOSTOLAKIS:  Right.
         MR. KOLACZKOWSKI:  That changes the scenario; it changes the
     operator's potential response, and that's carried through.  So I think
     we try to do that.
         DR. APOSTOLAKIS:  Okay.
         MR. KOLACZKOWSKI:  But clearly, we're still discretizing the
     situation into pieces of time, yes.
         DR. APOSTOLAKIS:  Okay; good, so, the dynamic nature of that
     is recognized; that's good.
         Now, a recovery in this context, my impression is it means
     recovering from errors that they have made, not recovery in the sense
     that the average person or the plant person will use it to recover from
     the incident.  They are two different things, aren't they?
         MR. KOLACZKOWSKI:  Well, ultimately, we're worried about it. 
     The core damage is the situation we're worried about.  We're ultimately
     worried about recovering the scenario.  So, as I said, it will go back
     to a success path.  But part of that recovery may be overcoming a
     previous error or unsafe act that the operator has performed.  So now,
     something has to come in that changes his mind about what I did an hour
     ago, I now recognize was a mistake, and now, I need to do this.  So,
     that could be part of the recovery, but ultimately, we're looking at
     recoveries of the scenario, yes.
         DR. APOSTOLAKIS:  So both.
         MR. FORESTER:  Both.
         DR. APOSTOLAKIS:  Okay; well, fine; the main thing was to
     realize that you, yourselves, felt that chapter 10 needed more work, so I
     have no more questions.
         DR. POWERS:  But I may still.
         DR. APOSTOLAKIS:  I'm sorry; yes.
         DR. POWERS:  As you're willing to say that the system is
     more complicated, how do we decide that it's better?
         DR. APOSTOLAKIS:  In my view, as I said earlier, the
     emphasis on context, the extreme attention that they have paid to
     context is a very good step forward.  Other HRA analyses, they do some
     of it but not -- the quantification part, I am not prepared to say that
     it is better, but I am glad to see that they are not saying that either. 
     But I think this detailed analysis that you see -- there are other
     arguments about its scope, but that's expected.
         I think it's a good step forward.  It's a very good step
     forward.  If I look at the --
         DR. POWERS:  Maybe the question is just different.  The
     analysis is more complicated.  Therefore, you would have to be
     sparing in your application of it.  How would we know when this
     complicated system --
         DR. APOSTOLAKIS:  I asked them that question and
     unfortunately, they got upset.
         [Laughter.]
         DR. POWERS:  And when can I do something else, and what is
     that something else that I should do?
         DR. APOSTOLAKIS:  I think the message is very clear,
     gentlemen, that you have to come up with a good screening approach.  You
     can't apply this to every conceivable human action.
         MR. CUNNINGHAM:  That's right, and if we need to better
     describe how to do that and take that on, we've already talked about
     that as an issue in terms of next year's work or this year's work, that
     sort of thing.
         DR. APOSTOLAKIS:  Speaking of years, Hollnagel points that
     out, and I must say I'm a little disturbed myself.  This project started
     in 1992 -- 7 years.  Do all the members feel that this is a reasonable
     amount of time for the kind of work they see in front of them?
         DR. KRESS:  Well, we'd have to know whether this work was
     continuously done and how many people --
         DR. APOSTOLAKIS:  Mr. Cunningham is here.  He can explain
     that to us.  Were there any --
         DR. POWERS:  Well, come on, George.  It's difficult, is it
     not, to manage the NRC?  And besides, on the performance that they want
     --
         DR. APOSTOLAKIS:  No, but on the other hand, if I'm
     presented with a piece of work, I mean, how much effort has been
     expended on it is a factor in deciding whether the work is good or not.
         DR. POWERS:  It is?  That stuns me.  It certainly is not in
     the thermal hydraulics community.
         [Laughter.]
         DR. APOSTOLAKIS:  After such a powerful
     argument --
         [Laughter.]
         DR. APOSTOLAKIS:  I defer humbly to -- I withdraw my
     question.
         DR. SEALE:  The thing is that the entropy is always
     increasing, whether you do a damn thing about it or not.
         [Laughter.]
         DR. KRESS:  Only in closed systems.
         DR. BONACA:  One thing that I'd like to -- I like the
     process, et cetera.  Still, it seems to me that the process doesn't
     distinguish, for example, between the French situation and the American
     situation.  In the U.S., we have extremely detailed procedures that the
     operators will live by, and literally 10 years were expended to put them
     together, going through a process which was as thorough as this and
     involved all kinds of people, from the operators to engineers to
     everybody else.  And it seems to me that -- I'm trying to understand if
     I go to review a possible situation that develops in an accident at a
     French plant, where, in fact, there isn't a structured procedure, I
     understand how I would have used it.
         In fact, I would use it to see if the operator was
     discussing the elements and what kind of errors he will make.  I would
     make a hypothesis.  But in the U.S., I would tend to say you apply it in
     a way to review the procedures that they followed, to see what errors he
     would make in the U.S., and to eliminate all of those elements, so it is
     then focused purely on the many possible -- see what I'm trying to say? 
     I don't see any of the --
         MR. KOLACZKOWSKI:  Yes.
         DR. POWERS:  It seems to me that it would be that way
     because of the tie to the DBAs.  When you tie them to the DBAs, you've
     only got one measure.  You say, gee, I can use this just to make sure my
     -- but I think that when you go into the severe accident space, and you
     have multiple failures, this network of deviations, there is an infinite
     net that they show, and it changes character.
         DR. BONACA:  It does.  There are new procedures.  It's
     totally different.  They're not at all looking at these DBAs.  They're
     looking at the air pressure, temperature condition, et cetera, is moving
     in this direction; what are you going to do?
         DR. BARTON:  And you still have an underlying error.
         DR. POWERS:  But still, you have an underlying failure, and
     when you go to multiple failures --
         DR. BONACA:  You do, and it makes an assumption that, you
     know, you are going to a key procedure, because you have conditions that
     will require your ECCS to come up, for example, so there are some entry
     decisions you make, but then, especially for the EPGs, for BWRs, they're
     extremely symptom-oriented.  I mean, at some point, you forget where you
     came from.
         DR. POWERS:  Even with the symptom-oriented, you do things
     that apply to an area that ultimately get you to what's wrong.
         DR. BONACA:  I understand, but again, if it was a plant X,
     and they would use this, the first thing I would do, I would go through
     this process to understand my procedures, where billions of dollars were
     invested; you're correct, that's really what happened.  I mean, if they
     are followed literally, then, it would be different in certain respects
     from the application that we make for -- where I have no prescribed way,
     and so, I may discover that that's why I led the operator into the
     situation we are in.
         Now, I don't know if this needs to have a different
     perspective when you apply it to our plants, which are going through
     very structured procedures.  It seems to me every scenario would be
     still open if you review it in a way where everything is possible, and
     yet, you're ignoring the existence of the framework, which is exactly
     the pattern of the steps you're suggesting here.
         MR. CUNNINGHAM:  I guess my reaction is that I think we
     would have to kick that around among the team as to implications of the
     French style versus the American style and that sort of thing.  I just
     -- I don't think we've thought much about that.
         DR. APOSTOLAKIS:  It may require a designer approach.
         We will recess for 12 minutes, until 10:35.
         [Recess.]
         DR. APOSTOLAKIS:  We have about an hour and 5 minutes, so
     you will decide how best you want to use it.  It's yours.
         MR. FORESTER:  Okay; I think what we'd like to do is present
     an example of application of the method to some fire scenarios.  This is
     part of another task that we have to apply ATHEANA to fire scenarios. 
     We want to sort of do a demonstration of the methodology for fire
     applications, and Alan Kolaczkowski is going to present this.
         DR. APOSTOLAKIS:  We have this or we don't have this?  We
     don't have it.  No, we don't have the report.
         MS. RAMEY-SMITH:  It hasn't been written.
         MR. KOLACZKOWSKI:  My name is Alan Kolaczkowski.  I work for
     Science Applications International Corporation.  George, I'm one of the
     new team members.  I've only been around for about a year and a half so
     --
         DR. KRESS:  You're saying we can't blame you.
         MR. KOLACZKOWSKI:  Blame?  No, I guess you can't.
         Okay; well, you've heard at least in the abstract now what
     the methodology involves, and again, I think the important point is --
     and I think George articulated this very well -- that we're really
     trying to look at how plant conditions and certain vulnerabilities,
     either in the operator's knowledge about how the scenario might proceed
     or weaknesses in the procedures, whatever, may come together in a way
     that, if the scenario is somewhat different from, if you will, the base
     case scenario, the operator is prone to perform certain actions which
     would be unsafe in light of the way the scenario is actually proceeding.
         I want to demonstrate now actually stepping through the
     process, which will make some of these abstract ideas perhaps a little
     bit more concrete.  I'll step through it by actually showing you an
     example, and as John pointed out, what I want to do is take you through
     a couple of fire analyses that we've done, and as Ann pointed out, the
     report is currently in the process of being put together.
         So, the first slide, what I'd like to point out here really
     is focus primarily on the third bullet, unless you have questions on the
     others, and that is if you look at current HRA methods and the extent
     that they look at fire events, and certainly, this had to be done as
     part of the IPEEE program by the licensees, et cetera, what you find is
     that a lot of the current HRA methods look at the human reliability
     portion of the issue pretty simplistically.  Most of the IPEEEs, if you
     look at them, what they've done is they've taken their human error
     probabilities from the internal events, and they might put a factor of
     five on it and say, well, the stress is probably higher because there's
     a fire going on, and there's a bunch of smoke, et cetera, and that's
     what we're going to use for our human error probabilities.
         And there really is, for the most part, not a hard look at
     what the fire is doing.  How is the equipment responding?  Might some of
     those responses be erratic?  How might that change the way the operator
     responds during the scenario, et cetera?  That kind of look at what the
     human is doing is typically not taken.  It's treated pretty
     simplistically, for the most part.  And so, we thought that this was an
     area that would be very fruitful for ATHEANA to look at in order to
     look at the context of fires and how scenarios from fire initiators
     might affect the way the operators will respond as the fire progresses
     and so on and so forth.  So that's kind of why we looked at this.
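         For illustration only, the simplistic practice described here
     could be sketched roughly as follows; the function name, the factor of
     five, and the example numbers are assumptions drawn from the discussion
     above, not from any particular IPEEE submittal.

     # A minimal sketch of the simplistic fire-HRA practice described
     # above: reuse the internal-events human error probability (HEP)
     # and inflate it by a stress factor (a factor of five is the
     # example cited in the discussion).  Names and values are
     # illustrative only.
     def fire_hep_simplistic(internal_events_hep, stress_factor=5.0):
         """Scale an internal-events HEP for fire conditions, capped at 1.0."""
         return min(1.0, internal_events_hep * stress_factor)

     # Example: an internal-events HEP of 0.01 becomes 0.05.
     print(fire_hep_simplistic(0.01))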
         DR. APOSTOLAKIS:  What is SISBO?
         DR. POWERS:  Self-induced station blackout.
         DR. APOSTOLAKIS:  What?
         DR. POWERS:  Self-induced station blackout.
         MR. KOLACZKOWSKI:  I'm going to describe that in the next
     slide, I believe.
         So we decided that this was a pretty fruitful area to look
     at, and that's why we chose this one as a good example to present here
     in front of the committee.
         DR. POWERS:  Do we have a good phenomenological
     understanding of how the fire affects equipment and other things?
         MR. KOLACZKOWSKI:  I guess I don't know how to measure good. 
     I think we have some general ideas, but that's part of the problem:
     fires can affect equipment in many, many different ways, which can,
     therefore, make scenarios somewhat different than what we expect, and
     it's these kinds of deviation scenarios that we're talking about.
         MR. CUNNINGHAM:  In parallel with our work on human
     reliability analysis, we have a separate program that's looking at the
     issue of modeling of fires in risk analyses.
         DR. POWERS:  They repeatedly tell me that they can't really
     predict what -- that that's why their research needs to go on --
         MR. CUNNINGHAM:  Yes.
         DR. POWERS:  -- is because they don't know what kinds of
     things will happen to equipment.
         MR. CUNNINGHAM:  That's right; both are viable subjects,
     reasonable subjects for research.
         DR. POWERS:  And I have had the licensees in saying vicious
     and evil things about the NRC staff, because they take too
     conservative a position on fire-induced changes and things like that.
         MR. CUNNINGHAM:  Again, we have another program.  Part of
     the reason for picking the fire example was to try to bring some of
     these -- bring the two programs a little closer together.
         MR. KOLACZKOWSKI:  The next slide, as you're going to see in
     a moment, we picked two particular scenarios to look at, but first, you
     have to understand a little bit what the plant design is like, at least
     in a general sense, for dealing with fires and what this SISBO concept
     is, because we did decide to look at a so-called SISBO plant.
         This cartoon, if you will, is meant to at least show you
     what the separation is typically like in a nuclear power plant for
     dealing with fire, and then, as I said, I want to introduce the SISBO
     concept.  You can see here that if you look at the cabling and equipment
     in the plant and so on, typically, for Appendix R purposes and so on, in
     a very simple, two-division kind of plant, you end up separating the
     cables in the various cable trays and having certain walls and rooms and
     fire barriers, et cetera, between equipment, such that all the division A
     equipment is located somewhat separately and is at least protected from
     a fire standpoint from division B equipment, and we see that displayed
     in this cartoon.
         Of course, plants now have a remote shutdown panel
     associated with them.  Usually, that remote shutdown panel has a limited
     amount of instrumentation and controls associated with it for
     controlling one of the divisions of equipment for shutting down the
     plant safely should the operators have to leave the main control room,
     which might be the case for a fire in the control room area; as
     you'll see in a moment, if it's a SISBO plant, there are other reasons
     why they may leave the main control room as well.
         So anyway, we have this standard separation between the two
     divisions, and that separation, to the extent possible, is maintained
     all the way up through the cable spreading room, the relay room, the
     main control room, where we have the various fire barriers and so on and
     so forth.  As I indicated, we have this remote shutdown panel, the idea
     being that if we need to leave the main control room, we go down to the
     remote shutdown panel as well as other local areas in the plant, and we
     operate this -- what's called dedicated areas of equipment or division A
     equipment, and typically, what's done is that there is a set of switches
     there on the remote shutdown panel, and that's just shown as one switch
     in this little cartoon, that are thrown such that we become now isolated
     from the main control room so that shorts, hot shorts or other
     electrical problems that might be propagating up through the main
     control room won't come down to the remote shutdown panel.
         And now, we hook the remote shutdown panel in directly with
     the equipment out in the field, and then we safely shut down the plant
     from there.  What's unique about the SISBO idea is that some plants, in
     order to respond to various requirements in Appendix R and other
     fire-related requirements for dealing with potential hot shorts and so
     on, have taken on this so-called self-induced station blackout approach.
     Basically, what happens is that once the fire gets so severe that they
     feel they are losing control of the equipment because of erratic
     behavior, potentially because of hot shorts, whatever, they essentially
     de-energize all of the equipment in the plant and, at the same time,
     energize only the alternate area equipment if the fire is in a dedicated
     area zone, or they go down to the remote shutdown panel and operate the
     dedicated area equipment if the fire is in an alternate equipment zone
     and then re-energize that equipment off the diesel.  And then, they
     operate just that particular set of equipment to safely shut down the
     plant.
         So essentially, they put the plant into a loss of power
     situation and then re-energize either A-bus or B-bus and then use just
     selected equipment off of that bus that they think is not being affected
     by the fire.  Of course, the advantage of that is that now, hot shorts
     can't occur in the A equipment, let's say if that's where the fire is,
     because you've got it all de-energized, and so, you won't have a
     spurious opening of the PORV or something like that that could make the
     scenario much worse.  So that's kind of the concept behind the SISBO
     idea.
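         As a hypothetical illustration of the SISBO logic just
     described -- the zone labels, the function, and the wording of the
     actions are assumptions made for this sketch, not taken from any plant
     procedure -- the decision might be summarized as:

     # Hypothetical sketch of the self-induced station blackout (SISBO)
     # strategy described above: de-energize everything, then re-energize
     # only the division judged unaffected by the fire, and operate from
     # the corresponding location.  Zone labels are assumptions.
     def sisbo_response(fire_zone):
         if fire_zone == "alternate":
             # Fire among the alternate (division B) equipment: use the
             # dedicated (division A) equipment from the remote shutdown panel.
             return {"deenergize": "all buses",
                     "reenergize": "division A (dedicated) bus from its diesel",
                     "operate_from": "remote shutdown panel"}
         if fire_zone == "dedicated":
             # Fire among the dedicated (division A) equipment: use the
             # alternate (division B) equipment from the main control room.
             return {"deenergize": "all buses",
                     "reenergize": "division B (alternate) bus from its diesel",
                     "operate_from": "main control room"}
         raise ValueError("fire_zone must be 'alternate' or 'dedicated'")

     print(sisbo_response("alternate"))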
         Next slide.  Now, for illustrating the ATHEANA process, what
     we've done is we've reanalyzed two fire scenarios that have been
     previously analyzed in an existing PRA.  This just highlights what the
     two fires are and what the potential effects of the fires are for this
     particular plant.  One is an oil fire in the auxiliary feed water system
     pump B room.  This is for their classification, a so-called alternate
     fire area, and you can see that if the fire does become significant, the
     effects are quite severe.  Four out of four of the non-safety busses
     become affected and would potentially have to be shut down.  You also
     potentially lose the division B 4160-volt safety bus.  That's the safety
     bus for the various safety loads.
         Of course, you lose, obviously, pump B of auxiliary feed
     water, and it turns out in this particular plant, because of where the
     cabling is located, if you had a severe fire in this room, you would
     also affect the ability to operate and control the turbine pump.  This
     is a three-pump system that has two motor pumps, A and B, as well as a
     turbine pump.  This fire would affect one of the motor pumps as well as
     the turbine pump.
         If this situation got this severe, the expectations,
     according to the procedures, would be that you would leave the main
     control room, and then, you would shut down using limited division A,
     that is, dedicated equipment, from the remote shutdown panel, and there
     is an EOP, so called FP-Y, that governs how this is actually
     implemented.
         The other fire is, as I indicated there, a fire concerning
     certain safety busses, and it turns out these safety busses are located
     in the same area, room, if you will, that the remote shutdown panel is
     located.  So this is a so-called dedicated area fire, and again, if this
     fire got severe, such that the feeling was that the operators were
     losing control of the plant, the expectations, per the EOP, would be that
     -- well, first of all, you would lose the division A busses and the
     ability to use that diesel and its various loads, and the expectations
     would be you would shut down using division B equipment or so-called
     alternate equipment.
         In this case, they would stay in the main control room to
     operate that equipment, but they're still going to de-energize
     everything and then only energize the B busses and then use the B
     equipment.  So you're still going into a self-induced loss of power
     situation.
         Lastly on this slide, I wanted to indicate what the current
     PRA insights are about the human reliability performance in these two
     fires.  And if you look at what are the sort of dominant lessons learned
     from the HRA analysis for this existing PRA, those are highlighted there
     on the third slide, that there is a potential for a diagnosis error to
     even enter the right EOP, either EOP-Y if it's an alternate area fire or
     EOP-Z if it's a dedicated area fire, so notice that one of the things
     they have to know is where is the fire in order to know which EOP to
     enter.
         And the reasons why the existing human reliability analysis
     technique says that a diagnosis error might occur are indicated here: 
     either the operator would misread or miscommunicate the cues to enter
     the procedure, or he might just plain skip the step and not enter the
     procedure or might misinterpret the instruction regarding when to enter
     the procedure.  Those were highlighted in the PRA as possible reasons
     for why he might make this diagnostic error.
         The more dominant errors, however, in the HRA, if you
     actually look at the quantified results, are these:  they claim it's
     much more likely the operators will make mistakes in actually
     implementing the EOPs themselves, just because they're very complex and
     so on and so forth.  There are a lot of steps involved.
         Most of the errors, they claim, will be as a result of
     switch positioning errors or just because of the fact that they may omit
     certain steps because they're in a high stress situation.  So that's
     kind of what you learn from the existing PRA if you look at the human
     reliability analysis for these two fires.
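         For context, the kind of split the existing HRA makes -- a
     diagnosis error plus a set of execution errors, combined assuming
     independence -- could be sketched roughly as below; the function and
     the placeholder numbers are assumptions for illustration, not values
     taken from the IPEEE.

     # Rough sketch of the existing-PRA treatment summarized above: a
     # diagnosis error (failing to enter the right EOP) plus execution
     # errors (switch positioning, omitted steps), combined assuming the
     # contributors are independent.  All numbers are placeholders.
     def conventional_fire_hep(p_diagnosis_error, execution_error_probs):
         p_execution_ok = 1.0
         for p in execution_error_probs:
             p_execution_ok *= (1.0 - p)
         p_success = (1.0 - p_diagnosis_error) * p_execution_ok
         return 1.0 - p_success

     print(conventional_fire_hep(0.01, [0.005, 0.005, 0.01]))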
         DR. POWERS:  The regulation is that they're required to be
     able to shut this plant down, so you're going to look at carrying out
     that requirement.
         MR. KOLACZKOWSKI:  That is correct; we don't look at the
     errors associated with still safely shutting down, but look at it now
     from an ATHEANA perspective and say that if we think about the context
     of these fires a little more, what might we learn that might be new,
     more lessons learned that we could apply to ways to make the operators
     better-prepared for dealing with these fires than just simply, well,
     they might skip the step.  Well, what are we supposed to do about that? 
     I guess we could say increased training, maybe, but we want to see if
     ATHEANA can provide some additional insights as to how the operator may
     not bring the plant back to a safe condition.
         Yes?
         DR. APOSTOLAKIS:  Who did the PRA you are referring to?
         MR. KOLACZKOWSKI:  I'm sorry?
         DR. APOSTOLAKIS:  The PRA, the existing PRA.  Is that the
     utility?
         MR. KOLACZKOWSKI:  It is a -- yes, it's an IPEEE from a
     licensee.
         DR. APOSTOLAKIS:  Okay.
         MR. KOLACZKOWSKI:  Now, John indicated that one of the first
     things we do, after really defining the issue -- which, in this case, is
     how we can better learn how the operators might make mistakes given
     these two kinds of fires and, therefore, take from that lessons learned
     and ways to improve operator performance given these kinds of fires --
     is try to understand how an operator thinks these two fires would
     normally proceed.
         This is that defining the base case scenario step.  This is
     trying to come up with that collective operator mindset as to what his
     expectations would be given that these fires actually occurred, and our
     base case is essentially summarized in this and the next slide, and let
     me just kind of quickly go through this, and then, if you have any
     questions, we can proceed to those.
         Of course, once the fire has happened -- let's assume for the
     moment that it happens without a person being in the room at the
     particular time, et cetera -- it's going to start to affect some
     equipment, but one of the first things that will probably
     occur is that we will eventually get a fire detection alarm.  There are,
     at this plant, multiple alarms for detecting smoke, et cetera, in these
     rooms and so on, so we would expect that fairly early in the scenario
     one of the first indications would be this fire detection alarm.
         The operators then enter what is called EOP FP-X upon a fire
     detection alarm, which basically provides the initial things that they
     do once a fire has been detected in the plant.  One of
     the first steps in that procedure is they ask another operator out in
     the plant to go and visually validate that there actually is a fire,
     that this is not a spurious or false alarm, and the procedure almost
     reads as though the intent is that they don't do too much more until
     that validation comes back.
         Let's assume they do get the validation.  Then, the fire
     brigade is assembled.  It's called on.  And one of the things they
     do is they unlock the doors to the suspected area to make sure that the
     fire brigade is going to have fairly easy access to that area, et
     cetera, and there's a general notification over the Gaitronic system
     that there is a fire in the plant and those kinds of things.
         Now, during this time, especially if the fire is not yet all
     that severe, the plant is still running.  It's just humming along,
     running along fine, and, in fact, the main control room staff are
     attempting to just maintain the plant online and under proper control
     while the fire brigade is now getting assembled and getting ready to do
     their thing.
         We expect that as time proceeds, and let's say the fire
     brigade is finally getting down there, perhaps entering the room, et
     cetera, but if the fire is getting to the point where it's approaching
     the severities that I talked about in the previous slides, then, we're
     going to start seeing erratic operation of some of the
     normally-operating equipment.  Perhaps we're going to start seeing flow
     acting erratically; maybe if you have current indications on certain
     pumps, like the AFW pump, you might begin to see erratic indications of
     the current or maybe voltages on certain busses, depending, again, on
     which cables are affected and when that occurs.
         DR. POWERS:  Isn't it much more likely that the things that
     are going to be affected are the instrumentation and not the equipment
     itself?
         MR. KOLACZKOWSKI:  That is true, too.  I mean, it depends,
     looking at each individual room, on how many control and power
     cables there are versus how many instrumentation cables.  Certainly, the
     AFW pump is instrumented to some degree, but the flow instrument for
     flow going to the steam generator might be in an entirely different
     room, and it's not affected at all.
          So it's very, very plant-specific, obviously, as to what the
     specific effects are, but we would generally say erratic operation of
     equipment, and certainly, your point is well-taken, Dana, that loss of
     some indications may be possible.  But the point is the plant isn't
     necessarily going to trip right away, and in a lot of small fires, as we
     know, the plant ran through the entire scenario just fine.  They put the
     fire out, and that's it.
         Now let's assume for the --
         DR. POWERS:  There is nothing at this point to indicate to
     trip this plant.
         MR. KOLACZKOWSKI:  I'm sorry?
         DR. POWERS:  There is nothing at this point --
         MR. KOLACZKOWSKI:  No, FP-X does not require them at this
     point yet to trip the plant.  And, in fact, they will try to maintain
     plant operation per their procedure at this plant.
         So we have potential erratic behavior of some of the normal
     operating equipment, perhaps some of the indications.  Notice that
     certain standby equipment may also be affected; for instance, that
     turbine pump, the turbine auxiliary feed water pump, may also
     have cables associated with that pump's control that are burning,
     and yet, they will not necessarily have any idea that that pump has
     been affected, because they haven't asked it to try to work yet. 
     They're still running the plant; the feed water pumps are still on. 
     They'd have no idea that the AFW turbine pump has now become
     inoperative.  They won't know that until they try to use it.
         So just recognize that there is some missing information
     in their situation assessment as to how bad this fire is, okay?  Now,
     also during this time, let's assume the fire brigade is trying to do its
     job.  There is going to be some diversion of attention as well, because
     there's going to be periodic communication between the fire brigade and
     the main control room staff.  One of the things they do is hand out
     radios, et cetera, and there's going to be talking back and forth:  how
     are you coming?  What's the situation?
         Maybe the brigade is saying, well, we haven't entered the
     room yet; there's an awful lot of smoke, et cetera, et cetera.  There's
     going to be some diversion of attention dealing with the fire brigade as
     well as trying to just make sure that the plant is okay.  That's part of
     the overall situation.
         Let's assume for the moment that the conditions get even
     worse.  Either the fire brigade is having trouble putting out the fire
     or whatever.  At some point, if enough erratic behavior is occurring,
     and we're actually beginning to have a lot of difficulty in actually
     controlling the plant, maintaining pressurizer level, maintaining feed
     water flows, whatever, that's when the judgment occurs for the operators
     to then enter either EOP-FP-Y if the fire is in an alternate zone or
     EOP-FP-Z if the fire is in a dedicated zone, and at that point, one of
     the first steps in that procedure is, yes, trip the plant, okay?
         Secondly, what they do after that, in the procedures, is
     they basically isolate the steam generators, and then they leave -- if
     they have to; if they're in EOP-FP-Y, they have to actually leave the
     main control room -- and then they start the de-energization process,
     and that's when they actually are pulling fuses, pulling breakers out
     locally in the plant, et cetera, and essentially putting the plant into
     a self-induced blackout.
         Simultaneously, they are -- and they actually take the crew
     and separate them up into about three or four different areas of the
     plant, so you have to also recognize that the crew is no longer working
     as a unit in one room anymore; they're now located in various areas of
     the plant talking on radios.  One guy is over pulling fuses in a DC
     panel; another person is over pulling breakers in an AC bus, et cetera. 
     So they're acting now certainly still in communication but as separate
     entities.
         They de-energize the various buses in the plant, and then,
     they bring on the appropriate bus, depending on whether the fire is in
     an alternate or dedicated zone, and then begin to bring on manually the
     equipment they're going to use to safely shut down the plant.  Now, in
     the base case scenario, even if the fire got this bad, the expectations
     of the operator would be, okay, we enter the right EOP procedure; we go
     through its implementing steps; we carry it out; we eventually
     restabilize the plant.  Sometime during this time, the fire eventually
     gets extinguished, and the scenario is over.
         So in a general sense, this would be sort of the
     expectations, even if the fire got fairly severe, as to what the
     operators' expectations would be as to how the scenario would proceed,
     and that's going to be our starting point to then build deviations on
     that scenario.
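         One way to picture the base case scenario just described is as
     an ordered timeline of expected events, with deviation scenarios built
     later by perturbing individual entries; the list below only paraphrases
     the narrative above, and the representation itself is an assumption for
     illustration, not part of the ATHEANA documentation.

     # Assumed representation of the base case fire scenario as an ordered
     # list of expected events; deviation scenarios are then built by
     # substituting perturbed entries (e.g., delayed detection).
     BASE_CASE_FIRE_SCENARIO = [
         "fire detection alarm",
         "enter EOP FP-X; send an operator to validate the fire",
         "fire validated; assemble fire brigade, unlock doors, announce",
         "maintain the plant online while the brigade fights the fire",
         "erratic indications appear as the fire grows",
         "judgment made to enter EOP FP-Y (alternate) or FP-Z (dedicated)",
         "trip the plant; isolate the steam generators",
         "de-energize the plant (SISBO); re-energize the selected division",
         "safely shut down; fire extinguished",
     ]

     def deviate(base_case, replacements):
         """Return a deviation scenario by substituting selected events."""
         return [replacements.get(step, step) for step in base_case]

     print(deviate(BASE_CASE_FIRE_SCENARIO,
                   {"fire detection alarm": "delayed fire detection alarm"}))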
         One of the things we'll also do early on in the process is
     try to focus on what human failure event or events and what particular
     unsafe acts we are really interested in analyzing.  This slide is meant
     to summarize the specific human failure event that we're looking at,
     which is really failure to
     accomplish heat removal.  Let's say we get to the point where they have
     to trip the plant, and now, they have to bring it back into a
     stabilized, cooled state, recognizing they may have to leave the main
     control room and go through this de-energization process and so on, and
     what if they fail to carry that out correctly for one reason or another?
         We take that overall human failure event and really break
     it down into, as we have here, three separate unsafe acts that we're
     going to try to analyze and for which we will determine, if we can, the
     probability of occurrence.  UA-1 is very much closely
     associated with that diagnostic error I talked about in the original
     PRA; that is, one unsafe act could be the failure to enter the right
     EOP, or waiting too long to enter that EOP, to the point where, perhaps
     by that point, so much equipment damage has occurred -- maybe hot shorts
     have also occurred -- that they have essentially lost all control of the
     plant and the ability to even bring it back to a cooled and stable,
     safe state.
         DR. APOSTOLAKIS:  What's too long?  Who determines the
     length of fire?
         MR. KOLACZKOWSKI:  For purposes of this illustration, we
     haven't tried to necessarily answer that question, George.  It would
     obviously depend on the specific plant; how big the fire grows; how fast
     the equipment gets affected.  You know, you could do that by doing
     various COMPBRN runs for that room and so on and so forth.  It would be
     very plant specific.  I mean, I could try to give you some general
     ideas, I suppose, but we have not tried to address that specifically in
     this illustration.
         DR. APOSTOLAKIS:  Okay; but in terms of the base case
     scenario --
         MR. KOLACZKOWSKI:  Yes?
         DR. APOSTOLAKIS:  -- do you have an idea as to how much time
     they have?  I thought that was one of the premises of defining the base
     scenario.
         MR. SIEBER:  It depends on how big the fire is.
         DR. APOSTOLAKIS:  Well, okay, but they have to have some
     sort of an idea how quickly they have to do it.
         MR. KOLACZKOWSKI:  I agree that as part of the base case
     scenario, you would describe for a specific plant how long do they think
     it would take before this fire would get that large and so on, and
     that's going to be a very plant-specific answer.
         DR. APOSTOLAKIS:  I see Jack is shaking his head here.
         MR. SIEBER:  I don't think you can do it.
         DR. APOSTOLAKIS:  So how will the operators act?
         MR. SIEBER:  You act as quickly as you can without making
     any mistakes.
         [Laughter.]
         DR. POWERS:  What's happening in reality is that you've got
     something, the fire alarm or something.  You've got some people doing
     things.  They're talking to you about what they're finding.  In the
     meantime, you're going to have instruments that are telling you
     something is going on, and the urgency -- well, it's urgent to get the
     fire out, but it's not urgent to trip the plant until you get
     something urgent.  Who says that?  It's the instrumentation board or the
     people that are talking about it.  They say the fire is very big, and we
     can't get it out with the people we've got; you're going to trip the
     plant.
         DR. APOSTOLAKIS:  And this is now on the order of minutes?
         DR. POWERS:  Minutes.
         MR. KOLACZKOWSKI:  It could be.
         DR. POWERS:  Yes; I know.  I mean, some of us are more
     incredulous than others, but maybe that's just an area that somebody is
     going to have to work on.  It's the area of most extreme abuse, I
     think, of what's already a very laborious process.
         DR. APOSTOLAKIS:  I think that's related also to the problem
     of screening at the beginning.  In other words, you really have to try
     to make this not look like it's an open-ended process that only a few
     select people can apply.
         I have another question.  I'm confused there by the second
     paragraph.
         MR. KOLACZKOWSKI:  Okay; I was going to get to that, George.
         DR. APOSTOLAKIS:  I think we have to hurry.
         MR. KOLACZKOWSKI:  Okay; go ahead.
         DR. APOSTOLAKIS:  Triggered error mechanisms include no
     entry to procedures.  And then, it says tends to lead to unsafe acts,
     including taking no action.  I thought the mechanism was something
     different.  I agree with the last statement, but if they delay or they
     take no action, that's an unsafe act.  I just don't see how it is an
     error mechanism.
         MR. KOLACZKOWSKI:  Yes, it looks like maybe that is
     miscategorized and should be down as an error type.
         DR. APOSTOLAKIS:  Okay; so it shouldn't be classified as a
     trigger mechanism.
         MR. KOLACZKOWSKI:  I think I would agree with you, George.
         DR. APOSTOLAKIS:  Okay; I think we've got the flavor of the
     search.
         MR. KOLACZKOWSKI:  Okay.
         DR. APOSTOLAKIS:  Unless the members want to see two, three
     -- do you want to continue on to the deviation scenario development now?
         MR. KOLACZKOWSKI:  That's fine; that's fine.
         DR. APOSTOLAKIS:  Number 30?
         MR. KOLACZKOWSKI:  That's fine.
         So we go through various searches to try to come up with
     credible ways a scenario could be different, such that they trigger
     certain error mechanisms that we think will lead to the error types of
     interest, okay?  Now, once we've gone through those
     searches, and we have some idea of credible ways that the scenario might
     deviate from the base case that really set up the potential for the
     unsafe acts that we're interested in, we then summarize those
     characteristics into a troublesome scenario or scenarios; it might be
     more than one, okay?
         In this particular case, based on what we learned on the
     searches in this illustration, we selected the following time line of
     events that would be somewhat different.  Imagine, if you will, that the
     fire detection for whatever reason was delayed, either because some of
     the fire detection equipment is not working and/or because the fire
     develops very slowly -- which gets us sort of to the next bullet -- but
     progressively.
         Also, let's say the fire brigade has trouble putting out the
     fire, although perhaps it reports back to the main control room that it
     is almost under control.  Obviously, with the kinds of things that
     that's going to do, it's going to delay the decision process; allow the
     potential for more equipment to be damaged before, in fact, the
     operational staff take action; and if they're getting reports back by
     the fire brigade saying we've just about got it out, again, the feeling
     is going to be one of almost relief and say well, we're just about out
     of this thing.
         Now, beyond the initial fire conditions, some other
     later deviations that we're going to include in this "deviation
     scenario" are these:  suppose that the fire duration and progression are
     such that the fire gets so severe that it actually has cross-divisional
     equipment effects -- perhaps it lasts longer than two or three hours,
     and eventually, fire barriers get defeated or whatever -- and/or other
     good equipment, that is, the equipment they're going to try to use to
     safely shut down the plant, fails to function, like the diesel doesn't
     start.  Those are what we think are credible, realistic deviations in
     the scenario that could make the scenario much more troublesome. 
     Next slide.
         DR. APOSTOLAKIS:  So where are you using the fact that they
     may be reluctant to abandon the control room?
         MR. KOLACZKOWSKI:  Well, again, that's been recognized as
     part of one of the vulnerabilities, and the fact that we have a scenario
     now that is going to develop slowly, and also, they're going to be
     getting good reports from the fire brigade, we're basically saying
     that's going to strengthen that reluctance.  They're going to be less
     willing to leave the main control room given that's the situation,
     because they think the fire is just about out, and they're not sure what
     all the effects of the fire are, in fact, because it's progressed so
     slowly.
         DR. APOSTOLAKIS:  So that's not part of the deviation
     scenario?
         MR. KOLACZKOWSKI:  It is a reason why the deviation scenario
     is what it is.  We're saying that this kind of a scenario, as described,
     is going to strengthen or increase the reluctance factor.  The scenario
     is not the PCF.  The scenario is described in an equipment sense.
         DR. APOSTOLAKIS:  What's the PCF?
         MR. KOLACZKOWSKI:  I'm sorry; I meant PSF.  The scenario
     is going to strengthen certain performance shaping factors.  In one case
     here, one of the performance shaping factors, one of the negative ones,
     is this reluctance.
         DR. APOSTOLAKIS:  So if one asks now what is the error
     forcing context --
         MR. KOLACZKOWSKI:  Yes.
         DR. APOSTOLAKIS:  How many do you have, and which ones are
     they?
         MR. KOLACZKOWSKI:  Okay; in this case, I guess we would say
     we're describing one overall context.  What you have before you on this
     deviation scenario slide, the previous slide, is essentially the plant
     conditions part of it.  The actual performance shaping factors, I don't
     think I have a slide on that, but the performance shaping factors which
     make up the other part of the context would be things like unfamiliarity
     with such a situation; reluctance to want to deenergize the plant and/or
     if necessary leave the main control room and so on and so on.
         And so, you would then describe those performance shaping
     factors, and then, together, if you say given those performance shaping
     factors and this kind of a scenario, we think we have an overall context
     which may lead to higher probabilities of not entering the procedure in
     time or carrying it out incorrectly, et cetera, those three UAs that I
     talked about.
         DR. APOSTOLAKIS:  I mean, I thought that the error forcing
     context is central to all of this.  So I sort of expected the view graph
     that said this is it.
         MR. KOLACZKOWSKI:  Probably should have stressed the
     performance shaping factors; you're right.  We only presented this --
         DR. APOSTOLAKIS:  Is it the performance shaping factors or
     the context?  Or these are part of the context?
         MR. KOLACZKOWSKI:  Yes; if you go back to the framework,
     you'll notice that the error forcing context box has in it two things: 
     the plant conditions --
         DR. APOSTOLAKIS:  Yes.
         MR. KOLACZKOWSKI:  -- and the operator performance shaping
     factors, and what we're saying is suppose the plant conditions are as
     I've described in this deviation scenario.  That's going to trigger a
     lot of those other vulnerabilities that we talked about in the previous
     step, which really become the performance shaping factors; that is, he's
     going to have a reluctance to want to deenergize the plant, et cetera,
     et cetera.
         DR. APOSTOLAKIS:  So you have a number of error forcing
     contexts by selecting from the deviation scenario development.
         MR. KOLACZKOWSKI:  Yes, you could; yes, you could.
         DR. APOSTOLAKIS:  I think that's a critical --
         MR. KOLACZKOWSKI:  You could potentially have numerous
     contexts.
         DR. APOSTOLAKIS:  You need to emphasize it and say these are
     the contexts we're identifying.
         MR. KOLACZKOWSKI:  Okay; okay, good point.
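         To make the bookkeeping explicit, an error forcing context
     could be recorded as the pairing of deviation plant conditions and
     performance shaping factors, tied to the unsafe acts of concern; the
     data structure and the paraphrased entries below are assumptions for
     illustration, not ATHEANA notation.

     # Assumed bookkeeping for an error forcing context: deviation plant
     # conditions plus performance shaping factors, tied to the unsafe
     # acts of concern.  Entries paraphrase the discussion above.
     from dataclasses import dataclass
     from typing import List

     @dataclass
     class ErrorForcingContext:
         plant_conditions: List[str]
         performance_shaping_factors: List[str]
         unsafe_acts_of_concern: List[str]

     fire_efc = ErrorForcingContext(
         plant_conditions=[
             "delayed detection / slowly developing fire",
             "fire brigade reports the fire is almost under control",
             "fire eventually causes cross-divisional equipment effects",
             "'good' shutdown equipment fails (e.g., diesel fails to start)",
         ],
         performance_shaping_factors=[
             "unfamiliarity with the situation",
             "reluctance to de-energize the plant or leave the control room",
         ],
         unsafe_acts_of_concern=[
             "UA-1: fail to enter, or enter too late, EOP FP-Y/FP-Z",
             "UA-2 (paraphrased): carry out the EOP incorrectly",
             "UA-3 (paraphrased): fail to carry out a recovery action",
         ],
     )
     print(fire_efc.performance_shaping_factors)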
         Okay; so now we think we have a scenario that, if it
     develops in the way that we described in the deviation scenario, along
     with the performance shaping factors, provides us a more
     error-prone situation or error forcing context, as we call it.  One of
     the things that we also do before we really enter the quantification
     stage is think about, well, what if it really did get this bad?  What
     are the potential recoveries?
         I guess just quickly, for the case where he doesn't enter
     the EOP or enters it way too late, we've assumed that if things got that
     bad, right now for this illustrative analysis, we're not allowing any
     recovery in that situation, and by the way, that's very similar to what
     was done in the existing PRA.  The existing PRA said if things get that
     bad that he never made the decision to even enter the EOP, he's not
     going to get out of this thing if the fire continues.  So we're sort of
     in line with what the existing PRA was in that case.
         If the fire grows and affects both the alternate and the
     dedicated equipment, which was one of the aspects of our deviation
     scenario possibilities, well, obviously, now the question becomes
     what's he going to do, given he's got alternate equipment burning as
     well as dedicated equipment burning, and really, there is no procedural
     guidance for that.  He's supposed to enter one or the other case, not
     both.  So if the fire grows and affects both sets of equipment, or if,
     when he gets to the so-called good equipment, that is, the equipment not
     affected by the fire, it randomly fails, that could occur because --
     this is getting to your point, George -- the operator could be making
     those problems occur, not just the equipment failing on its own.
         This is sort of the operator inducing an initiator; in this
     case, this is the operator actually causing the reason why the equipment
     doesn't work.  Maybe he doesn't try to start it up in the right sequence
     or something like that, and so, it doesn't work properly.
         Now, we have allowed recovery for that in the analysis, and
     I think maybe the best thing I ought to do is go to the event tree,
     which is the next slide, that will show the interrelationship of the
     recovery with these unsafe acts.  This is obviously very simplistic, but
     what it's meant to do is cover really the key points that we're worried
     about in how the scenario could progress.  Notice we have the fire at
     the beginning.  Suppose the operator does not enter the correct EOP in a
     timely manner?  That was the one that we said we're not going to allow a
     recovery for.  That's unsafe act number one.  If that occurs, we're
     going to assume for event tree purposes that that goes to core damage,
     like the existing PRA did.
         But suppose he does enter the procedure, and suppose the
     fire does not jump the separation barriers; that is, it still remains in
     only the alternate area or only the dedicated area.  And then,
     additionally, if the good equipment that he then tries to operate works,
     well, that's the way out.  That's the okay scenario he's trying to get
     to.  But if there is a problem either with the equipment working or if
     the fire, in fact, jumps over into -- let's say it starts in the
     alternate area and jumps to the dedicated area, maybe because of an
     Appendix R weakness, or maybe there's a fire door inadvertently left
     open, something like that, so the fire could get into the AFW pump A
     room, for instance, as well.
         Then, the operator is going to have to try to deal with this
     situation that he's got fire affecting both alternate and dedicated
     equipment, or he has to deal with the fact that the good equipment has
     randomly failed and is not working, and when allowing a recovery there,
     he has to make a decision as to what sort of recovery action to take,
     and then, obviously, he has to carry out that recovery action.
         That recovery action would probably be something like, well,
     let me go try to use the A equipment again, even though it's the
     equipment that's burning, because the B diesel isn't starting, so I've
     got to go try to use the A diesel.  That's my only out at this point. 
     So in event tree space, this is sort of the relationship between the UAs
     and the equipment and the recovery and how that's sort of all panning
     out.
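         A minimal sketch of the simple event tree just described --
     the branch structure follows the verbal description, while every
     probability and name below is a placeholder assumption -- might read:

     # Sketch of the simple fire event tree described above.  Sequence
     # logic, as described verbally:
     #   - UA-1 (no timely entry into the correct EOP): assumed to go to
     #     core damage, as in the existing PRA.
     #   - EOP entered, fire stays in its zone, good equipment works: OK.
     #   - Fire jumps the barrier OR good equipment fails: a recovery is
     #     credited; core damage only if the recovery also fails.
     def fire_event_tree_cd(p_ua1, p_fire_jumps, p_good_equip_fails,
                            p_recovery_fails):
         """Return the conditional core damage probability given the fire."""
         p_enter = 1.0 - p_ua1
         p_challenge = p_fire_jumps + (1.0 - p_fire_jumps) * p_good_equip_fails
         return p_ua1 + p_enter * p_challenge * p_recovery_fails

     # Placeholder numbers purely for illustration:
     print(fire_event_tree_cd(p_ua1=1e-2, p_fire_jumps=1e-2,
                              p_good_equip_fails=5e-2, p_recovery_fails=0.5))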
         DR. APOSTOLAKIS:  Isn't this similar to an operator action tree?
         MR. KOLACZKOWSKI:  I guess certainly from the concept
     standpoint, yes; in terms of laying out the possible sequences, yes.
         Next slide.  George, I don't know if you want to get into
     the details --
         DR. APOSTOLAKIS:  No.
         MR. KOLACZKOWSKI:  -- of the quantification other than to say
     that we used the existing PRA information to try to quantify, well,
     what's the chance this set of plant conditions would actually occur this
     way.  And then, as we said, as far as actually coming up with the
     probabilities of the unsafe acts, at this point, they're still largely
     based on judgment and using other types of techniques like HEART to try
     to get some idea of what those numbers ought to be.
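         The quantification step just described -- plant-condition
     probabilities taken from the existing PRA, unsafe-act probabilities set
     by judgment informed by techniques such as HEART -- could be combined
     along the following lines; every number and name here is a placeholder
     assumption, not a result from the analysis.

     # Sketch of the quantification step: the frequency of the deviation
     # plant conditions comes from existing PRA information, while the
     # unsafe-act probability is judgment-based (informed by methods such
     # as HEART).  All values are placeholders for illustration.
     from math import prod

     def deviation_scenario_frequency(fire_frequency, condition_probs,
                                      unsafe_act_hep):
         """Frequency of the deviation scenario ending in the unsafe act."""
         return fire_frequency * prod(condition_probs) * unsafe_act_hep

     print(deviation_scenario_frequency(
         fire_frequency=1e-3,         # severe fire in the room, per year
         condition_probs=[0.1, 0.2],  # e.g., delayed detection, optimistic reports
         unsafe_act_hep=0.1))         # judgment / HEART-informed placeholder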
         DR. APOSTOLAKIS:  Why don't you go on to the difference
     between existing --
         MR. KOLACZKOWSKI:  Okay.
         DR. APOSTOLAKIS:  -- PRAs?
         MR. KOLACZKOWSKI:  So that takes me to the last slide in my
     presentation, which is really what we want to stress more than the
     quantitative numbers.  As with PRA, the real value of doing PRA is what
     you get out of doing the process.  The numbers are fine, and they sort
     of set some priorities, but we think the same is true of ATHEANA.  And
     from a qualitative aspect, what we've done here is compare the existing
     PRA human performance observations and sort of what you learned out of
     the existing HRA and what you might learn out of doing an ATHEANA type
     of HRA on these same two fires, and these are meant just to compare the
     types of fixes or lessons learned, if you will, that one might gain from
     the existing PRA versus the ATHEANA results.  Let me just generally
     characterize them:  I think the existing PRA gives you some very high
     level ideas of some things that you might fix, and they generally fit
     the category of, well, let's just train them more, or let's make this
     step bolder in the procedure so he won't skip it.
         I think in going through the ATHEANA process and really
     understanding what the vulnerabilities are and how the scenario
     differences might trigger those vulnerabilities to be more prominent, I
     think you learn more specifics as to ways to improve the plant, either
     from a procedural standpoint, a labelling standpoint, et cetera, and
     what the specific needs are, such as that first one up there on the
     extreme upper right.  Clearly, there is a need for minimum and
     definitive criteria for when to enter EOP-FP-Y or Z.
         DR. BARTON:  That may be almost impossible to come up with: 
     how many meters; out of whack by how many degrees?  Some of that is
     going to be real hard to put numbers on, numbers or definite criteria
     for getting in there.
         MR. KOLACZKOWSKI:  Granted; I'm not saying that all of them
     can be done or should be done, but these are the types of insights one
     can gain out of doing an ATHEANA type of analysis.  Unless
     you want to go through specific ones, that pretty much ends the
     presentation.  It's meant to be a practical illustration of how the
     actual searches and everything work.
         DR. POWERS:  I guess I'm going back to the question of what
     has been accomplished?  Why do we feel it's necessary to go to such a
     heroic effort on the human reliability analysis?  And if we could
     understand why we want to do that, maybe we could decide whether we've
     accomplished what we set out to do.
         MR. KOLACZKOWSKI:  My short answer to that is go back to one
     of the first slides we had this morning.  If you look at real serious
     accidents, they usually involve operators not quite understanding what
     the situation was; certain tendencies, et cetera, are built into their
     response mechanisms, and therefore, they made mistakes.  And PRAs, quite
     frankly, as good a job as they do of trying to determine where the risks
     of nuclear power plant accidents lie, et cetera, still do not deal very
     well with possible errors of commission, places where operators might
     take an action that, in fact, would be unsafe relative to the scenario. 
     So maybe we're missing some of where the real risk lies.
         DR. POWERS:  I think we see this kind of a problem,
     especially when we look at severe accidents, pertaining to accidents
     where the operators disappear.  Something happens to them, because they
     don't affect things very much.
         And you get peculiar findings out of that, like we have
     people swearing that the surge line is going to fail before the steam
     generators fail or the vessel fails, because that's where -- the
     operator has apparently taken a powder and gone someplace and doesn't
     try to put any water into it, and despite what we saw at TMI, the surge
     line fails, and so, accidents become benign that otherwise would be --
     and understanding whether the operator is going to take a powder or will
     do something seems like a very valuable thing.
          The question you have to ask is:  is this enough, or should we
     do something much more?
         [Laughter.]
         MR. KOLACZKOWSKI:  I don't know how to respond to that.
         DR. POWERS:  Well, putting it another way, I assume you can
     figure out the inverse to that statement, because that's already too
     much.
         MR. CUNNINGHAM:  Part of the reason we're coming out to talk
     to the committee and other people is just to sort out, okay, what are
     the next steps?  We've taken a set of steps.  We've made an investment
     and made a decision to go down a particular route.
         DR. POWERS:  Well, could your work and research just maybe
     show that operators might put water in and the surge line not fail first?
         [Laughter.]
         MR. KOLACZKOWSKI:  We'll do that.  We'll try to convince
     them.
         DR. POWERS:  Try to convince them that TMI actually did
     occur.
         [Laughter.]
         MR. CUNNINGHAM:  People forget things.
         DR. POWERS:  But it is possible that they put water in under
     pressure and not have the surge line fail.
         MR. CUNNINGHAM:  Yes.
         DR. APOSTOLAKIS:  Are you going to be here this afternoon?
         MR. CUNNINGHAM:  I don't know about most of us
     but --
         DR. APOSTOLAKIS:  Until about 3:00?
         MR. FORESTER:  I'd have to change my flight.
         MR. CUNNINGHAM:  Some of us will be here.
         DR. APOSTOLAKIS:  Okay; I propose that we recess at this
     time so that Tom and I can go to a meeting, and we will talk about the
     conclusion, followup activities at 12:45.
         MR. CUNNINGHAM:  12:45 is fine by us.
         DR. THOMPSON:  I only have two more slides.
         MR. CUNNINGHAM:  We just have two slides, George, if you can
     just bear with us.
         DR. APOSTOLAKIS:  Yes, but I want to go around the table.
         DR. POWERS:  Unfortunately, he has an hour and a half of
     questions.
         DR. APOSTOLAKIS:  Yes.
         [Laughter.]
         DR. APOSTOLAKIS:  Is the staff requesting a letter?
         MR. CUNNINGHAM:  We are not requesting a letter, no.
         DR. APOSTOLAKIS:  Okay.
         MR. CUNNINGHAM:  If you would like to write one, that's
     fine, but we are not requesting it.
         DR. APOSTOLAKIS:  Okay.
         DR. POWERS:  We could write one on surge line failures.
         [Laughter.]
         DR. APOSTOLAKIS:  So let's reconvene at 12:45.
         MR. CUNNINGHAM:  12:45.
         [Whereupon, at 11:45 a.m., the meeting was recessed, to
     reconvene at 12:43 p.m., this same day.]
                        A F T E R N O O N  S E S S I O N
                                                     [12:43 p.m.]
         DR. APOSTOLAKIS:  Okay; we are back in session.
         Mr. Cunningham is going to go over the conclusions,
     Catherine, so then, perhaps, we can go around the table here and get the
     members' views on two questions:  the first one, do we need to write a
     letter, given the error forcing context that the staff is not requesting
     a letter.
         [Laughter.]
         DR. APOSTOLAKIS:  And the second, what do you think, okay? 
     So the staff will have a record of what you think.  So, who is speaking? 
     Catherine?
         DR. THOMPSON:  Okay; just real quickly, I want to go over
     two slides.  The conclusion slide:  we talked about all of this in the
     last couple of hours, that we think ATHEANA provides a workable approach
     that achieves realistic assessments of risk.  We can get a lot of
     insights into plant safety and performance and have fixes, if you will.
         DR. POWERS:  A lot of it boils down to what you call
     workable.  It looks to me like it's not a workable approach if I try
     to apply it unfettered; I have some limitation on where I'm going to
     focus it, but it completely gets out of hand very quickly.
         MR. CUNNINGHAM:  That's also true of event tree and fault
     tree analysis and lots of other parts of PRA.  I think one of the issues
     that was discussed this morning is how we fetter it, if you will, or
     keep it from becoming unfettered, and I think that's a legitimate issue
     that we perhaps can talk to you about more.
         DR. POWERS:  Yes; you need something that's a nice
     progression, so that you can go from zeroth order, first order, second
     order and have everybody agree, yes, this is a second order application.
         MR. CUNNINGHAM:  Yes, yes, and that, I think, again,
     probably within the team, we have those types of things in our heads.
         DR. POWERS:  Yes.
         MR. CUNNINGHAM:  But it's not very clear to the
     outside world, yes.
         DR. APOSTOLAKIS:  The same goes for "straightforward."
         MR. CUNNINGHAM:  Of course; it's intuitively obvious,
     perhaps, that it's straightforward or some such things.
         DR. POWERS:  I got the impression that you had a variety of
     search processes that made it comprehensive; they may not have made it
     straightforward, but it is a comprehensive search process.
         MR. CUNNINGHAM:  Okay.
         DR. THOMPSON:  Some of the followup activities.
         DR. APOSTOLAKIS:  Wait a minute, now, Catherine, you were
     too quick to change that.
         DR. THOMPSON:  Good try.
         DR. APOSTOLAKIS:  This comes back to the earlier comment
     regarding objectives.  I don't think your first bullet should refer to
     risk.  Your major contribution now is not risk assessment.  You may have
     laid the foundation; that's different.  But right now, it seems to me
     the insights that one gains by trying to identify the contexts and so on
     is your major contribution, you know, and that can have a variety of
     uses at the plant and so on.
         So I wouldn't start out by saying that you have an approach
     to achieve a realistic assessment of risk.
         MR. CUNNINGHAM:  Okay.
         DR. APOSTOLAKIS:  You don't yet.
         MR. CUNNINGHAM:  Okay.
         DR. APOSTOLAKIS:  I, in fact, would make it very clear that
     there are two objectives here, if you agree, of course.  One is this
     qualitative analysis, which I view as having been knocked down a little
     bit, and then the risk part, okay?
         MR. CUNNINGHAM:  Yes.
         DR. APOSTOLAKIS:  I think you should make it very clear,
     because if I judge this on the basis of risk assessment, then I form a
     certain opinion.  If I judge it from the other perspective, the opinion
     is very different.
         MR. CUNNINGHAM:  Okay; I'll note that.
         DR. APOSTOLAKIS:  "Develops insights":  over the years, I
     have associated the word insights with failed projects.
         [Laughter.]
         DR. APOSTOLAKIS:  Whenever some project doesn't produce
     anything --
         [Laughter.]
         DR. APOSTOLAKIS:  -- you have useful insights.
         [Laughter.]
         DR. APOSTOLAKIS:  So in my view, you should not use that
     word, even though it may be true.
         MR. CUNNINGHAM:  Okay.
         DR. APOSTOLAKIS:  Supports resolution of regulatory and
     industry issues; you didn't give us any evidence of that, but I take
     your word for it.
         MR. CUNNINGHAM:  Okay.
         DR. APOSTOLAKIS:  Okay.
         MR. CUNNINGHAM:  So insights will be removed from the
     lexicon.
         [Laughter.]
         MR. CUNNINGHAM:  Along with forcing, I guess, is another one
     we have to remove.
         DR. APOSTOLAKIS:  Yes; the thing about unsafe acts and human
     failure events, I really don't understand the difference.
         MR. CUNNINGHAM:  Yes; that's one of the things I was
     thinking about this morning in listening to this:  again, within the
     team, I think it's well understood what those different terms mean. 
     But to the --
         DR. APOSTOLAKIS:  Yes.
         MR. CUNNINGHAM:  -- the general public, it's not going to be
     real clear.
         DR. APOSTOLAKIS:  But if it's an unsafe act, it should be a
     failure event?  That's why it's unsafe?
         MR. CUNNINGHAM:  I don't know.
         DR. APOSTOLAKIS:  From the words, from the words; it doesn't
     follow.  And you are saying in the text that they are expected to act
     rationally.  So why are you calling what they did -- anyway.
         MR. CUNNINGHAM:  Anyway, yes, we will try to do a better job
     of mapping those things out.
         DR. THOMPSON:  Okay.
         MR. CUNNINGHAM:  Followup issues?
         DR. THOMPSON:  These are some activities that we'd like to
     get in a little bit more.  Some of them are already planned.
         DR. POWERS:  You don't have my surge line up there.
         DR. THOMPSON:  Surge line?
         [Laughter.]
         MR. CUNNINGHAM:  There was a typo.  We meant to say surge
     line.
         [Laughter.]
         DR. POWERS:  What you did is you didn't get to the steam
     generator tube rupture problems.
         DR. THOMPSON:  Okay; we obviously are pretty much done with
     the fire issue.  We're now working on the PTS issue with Mr. Woods and
     some other members of the branch and helping him look at the human
     aspects of that.  We'd like to get into some of the digital I&C area,
     see what that could add to the human error when they start working
     along with digital I&C.
         DR. UHRIG:  Are you looking at that strictly from the
     operations standpoint, or are you going to get back into the code
     development aspect?
         DR. SEALE:  The software side.
         DR. THOMPSON:  Software; we haven't -- these are things that
     possibly we could get into.  This isn't really planned yet, the digital
     I&C part.  So I don't know how far we would get into that.
         DR. APOSTOLAKIS:  So when you say digital, what exactly do
     you mean?  I guess it's the same question.  The development of the
     software or the man-machine interaction?
         DR. THOMPSON:  I think the man-machine.
         MR. CUNNINGHAM:  We were thinking not so much the
     development as it's being used in the facilities.
         DR. THOMPSON:  Right.
         DR. UHRIG:  The difference between an analog and a digital
     system is relatively minor when it comes to the interface.  It's the
     guts that's different.  Pushing the wrong button, it doesn't make any
     difference whether it's digital or analog.
         MR. CUNNINGHAM:  Yes; again, this has been suggested as a
     topic that what we're doing here might dovetail well with other things
     that are going on in the office.  It hasn't gone much further than that
     at this point.
         DR. POWERS:  At what point do we get some sort of comparison
     of the leading alternatives to ATHEANA for analyzing human faults so that
     you get some sort of quantitative comparison of why ATHEANA is so much
     better than the leading competitors?
         MR. CUNNINGHAM:  A quantitative comparison or --
         DR. POWERS:  Well, a transparent comparison.  You tried some
     things where you said here's what you get from ATHEANA, and here's what
     you get from something else.  Are they any different?  It's hard for me
     to go away from this saying ATHEANA is just infinitely better
     than the existing PRA results.  Quite the contrary; I'm feeling that the
     things in the existing PRA must be pretty good.
         DR. BARTON:  A lot of them are very similar.
         DR. POWERS:  Yes, pretty similar.
         MR. CUNNINGHAM:  Okay; they are similar but --
         DR. BARTON:  The whole process may end up fixed it sooner to
     the fix out of play, the methods I'm using now.
         MR. CUNNINGHAM:  What happens in the context of like the
     fire example is you're identifying new scenarios as you go through the
     trees that seem to have some credible probability.  How, you know, what
     the value or what the probabilities are that will be associated with
     them is still something we're still exploring.  We expect that we will
     find scenarios that will have a substantial probability and will, you
     know, lead to unsafe acts or core damage accidents or whatever.  Again,
     if you go back and look at the history of big accidents in industrial
     facilities, you see these types of things occurring, so we're trying
     to match the event analysis with the real world, if you will.  In a
     sense, that's one of the key tests, I think, of how well this performs
     is that do we seem to be capturing what shows up as important in serious
     accidents?
         There are a couple of things that aren't on this slide that
     we've talked about this morning.  We discussed for a good while the
     issue of quantification, that that may be -- is that on there?  I can't
     read the thing; okay, improved quantification.
         DR. APOSTOLAKIS:  What is that?
         MR. CUNNINGHAM:  It's one of those bullets.
         DR. APOSTOLAKIS:  Full-scale HRA/PRA?
         MR. CUNNINGHAM:  No, the fourth one down, improved
     quantification tools.
         DR. APOSTOLAKIS:  I would say it's degrading quantification.
         MR. CUNNINGHAM:  I'm sorry?  Okay; quantification tools
     comes up as an issue.
         DR. APOSTOLAKIS:  Why does the NRC care about whether
     ATHEANA applies to other industries?
         MR. CUNNINGHAM:  Because it gives us some confidence that
     it's capturing the right types of human performance.  As we've talked
     about many times or several times this morning, big accidents and
     complex technologies, we think, have a similar basis in human
     performance or are exacerbated or caused by similar types of events. 
     Given that we don't have many big accidents in nuclear power plants, I
     think it's important that we go out and --
         DR. APOSTOLAKIS:  Did we ever apply this to other industries
     to gain the same kind of lessons?  Let them use it.
         MR. CUNNINGHAM:  Again, it's not so much the --
         DR. APOSTOLAKIS:  In my years at the Nuclear Regulatory
     Commission, I don't know how much effort you plan to --
         MR. CUNNINGHAM:  Well, part of it, it's not a big effort,
     but it's also something where I think it's important to help establish
     the credibility of the modeling we have.
         DR. APOSTOLAKIS:  Like among pilots or airliners?
         MR. CUNNINGHAM:  Yes, the aircraft industry, over the years,
     we've had some conversations with NTSB and with NASA and places like
     that.  Again, it's complex industries where you have accidents and --
         DR. APOSTOLAKIS:  I think developing quantification tools
     and the team aspects in NRR will keep you busy for another 7 years, so I
     don't know about the other industries.  Again, that's my personal
     opinion.
         MR. CUNNINGHAM:  Well, you can take that in several ways. 
     One of them is do you consider those the highest priority issues on the
     --
         DR. APOSTOLAKIS:  I find them the most difficult, the most
     difficult, applying it to other industries.
         MR. CUNNINGHAM:  I don't think we'd disagree with you.
         DR. APOSTOLAKIS:  I mean, it makes sense to -- adds
     credibility to say, yes, we did it in this context and it's --
         MR. CUNNINGHAM:  Yes.
         DR. APOSTOLAKIS:  But I wouldn't put too much effort into
     it.
         DR. SEALE:  But the preferable thing would be to have
     someone else use ATHEANA, and then --
         DR. APOSTOLAKIS:  Yes.
         DR. SEALE:  -- you could get them to act as an independent
     reviewer of your work and vice versa.
         MR. CUNNINGHAM:  Sure.
         DR. SEALE:  That strikes me as a much more --
         MR. CUNNINGHAM:  In that context, maybe apply is the wrong
     word but interact with other industries --
         DR. SEALE:  Yes.
         MR. CUNNINGHAM:  -- complex industries on the -- for the
     credibility and the application of ATHEANA.
         DR. APOSTOLAKIS:  Well, you also have, it seems to me, a
     nuclear HRA community.  Why are the teams developing whatever processes
     or whatever?  Is it because they're not aware of ATHEANA yet?
         MR. CUNNINGHAM:  You're taking some of the next
     presentation, which is on the international work that we're doing.
         DR. APOSTOLAKIS:  I'm not sure that we're going to have that
     presentation.
         MR. CUNNINGHAM:  Okay.
         DR. APOSTOLAKIS:  I think we should conclude by discussing
     what we've heard, unless you really feel that -- I mean, I look at it. 
     It's not just really useful.
         MR. CUNNINGHAM:  No, no, I'm sorry; there's a separate
     presentation.
         DR. APOSTOLAKIS:  There is?
         MR. CUNNINGHAM:  Yes; remember this morning that we
     discussed -- one of the first things on the agenda was the work we're
     doing internationally.  We put that off until after that.
         DR. APOSTOLAKIS:  How many view graphs do you have on that?
         MR. CUNNINGHAM:  It's about eight or something like that. 
     We can cover it in 5 or 10 minutes.
         DR. APOSTOLAKIS:  I think we should do that right now.
         MR. CUNNINGHAM:  Okay; it's up to you.
         DR. POWERS:  I would hope you would be able to tell me the
     role that the Halden program plays or could play in the ATHEANA
     methodology.
         MR. CUNNINGHAM:  Do you want to go ahead and go to the
     international?
         DR. POWERS:  Whenever it's appropriate.
         DR. APOSTOLAKIS:  It's up to you, Mark.  I think we're done
     with this.
         MR. CUNNINGHAM:  We're done with this; then, let's go ahead,
     and we'll cover the international thing.
         DR. APOSTOLAKIS:  I want to reserve at least 5 minutes for
     comments from the members.
         MR. CUNNINGHAM:  Okay.
         DR. APOSTOLAKIS:  Before we go on to the Sorenson
     presentation.
         MR. CUNNINGHAM:  Okay.
         Basically, as we've been doing this ATHEANA work and our
     other HRA work, we've had two principal mechanisms for interacting
     internationally with other developers and appliers of HRA methods.  One
     is through the CSNI principal working group five on PRA; in particular,
     there was something called the task group 97-2, which is working on the
     issue of errors of commission.
         DR. APOSTOLAKIS:  Who is our member?
         MR. CUNNINGHAM:  I'm sorry?
         DR. APOSTOLAKIS:  Who represents the NRC there, PWG-5?
         MR. CUNNINGHAM:  We have two or three different
     interactions.  Joe Murphy is the chairman of PWG-5; I'm the U.S.
     representative on 5; the chair of the 97-2 task group was Ann
     Ramey-Smith.  We also have our COOPRA programs.  One of the working
     groups there was established to look at the impact of organizational
     influences on risk.
         DR. APOSTOLAKIS:  Is that what the Spaniards are doing?
         MR. CUNNINGHAM:  Yes, that's where the Spanish come in. 
     It's the international cooperative PRA research program.  It doesn't fit
     the --
         DR. APOSTOLAKIS:  That is one of former Chairman
     Jackson's initiative papers.
         MR. CUNNINGHAM:  Correct; she wanted to -- she wanted the
     regulators to work more closely together, and there were a couple of
     research groups established as part of that.
         Anyway, okay, the PWG-5 task 97-2 had three general goals. 
     You want to look at insights -- although perhaps that's no longer the
     right word to use -- and develop perspectives on errors of commission;
     to apply some of the available methods which supposedly handle errors of
     commission, for quantitative and more qualitative analysis of errors of
     commission; and to look at what data would be needed to support these
     types of analysis.
         DR. POWERS:  Have any of the technical fields -- can I, with
     a modest amount of effort -- is there someplace that I would go to find
     data that are pertinent to human reliability analysis?
         MR. CUNNINGHAM:  Do you want to answer that?  I'm going to
     have one of my colleagues come up and answer that a little more
     explicitly.  One of the people over here was shaking her head; I don't
     know.
         MS. RAMEY-SMITH:  No; that's the short answer.
         [Laughter.]
         DR. APOSTOLAKIS:  Would you identify yourself please?
         DR. POWERS:  Before she identifies herself as a major expert
     in the field, I'll note that last year, around our first exposure to
     ATHEANA, a book on human reliability analysis came out, brand spanking
     new, put out by a book publisher, and so I immediately acquired a copy
     of this book; read it for an entire airplane flight from Albuquerque to
     Washington, D.C. and found not one data point in the entire book.  There
     were 30-some papers on various human reliability analyses but not one
     data point.
         DR. SEALE:  We still need to know who she is.  For the
     record, please?
         MS. RAMEY-SMITH:  Ann Ramey-Smith, NRC.
         As I recall, the question was whether there is a database that
     you can turn to, and the short answer from our perspective of the kind
     of analysis that -- and from the perspective that we think you should do
     an analysis, which is within the context of what's going on in the plant
     and performance shaping factors and so on, there is not a database that
     exists that we can turn to and go -- and make inferences based on
     statistical data.
         The fact is that we've developed our own small database that
     has operational data in it that we have analyzed.  There are various and
     sundry databases of various sorts.  The question comes down, and one of
     the questions that this PWG-5 is going to address is the fact that we
     have a lot of databases, none of which may serve the needs of the
     specific methods that people are trying to apply.
         DR. UHRIG:  Would there not be a lot of information
     available through the LERs?
         MS. RAMEY-SMITH:  Oh, if that were true.  Actually, there is
     quite a lot of information available in the LERs.  Unfortunately, it's
     difficult oftentimes in those writeups to understand fully what the
     context was, to understand why the operators did what they did, what
     the consequences were and what the timing was and so on and so forth. 
     One concern that some of the HRA folks have is that possible changes to
     the LER rule will strip from the reports even the little information
     they had before, so we're concerned about that.
         The better source for information, actually, has been the
     AIT reports and some very excellent reports that were previously done by
     AEOD when they did studies of particular events that maybe didn't rise
     to the level of AITs but were very in-depth analyses, and we were able
     to make use of those, particularly early on when we were doing this
     iterative evaluation of operating experience.  It was quite helpful.
         DR. POWERS:  One of the issues that NRR is having to
     struggle with is the criteria for what actions should be automated as
     opposed to being manual.  How long does it take somebody to diagnose a
     situation and respond to it?  And there are several criteria that they
     have; they have some good guidelines; they just don't have any data.
         MS. RAMEY-SMITH:  I think this approach would be very
     helpful for understanding -- what is it? -- B-17, the safety-related
     operator actions.  I think that the agency would be wise to evaluate
     that issue within the context of PRA.
         DR. APOSTOLAKIS:  This looks to me like a benchmark
     exercise.  Is that what it is?
         MR. CUNNINGHAM:  No; the sense that I have is that someday,
     we might be able to get to a benchmark exercise, but the principal
     players weren't comfortable at this point in constraining the analysis
     to that degree.
         DR. APOSTOLAKIS:  So, oh, yes, because you're saying they
     apply to events of the --
         MR. CUNNINGHAM:  That's right; we have a variety of
     different methods, and what we were doing was trying to see what these
     methods were giving us, so we didn't try to constrain it to a particular
     method or a particular event.
         DR. APOSTOLAKIS:  Okay; thank you.
         MR. CUNNINGHAM:  As you can see on page 4, we have a number
     of different methods applied.  ATHEANA was applied by the U.S. group,
     the Japanese and people in the Netherlands; also, different methods were
     applied, such as MERMOS and SHARP.  We have the Czech Republic spelled
     correctly today, so that was an advancement over yesterday.
         [Laughter.]
         MR. CUNNINGHAM:  And some other models that, as you can see,
     we go back to the Borsele theory.
         DR. APOSTOLAKIS:  Is SHARP really a model?
         Okay; let's go on.
         MR. CUNNINGHAM:  Okay; slides five and six are a number of
     the conclusions that are coming out of the task 97-2.  I'm not sure I
     want to go into any of the details today, but you can see the types of
     the issues that they're dealing with and what the report will look like. 
     The report of this group has, by and large, been finished.  It's going
     to go before the full CSNI next month, I believe, for approval for
     publication.  So this part is essentially done.
         DR. APOSTOLAKIS:  The words are a little bit important here. 
     The rational identification of errors of commission is difficult.  What
     do you mean by rational?
         MS. RAMEY-SMITH:  That was the word that was chosen in the
     international community that everyone was comfortable with.  But the way
     you can think of it is as opposed to experientially, you know, so
     that it's more predictive -- to sit down and be able to identify errors
     of commission a priori.
         DR. APOSTOLAKIS:  Do you mean perhaps systematic?
         MS. RAMEY-SMITH:  Yes, that could have -- I guess the point
     is to be able to I guess systematically analyze it, you know, a priori
     be able to identify an error of commission.  Systematic is a perfectly
     good word.  This was just the word -- we used on this slide the words
     that the international group that was working on this was
     comfortable with.
         DR. APOSTOLAKIS:  And what is cognitive dissonance?
         MS. RAMEY-SMITH:  Okay; perhaps Dr. Thompson would like to
     --
         DR. APOSTOLAKIS:  That was an international term?
         MS. RAMEY-SMITH:  No, cognitive dissonance is from the good
     old field of psychology.
         DR. APOSTOLAKIS:  Oh, okay.
         DR. BARTON:  It's Greek.
         DR. APOSTOLAKIS:  What?
         DR. BARTON:  It's Greek.
         [Laughter.]
         DR. SEALE:  Could I ask if this group of international
     experts had all of these different approaches, presumably, they would
     have a great deal of common interest in making certain that things like
     LERs are helpful about what's there.  Has anyone put together a sort of a
     standard format for what it would take to get an LER that had the
     information you needed in it to be able to generate a database?
         MR. CUNNINGHAM:  Actually, one of the follow-on tasks of
     this work is for the HRA people here to go back and try to lay out what
     data do they need based on their experience with this type of thing.  So
     today, I don't think we have it, but I think over the next year or so,
     CSNI PWG-5 is going to be undertaking an effort to put that in
     place.
         DR. SEALE:  It seems to me that should be something you
     could go ahead on, and whatever happens, at least now, you'll be getting
     information that's complete --
         MR. CUNNINGHAM:  Yes.
         DR. SEALE:  -- in some sense.
         MR. CUNNINGHAM:  Yes.
         DR. APOSTOLAKIS:  That would be a very useful result.
         MR. CUNNINGHAM:  And that's one of the things that PWG-5 is
     going to undertake.
         MR. SIEBER:  Does that mean that every LER a plant puts out
     there goes through the ATHEANA program?
         DR. APOSTOLAKIS:  No, no, no, no.  The ATHEANA people develop
     guidance about the LERs.  The guys who write the LERs don't need to know
     about ATHEANA.
         MR. CUNNINGHAM:  Okay.
         DR. SEALE:  Just what it takes to have all of that planning
     data and things like that in it so that you've got a picture.
         MR. CUNNINGHAM:  Just two clarifications.  One was this
     isn't the ATHEANA guys; it's the -- this international group of HRA
     people, so it's the MERMOS guys and all those guys are going to be doing
     it.  It's not an ATHEANA specific issue.
         The second, I was talking about data needs in general.  I
     wasn't trying to suggest that all of the data needs that we had would
     automatically translate into something at the LER level, a change in the LER
     reporting requirements.  I wasn't suggesting that.
         DR. APOSTOLAKIS:  There has been a continuing set of
     discussions on human reliability, and as I remember, former member Jay
     Carroll was raising that issue every chance he had.  How can you
     restructure the LERs so that the information is useful to analysts? 
     Because the LERs were not designed -- they were designed for the PRA
     phase, right?  You don't need another review for that.
         MR. CUNNINGHAM:  The LERs have a particular role, and as
     that role is defined even today, it's not going to provide a lot of the
     detailed information.  Now in parallel, though, with the development of
     all of the LER generation, you have the INPO and NRC and industry work on
     EPIX, which will be collecting information that is much more relevant to
     PRA types of analyses.  So I wouldn't so much focus on LERs as EPIX.
         DR. APOSTOLAKIS:  It would be nice to influence what those
     guys are doing.
         MR. CUNNINGHAM:  Yes.
         DR. APOSTOLAKIS:  Okay; next.
         MR. CUNNINGHAM:  Okay; going on to slide seven on the COOPRA
     working group on risk impact of organizational influences, basically,
     we're trying to -- the goal of the working group is to identify the
     relationships between measurable organizational variables and PRA
     parameters so that you can bring the influence in and explicitly model
     the influence in PRAs.
         DR. APOSTOLAKIS:  Next.
         MR. CUNNINGHAM:  Overall, I don't think I need to go into
     the outcomes as much as -- I think it's understood as to what that is. 
     Right now, it's fairly early in the process.  We're trying to get a
     better understanding of what people are doing in this area.  You alluded
     to the Spanish work in this area.  The Spanish are one of the key
     contributors in here.  How many countries are involved in this?
         MS. RAMEY-SMITH:  It's about six or seven.
         MR. CUNNINGHAM:  Okay; about six or seven countries; the UK,
     France, Spain, Germany, did you say?
         MS. RAMEY-SMITH:  Yes, Germany.
         MR. CUNNINGHAM:  Argentina, Japan?
         MS. RAMEY-SMITH:  Japan.
         MR. CUNNINGHAM:  Japan.  They're trying to work together on
     this issue.  Basically, again, this is fairly early in the work here. 
     There's going to be another meeting early next year to basically take
     the next step forward in the COOPRA work.  That's --
         DR. APOSTOLAKIS:  That's it?
         MR. CUNNINGHAM:  That's the short summary of the
     international work.
         DR. APOSTOLAKIS:  Okay.
         DR. POWERS:  And so, the Halden program has no impact on
     your --
         MR. CUNNINGHAM:  I'm sorry?
         DR. POWERS:  The Halden program has no impact on your --
         MR. CUNNINGHAM:  The Halden program has traditionally -- Jay
     Persensky sitting back here knows far more about it than I -- but has
     traditionally been oriented towards not so much human reliability
     analysis for PRA but for other human factors issues.  There have been
     some ideas that Halden will become more involved in human reliability
     analysis.  That's at least, I guess, in the formative stages.
         MR. PERSENSKY:  Jay Persensky, Office of Research.
         Halden has proposed for their next 3-year program, which
     starts in November, the development of an HRA-related activity based
     primarily on input from PWG-5, because a number of the people that have
     been involved with the Halden human error analysis project also serve on
     that or have served on that task force.  The goal, as I understand it at
     this point, is aimed more towards trying to take the recommendations
     with regard to kinds of data and seeing whether or not they can play a
     role in that.  At this point, it is in the formative stage, but it's
     looking more at that aspect of data since they do collect a lot of data,
     at least simulator data in-house.
         Now, whether it can be used or not is another question.  And
     that's what they're looking at at this point.
         DR. POWERS:  Is cross-cultural data any good?  In other
     words, if I collect data on the Swedish or Norwegian operators on a
     Finnish plant, is that going to be any good for human error analysis or
     modeling for American operators on American plants?
         MS. RAMEY-SMITH:  If it has the same context.
         MR. CUNNINGHAM:  When you say data, it depends.  If you're
     talking about probabilities, I don't know that any of the particular
     probabilities will apply, because again, there's a strong context
     influence.  Can it provide some more qualitative insights?  I suspect it
     could but again --
         DR. POWERS:  Cognitive things?  What does it tell you about
     processing information, things like that?  Are there big enough cultural
     differences that it's not applicable?  I would assume that Japanese data
     would just be useless for us.
         MR. CUNNINGHAM:  I wasn't thinking of the Japanese, but
     there may be some cultures where it would be of real questionable use,
     depending on the basic management and organization and how they do
     things and whatever; it could end up not being very applicable.
         DR. APOSTOLAKIS:  Okay; all right, why don't we go quickly
     around the table for the two questions:  Should we write a letter, and
     what is your overall opinion?
         Mr. Barton?
         DR. BARTON:  Yes; I think we need to write a letter.  But
     let me tell you what my opinion is first --
         DR. APOSTOLAKIS:  Okay.
         DR. BARTON:  -- and maybe we can figure out if my opinion is
     similar to others; maybe not.  I fail to see the usefulness of this tool
     for the work that's involved.  Maybe I need to see some more examples. 
     I mean, the fire example doesn't prove to me that ATHEANA is much better
     than existing processes I know for looking at EOPs and how I train
     people and how people use procedures or react to plant transients.
         I think that as I look at this process, I also see where a
     lot of these actions depend on safety culture, conservative
     decision making, et cetera, et cetera, and those tie into this; to
     understand it, you need to understand safety culture and conservative
     decision making also.
         I think the tool -- I don't want to pooh-pooh the tool, but I
     think it's a lot of work, and I don't see that you get a lot of benefit
     out of going through this process to really make it something that
     people are going to have to use at their sites unless this is a
     voluntary thing.  I don't know what the intent of ATHEANA is, but I
     don't see that benefit with the amount of effort I have to put into it.
         DR. APOSTOLAKIS:  And you would recommend the committee to
     write a letter stating this?
         DR. BARTON:  Well I think that if everybody else feels the
     same way, I think we need to tell somebody, you know, maybe that they
     ought to stop the process or change course or whatever.
         DR. POWERS:  I guess I share your concern that what we've
     seen may not reveal the definitive capability of this, because there seem
     to be a lot of people here who are very enthusiastic about it.  Based on
     what was presented on the fire, I come away with -- it just didn't help
     me very much.
         DR. BARTON:  It didn't help me either, frankly.
         DR. POWERS:  But putting a good face forward, or seeing how
     it's applied, I think is something we ought to do more of, and more of a
     comparison of why it is so much better than the others, and I agree with
     you, the fire analysis just didn't help me very much at all.
         DR. APOSTOLAKIS:  Mr. Sieber?
         MR. SIEBER:  I will probably reveal how little I know about
     this whole process, but I did read the report, and I came away first of
     all with a nuclear power plant perspective -- it's pretty complex; for
     example, HRA, PSF, UA, HFE and HEM -- all of those were
     used in this discussion.  For a power plant person, I have difficulty
     with all of those acronyms.  I had some difficulty in figuring out
     ordinary things like culture and background and training, and we
     struggled with that.  So it could be -- the writeup could be a little
     simpler than it is.  The only way I could read it was to write the
     definitions of all of these things down, and every time one would come
     up, I would look at what I wrote down.
         The second thing was the actual application.  In a formal
     sense, I think it's pretty good.  And it would be useful to analyze some
     events to try to predict the outcomes of some events from a quantitative
     standpoint.  That was left unreasoned.  It was sort of like you arrive
     at a lot of things without -- and to me, that's not quantification. 
     That's just a numerical opinion, and I'm not sure that that's -- the
     other thing that I was struck by was when I figured what the cost to
     apply it would be; with NUREG 2600, that was 10 to 15 people to do a
     level three PRA over a period of several months.
         If I add ATHEANA onto that, I basically add 5 people.  I add
     5 people over a period of a year or so.  That's a lot of people. 
     Several of the people are key people, like the SRA.  The training
     manager; the simulator operator; I mean, our simulators are running
     almost 24 hours a day at this point.  So I think that to have the
     ability to make that investment, they would have to decide who am I going to lay
     off?
         So there would have to be a clear description of why
     somebody other than the NRC would be motivated to do this, and I
     can't find it in the fire scenario.  There would be an awful lot of
     places where it would be very, very difficult to describe, you know,
     where all of this decision making or lack of decision making is.  It is
     understandable and logical; it's complex to read.  It's the state of the
     art.  It would be expensive to apply.  If you could show how this
     benefits safety --
         DR. BARTON:  And improve safety?
         DR. THOMPSON:  And improve safety.
         DR. APOSTOLAKIS:  That's it?
         MR. SIEBER:  That's it.
         DR. APOSTOLAKIS:  Bob?
         DR. SEALE:  Well, I have to apologize first for not being
     here for the presentation on fire.  Mario and I were doing some other
     things on license renewal.  I was impressed with the fact that the
     information that was presented on ATHEANA seemed to be a lot more
     detailed and a lot more thoughtful than what we had heard in the past. 
     It's very clear that the staff has been busy trying to firm up a lot of
     the areas that we had raised questions about in the past.  At the same
     time, I think of the 7 years.  I seem to recall that it had something to
     do with the cycle on some things in the Bible.
         [Laughter.]
         DR. SEALE:  But it seems to me for all of the reasons that
     you've heard from these people here, and which I'm sure that you'd hear
     from other people, including plant people out there and plant inspectors --
     that is, NRC people at the sites and so forth -- that you very badly need
     some application to show where this process worked, and I don't know
     enough about it to make a dogmatic judgment on my own as to whether or
     not those applications are there, but I would advise you to look very
     carefully to see if you can find someplace where you'd have a gotcha or
     two, because you clearly need a gotcha.
         The other thing, though, is that in terms of the things that
     are in this international program, I do believe that whatever format the
     human performance problem takes in the future, you can make some
     recommendations as to what it takes to put our experience as we live it
     today in a form which would be more readily retrievable when we do have
     a human factors process that's a little more workable, and so, you know,
     I just think you need to look at examples and an application.  That's
     where you're going to find your advocates if you're going to find any.
         DR. BARTON:  George, they did a fire scenario, and, you
     know, if you applied this thing to the Indian Point II or the Wolf Creek
     draindown, what would you learn from that?  Because I just left
     the plant yesterday, and one of the agenda items we had was human
     performance at the plant, and it's not improving.  And I look at how
     could ATHEANA really help?  And when you look at the day-to-day human
     performance events, this wouldn't do a thing for those kinds of, you
     know, day-to-day errors.
         You know, you're doing control rod manipulation.  This is
     typical kind of stuff.  You're doing control rod manipulation.  You have
     the guy at the controls.  He's briefed; he's trained; he's licensed. 
     You have a peer checker.  You go through STAR; you go through all
     of the principles.  You get feedback in your three-way communications;
     the whole nine yards.  You're going to move this rod two notches out,
     and you do everything, and the guy goes two notches in.
         Now, tell me how ATHEANA -- and this is the typical stuff
     that happens in a plant on a day-to-day basis.  Now, tell me how I go
     through the ATHEANA process, and it's going to help me do something
     different other than whack this guy's head off, you know.  And, see, Jay
     agrees with me.
         MR. PERSENSKY:  They didn't get to the part of cutting his
     head off.
         [Laughter.]
         DR. POWERS:  Well, it strikes me that they could find an
     approach to tackle exactly that question.  It strikes me
     that I came in here saying ah, they have a new way to do PRA, to put human
     reliability analysis in total in this, and I'd see a nice package.  I
     think they're not there.  I think they need to work on the way they tackle
     really tough reliability issues.
         For instance, you pretty much set up one where you could
     apply all of these techniques that we talked about here to that
     particular issue, and I bet you they would come up with a response.  In
     fact, that's the lesson I get.  There is enough horsepower on it that
     you will get something useful on it.  And what they don't have is
     something that allows me to go and do the entire human reliability
     portion of a safety analysis, you know, and just turn the crank.  This
     is more for working on the really tough issues.  It's perfect for my
     surge line issue.  I mean, they could really straighten Tray Tinkler
     out.
         [Laughter.]
         DR. POWERS:  Which would be a start.
         MR. CUNNINGHAM:  We don't want to promise too much.
         MR. SIEBER:  One of the things that's stated early on in the
     NUREG concept is that you don't blame people, and I'm sure you don't
     want to do that.  On the other hand, when I read that, I thought secretly to
     myself some people just mess up.  You pull records on operators, and you
     find some will make one mistake and some another, and when you move in
     instead of moving out, you know, there may be a lack of attention to
     detail or a lack of safety culture or a lack of attitude or what have
     you that is preventing that person from doing the right thing, and I
     think that you've missed --
         DR. POWERS:  The documentation used to be a lot worse.  I
     mean, in earlier documentation it was really anathema to dare say that
     somebody screwed up.
         [Laughter.]
         DR. APOSTOLAKIS:  Dr. Uhrig?
         DR. POWERS:  I'll take another shot at it.
         DR. APOSTOLAKIS:  Okay; Dr. Uhrig?
         DR. UHRIG:  A couple of things.  One, anytime I've ever been
     involved with a plant with a serious problem, there has always been some
     unexpected turn of events that actually changed the nature of the
     problem, and I don't know how you would approach that.  That's an
     observation.
         The second one is it strikes me that if you need data, a
     modification of the LER procedures is a pretty straightforward process. 
     It's not simple.  I don't think you'd have to go to rulemaking to get the
     information that you need.  I don't think so.
         MR. CUNNINGHAM:  It would require rulemaking, absolutely,
     and a major fight.
         DR. UHRIG:  Yes.
         MR. CUNNINGHAM:  And a major fight before that rulemaking
     ever got very far.
         DR. POWERS:  I don't think that's the problem.  I really
     don't.
         DR. APOSTOLAKIS:  But if you convince people you have the
     right approach --
         DR. POWERS:  I don't think it's a question of approach.  You
     know, when I first came in, I thought you need a bunch of data to prepare this,
     and I'm not sure.  I think you need a bunch of problems to solve --
         MR. CUNNINGHAM:  Yes.
         DR. POWERS:  -- more than they need data to verify.  I think
     if I were these guys, I'd be out looking for every one of these
     problems, and there's just one on the criteria for when they have to
     automate versus manual action that's been sitting over there like a lump, and
     I think you guys could attack that problem and get something very useful
     out of it.
         DR. APOSTOLAKIS:  Anything else?
         DR. UHRIG:  That issue is another one that somehow needs to
     get addressed.  We have literally done what we can do with training.  I
     think we're asymptotically approaching the limit of how well you can
     train people.  Maybe automation is the next step.  And I don't know
     quite how this would be done.
         DR. POWERS:  They have a very interesting kind of plan on the
     time you would allow for people to accomplish something -- if you can't
     do it in that period of time, you have to automate.  How long do you
     allow for somebody to recognize they've got to do something and then to
     do it?  You need those kinds of numbers, and we've got some,
     you know.  But there's no reason to think that they're real well-founded. 
     The database that they're based on is proprietary.  We can't even get
     it.  And this looks like a methodology that I think attacks that problem
     very well.
         DR. APOSTOLAKIS:  Dr. Bonaca?
         DR. BONACA:  Well, you know, thinking about what's being
     done here, one of the problems I always see is about operators, people
     are always writing about what the operators will do at the most distant
     -- and it's very hard to bring most of this together.  But, again, you
     know, I want to reemphasize the fact that the one place where it happened
     in that unique fashion was in the symptom-oriented procedures.  That
     experience in the industry was a massive effort.  Only there did you put
     in thousands of man hours, with operators thinking together
     with engineers, with people who develop event trees, very specific trees
     with multiple options and so on and so forth; I think there has to be
     some opportunity to benefit by grounding some of the work in ATHEANA on a
     comparison to what was done there, maybe just the EPGs, for example,
     taking some example, getting some of the people involved in those.
         I think the benefit will be that people will have some model
     verification.  You have some way to test some of the hypotheses of
     ATHEANA.  Everything is speculative.  It's probably correct, but we need
     to have some benchmark.
         And second, that may offer you some simplification process
     and some issues that already have been dealt with in those efforts; take
     a look at procedures that may -- may help you in simplifying the
     process.  But I can't go any further in speaking about it.  But again,
     the point I'm making is that that's the only place that I know operators
     and analysts and developers of processes came together for a long time. 
     But I think that there will be a great benefit, actually, in trying to
     anchor ATHEANA on some benchmark, some comparison or some statement.
         DR. APOSTOLAKIS:  Well, I find it a bit disturbing that two
     of the members with hands-on plant experience are so negative.  I would
     like to ask the subcommittee whether we should propose to write a
     letter, whose form and content will have to be discussed.
         DR. POWERS:  I don't think we have to write a letter that's
     critical.  We need to have something that tells you to judge that data,
     and I don't think we need to write a letter on the external safety
     mechanisms.  Cultural data, for example, on an organization.
         DR. APOSTOLAKIS:  The letter may say that.
         DR. POWERS:  If it says that, then fine.
         DR. APOSTOLAKIS:  It may express reservations about the present
     state and urge further application, with the explicit wish that the
     thing become more valuable.
         DR. BARTON:  I would agree with that.
         DR. APOSTOLAKIS:  The letter doesn't have to say stop it. 
     In fact, I wouldn't propose such a letter.
         DR. POWERS:  Maybe we should say that these people should
     spend a year tackling three or four problems, visible, useful problems
     that -- and show the value of this technique, because otherwise I think it's
     not a technique that's going to get used.  It would be wrong to hurt this,
     when I think they're just getting to the point where they can actually
     do something.
         DR. APOSTOLAKIS:  The letter, the contents of the letter are
     to be discussed; I think I got a pretty good idea of how you gentlemen
     feel, and certainly, I didn't hear anybody say stop this, although Mr.
     Barton came awfully close.
         Yes, sir?
         MR. SIEBER:  I wouldn't want to be interpreted as negative,
     but I think things --
         DR. APOSTOLAKIS:  But you have been.
         MR. SIEBER:  No, I think things are needed.
         DR. APOSTOLAKIS:  Yes.
         MR. SIEBER:  I think simplification is needed; a good
     objective is needed -- what we need to accomplish.
         DR. APOSTOLAKIS:  Does everyone around the table agree that
     a letter along those lines, which, of course, will be discussed in
     December, will be useful?
         DR. POWERS:  I have reservations about the simplification,
     because I know in my area -- we do have computer codes that are highly
     detailed, very complex things that we use for attacking the heart of
     very complex, tough problems, and much more simplified techniques that we
     use for doing broad, scoping analyses, and I think there's room in this
     field, and I think maybe one of the flaws that's existed in the past in
     this human reliability area is that everybody was trying to make the one
     thing that would fit all hard problems, easy problems --
         DR. APOSTOLAKIS:  Right.
         DR. POWERS:  -- long problems, short problems, and maybe we
     do need to have a tiered type of approach in which you say, okay, I've
     got a kind of a scoping tool that --
         DR. APOSTOLAKIS:  No, I think --
         DR. POWERS:  I've got this one that's attacking the really
     tough, really juicy problems that have defied any useful resolution in
     the past.
         DR. APOSTOLAKIS:  I think the issue of screening, scoping
     the analysis, the phased approach that was mentioned earlier, all that
     part, I understand as part of this, and that was that you should have --
     there also is -- but you have to convince me first that this event
     deserves that treatment.
         DR. POWERS:  Yes.
         DR. APOSTOLAKIS:  And that's what's missing right now.  I
     would agree with Dana that you don't have to simplify everything, but
     I'm inclined to say that the majority of the events would deserve it.
         Now, naturally, when you develop a methodology, of course,
     you attack the most difficult part, but I think a clear message here is
     develop maybe a screening approach, a phased approach that would say for
     these kinds of events, do this, which is fairly straightforward and
     simple; for other kinds of events, you do something else until you reach
     the kinds of events and severe accidents that really deserve this
     full-blown approach that may take time, take experts to apply.
         You know, this criticism that plant people should be able to
     apply it, I don't know how far it can go, because if it's very
     difficult, they are known to hire consultants.  So this is the kind of
     thing that they have to think about.  We're not going to tell them how
     to do it, but that's what I understand by your call for simplification. 
     You're not asking for something that says do A, B, C, and you're done.
         Okay; so it seems to me that we have consensus, unless I
     hear otherwise, that a letter along these lines will be appropriate to
     issue, and I'm sure we'll negotiate the words and the sentences in
     December.
         Dana?  Your silence is approval?
         DR. POWERS:  No, my silence is that I'm encouraging at this
     point.
         DR. APOSTOLAKIS:  Yes; yes.
         DR. POWERS:  It's okay to have a methodology at this point
     that only Ph.D.s in human reliability analysis can understand very well.
         DR. APOSTOLAKIS:  I understand the concern about the tone,
     but I also want to make it very clear in the written record that these
     gentlemen have reservations and not random members.  I don't think Mr.
     Bonaca is going to express as extreme views as you, but I'm not sure
     he's far away from your thinking.  So if I have the three utility
     members thinking that way, I think the letter should say something to
     that effect without necessarily discouraging further development or
     refinement.
         DR. POWERS:  Yes.
         DR. APOSTOLAKIS:  But it's only fair; the letter will be
     constructive, but it will clearly state the concerns, and perhaps we
     should meet a year from now or something like that.  We can say
     something like that in the letter.  We look forward to having interactions
     with the staff.
         DR. POWERS:  I think I would really enjoy giving them some
     time to go off and think about some problems to attack and come back and
     say we think we're going to attack these two problems next time or
     something like that.  I think that would be really interesting, because
     I think there are some problems out there that line organizations really
     need some help on solving, and I'm absolutely convinced that the human
     element is going to become of overwhelming importance if we're going to
     have a viable nuclear energy industry in this country.
         The operators are asked to do so much, and it's going to be
     more and more with less and less over time, and we need to have
     something that constrains us from saying, yes, the operators will do this,
     because right now, nothing constrains us from saying yes, the operators
     have to be trained on this; they have to know this; they have to worry
     about this and like that, and at some point, that process has to
     be constrained a little bit.
         But I think I really come in much more enthusiastic about
     this than you thought I would.
         DR. APOSTOLAKIS:  Okay; I think I've heard enough.  I can
     draft a letter.  I'm sure it will be unrecognizable after --
         [Laughter.]
         DR. APOSTOLAKIS:  But at least I have a sense of the
     subcommittee.
         DR. SEALE:  Nobody overhead.
         DR. APOSTOLAKIS:  Yes.
         MR. SIEBER:  Can we see a copy of it before the meeting?
         DR. APOSTOLAKIS:  I'll do my best; I'll do my best, Jack,
     before the meeting.  I urge you to send emails with your concerns; yes,
     and I will do my best to include your thoughts.  I took notes here, but,
     you know, John, if you want to send me a fax or call me.
         DR. BARTON:  Okay.
         DR. APOSTOLAKIS:  Or Jack, because I'm particularly
     interested -- I mean, this is the way this committee has functioned in
     the past.  I mean, if cognizant members express reservations, their
     views carry a lot of weight.
         Is there anything else the members want to say before we
     move on to safety culture?
         [No response.]
         DR. APOSTOLAKIS:  I must say I was pleasantly surprised to
     hear again the same members talk about how they wanted to see safety
     culture addressed.  Miracles never cease, I must say.
         MR. CUNNINGHAM:  Could I ask a question?  I believe we're on
     the agenda for the full committee in December.
         DR. APOSTOLAKIS:  Yes.
         DR. BARTON:  I think it has to be.  I think after this,
     you're going to have to be.
         MR. CUNNINGHAM:  That's the question.  What would you like
     for us --
         DR. BARTON:  To brief the other members.
         DR. APOSTOLAKIS:  How much time do you have?
         MR. PERALTA:  Probably just 45 minutes?
         DR. BARTON:  How much?
         DR. APOSTOLAKIS:  Forty-five minutes.
         Would it be useful to talk about the fire scenario and in
     the context of the scenario explain ATHEANA?  I don't think they can do
     both.
         MR. CUNNINGHAM:  I would agree.  I don't think we can do
     both.
         DR. POWERS:  I think they ought to just explain ATHEANA.  I
     don't think they should try the fire scenario.
         DR. APOSTOLAKIS:  I thought the members found the scenario
     extremely useful.
         DR. BARTON:  Well, I think yes, it is, because it shows how
     they tried to apply --
         DR. APOSTOLAKIS:  Right.
         DR. BARTON:  -- the principles to an actual situation.  I
     think that does help.  Are you sure we can't squeeze some more time off?
         DR. POWERS:  No.
         DR. BARTON:  No, we can't.
         DR. APOSTOLAKIS:  Let us ask if Mr. Cunningham can structure
     it in such a way that he has the scenario and, along the way, is
     explaining the method.
         MR. CUNNINGHAM:  Mr. Cunningham will try in 45 minutes.
         DR. APOSTOLAKIS:  We are reminded here -- is the document
     on the fire scenario going to be available before the meeting?
         MR. CUNNINGHAM:  I'm sorry; the --
         DR. APOSTOLAKIS:  We don't have anything in writing on the
     --
         MR. CUNNINGHAM:  On the fire scenario?  Will we have that
     for the full committee?
         MR. KOLACZKOWSKI:  There is certainly a draft available.
         MR. CUNNINGHAM:  Okay.
         MR. KOLACZKOWSKI:  The NRC has not had a chance to review it
     yet, so it certainly is subject to revisions.
         DR. THOMPSON:  It's still in development.
         MR. CUNNINGHAM:  Okay.
         DR. APOSTOLAKIS:  So we're not going to have it?
         MR. KOLACZKOWSKI:  I don't think you're going to have it.
         MR. CUNNINGHAM:  No, okay.
         DR. APOSTOLAKIS:  Will that be a factor?  We cannot comment
     on something that we don't have?  But we have a presentation.  We have
     view graphs with a comparison, so we can comment on those, right?  We
     can say that we didn't have a written document, but they have some nice
     statements.
         Mark, again, I don't want to tell you how to structure the
     presentation, but the figure you have -- well, the classic ATHEANA --
         MR. CUNNINGHAM:  Yes.
         DR. APOSTOLAKIS:  -- maybe you can use that one and explain
     the elements of the process and then jump into the scenario.
         MR. CUNNINGHAM:  Okay.
         DR. APOSTOLAKIS:  I don't know.
         MR. CUNNINGHAM:  Okay.
         DR. APOSTOLAKIS:  Okay?  And we will try to refrain from
     repeating the same questions that we have asked here, right?  And I see
     some smiles on the faces of some of my colleagues.
         [Laughter.]
         DR. APOSTOLAKIS:  But we will try; we will try.
         I think in fairness to Mr. Sorenson, we should move quickly
     on to his presentation, and I must tell you that I have to disappear at
     3:30, so, Jack -- where is Jack?
         MR. CUNNINGHAM:  Jack is in the back.
         DR. BARTON:  He said 3:30.
         DR. APOSTOLAKIS:  But I want some discussion.
         DR. BARTON:  And you have to leave at when?
         DR. SEALE:  He has to leave at 3:30.
         DR. APOSTOLAKIS:  Who's leaving at 1:00?
         DR. BARTON:  No, I said you have to leave when?
         DR. APOSTOLAKIS:  3:30.  So we have about an hour and a
     half.  I think it should be plenty, yes?
         DR. SEALE:  I have to leave at about 3:30, too.
         DR. APOSTOLAKIS:  Okay; no problem.  3:30, 3:32.
         [Pause.]
         DR. APOSTOLAKIS:  Okay; this is an initiative of the ACRS. 
     We don't know yet how far it will go; for example, our last initiative
     was on defense in depth, and it went all the way to presenting a paper
     at the PSA '99 conference, writing a letter to the Commission and so
     on.  That does not mean that every single initiative we start will have
     that evolution.
         This is the first time that members of this committee
     besides myself are being presented with this, and we also plan to have a
     presentation to the full committee at the retreat; then, the decision
     will be up to the committee as to what the wisest course of action will
     be.  We have asked members of the staff to be here, like Mr. Rosenthal,
     who left; he is coming back.  Jay is here, and we asked the ATHEANA
     people to stay.  They kindly agreed to do it.  So we'll get some
     reaction from experts to our initial thoughts here, and again, where
     this is going to go is up to the committee, and we'll see.
         Mr. Sorenson has been working very diligently on this, so I
     think he deserves now some time.
         Jack?
         MR. SORENSON:  Thank you; I am Jack Sorenson.  This
     discussion is based on a paper that George asked me to write earlier
     this year.  There is a draft on his desk for comment.  But getting to
     this stage took a bit longer, I think, than either one of us thought.
         What I've attempted to do is put together a tutorial that
     will help non-practitioners of human factors-related things to
     understand what the state of the art is and what all the pieces are. 
     This morning, you heard -- and early this afternoon -- a great deal of
     discussion on one piece of a picture that I would like to draw in
     somewhat larger terms.  There is no attempt here to advance the state of
     the art in safety culture; just to understand it.  There is no attempt
     to review or critique the NRC human factors program.
         What you will hear is undoubtedly a somewhat naive view, and
     I would encourage those of you who are expert in one or more aspects of
     the subject to offer, I hope, gentle corrections when you feel I have
     misrepresented something.
         DR. APOSTOLAKIS:  I wonder why anyone would ask this
     committee to be gentle?
         [Laughter.]
         DR. APOSTOLAKIS:  Aren't we always?
         MR. SORENSON:  I was not, of course, referring to the
     committee as being ungentle.
         DR. APOSTOLAKIS:  Oh, I see.
         [Laughter.]
         MR. SORENSON:  The three questions that were posed, I think
     by the planning and procedures subcommittee relative to safety culture
     are what is it?  Why is it important?  And what should the ACRS and NRC
     do about it?  We'll find out that the middle question, why it's
     important, is probably easier to deal with than either what it is or
     what people should do.
         The term safety culture was actually introduced by the
     International Nuclear Safety Advisory Group in their report on the
     Chernobyl accident in 1986.  A couple of years later, they actually
     devoted a publication to safety culture, and in that publication, they
     define it as shown here:  safety culture is that assembly of
     characteristics and attitudes in organizations and individuals which
     establishes that as an overriding priority, nuclear plant safety issues
     receive the attention warranted by their significance.
         There are other definitions that may be useful, and we may
     get to them later if it turns out that they're important, but the main
     thing is that there are -- whatever definitions of safety culture you
     use, there are requirements established essentially at three levels. 
     There are policy level requirements and management level requirements,
     and those two things together create an environment in which individuals
     operate, and it's the interaction between the individuals and the
     environment that is generally understood to be important here.
         The framework is determined by organizational policy and by
     management action and the response of individuals working within that
     framework.  Go on to four, please.
         Just a quick preliminary look at why it's important.  To
     understand its importance, I think you can simply look at what James
     Reason refers to as organizational accidents that have occurred over the
     10 years following TMI.  Of course, within the nuclear industry, it was
     the TMI accident that focused everybody on human factors issues.  In the
     10 years following TMI, there were a number of accidents where
     management and organization factors, safety culture, if you will, you
     know, played an important role.
         The numbers in parentheses following each of these on the
     list are the number of fatalities that occurred.  There was an American
     Airlines accident, plane taking off from Chicago where an engine
     separated from the wing.  It was later traced to faulty maintenance
     procedures.  The Bhopal accident in India, where methylisocyanate was
     released resulting in 2,500 fatalities; the Challenger accident;
     Chernobyl; Herald of Free Enterprise; some of you may be less familiar
     with that.  This was the case of a ferry operating between Belgium
     and England that set sail from its Belgian port with the bow doors
     open; it capsized with somewhere around 190 fatalities.
         And the last one was Piper Alpha; it was an accident on an
     oil and gas production platform where one maintenance crew removed a
     pump from service, removed a relief valve from the system, and replaced
     it with a blind flange that leaked flammable condensate, and the second
     shift crew attempted to start the pump, and there was an explosion and
     resulting fire.
         In the nuclear business, other than Chernobyl and TMI, we
     typically end up looking at what are called near misses or significant
     precursors.  Two that come to mind are the Wolf Creek draindown event,
     where the plant was initially in mode four, I believe; 350 pounds per
     square inch; 350 degrees Fahrenheit.  There were a number of activities
     going on; heat removal was by way of the RHR system.  There was a valve
     opened, and 9,200 gallons of water were discharged from the primary
     system to the refueling water storage tank in about a minute.  The cause
     was overlapping activities that allowed that path to be established.
         There were numerous activities.  The work control process
     placed heavy reliance on the control room crew.  There was the
     simultaneous performance of incompatible activities, which were boration
     of one RHR train and stroke testing of an isolation valve in the other
     train.  The potential for draindown was identified but was not acted
     upon.  Probably the most significant item here was that the stroke
     testing was originally planned for a different time and was deferred,
     and there was no proper review done of the impact of that deferral.
         More recent event, Indian Point II, trip and partial loss of
     AC power.  The plant tripped on a spurious overtemperature delta-T
     signal; off-site power was lost to all the vital 480-volt buses.  One of
     those buses remained deenergized for an extended period and caused
     eventual loss of 125-volt bus and 120-volt AC instrument bus.  All
     diesels started, but the one powering the lost bus tripped.
         This had a number of human factors related to it.  The trip
     was due to noise in the overtemperature delta-T channel that was known
     to be noisy, and the maintenance to fix it had never been completed. 
     The loss of off-site power was due to the fact that the load tap changer
     was in manual rather than automatic, and that resulted in the loss of
     power to the buses.  The diesel trip occurred because there was an
     improper set point in the overcurrent protection and an improper loading
     sequence, and after that, the post-trip response was criticized by the
     NRC for being focused more on normal post-trip activities and not enough
     on the state of risk the plant was in and on recovering from that
     state.
         One of the things that is worth spending just a minute on is
     the idea of culture as a concept in organizational behavior.  The
     International Nuclear Safety Advisory Group introduces the term safety
     culture pretty much out of the blue.  They make no attempt to tie it
     back to the rather substantial body of literature that exists in either
     anthropology, where culture is a common term, or in organizational
     development, where it has become somewhat more common in the last 20
     years or so.
         The term is not without controversy, if you will,
     particularly among the organizational development people.  The term --
     the idea of ascribing something called culture to an organization
     started to show up in the organizational development literature in the
     very early eighties.  The two best-known books are probably Tom Peters'
     In Search of Excellence and a book by Deal and Kennedy entitled
     Corporate Cultures, and they essentially set out to determine why it was
     that organizations or at least some organizations didn't behave in ways
     that were clearly reflected in their structures; they were looking for
     some other attribute of the organization, and they settled on the term
     culture.
         There are people in the literature who take exception to
     that.  The expectation is if you use the term culture in an
     organizational sense or in the sense of a safety culture that it carries
     with it some of the properties of its original use.  That may or may not
     be true in the case of organizational culture or safety culture, but the
     fact remains that it has found a place in the literature.  It is quite
     widely used, particularly with respect to nuclear technology.  You will
     also find it in other writings in other industries, such as the process
     industries and aviation.
         Having said that, you find that virtually everyone then goes
     on to define it in a way that suits their immediate purpose.  I would
     like to go back to an opening remark which I missed, and that's that I
     knew I was going to have some difficulty with this assignment when I ran
     across an INSAG statement that said safety culture was the human element
     of defense in depth, and having spent a couple of years in defense in
     depth, it just seemed unfair that --
         [Laughter.]
         DR. POWERS:  One thing that you have to remember about the
     origins of the concept is that it came up after the Chernobyl accident. 
     There was a strong effort among some people in the IAEA to shelter the
     RBMK design from criticism, and you had to criticize the operators,
     okay?  But criticizing the operators individually was not going to fly
     any better, okay?  Because if you had a bad operator individually, why
     did they allow it?  Why did the system allow this bad operator to be
     there?  You had to go to this safety culture, okay?
         [Laughter.]
         DR. POWERS:  That preserved the RBMK from being attacked,
     and at the same time, it led to protecting the operators individually.
         DR. BARTON:  You have to admit they were a poor example of
     safety culture?
         DR. POWERS:  What did you say?
         DR. BARTON:  Nothing.
         MR. SORENSON:  Well, that makes a good bit of sense,
     obviously.  Although the idea of employee attitude or management and
     worker attitude having a significant impact on safety of operations, you
     know, considerably predates Chernobyl.  You can find references back to
     the early 20th Century when industrial accidents started to become
     significant in some way.
         Okay; I think we can, yes, go on to -- the definition on
     organizational culture, which is a little easier to deal with than
     safety culture, that was offered by a critic of the Peters and Kennedy
     and Deal books is the definition here.  Organizational culture:  the
     shared values (what is important) and beliefs (how things work) that
     interact with an organization's structure and control systems to produce
     behavioral norms (the way we do things around here).  This one appeared
     in an article by Uttal in Fortune in the mideighties, and you'll
     see it repeated in very much the same form in current literature.
         The last phrase, the way we do things around here, I
     actually tracked back to one of the managing directors of McKinsey and
     Company.  It seems to be the most concise definition of culture that --
         DR. APOSTOLAKIS:  That's the best one I like.
         MR. SORENSON:  There are competing terms:  safety culture,
     organizational culture, management and organizational factors, safety
     climate, safety attitudes, high reliability organizations, culture of
     reliability, and they all mean more or less the same or slightly
     different things, depending on how they're used and what the
     investigator decides to do with them.
         So I think it's important to keep in mind that there are no
     -- there is no generally agreed upon definition.  We are dealing with
     the way organizations work and the way people within those organizations
     react, and at some point, you choose a definition that fits your use and
     then hopefully apply it consistently thereafter.
         Dr. Powers?
         DR. POWERS:  This one is the sixth sigma?
         DR. APOSTOLAKIS:  I've heard that, too.
         DR. BARTON:  Sick or sixth?
         DR. POWERS:  Sixth.
         MR. SORENSON:  That's one of the zero-defect --
         DR. POWERS:  Yes.
         MR. SORENSON:  -- cults, is it not?
         DR. POWERS:  Do everything right, yes.
         MR. SORENSON:  Yes; I've run across the term within the last
     few weeks and I --
         DR. POWERS:  There was a survey in the Wall Street Journal
     about a month ago.
         DR. APOSTOLAKIS:  Did this agency actually do a
     self-assessment of its safety climate a couple of years ago?
         MR. SORENSON:  There was a survey by the inspector general. 
     I've actually been through the slides that are on the Web on that.  I
     don't think I've ever seen the text of the report.  And they were
     looking for something a little different than I would have called --
     than what I would have termed safety culture.  They were looking more,
     I think, at the focus of the organization on its mission, assuming that
     if people were focused on the mission of the organization, then that is
     a satisfactory safety culture.  I may be misrepresenting that but --
         DR. APOSTOLAKIS:  They put climate.
         MR. SORENSON:  They used the word culture.
         DR. APOSTOLAKIS:  No, I remember the word climate, because I
     was impressed.
         MR. SORENSON:  Well, they may have used that also, but I
     think the survey was titled a safety culture survey.
         DR. APOSTOLAKIS:  The French are using climate as well. 
     Climate is supposed to be really culture.  Culture is more permanent,
     presumably.
         MR. SORENSON:  One of the better-known writers in this
     general field, James Reason, in his book on managing organizational
     accidents lists the characteristics of a safety culture as a culture
     that results in -- that encourages the reporting of problems and the
     communicating of those problems to everybody throughout the
     organization; a culture in which or an organizational climate in which
     the members, the workers, feel justice will be done; an organization
     that is flexible in the sense of being able to shift from a hierarchical
     mode of operation under normal circumstances to a different mode of
     operation during a crisis or an emergency and then shift back; and then,
     finally, a learning organization where the information that is made
     available is incorporated into the way things are done.
         DR. SEALE:  That clearly indicates, then, that a safety
     culture is not an evolving set of values but rather a break with the
     past; I mean, I can think of organizations you might characterize as a
     benevolent dictatorship, and that was the way in which safety was
     imposed.  I guess under those circumstances, you would have to say the
     old DuPont organization really didn't have a safety culture, although it
     had a remarkable safety record.
         MR. SORENSON:  Yes; I think that's a fair characterization,
     as a matter of fact.
         DR. APOSTOLAKIS:  And I think a lot of the old-timers in the
     U.S. nuclear Navy also dismiss all of this and say Rickover never needed
     it.
         Now, the question is was the culture of the Navy good
     because of one man?  And do you want that?  Or do you want something
     more than that?  Rickover certainly didn't think much about human
     factors.
         DR. POWERS:  If you don't have enough people to go around --
         DR. APOSTOLAKIS:  I don't know, but Rickover did a good job.
         DR. POWERS:  There are people who would take a different
     view on that.
         [Laughter.]
         DR. POWERS:  And I think you can fairly honestly show that
     there are good and bad aspects of his approach, of his tyranny.
         MR. SORENSON:  There were two boats lost.
         DR. POWERS:  The time it takes to put a boat to sea, the
     mission of those boats and things like that -- you can change your
     approach.
         MR. SIEBER:  We did a survey a number of years ago of the
     idea of safety culture.  About 700 people out of 1,100 responded.  They
     had the same list you have, except they added personal integrity and a
     caring attitude to it.
         DR. BARTON:  There were other characteristics.
         MR. SIEBER:  And that seemed to really work.  It changed the
     attitude in that facility; it really did, just finding out the
     practices.
         DR. BONACA:  Although the attribute of flexibility, I think,
     goes a long way in that direction.  That's the key item that you
     described there:  when you get to technical issues, the ability to
     flatten out the organization and not have any pecking order or a fear
     of bringing up issues.  Flexibility is very important.
         MR. SORENSON:  One can deduce from the literature a few
     common attributes that virtually every -- all of the investigators
     share:  good communications; senior management commitment to safety;
     good organizational learning and some kind of reward system for
     safety-conscious behavior, and the lists expand from that point, if you
     will.
         DR. BARTON:  Conservative decision making.
         MR. SORENSON:  I'd like to take a step back here just a
     little bit and try to put the safety culture issue into the context of
     the larger issue of human factors, and to do that, I think looking at
     the National Research Council report done in 1988 on the NRC human
     factors program is useful.
         The National Research Council identified five areas that
     they thought the NRC, the nuclear regulators, should address in their
     human factors research.  First was the human-system interface; second,
     the personnel subsystem; the third, human performance; the fourth,
     management and organization; and the fifth, the regulatory environment. 
     The first two items, human-system interface and personnel subsystem,
     deal primarily with the man-machine interface, the way the machines are
     designed and the way the personnel are trained.
         Human performance in the context of that report is intended
     to deal with what this morning was referred to as unsafe acts of one
     kind or another, the actions of the system and equipment operators.
     The fourth area, what they call management and organization factors,
     is part of what they called fostering a culture of reliability; that
     was their phrase rather than safety culture.  And the fifth, the
     regulatory environment, dealt with the issue of how regulatory actions
     impacted the way the licensees did business.
         The safety culture, as I'm attempting to deal with it today,
     is focused on the fourth item, management and organization.  It creates
     the environment that human actions are taken in, and it may contain the
     ingredients to create what James Reason calls latent errors, those
     things which change the outcome of an unsafe act, but the issue of
     safety culture deals with the management and organization factors and
     the climate it creates, the conditions it creates for the human to
     operate in.
         One of the difficulties I had in going through the
     literature was trying to understand what all the pieces were, and so,
     one of the things that I ended up doing that helped me, and I think
     could be generally helpful in putting some of the pieces together is to
     look at all of the things that go into the process of establishing some
     interesting relationship between something called safety culture and
     operational safety or ultimately some measure of risk, and this figure
     shows the first half-dozen steps in that process.
         But the idea here is that if safety culture is interesting
     from an operational safety standpoint, you need to be able to establish
     something about those relationships.  The process typically starts off
     with defining some kind of an organizational paradigm.  Mintzberg's
     machine bureaucracy is very often used for nuclear power plants, and
     then, as soon as it's used, it's criticized for having several
     shortcomings.  The investigators need to have some idea of how the
     organization works, and they generally should start with some definition
     of safety culture, what it is.
         Having done that, then, they need to define some attributes
     of safety culture, and it might be the ones that I listed a few minutes
     ago:  good organizational learning, good communications and so forth,
     but there are somewhere between a half a dozen and 20 of those
     attributes that can be identified, and having done that, then to
     evaluate organizations, you need to look -- you need to have a way to
     measure those things that you've just identified, and you might put
     together personnel surveys or, you know, technical audits or whatever,
     but you need some kind of evaluation technique that involves looking at
     how the organization, how an organization actually works.
         Having designed the evaluation techniques, you need to
     collect data, and then, once you have data, you need something that
     tells you how to judge that data.  I've indicated that by choosing
     external safety metrics:  if you collect cultural data on an
     organization, how do you decide whether that organization is safe or
     not safe when judging the cultural data?
         In their simplest form, those external metrics might be a
     SALP score.  They might be the performance indicators that we're using
     now.  They might be earlier performance indicators.  But the
     investigator makes some choice of what he's going to compare his
     cultural parameters to.
         And typically, that correlation is done with some sort of
     regression analysis, and as a result of doing that, you find out that
     some number of the safety culture elements you started with, you know,
     correlate with your safety parameters, and some don't.  And the output
     from that first stage, then, is which of these safety culture elements
     turn out to be significant.
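         As a concrete illustration of that correlation step, the
     following is a minimal sketch in Python.  It is not drawn from any of
     the studies discussed here:  the plant data, the attribute names, the
     external safety metric, and the coefficient threshold are all invented
     for illustration, and a real study would use a proper statistical
     significance test rather than the crude cutoff shown.

        import numpy as np

        # Hypothetical survey-derived attribute scores (rows = plants,
        # columns = safety culture elements); purely illustrative numbers.
        attributes = ["communications", "mgmt_commitment", "org_learning"]
        X = np.array([
            [4.1, 3.8, 4.0],
            [3.2, 3.0, 2.9],
            [4.5, 4.4, 4.6],
            [2.8, 3.1, 2.5],
            [3.9, 3.7, 3.8],
            [3.0, 2.6, 2.8],
            [4.3, 4.0, 4.2],
            [3.5, 3.4, 3.1],
        ])
        # Hypothetical external safety metric chosen by the investigator
        # (for example, a SALP-like score or a performance-indicator
        # composite), one value per plant.
        y = np.array([4.0, 2.9, 4.7, 2.6, 3.8, 2.7, 4.4, 3.2])

        # Ordinary least squares regression with an intercept term.
        A = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        print(f"intercept: {coef[0]:.2f}")
        for name, c in zip(attributes, coef[1:]):
            # Crude stand-in for a real significance test.
            flag = "appears significant" if abs(c) > 0.3 else "weak"
            print(f"{name:>16}: {c:+.2f}  ({flag})")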
         The remainder of the process, then, if you want to carry it,
     you know, all the way to its logical conclusion is you would like to be
     able to use these significant safety culture elements to modify in some
     way your measure of risk, and the next figure -- if you can move that
     one over a bit; pick up the balance of that.  The bottom path there
     identifies, you know, relating the elements that you've decided are
     significant to the PRA parameters or models; box 11 finally modifying
     the PRA parameters and ultimately calculating a new risk metric.
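         A worked toy example of that bottom path, under assumptions of
     my own:  the basic event names, the baseline probabilities, and the
     mapping from a culture score to a multiplier on human error
     probabilities are all invented for illustration and are not part of
     ATHEANA or of any model discussed here.

        # Hypothetical baseline human error probabilities (HEPs) for two
        # basic events in a PRA model; illustrative numbers only.
        base_hep = {"operator_fails_to_isolate": 3.0e-3,
                    "crew_misdiagnoses_event":   1.0e-2}

        INITIATOR_FREQ = 1.0e-2    # per year, hypothetical
        HARDWARE_FAILURE = 5.0e-4  # independent equipment failure, hypothetical

        def culture_multiplier(score, nominal=3.0, strength=0.5):
            # Map a culture score (1 = poor, 5 = strong) onto a multiplier
            # applied to the HEPs; the functional form is an assumption.
            return max(0.1, 1.0 - strength * (score - nominal) / 2.0)

        def risk_metric(heps):
            # Two minimal cut sets:  initiator with (HEP1 or hardware
            # failure) followed by HEP2, combined by rare-event addition.
            union = heps["operator_fails_to_isolate"] + HARDWARE_FAILURE
            return INITIATOR_FREQ * union * heps["crew_misdiagnoses_event"]

        baseline = risk_metric(base_hep)
        for score in (2.0, 3.0, 4.5):
            m = culture_multiplier(score)
            adjusted = {k: v * m for k, v in base_hep.items()}
            print(f"culture score {score:.1f}: multiplier {m:.2f}, "
                  f"risk {risk_metric(adjusted):.2e} (baseline {baseline:.2e})")

     The only point of the sketch is the shape of the calculation:  a
     culture-informed adjustment enters through the parameters, and the
     risk metric is then recomputed from the unchanged model structure.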
         DR. APOSTOLAKIS:  So I guess ATHEANA, then, because you
     don't necessarily have to go to that, ATHEANA would be somewhere there
     in between 9 and 10, perhaps?
         MR. SORENSON:  I would put -- well, it doesn't work on
     performance indicators, as I understand it.  I would say ATHEANA covers
     8 and 11; is that a fair statement?
         DR. APOSTOLAKIS:  It definitely does, but perhaps to take
     advantage of the qualitative aspects, you need an extra box so you don't
     just make it PRA.  So before eight, you might have the qualitative
     aspects of ATHEANA, and then, at the start of eight, of course, you have
     to do the quantification.
         MR. SORENSON:  I would be delighted to get critiques on
     this, too.
         DR. APOSTOLAKIS:  Don't worry; don't encourage people.
         [Laughter.]
         DR. APOSTOLAKIS:  Susan, you wanted to say something?  You
     have to come to the microphone, please; identify yourself.
         MS. COOPER:  Susan Cooper with SAIC.
         I think with respect to interaction with ATHEANA, there are
     certainly two different ways.  Already, we're trying to incorporate some
     symptoms, if you will, of culture in some of the preparation for doing
     ATHEANA.  We'd like the utility people to try to examine what their
     prior operational problems are as part of identifying their informal
     rules or maybe some things that are, if you will, symptoms of a culture
     that, when they play it out through a scenario development and
     deviations, it would be organizationally-related, but we don't have what
     we see from some of the events, some of the other things that the
     organization can do that might set up a scenario, so we recognize that
     there may be some pieces missing, and we certainly need some kind of
     input to know not only what -- you know, what from the organization is
     going to cause things but then also, then, what is the impact on the
     plant?  There are a couple of different pieces.
         DR. APOSTOLAKIS:  Now, if just for a couple of historical
     purposes, we go to the previous one, no, yes; box four, collect and
     analyze data, that was essentially the reason why one of the earlier
     projects on organizational factors funded by this agency was killed. 
     The proposed way of collecting data was deemed to be extremely
     elaborate.  They implemented it at Diablo Canyon, and the utility
     complained.
         So, there is this additional practical issue here that you
     have to do these things without really --
         DR. POWERS:  I don't know why.
         DR. APOSTOLAKIS:  Dana's commentary here, I mean, certain
     things, by their very nature, require a detailed investigation.  I mean,
     I don't know where this idea has come from that everything has to be
     very simple and done in half an hour, but I think it's important to bear
     in mind that the utility complained, and the management of the agency
     decided no more of this.  I'm willing to be corrected if anybody knows
     any different, but that was my impression.
         MR. SORENSON:  Well, and we'll touch on that
     one --
         DR. APOSTOLAKIS:  Okay; sorry.
         MR. SORENSON:  -- a little in a couple of slides, as a
     matter of fact, but you're right:  one of the results early on was that
     people did try to look for non-intrusive ways to collect data.  One
     possibility is to look at the way the organization is structured, which
     you can deduce from, you know, organizational documents, if you will.
         DR. APOSTOLAKIS:  Yes, but the attitudes, you would never
     get that.
         MR. SORENSON:  You don't pick them up and --
         DR. APOSTOLAKIS:  These attitudes, you don't pick that up.
         MR. SORENSON:  And interestingly enough, the people that
     started down that path after a few years started to pull in something
     that they called culture, the way an organization worked.
         Yes; I will, time-permitting, go through at least one
     example that sort of traces through those boxes, if you will.  I would
     like to comment on the upward path on slide 16.  The -- what you would
     really like to do is to be able to identify some number of performance
     indicators that were indicative of the safety culture elements and that
     you could translate, in turn, into modifications of the PRA parameters,
     and the idea there is if you can identify those performance indicators,
     then, you don't have to go back and do the intrusive measurements once
     you've validated the method.
         And so, in the best of all possible worlds, you know, one
     would, you know, have processes that follow that upward path.  Now, I
     would hasten to add in summarizing on this figure that there is a lot
     that goes on inside every one of those boxes, and, in fact, when I was
     discussing this with Joe Murphy -- I guess he's not here today -- and at
     one point, we pointed at one box in particular, and I asked him a
     question about it, and he said, well, of course, in that box, miracles
     occur, and that's still --
         DR. APOSTOLAKIS:  Did he also tell you that there's a NUREG
     from 1968 whose number he remembered that addresses it?
         [Laughter.]
         DR. APOSTOLAKIS:  I mean, Joe usually does that.
         [Laughter.]
         DR. APOSTOLAKIS:  PNL published a report in 1968 in March --
         [Laughter.]
         DR. APOSTOLAKIS:  -- that is relevant.
         MR. SORENSON:  So the -- anyway, the summary here is that
     this path is neither short nor simple.
         DR. APOSTOLAKIS:  Yes.
         MR. SORENSON:  There are a lot of pieces that go into
     establishing a relationship between safety culture or other management
     and organizational factors and some risk metric.
         Let me see what we might need to do here.  How much time do
     you want to leave for discussion, George?
         DR. APOSTOLAKIS:  Well, you are doing fine.
         MR. SORENSON:  Okay.
         DR. APOSTOLAKIS:  I think people can interrupt as they see
     fit.
         MR. SORENSON:  Okay.
         DR. APOSTOLAKIS:  So, you're doing fine.
         MR. SORENSON:  What I'd like to do now is go through some of
     the boxes and some examples of some work that has been done referring
     back to figures 15 and 16.  As the figure indicates, the process starts
     out somehow with a model of the organization you're interested in, and
     my conclusion as a layperson was that you can look at essentially the
     way an organization is structured; the way it behaves or its processes
     or some combination of those things.
         If you look at slide 18, this was an attempt to look at
     structure only.  The work actually started at, I believe, Pacific
     Northwest Laboratories and was continued by the same investigators,
     although at different places, over the next several years, and here,
     they attempted to look strictly at what they could deduce from the way
     the organization described itself, if you will.  It does not involve
     culture.  If you look at the literature referenced by these folks versus
     the literature referenced by organizational culture people, it's a
     different body of literature.  There's very little cross-referencing.
         This was designed to be non-intrusive.  It has an obvious
     difficulty right up front, and that is that there are a lot of factors
     to try to correlate.  They made an attempt to correlate with things like
     unplanned scrams, safety system unavailabilities, safety system
     failures, licensee event reports and so forth.  There was other work
     sponsored by the NRC that began at, I believe, at Brookhaven; Sonia
     Haber and Jacobs and others, not all at Brookhaven, I would hasten to
     add, and this was a slightly different perspective on the same thing. 
     They came up with 20 factors that included something they called
     organizational culture and safety culture, and this was the -- where the
     -- one where the data gathering, if you will, did become very intrusive. 
     They made up surveys and went out and talked to a bunch of plant people
     and shadowed managers and so on and so forth, and they probably got
     pretty good data, but it was not an easy process.
         Then, there is another process developed by -- I was going
     to say that eminent social psychologist.
         DR. APOSTOLAKIS:  I would like to add that Mr. et al. is
     here.
         MR. SORENSON:  Yes, good.
         DR. APOSTOLAKIS:  His first name is et; last name is al.
         [Laughter.]
         DR. APOSTOLAKIS:  We call him Al.
         MR. SORENSON:  Anyway, one of the contributions here was to
     reduce the 20 factors to half a dozen, which makes the process more
     tractable, if you will, but it's a little different also in the sense
     that it focuses on the work processes of the organization and how those
     are implemented, and, in fact, the next figure, I believe, is an example
     of their model of a corrective maintenance work process, and the
     analysis includes looking at the steps in the process and identifying
     the -- what they call barriers or defenses that ensure that an activity
     is done correctly, and you can map these activities back onto the
     earlier list of six attributes, if you will, to determine the
     relationship between the organization and the work processes.
         DR. APOSTOLAKIS:  One important observation, though:  these
     six are not equally important to every one of these.  This is a key
     observation.  For example, goal prioritization really is important to
     the first box, prioritization of the work process, whereas technical
     knowledge, for example, means different things for execution and
     different things for prioritization.  So that was a key observation that
     Rick made on the factors that Haber and others proposed to deal with the
     work process.  Then, it meant different things than he proposed.
         And most of the latent errors of some significance were the
     result of wrongful prioritization.  That is, we will fix it at some
     time, when it breaks; unfortunately, it breaks before you could --
         MR. SORENSON:  Okay; moving on to the next box in the
     activity diagram, coming up with some way to measure safety culture or
     whatever organizational factor you are concerned with, there is, you
     know, the obvious candidates:  document reviews, interviews,
     questionnaires, audits, performance indicators.  But I think the thing
     that struck me here is that regardless of what list of safety culture
     attributes you start with, in this process, you're going to end up with
     some questions that you hope represent those attributes in some way, so
     when you get done, you don't have just, you know, a direct measurement
     of organizational learning; you have answers to a set of questions that
     you hope are related in some way to organizational learning.
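         A minimal sketch of that mapping from questionnaire answers
     back to attributes:  the item texts, the groupings, the reverse-scored
     items, and the responses below are all invented for illustration, and
     simple averaging is only one of several possible scoring choices.

        # Hypothetical mapping of questionnaire items to the attribute each
        # one is hoped to represent; "reverse" marks negatively worded items.
        ITEMS = {
            "q01": ("org_learning", False),   # "Lessons from events reach my group."
            "q02": ("org_learning", True),    # "Event reports are filed and forgotten."
            "q03": ("org_learning", False),   # "Procedures change when problems are found."
            "q04": ("communications", False), # "I hear about safety concerns promptly."
            "q05": ("communications", True),  # "Bad news travels slowly here."
        }
        SCALE_MAX = 5  # a 1-to-5 Likert scale is assumed

        def attribute_scores(responses):
            # responses: {item_id: answer in 1..5} for one respondent.
            totals, counts = {}, {}
            for item, answer in responses.items():
                attr, reverse = ITEMS[item]
                value = (SCALE_MAX + 1 - answer) if reverse else answer
                totals[attr] = totals.get(attr, 0) + value
                counts[attr] = counts.get(attr, 0) + 1
            return {attr: totals[attr] / counts[attr] for attr in totals}

        # One hypothetical respondent.
        print(attribute_scores({"q01": 4, "q02": 2, "q03": 5, "q04": 3, "q05": 4}))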
         DR. POWERS:  The difficulty in drafting the questionnaire
     that gives you the information that you're actually after must be
     overwhelming.  I mean, the problems that they have on these political
     polls, they can get any answer they want depending on how they construct
     the question.  I assume that the same problems affect the
     questionnaires.
         MR. SORENSON:  I would assume so, but this is also to assume
     what psychologists -- the organization --
         DR. APOSTOLAKIS:  Never rely on one measuring instrument.
         MR. SORENSON:  Would anybody like to comment on the
     difficulty that goes on within that box?
         DR. APOSTOLAKIS:  It's hard.
         MR. SORENSON:  It's hard.
         DR. POWERS:  That's a separate field of expertise,
     formulating questionnaires, is it not?  I'm really concerned that you
     asked too much to be able to formulate a questionnaire that allows
     somebody to map an organization accurately when you have this difficulty
     that I can get any answer that I want depending on how I construct the
     questions.
         MR. SORENSON:  Of course, part of the way round that is --
     well, there are ways of designing questionnaires so that the same
     question gets asked six different ways, and you can check for
     consistency and poor wording.
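         One common way of doing that consistency check, sketched with
     made-up numbers:  Cronbach's alpha computed over a group of items that
     are meant to ask the same thing in different words.  The item group,
     the responses, and the 0.7 acceptance threshold are illustrative
     assumptions, not a prescription.

        import numpy as np

        def cronbach_alpha(item_scores):
            # item_scores: rows = respondents, columns = the differently
            # worded versions of (nominally) the same question.
            item_scores = np.asarray(item_scores, dtype=float)
            k = item_scores.shape[1]
            item_var = item_scores.var(axis=0, ddof=1).sum()
            total_var = item_scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1.0)) * (1.0 - item_var / total_var)

        # Hypothetical responses from six people to four rewordings of a
        # "management commitment to safety" question (1-to-5 scale).
        consistent = [[4, 4, 5, 4], [2, 2, 2, 3], [5, 5, 4, 5],
                      [3, 3, 3, 3], [4, 5, 4, 4], [2, 1, 2, 2]]
        noisy      = [[4, 1, 5, 2], [2, 5, 1, 4], [5, 2, 4, 1],
                      [3, 4, 2, 5], [1, 5, 3, 2], [4, 2, 5, 3]]

        for label, data in (("consistent", consistent), ("noisy", noisy)):
            alpha = cronbach_alpha(data)
            verdict = "usable" if alpha >= 0.7 else "rework the wording"
            print(f"{label}: alpha = {alpha:.2f} -> {verdict}")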
         DR. POWERS:  What do you do when they're inconsistent?  Do
     you throw it out?
         MR. SORENSON:  That's what you pay psychologists for.
         DR. POWERS:  I mean, I don't see that you're out of the game
     here.  I mean, I had enough to do with employee opinion poll taking and
     whatnot to know that there is a culture or a discipline for doing these
     things, and there are well-known principles, like the second
     year of the employee opinion poll, the results are always worse than the
     first year; the people filling out the questionnaires have gotten better
     at filling out questionnaires, so they can be more vicious in their
     evaluations.  I mean, it just strikes me as a flawed process.
         MR. SORENSON:  Well, I think part of the answer to that is
     you try to measure enough things that if your measure is flawed on one
     or two or three of them, you can still get the -- an indication of the
     attribute that you're really trying to measure.
         DR. SEALE:  It's interesting, because so many organizations
     now have been convinced that their organization has to be a
     participatory autocracy, and so, they ask these questions in the
     questionnaires, and as you say, they deteriorate almost invariably, but
     they also systematically ignore the results, so that --
         [Laughter.]
         DR. SEALE:  But, you know, in the name of, as I say,
     participatory autocracy, they do it.
         DR. POWERS:  I am intimately familiar with one organization
     who is absolutely convinced that the fact that they conducted a
     questionnaire on a particular aspect of behavior excuses them from ever
     again having to attend to that.
         [Laughter.]
         DR. APOSTOLAKIS:  Why didn't you include the behaviorally
     anchored rating scales?
         MR. SORENSON:  I didn't intentionally exclude it.  I didn't
     see it as different from -- in a process sense from what's here.  I may
     have misread that.
         DR. APOSTOLAKIS:  Anyway, okay, that's another of the
     instruments that's available.
         But let's go.
         MR. SORENSON:  Okay; selecting external safety metrics:  I
     mentioned that briefly earlier; you know, one can rely on performance
     evaluations or performance indicators, or do some sort of expert
     elicitation to evaluate the organization.  In some industries, which
     we'll touch on in particular -- process and aviation -- you actually
     have accident rates that you can use as a metric, where there is good
     statistical data on accident rates.  But again, the point I'm trying
     to make here is that
     the investigator chooses that as part of the evaluation process, and
     sometimes, that is lost sight of.
         In the chemical industry, process industries in particular,
     they tend to use the audit techniques.  They don't have the same
     reluctance to gather field data that seems to exist in the nuclear power
     business.  They tend to use the terminology safety attitudes and safety
     climate versus safety culture, and the studies that I've looked at used
     either self-reported accident rates or what they call loss of
     containment accident rates, you know, covering relatively large numbers
     of facilities.  One study covered, I think, 10 facilities managed by the
     same company, for example; 10 different locations.
         And these studies in the process industries have resulted in
     very strong statistical correlations between the attributes of safety
     culture that we've been talking about here and accident rates, and you
     can show that the low accident rate plants, you know, show strong safety
     culture attributes.  In a typical correlation, they might start out
     with, you know, 19 or 20 attributes, as the Brookhaven people did, and
     find out that 14 or 15 of those correlate and five don't for some reason.
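         A small sketch of that kind of screening, with invented data
     standing in for the multi-facility studies just mentioned:  each
     attribute's score is correlated with the facilities' self-reported
     accident rates, and attributes with only a weak correlation get
     dropped.  The attribute names, the numbers, and the -0.6 cutoff are
     all assumptions made for the example.

        import numpy as np

        # Hypothetical attribute scores for 10 facilities of one company
        # (rows = facilities) and their self-reported accident rates.
        attribute_names = ["communications", "mgmt_commitment",
                           "org_learning", "housekeeping", "paperwork_volume"]
        scores = np.array([
            [4.2, 4.0, 4.1, 3.9, 3.0],
            [2.9, 3.1, 2.8, 3.0, 3.2],
            [4.6, 4.4, 4.5, 4.1, 2.9],
            [3.1, 2.8, 3.0, 3.3, 3.1],
            [3.8, 3.9, 3.7, 3.6, 3.0],
            [2.6, 2.7, 2.5, 2.9, 3.3],
            [4.4, 4.2, 4.3, 4.0, 2.8],
            [3.3, 3.2, 3.1, 3.4, 3.2],
            [4.0, 3.8, 3.9, 3.7, 3.1],
            [2.8, 2.9, 2.7, 3.1, 3.0],
        ])
        accident_rate = np.array([0.8, 3.1, 0.5, 2.7, 1.2,
                                  3.6, 0.6, 2.4, 1.0, 3.3])

        # Pearson correlation of each attribute with the accident rate;
        # keep attributes that correlate strongly and negatively (better
        # culture score, fewer accidents).  The threshold is arbitrary.
        for j, name in enumerate(attribute_names):
            r = np.corrcoef(scores[:, j], accident_rate)[0, 1]
            keep = "keep" if r <= -0.6 else "drop"
            print(f"{name:>16}: r = {r:+.2f}  ({keep})")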
         DR. SEALE:  Jack, how much of that, though, is due to the
     fact that the elements of positive numbers on the accident rate are the
     inverse or one minus the numbers on the safety culture?  I mean, they're
     almost -- the way you characterize your safety culture almost certainly
     is painted by the idea that one of the worst things that can happen to
     you is an accident.
         MR. SORENSON:  Well, certainly, you've got to look at how
     the measurement is done.  I don't have a quick answer.
         DR. SEALE:  No, I mean, what if you had just for instance or
     just for the fun of it, let's say we had two plants, and both of them
     didn't have any accidents; one of them had a good safety culture and one
     of them didn't.  I don't know if your questionnaire would actually
     detect or make that distinction.
         MR. SORENSON:  In that case, I think you're absolutely
     right, but precisely the point I'm trying to make here is that in this
     case, we are not looking at plants with zero accident rates.  We're
     looking at plants that have very low accident rates and very high ones.
         DR. SEALE:  Yes.
         MR. SORENSON:  So we've got statistics here that we don't
     have in the nuclear power business.  The ratio of the best performing to
     the worst performing in terms of accident rates is typically about a
     factor of 40.  And, in fact, I'll come back to that later.  The point
     that one of these folks makes is that in aviation --
         DR. APOSTOLAKIS:  PSA is one minus the -- you know, that's
     my problem.
         MR. SORENSON:  The aviation business, which presumably uses
     roughly the same equipment and roughly the same training methods
     worldwide for commercial passenger airlines, there's a difference of
     about a factor of 40 between the best and worst performing airlines.
         DR. SEALE:  Yes.
         MR. SORENSON:  So the point here is precisely that in those
     areas where you've got data, you can correlate these safety culture
     elements, if you will.
         Which brings us to, you know, the areas of weakness or
     discomfort, most of which have been touched on here earlier.  One of
     them is that at this point, nobody pretends to understand the mechanism
     by which the thing we call safety culture affects operational safety. 
     Second area was what you just touched on, Bob.  There is a lack of valid
     field data in the nuclear power business in particular.  First, the
     actual accident rates are low, but there's even a lack of data on the
     safety culture side in general.
         And the third area is there are no good performance
     indicators that have been identified at this point; clearly an area that
     needs additional attention, not only in the nuclear power business.
         DR. BARTON:  I think you're looking at too high a level for
     the field data to be looking at accidents.  I think you don't have to
     look at accidents.  Go look at lower levels of performance in the
     organization; go look at industrial safety events.  Go look at human
     performance or look for operator errors.  Go look at maintenance people
     not following procedures.
         If you go look at a whole bunch of those things and relate
     that, you'll find out that the culture is different at that plant than
     it is at the other plant that hasn't had a major accident either but
     doesn't have the same numbers of those types of --
         DR. SEALE:  You could probably use LERs just as easily for
     that.
         DR. APOSTOLAKIS:  Or any number of attributes --
         DR. BONACA:  The trouble with LERs is there are not enough
     LERs written.  These plants write three or four LERs a year.  I don't
     know if there's enough data there.
         DR. BARTON:  Or whatever the correct level of --
         DR. SEALE:  Yes.
         DR. BONACA:  There are corrective action systems at the
     plants --
         DR. SEALE:  Yes, yes.
         DR. BONACA:  Because there are 20,000 inputs per plant.
         DR. SEALE:  Yes.
         DR. BONACA:  Probably, that's the biggest window that you
     have.
         DR. APOSTOLAKIS:  So you are saying that it would be perhaps
     worthwhile to see if some performance indicators can be formulated using
     this kind of evidence?
         DR. BARTON:  I think so.
         DR. APOSTOLAKIS:  Instead of going to models?  That's a good
     idea.
         DR. BARTON:  Think about it.
         DR. APOSTOLAKIS:  It would be extremely tedious to go
     through those records.
         DR. BARTON:  Oh, yes.
         DR. APOSTOLAKIS:  But it would probably be worthwhile.
         MR. SIEBER:  A lot of plants.
         DR. POWERS:  You can find people within an organization
     oftentimes who know those records surprisingly well.  If you have a lot
     more, then it's a lot easier.
         DR. BONACA:  I mean, an example of performance indicators at
     IAEA and all places, one could ask whether or not they should be nine or
     whatever.  But have they had those elements that were --
         DR. SEALE:  They weren't accidents.
         DR. BONACA:  No, incidents.
         DR. APOSTOLAKIS:  But that is a necessary assumption that
     this really is a good indication of what will happen if there is a need
     for an ATHEANA kind of system, but it may be very good when it comes to
     a major -- when they pay attention.  In fact, we had a guy call
     maintenance people; more than 50 percent, to my surprise, thought that
     the procedures were useless; they never followed them.  They thought they
     were for idiots.
         Now, those guys probably are very good, but if you are
     blind, you say oh, they don't use the procedures; my God, bad, bad boy. 
     Yes; they're probably doing a better job than somebody else who goes
     with --
         DR. BONACA:  Even there, that's another issue.
         DR. APOSTOLAKIS:  So I think there is this presumption,
     although I like the idea, because at least you get something concrete,
     but maybe that's something else to think about:  how much can you
     extrapolate from these fairly minor incidents, because there is this --
     Jack didn't mention, but people also distinguish between the formal
     culture and the informal culture, the way things really get done.  And
     do they take shortcuts?  They do all sorts of things.  And these are
     good people usually.  I mean, they're not -- but I think that's a good
     idea.  It's a good idea.  It's just that, I mean, they have -- you know,
     whenever anybody proposes anything here, you have to say something
     negative about it.  So, there it goes.
         Alan, you have to come to the microphone.
         MR. KOLACZKOWSKI:  Alan Kolaczkowski, SAIC.
         George, that's the very reason why, in the ATHEANA part, I
     think we're looking at both the EOPs and the formal rules, but then, you
     saw we also look at tendencies and informal rules.
         DR. APOSTOLAKIS:  Right.
         MR. KOLACZKOWSKI:  That's where we're trying to capture some
     of those -- part of the culture, if you will:  how do they really do it? 
     What are the ways they really react when this parameter does this?  What
     are their tendencies?  I think we're trying to capture some of that.  We
     use the terminology informal versus formal rules, but I think we're
     talking about the same kind of thing.
         DR. APOSTOLAKIS:  Yes.
         MR. SORENSON:  By the way, though, not all investigators
     agree that let me call them near misses or incidents extrapolate
     properly to accidents.
         DR. APOSTOLAKIS:  Yes, you have to make some assumptions.
         MR. SORENSON:  And also, the people who question that also
     question whether the human performance information or models in the
     nuclear business translate to those in other hazardous industries. 
     That's not a given.
         DR. APOSTOLAKIS:  Go ahead.
         DR. SEALE:  But the point may be, though, that the extent to
     which the organization has the capability of absorbing near misses in
     such a way that they do not propagate to major accidents may be the
     thing that's the measure of safety culture.
         MR. SORENSON:  Well, Reason would agree with that very
     precisely, because his definition of safety culture, you know, is, in
     effect, that culture which leads to a small incidence of latent errors
     that go undiscovered.  And it's the latent errors that translate, you
     know, a single unsafe act into a disaster.
         DR. SEALE:  And then, but the ability to correct for the
     error in other parts of the organization so that it doesn't grow --
         MR. SORENSON:  Right.
         DR. APOSTOLAKIS:  But I think another measure of goodness
     which is really objective is to see whether they actually have work
     processes to do some of these things.  Rick is working with -- Rick Weil
     is trying to develop organizational learning work.  So what you find is
     that yes, everybody says, boy, organizational learning, sure, yes, we
     do that.  But how do you do it?  And that's where he gets stuck.  We do
     it.  Somehow, we do it.  There is no work process; they have no formal
     way of taking a piece of information, screening it, because that's the
     problem there:  they get too many of those.
         DR. BARTON:  How many do they get a week or about a year?
         DR. APOSTOLAKIS:  About 6,000 items a year; I mean, here,
     you're not going to be producing power just to study 6,000 items.
         [Laughter.]
         DR. BARTON:  I hope not.
         DR. APOSTOLAKIS:  So there is no formal mechanism for
     deciding what is important, which departments should look at it, and I
     think that's an objective measure.
         DR. BARTON:  Yes, it is, because you can prioritize those
     6,000.
         DR. APOSTOLAKIS:  But they don't.
         DR. BARTON:  You can put them in buckets.  Well, I know
     plants that do.
         DR. APOSTOLAKIS:  I'm sure; and those have a better culture.
         DR. BARTON:  I don't necessarily agree with that.
         [Laughter.]
         DR. APOSTOLAKIS:  All right; no, but it is an objective
     measure of the existence of the processes themselves.  It is a measure
     of some attempt to do something.
         DR. BONACA:  But it is also a measure of the way the work is
     getting accomplished or not accomplished that gives you some reflection
     on potential initiators.  For example, a process that is overwhelmed
     that is unable to accomplish work on a daily basis, something is going
     to happen out there, because we're starting an item; you are closing it. 
     You're delaying items, and something is going to start in a new activity
     before you close the other one at some point.
         And so, if you look at that, you have a clear indication,
     and we're trying to begin to correlate that.  So you have some
     indication of really what kind of a story.  Now, the question is are
     they going to affect the unavailability of a system?  See, we don't know
     that.
         DR. APOSTOLAKIS:  It may, but -- but there is something to
     the argument that -- not just nuclear.  But there seems to be a
     consensus that organizational learning is a key characteristic of good
     organizations.
     Now, if I see that, I really don't need to see real data to prove that. 
     I mean, those guys are not stupid.  They know what they're talking
     about.  And, in fact, I remember there was a figure from a paper in the
     chemical, whatever; it was a British journal, comparisons of good
     organizations, excellent organizations.  The key figure that
     distinguished excellent from everybody else was this feedback loop,
     organizational learning, from your own experience and that of others,
     and it's universal.
         Anyway, let's have Jack continue.  He's almost done, I
     understand.
         MR. SORENSON:  Yes; there are a couple more slides here, and
     I did want to touch on what we've just been discussing, you know, the
     evidence that a safety culture is important to operational safety. 
     There is an overwhelming consensus among the investigators; if there is
     a subculture that thinks an attitude doesn't matter, I didn't find it in
     the literature in any event.
         The accident rate data is pretty convincing.  I confess
     obviously to not being an expert, but the writing, again, supports that. 
     People outside of the field seem to think they have good statistical
     information there.  And the little bit of nuclear power plant field data
     that there is, some of what the Brookhaven people did, Haber and her
     colleagues and the little bit that was done in the Pacific Northwest
     Laboratory work confirmed a correlation between safety culture elements
     and operational safety as they defined it.  There are not enough data,
     but what's there was positive.
         I'm going to, on the last slide, relate my impressions again
     as a non-practitioner as to what is missing from the literature.  Some
     of this, I've deduced from what other people have written and some just
     from my own feelings on the papers that I review.  There is a lack of
     field data relative to nuclear power plant operations.  There might be
     easy ways to get it, but right now, it's not there.
         One needs to understand the mechanism by which safety
     culture or other management and organizational factors affect safety. 
     We need performance indicators for safety culture or related things.  We
     need to understand the role of the regulator in promoting safety
     culture, and we need to know something about the knowledge, skills and
     abilities of the front line inspectors in a regulatory environment where
     safety culture is important.  One of the things that struck me in doing
     the research on this work is that we are -- we, the NRC -- are right in
     the middle of attempting to change the way we do regulation.  We are
     embarking on and evaluating a new reactor oversight process.  We are
     trying to convert our regulatory basis to something we're calling
     risk-informed and maybe performance-based, and other regulators
     elsewhere in the world, particularly in the UK, are observing that.  If
     one is going to make this kind of a change, then you probably cannot do
     it within the kind of prescriptive regulatory framework that the U.S. is
     using at the moment.
         That being the case, something called safety culture and how
     one fosters it becomes very important relative to the new regulatory
     process that we are expecting to implement.  There is certainly a
     reluctance on the part of the NRC to, you know, venture into anything
     that would smack of regulating management and an even stronger
     reluctance on the part of the industry to, you know, allow any small
     motion in that direction, but it seems to me that in the context of this
     new regulatory regime, management is terribly important, and at a
     minimum, the agency needs to understand in what ways it is important
     and how the agency can best foster this ownership of safety amongst its
     licensees, and I don't think we know that right now.
         That's all I have.
         DR. APOSTOLAKIS:  Yes; the big question is really what is it
     that a regulator can do without actually managing the facility.  That's
     really the fear.
         Dennis Bley, please?
         MR. BLEY:  My name is Dennis Bley.  I'm with Buttonwood
     Consulting.  I have to leave in just a minute --
         DR. APOSTOLAKIS:  Sure.
         MR. BLEY:  -- so I thought I would say a couple of words
     quickly.  The last 5 years, I've been on the National Academy committee
     overseeing the Army's destruction of chemical weapons, and the program
     manager for chemical weapons destruction has sponsored a lot of digging
     into this area, and I think maybe they would be willing to share what
     they've found.
         We've had people on our committee from DuPont, and, you
     know, the strong view from DuPont, coming back to what you were talking
     about earlier, is that if you get the little things under control, the
     industrial accident rates, those things, you won't have a bad accident. 
     A lot of people don't believe that.  They do very strongly.  Jim
     Reason's book you were talking about, I think the last chapter, tenth
     chapter, he goes into that in some detail.
         I kind of think from NRC's point of view, it gets difficult,
     because the expertise the Army has brought together to help them look at
     this in many places has all argued strongly that strong regulation and
     compliance don't get you where you want to be with respect to safety; it
     has to be the individual organization taking ownership, and all the way
     through, certain things are unacceptable, certain kinds of behavior are
     unacceptable by anybody, and that has to get embedded in the whole
     organization.
         Just an aside on ATHEANA, it would be -- where you pointed
     out where they would fit together, I think that's about right, and we've
     actually got, if you look at some of our examples, a little of that
     coming in but nothing like a real solid process for trying to find all
     of it.  But I think you can -- there has been so much work in this area
     by so many different people, including studies in industrial facilities,
     that it probably doesn't make sense to do it all over again.
         But I'll just leave it with that.  That's the one source
     I've seen where people have -- they've really tried to draw a broad
     range of expertise together to help them with the problem, which they
     haven't solved.
         DR. APOSTOLAKIS:  I believe the fundamental problem that we
     have right now is that people understand different things when the issue
     of culture is raised and so on.  There was a very interesting exchange
     between the commissioners and the Senators.  Senator -- I don't
     remember; Inhofe?
         DR. SEALE:  Inouye?
         DR. APOSTOLAKIS:  No, no, no.
         DR. SEALE:  Inhofe, yes.
         DR. APOSTOLAKIS:  He was told by former Chairman Jackson --
     no, it was somebody else, not the chairman -- something about culture
     and organizational factors, and boy, he said I've never heard -- he said
     I'm chairing another subcommittee of the Senate where we deal with
     Boeing Corporation and all of those big -- and I've never heard the FAA
     trying to manage the culture at Boeing and this and that, and how dare
     you at the NRC think about that?
         And then, of course, we have our own commission stopping all
     work, you know, overnight a year or so ago, and I think it's this
     misunderstanding; you know, I really don't think it's the role of the
     regulator to go and tell the plant manager or vice president how to run
     his plant.  On the other hand, there are a few things that perhaps a
     regulator should care about.  I don't know what they are, but for
     example, the existence of a minimum set of good work processes, in my
     opinion, is our business, and especially if we want to foster this new
     climate that I believe both Dennis and Jack referred to.  In a
     risk-informed environment, some of the responsibility goes to the
     licensee.  Now, we are deregulating electricity markets and so on, so
     that's going to be even more important.
         But I guess we never really had the opportunity to identify
     the areas where it is legitimate for a regulatory agency to say
     something and the areas where really it is none of our business, and
     it's the business of the plant.  And because of the fear that we are
     going to take over and start running the facility, we have chosen to do
     nothing as an agency.
         DR. SEALE:  Well, that goes to the question of where is it
     we ought to butt out?  Where should we butt out?  What are the things
     that we do that are counterproductive?
         DR. APOSTOLAKIS:  Absolutely right; absolutely right.
         DR. BONACA:  But again, I think if you want to talk about
     culture, about management up there, it's very, very hard, and again,
     we're struggling to find indications of whether an organization works or
     doesn't work.  At the industrial level, there are indications all over
     the place.  But those indicators have to do with, for example, does the
     work process work?  Is the backlog that people perceive they have
     overwhelming them?  What kind of -- absolutely.  And again, there is
     work being done inside these utilities to look at those indicators, and
     they don't even measure management per se; they simply indicate that
     something is wrong with the organization.  When something is wrong with
     the organization, you go to the management, and you change it, because
     you expect that you will be able to manage that.
         But I'm saying that it's probably feasible to come down to
     some of these indicators, and I think that the utilities are trying to
     do that.
         MR. SIEBER:  I would sort of like to add:  I've been to some
     regional meetings for clients of mine where the plants have been having
     problems, where the regional administrator or his staff has asked
     questions about performance indicators on productivity, and for example,
     a lot of these processes are just a bunch of in-boxes, you know, like
     your work process.  Which one is the in-box that has big holes in it? 
     Why isn't work getting done?  I've seen the NRC ask those questions.  I
     think they're legitimate questions, and on an individual basis, I think
     that they're appropriate questions, but I have not seen an initiative to
     ask them across the board.
         DR. BARTON:  They all do relate to cultural issues.
         MR. SIEBER:  That's right.
         DR. POWERS:  Yes.
         MR. SIEBER:  Each one of them by itself is an indicator, and
     I think industrial safety is a prime indicator.  You know, if you --
         DR. BARTON:  If it wasn't, they wouldn't spend so much time
     looking at it.
         MR. SIEBER:  Yes, and we actually hired DuPont, who is very
     good, to help us with ours, and our record, our accident rates, went
     down by over 90 percent.  I mean, it actually worked, and that's part of
     the culture.  If you can't make yourself safe, how can you make a power
     plant safe?
         DR. BARTON:  There are things you can look at without really
     getting into the management, so to speak, of the company.  I think you
     have to draw that line, because the industry is going to get nervous as
     heck.  They're just going to say -- they'll start looking at the safety
     culture and management's confidence and all that stuff.  I think there
     is a set of things that you can look at objectively and determine what
     is the culture of this organization.  You just have to figure out how to
     package it.
         DR. APOSTOLAKIS:  That's the problem.
         DR. BARTON:  How to package it.
         DR. APOSTOLAKIS:  That's the problem.
         DR. BARTON:  I expect that you're looking at a bunch of
     indicators right now that I would tell you fit into a box called
     culture.  Look at them right now.
         DR. BONACA:  Well, I mean, again, there have been efforts;
     I've been participating in one, and I believe that if you look at other
     people who do it, they're finding the same points.  Now, again, you're
     going down from opinions to objective readings of certain boxes of work
     being accomplished or not accomplished.
         DR. BARTON:  And that's the problem.  It's what you can do
     when you take this data, and you get it back to the region, and that's
     where people really get nervous now.
         DR. BONACA:  But I was talking about trying to correlate, for
     example, work process efficiencies and backlogs with actual outcomes
     that you can measure, to some extent, using PRA.  That's -- I mean,
     that's probably something that you can do.
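         [Editor's note:  The following is a minimal, hypothetical sketch of
     the kind of correlation Dr. Bonaca describes -- relating a work-process
     indicator such as maintenance backlog to a PRA-derived outcome such as
     estimated core damage frequency.  The language (Python), the data
     values, and the names are illustrative assumptions by the editor and
     were not presented at the meeting.]

# Hypothetical illustration: correlate a work-process indicator
# (corrective-maintenance backlog) with a PRA-derived outcome
# (estimated core damage frequency).  All values below are invented.
from math import sqrt

# (backlog work orders, estimated CDF per reactor-year) -- fabricated pairs
observations = [
    (120, 2.1e-5),
    (300, 3.4e-5),
    (85,  1.8e-5),
    (410, 4.0e-5),
    (150, 2.3e-5),
]

def pearson(pairs):
    """Pearson correlation coefficient for a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = sqrt(sum((y - my) ** 2 for _, y in pairs))
    return cov / (sx * sy)

print(f"backlog vs. estimated CDF correlation: {pearson(observations):.2f}")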
         MR. SIEBER:  One of the problems is that the boxes from
     plant to plant are not standardized, and the thresholds differ from
     plant to plant, so interplant comparisons are not very accurate.  On
     the other hand, you know, something is better than nothing.  And that's
     what plant managements use to determine the state of culture and how
     safe they are and how safe they aren't and how well their processes
     work.  That's how you run the plant.
         DR. POWERS:  One of the things that I find most troublesome
     right now is taking the DuPont experience, and this attitude I hear all
     the time, the Mayer approach toward safety -- you take care of all the
     little things, and the big things will take care of themselves -- versus
     saying we want to focus on the most important things in risk assessment.
     We seem to be treating those as opposing views.  I'm wondering if we
     really want the outcome we're going to get by going risk-informed.
         DR. SEALE:  I'm not so sure.
         DR. POWERS:  It seems like it's worth thinking about,
     because these things have been very successful in another industry.
         DR. SEALE:  The thing, though, is that the things that are
     getting ruled out, if you will, on the basis of not contributing to risk
     are not the little things that show up in the plant performance
     indicators.  They're truly the things that are not even on the radar
     screen.  At least that's my impression.  It's a good point, but I don't
     think you're talking about the same population when you say high risk
     versus low risk on the one hand and little things versus big things on
     the other hand.
         DR. APOSTOLAKIS:  Anyone from the staff or from the audience
     want to say anything?
         [No response.]
         DR. APOSTOLAKIS:  Okay; any other comments?
         [No response.]
         DR. APOSTOLAKIS:  Thank you very much.  We will adjourn. 
     So, this meeting of the subcommittee is adjourned.
         [Whereupon, at 3:10 p.m., the meeting was concluded.]