U. S. Food and Drug Administration
Center for Food Safety and Applied Nutrition
National Advisory Committee on Microbiological Criteria for Foods
December 8-10, 1999


National Advisory Committee on Microbiological Criteria for Foods

Meeting on Fresh Citrus Juice

Transcript of Proceedings

Volume I: Wednesday, December 8, 1999
Volume II: Thursday, December 9, 1999
Volume III: Friday, December 10, 1999


Volume III
Friday, December 10, 1999

PARTICIPANTS

COMMITTEE MEMBERS

James D. Anders
Dane T. Bernard
James S. Dickson
Stephanie Doores, Pennsylvania State University
Michael P. Doyle
Mel W. Eklund
Daniel L. Engeljohn, Ph.D.
Michael G. Groves
Michael L. Jahncke
John M. Kobayashi
John E. Kvenberg
Earl G. Long
Roberta A. Morales, DVM, Ph.D.
Marguerite A. Neill
Michael C. Robach
Leon H. Russell, Jr.
Skip Seward II
William H. Sperber
Bala Swaminathan, Ph.D.
Robert B. Tompkin

AGENCY REPRESENTATIVES

Janice Oliver, Deputy Director, Center for Food Safety and Applied Nutrition, FDA
Arthur P. Liang, MD, MPH, CDC Liaison
LeeAnne Jackson, FDA Liaison
Dr. Karen Hulebak, Executive Secretary

ALSO PRESENT

Dr. Paul Mead
Dr. Dale Hancock
Dr. Colin Gill
Dr. Isabel Walls
Dr. Charles Haas
Dr. Nancy Strockbine
Dr. Bonnie Rose
Dr. Mark Powell
Dr. Eric Ebel
Dr. Wayne Schlosser
Dr. Tanya Roberts
Ms. Peg Coleman

PROCEEDINGS

        MS. OLIVER:  Good morning.  Once again, my name is
Janice Oliver, and I'm Deputy Director for FDA's Center for
Food Safety and Applied Nutrition.
        I don't have to keep Bob Buchanan in place today.
I just have to keep Dane in place.
        DR. BERNARD:  Yes, ma'am.  What is this, "pick on
Dane" day?  Bruce is over here giving me all kinds of grief.
        MS. OLIVER:  I've got to pick on somebody, Dane.
        I'll be chairing the meeting again today.  Dr.
Kaye Wachsmuth is not able to be with us.  However, Dr.
Hulebak is here this morning from FSIS and will be assisting
me in chairing the meeting.
        This morning FSIS is going to be presenting risk
assessment models for E. coli 0157:H7.  There will be
various presentations throughout the day, each geared to run
about 45 minutes, with 15 minutes afterwards for questions.
The questions, as on the previous days, will be primarily for
the Committee and the invited experts to ask.  If there's
still time available, we'll then ask whether others here at
the meeting have questions also.
        First, the Committee is supplemented today by a
number of experts that FSIS has invited.  I would like to
turn it over to Karen Hulebak to introduce those, and then
after that, I will ask the entire Committee and the experts
to introduce themselves again for the record.
        DR. HULEBAK:  Good morning, everybody.  Thank you
all for being here to listen to this presentation by our
risk assessment team of our risk assessment for E. coli
0157:H7 in ground beef.
        In order to assist the Committee and to add to its
expertise, especially in view of the fact that David Acheson
and Alison O'Brien can't be here today, we've invited a
number of experts to take part in this discussion, some of
whom have arrived and, I believe, some of whom have not.
        Here with us presently are Dr. Isabel Walls, NFPA;
Dr. Colin Gill of Agriculture and Agrifood Canada; Dr. Paul
Mead of CDC; Dr. Nancy Strockbine of CDC, expected; Dr. Chuck
Haas of Drexel University; and Dr. Dale Hancock of
Washington State University.
        I'd like next to introduce the risk assessment
team, the FSIS risk assessment team.  They are seated at the
back of the room there.  Most of these folks are from the
Food Safety and Inspection Service.  The team is headed by
Dr. Mark Powell, and the members of the team include Dr.
Eric Ebel, Dr. Wayne Schlosser, Dr. Peg Coleman, and Dr.
Tanya Roberts, who's with USDA Economic Research Service.
        We have a full day of presentation and discussion
for you.  We'd like to take this day to make sure that you
hear from us in an appropriate level of detail, that is,
enough detail to inform you about our assumptions and the
model parameters and the outputs of the model with a degree
of detail that informs you and doesn't overwhelm you or,
worse yet, bore you.
        There are several questions that we'd like you to
keep in mind as you listen to the presentation.  With these
questions, we hope to focus your thinking about particular
aspects of the model, but please don't assume that this is
all we seek your comment on.  We would like you to consider
these particular questions, but we welcome your comment on
other aspects of the model or other questions about the model
that you might have.
        You have them before you.  The question about
resolution we will leave for later.  We may not get to it at
all today, but we would like you to consider the second
bullet there:  Is there evidence that would allow us in this
model to adjust for the specificity of microbial analysis?
That's our major cross-cutting question.
        With respect to the production section of the
model, can the Committee recommend a better way to link live
cattle to contaminated carcasses, the link that we try to
make in this model?
        Are there data or methods currently available that
would improve the quantitative links among fecal, hide, and
carcass contamination?  With respect to slaughter, what
evidence would be necessary to satisfactorily quantify the
link between hide and carcass contamination?
        Second, with respect to slaughter, we've attempted
to develop a model, a mechanistic model that follows product
through the slaughter plant.  Would it be preferable to
develop a strictly data-anchored model which does not
attempt to model processes between monitoring points?  If
that were possible, what data would be required to develop
such a model?
        Regarding preparation of product, rather than
modeling beyond the last point where validation is currently
possible for raw ground beef, would it be preferable to
consider simply a proportional relationship between the
prevalence of 0157:H7 in raw ground beef and the incidence
of 0157:H7 illness due to consumption of ground beef?
        Next, for preparation, how do we define a
plausible frequency distribution for extreme
time/temperature handling conditions in the absence of data?
        And then, finally, for dose-response, are there
sufficient data and methods available to develop a separate
dose-response relationship for the susceptible sub-
population?  How might we validate such a curve?
        Is the basic envelope approach sound?  And you
will hear more about that, of course, during the discussion
of dose-response.
        Is it appropriate to anchor the most likely value
for the dose-response, the beta plus one envelope?  The
envelope describes the various assumptions made about dose-
response covering the range of what we know.
        Again, please think about these questions as the
ones that we would most like to hear from you on.  Do not
limit your questions or your commentary to these particular
questions.
        Also, while this is the one day we have to present
this full model to you, we hope you take the opportunity in
the coming couple of months to give us whatever suggestions
you have and ask us whatever questions you have.  There's
some work we have yet to do on this model, and we have time
to incorporate any thoughts that you might have.
        Any questions at this point?
        [No response.]
        DR. HULEBAK:  All right.  Then let's dive right
in.
        MS. OLIVER:  Let me just ask the Committee before
we go further to introduce yourself for the record since
several members are not here that were here earlier, and
I'll start to my right, please.
        DR. WALLS:  Isabel Walls with the National Food
Processors Association.
        DR. GILL:  Colin Gill of Agriculture Canada.
        DR. RUSSELL:  Leon Russell, Texas A&M University.
        DR. JAHNCKE:  Mike Jahncke, Virginia Tech.
        DR. GROVES:  Mike Groves, LSU.
        DR. DICKSON:  Jim Dickson, Iowa State University.
        DR. SPERBER:  Bill Sperber, Cargill.
        DR. ROSE:  Bonnie Rose, FSIS.
        DR. SWAMINATHAN:  Bala Swaminathan, CDC.
        DR. MORALES:  Roberta Morales, Research Triangle
Institute.
        DR. ANDERS:  Jim Anders, North Dakota Health
Department.
        DR. LIANG:  Art Liang, CDC.
        MS. JACKSON:  LeeAnne Jackson, FDA CFSAN.
        DR. ENGELJOHN:  Dan Engeljohn, USDA FSIS.
        DR. DOYLE:  Mike Doyle, University of Georgia.
        DR. DOORES:  Stephanie Doores, Penn State
University.
        DR. ROBACH:  Mike Robach, Conti Group Companies.
        DR. KVENBERG:  John Kvenberg, Food and Drug
Administration.
        DR. NEILL:  Peggy Neill, Brown University,
Providence.
        MR. SEWARD:  Skip Seward, McDonald's Corporation.
        DR. LONG:  Earl Long, CDC.
        DR. TOMPKIN:  Bruce Tompkin, ConAgra.
        DR. BERNARD:  Dane Bernard, NFPA.
        DR. HANCOCK:  Dale Hancock, Washington State
University.
        MS. OLIVER:  Okay.  Thank you very much.
        With that, Mark Powell will now give the
introduction and scope of today's meeting.
        DR. POWELL:  Thank you.  Can everyone hear the
level fine?  Bring it in closer?  There, is that good?
Okay.
        Well, thank you.  On behalf of the USDA Food
Safety and Inspection Service E. coli 0157:H7 risk
assessment team, I'd like to thank the participating
agencies, members of the Committee, and the other invited
experts for providing us this opportunity to present the
draft FSIS risk assessment of E. coli 0157:H7 in ground
beef.  The agency views your input as a key element of the
scientific peer review process that underpins informed food
safety decision-making.
        Today we will be presenting the draft baseline
process risk model, that is, we will be presenting the model
of the as-is scenario that reflects the existing range of
practices and behaviors regarding the production, slaughter,
processing, preparation, and consumption of ground beef in
the U.S.  The baseline model does not include any assessment
of the potential public health impacts of alternative risk
mitigation measures, and our purpose in presenting the draft
model is for scientific peer review, not for discussion of
the risk management options or the policy implications of
the draft model.
        Next slide?
        The full risk assessment team consists of members
in addition to today's presenters.  The team has also
received significant contract support as well as input from
IFRAG, the Interagency Food Risk Assessment Group, which is
convened by the USDA Office of Risk Assessment and
Cost/Benefit Analysis, and we'd like to take this
opportunity to recognize their contributions.  In the
interest of time, the presenters will refer to E. coli
0157:H7 simply as 0157.
        Next?
        I will lead off today's presentation with some
background and a definition of the scope of the assessment.
Eric Ebel will then summarize the outputs of the exposure
segments of the model, and Wayne Schlosser will present our
efforts to correlate the exposure segments of the model with
surveillance data.  After a brief break, Eric Ebel will
present the production segment, and Tanya Roberts will
present the slaughter segment before lunch.
        Wayne Schlosser will begin the afternoon session
with the preparation segment, followed by Peg Coleman with
the dose-response analysis.  I will conclude the
presentations with a summary and then a comparison of the
model's predictions with an epidemiologic estimate of the
annual number of cases of 0157 due to ground beef.  For most
of these segments, we have budgeted 45 minutes for the
presentation and 15 minutes for questions and discussion.
        Next?
        This slide places the assessment into context.
Since 1994 FSIS has treated raw ground beef with 0157 as
adulterated under the Federal Meat Inspection Act unless
it is further processed in a manner that destroys the
pathogen.  Most recently, several new sources of
information have begun to emerge suggesting that the
prevalence of 0157 is higher than previously reported.
Recently, FSIS issued a draft white paper on 0157 indicating
that the agency is considering its policy in light of this
emerging information.  The production segment of the draft
risk assessment incorporates some of this new information
regarding herd and within-herd prevalence estimates.  But
many of these studies have not yet been finalized or
reported in the scientific literature.  Future iterations of
the model could incorporate new data as it becomes
available.
        Next?
        The 0157 risk assessment project began taking form
in March 1998 when I formed a resource group during the
formulation stage of the assessment.  In October 1998, a
public meeting was held to solicit input at an early stage
of the process and to release a preliminary document
describing the overall modeling approach and summarizing the
data that had been acquired by the team to date.
        Next?
        We have received peer input during the development
phase of the assessment through presentations at the Society
for Risk Analysis, or SRA, and IAMFES, and by convening a
week-long interagency workshop on microbial pathogens in
food and water that involved microbial risk assessment
practitioners from USDA, FDA, EPA, the UK, and New Zealand.
        The peer review process began earlier this week
with a presentation of the draft model at the 1999 SRA
meeting and continues today with the presentation before
this Committee.
        Next?
        Development of the E. coli 0157:H7 process risk
model, or ECOPRM, is intended to address multiple goals, and
at this point we have made the most progress towards
satisfying the first two goals of developing the baseline
model and comparing the predicted results to epidemiologic
estimates.
        Next?
The scope and nature of the risk assessment are a
function of the questions that decision-makers could pose to
the analysis.  If the only objectives of the assessment were
to estimate the magnitude of the problem of 0157 in ground
beef or, alternatively, to establish a risk-based standard
for ground beef products at the point of consumption, then
it would be sufficient to conduct an analysis of the
epidemiologic data or to analyze only the dose-response
relationship.  The process risk model, however, is intended
to provide a broader decision-making tool; therefore, the
bulk of the model is the exposure assessment, which contains
the analysis of occurrence, growth, and decline of the
pathogen from farm to table.  Our aim for the baseline model
is to be as consistent as possible with the observed data so
that we can use the model to identify potential critical
control points, evaluate public health impacts of
alternative mitigations, and identify key areas for
research.
        Next?
        The 0157 process risk model covers all aspects of
ground beef production and consumption from farm to table.
In the remainder of my presentation, I'll discuss the scope
of the assessment and the range of public health outcomes
associated with 0157 in ground beef.  The exposure
assessment consists of three sequential segments.  The
production segment outputs the prevalence of 0157 in live
cattle.  The slaughter segment outputs the prevalence and
levels and 0157 in beef trimmings that are destined for
grinding.  The preparation segment outputs the prevalence
and levels of 0157 in consumed ground beef servings.  This
final output of the exposure assessment feeds directly into
the dose-response analysis, and then the final output of the
model is the annual number of 0157 cases due to ground beef
in the U.S.
        Next?
        The scope of the assessment is limited to ground
beef as a vehicle of infection and, therefore, does not
include cross-contamination to or from ground beef or a
person-to-person secondary transmission.  The scope of the
present assessment is also limited to 0157 and, therefore,
does not include all entero-hemorrhagic E. coli.  However,
the paucity of reported outbreaks due to non-0157 EHECs,
combined with the higher isolation rates of serotype 0157:H7
in prospective studies, indicates that the other EHECs may
not attain the public health importance of 0157 in the U.S.
        The scope of the assessment is also annual and
national.  Although data are available at some points to
model seasonal or regional scale, insufficient data are
available to model slaughter, processing, preparation, and
other processes at seasonal or regional scales.
        Next?
        The scope of the draft assessment includes cooked
ground beef products.  The present draft assessment does not
include products containing ground beef that are prepared by
means other than cooking, for example, fermented sausages.
We also have not included raw ground beef consumption, which
is a very uncommon practice in the U.S., but the ingested
doses would be analogous to very undercooked ground beef,
and this is considered.
        Intact steaks and roasts are excluded because
potential surface contamination would very likely be
eliminated during cooking.  The present draft assessment
does not cover other non-intact cuts of beef such as steaks
or roasts that have been blade tenderized or injected with
needles that may introduce surface contamination into the
interior muscle tissue.  However, FSIS does plan to address
the other non-intact products in a subsequent iteration of
the risk assessment.
        Next?
        Infection with 0157 is associated with a variety
of public health outcomes ranging from asymptomatic carriage
to, in a minority of cases, death.
        Next?
        The primary risk assessment endpoint is the annual
number of cases of 0157 illness due to ground beef in the
U.S.  This total can be disaggregated into cases of bloody
and non-bloody diarrhea; severe cases, defined as cases of
bloody diarrhea in which the patient seeks medical care;
hospitalizations; cases of hemolytic uremic syndrome or
thrombotic thrombocytopenic purpura, HUS or TTP; and,
finally, the annual number of deaths in the
U.S. due to 0157 in ground beef.
        Next?
        This table characterizes our uncertainty regarding
the magnitude of the 0157 problem from all sources and that
attributable to ground beef.  I'll return later this
afternoon to the derivation of these figures from the
epidemiologic data, but our best estimate is that about 21
percent of all cases are due to ground beef.  Note that
there is considerable uncertainty regarding this
epidemiologic estimate derived independently from the
process risk model.  We will correlate this epidemiologic
estimate with the results of the baseline model.
        Next?
        Before concluding, I'll draw your attention to the
project's Website.  We can provide that to you later so you
don't have to copy it down if there's insufficient time.
This is where we'll post the risk assessment report and
model and other project-related information to make it
electronically accessible.  In addition, hard copies of the
report will be placed in the FSIS docket, and we invite all
interested and affected parties to submit comments on the
draft risk model and the relevant data to the FSIS docket.
        Unless they're brief, I'd ask in the interest of
time, since we're running a little late, that we hold any
questions or comments regarding the scope of the assessment
until the discussion period that immediately precedes our
lunch break.
        Now I have the pleasure of turning the podium over
to Eric Ebel to present the overview of the exposure
assessment outputs.  Eric?
        DR. EBEL:  Thanks, Mark.
        As we've progressed through this risk assessment
process, we've had occasion to present interim reports on
the model.  Feedback from these presentations has suggested
the need for something up front that ties things together
and gives the audience a feeling for the big picture of the
model.  Therefore, we want to begin our discussion of the
model with the end in mind.
        In this segment, we'll present a general overview
of summary outputs from the model as well as how these
summary outputs correlate with observed data generated
outside the model.  I'll be presenting the overview section
of this presentation, and Wayne Schlosser will present the
correlation section.
        Risk assessments are generally broken down into
exposure assessments and dose-response assessments.  In food
safety risk assessment, the exposure assessment models the
occurrence of doses of harmful pathogens in servings of a
commodity.  For this overview, we'll concentrate on the
exposure assessment of the 0157 in ground beef model.
        An important--sorry, go back to--I'm sorry.  There
you go.  Okay.
An important principle in risk assessment is the
separation of variability from uncertainty.  We'll discuss
this principle before presenting our results.  As we present
summary outputs of the model, we will describe the
variability in these outputs and the associated uncertainty.
We'll consider outputs from the production, slaughter, and
preparation segments of the model as all part of the
exposure assessment.  We won't go into any detail as to how
these distributions were derived at this time.  Each of the
model segments will be discussed in excruciating detail
later today.
        Variability describes naturally occurring
differences that we note within populations or between
populations.  Variability also results from sampling
something less than the whole population.
        In the model, frequency distributions represent
variability in the system.  For a given scenario of the
model, we consider these frequency distributions fixed.  For
example, within-herd prevalence varies from one infected
herd to another.  A frequency distribution describes the
proportion of affected herds at any given time that have,
let's say, 1 percent or 10 percent within-herd prevalence.
The number of organisms per square centimeter on a
contaminated carcass also varies from carcass to carcass.
But a frequency distribution describes what proportion of
contaminated carcasses have an average of, say, 0.1 CFUs per
cm2 or 1 CFU per cm2.
        The temperature that ground beef is exposed to
when handled out of compliance with the model food code
varies from instance to instance of noncompliance.  The
frequency distribution of the population of noncompliant
handling episodes describes this variability across the
population.
        DR. POWELL:  I just wanted to make the Committee
aware that there aren't handouts if you're looking to track
this presentation.  We have handouts just for the segments
that will be production, slaughter, preparation, dose-
response.  Just for clarification.
        MS. OLIVER:  Mark, can I ask you to introduce
yourself?  And I'd just remind everybody that for the
recording and for the transcription, if everybody could just
reintroduce yourself for the record each time you speak.
        DR. POWELL:  I apologize.  This is Mark Powell of
FSIS.  And I'll turn the podium back over now to Eric Ebel.
        DR. EBEL:  In contrast to variability, which is
simply a reflection of nature, the concept of uncertainty
refers to our confidence in the true value or true frequency
distribution of something.  Probability in most of our model
refers to a measure of confidence.  Probability is
equivalent to the likelihood of something occurring or being
correct.  So if we know that variability in the model is
represented by a frequency distribution and we are not
completely certain of which frequency distribution is the
true or correct distribution, we model several different
distributions to account for our uncertainty.
        Examples of uncertainty in the model include the
prevalence of fecal-shedding cattle at slaughter in a given
year.  There is some fixed prevalence, but we are uncertain
of the true fraction.  We also know that CFUs per cm2 on
contaminated carcasses can be described by a frequency
distribution, but we are uncertain as to the true frequency
distribution.  Similarly, the frequency distribution
regarding product temperature when out of compliance is
uncertain.
        As we propagate uncertainty through the different
stages of the model, we must consider whether our
uncertainty is independent or dependent.  Uncertainty
describes the likelihood that something is correct.  If we
are incorrect at the high end of one input, are we more or
less likely to be incorrect at the high end of another
input?  If the answer is no, then the uncertainties in model
inputs are independent.  Otherwise, they are dependent to
some degree.
One technique for modeling independent
uncertainty is called second-order modeling.  Basically this
involves taking random samples from all uncertainty
distributions and evaluating the results conditioned on
these random draws.  Another technique for handling
uncertainty is called boundary analysis.  Underlying this
approach is the assumption that uncertainty may or may not
be correlated.  We have chosen this approach for describing
uncertainty in the model for this presentation.
        Therefore, we've defined three scenarios to
propagate through the model:  a lower bounds, a most likely,
and an upper bounds scenario.  The most likely scenario uses
averages for uncertain inputs.  When considering frequency
distributions, we selected the central distribution from the
family of curves available.  The lower and upper bounds use
10th and 90th percentile values for all uncertain inputs, or
the extreme frequency distributions for those cases where a
family of curves is available.  These boundary scenarios
clearly represent a case where our uncertainty is positively
and completely correlated, but the interval between the
boundaries includes every other possible correlation,
including the assumption there is no correlation in our
uncertainty.
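        [For illustration only: a minimal Python sketch of
the boundary approach just described, using hypothetical
input values rather than the team's actual distributions.
Each uncertain input takes its low, central, or high value
in the three scenarios; the second-order alternative
mentioned above samples the uncertainties independently.]

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy model: prevalence = herd_prevalence * transfer.
        # Hypothetical 10th percentile, central, and 90th
        # percentile values for two uncertain inputs.
        herd_prev = {"lower": 0.05, "most_likely": 0.11,
                     "upper": 0.20}
        transfer = {"lower": 0.15, "most_likely": 0.35,
                    "upper": 0.60}

        # Boundary analysis: pair all-low, all-central, and
        # all-high values, treating the uncertainties as
        # completely positively correlated.
        for s in ("lower", "most_likely", "upper"):
            print(s, herd_prev[s] * transfer[s])

        # Second-order alternative: sample each uncertain
        # input independently and examine the output spread.
        draws = (rng.triangular(0.05, 0.11, 0.20, 10_000)
                 * rng.triangular(0.15, 0.35, 0.60, 10_000))
        print("second-order 10th/90th:",
              np.percentile(draws, [10, 90]))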
        We modeled ground beef production and consumption
from the farm to table.  We're dealing with a product that
originates from different classes of animals and changes
form as it moves from farm to table.  Furthermore, the
environmental conditions that the products and the 0157
organisms contained within them are exposed to depend on the
transportation, storage, and handling of the products.
        We modeled two general types of cattle operations.
Breeding operations are relatively small.  About 20 percent
of all cattle slaughtered in the U.S. are culled breeding
cattle.  On average, we assume that cattle culled from these
operations are slaughtered independent of one another.
        Feeding operations tend to be larger operations.
About 80 percent of the cattle slaughtered in the U.S. are
feeding-type cattle.  Cattle from these operations are more
likely to be shipped to slaughter with others from the same
operation and cannot be considered to be slaughtered
independent of one another.  Cattle in these feedlots are
usually shipped in truckloads of about 40 head.  We use the
40-head truckload as a basic unit for comparing live cull
and feeder cattle at slaughter.
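        [A sketch of how a frequency distribution for
shedders per 40-head truckload can be simulated.  The beta
distribution below is a hypothetical stand-in for the
model's actual within-herd prevalence distribution.]

        import numpy as np

        rng = np.random.default_rng(1)
        TRUCK = 40  # head per truckload, the basic unit

        # Variability: within-herd prevalence differs from
        # herd to herd (hypothetical beta shape).
        herd_prevalence = rng.beta(0.5, 6.0, size=100_000)

        # Feeding cattle ship together, so each truck shares
        # one herd's prevalence; shedders per truck are then
        # binomial.
        shedders = rng.binomial(TRUCK, herd_prevalence)

        # The histogram of shedders per truck is the kind of
        # frequency distribution shown on these slides.
        freq = (np.bincount(shedders, minlength=TRUCK + 1)
                / shedders.size)
        print(freq[:5])  # trucks with 0, 1, 2, 3, 4 shedders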
        This is a model output from the production segment
of the risk assessment.  It is a frequency distribution for
the number of culled breeding cattle that are shedding 0157
in their feces.  As this graphic shows, the number of
shedding culled cattle within a 40-head sample varies.  This
frequency distribution is the most likely scenario result.
        This graph overlays the upper and lower bounds
scenarios with the most likely scenario distribution from
the previous slide.  As these distributions show, the lower
bound predicts there are higher frequencies of smaller
numbers of infected cattle per 40-head truckload.
        This graph shows the same results for feeding
cattle.  Again, this graph overlays the lower and upper
bounds scenarios on the most likely distribution.  It is
clear from this analysis that there is less uncertainty
associated with feeding cattle than breeding cattle.
        The slaughter segment of the model comprises two
basic types of slaughter plants.  We model one plant type
that slaughters feeding cattle.  Ground beef is a by-product
of this model plant type.  We also model a plant type that
slaughters culled breeding cattle.  Ground beef is a primary
product of this model plant type.
        Overall, about two-thirds of all ground beef in
the U.S. is generated from feeding cattle, while the other
one-third is generated from culled breeding cattle.  For
each slaughter plant type model, two forms of meat trimmings
are aggregated.  Combos are modeled as 2,000-pound
aggregates of meat trimmings, while boxes are modeled as 60-
pound aggregates.
        This chart shows the log of CFUs in contaminated
combo bins generated from fed cattle.  As you can see, when
combo bins are contaminated, they are usually contaminated
with relatively low numbers of 0157 bacteria.  Note that
these represent total organisms in a combo.  The
concentrations per gram of contaminated combo bin would be
quite low since these bins contain about 1 million grams.
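        [The arithmetic behind "quite low" per-gram
concentrations, using a hypothetical total count from the
slide's axis.]

        import math

        COMBO_G = 2000 * 453.6  # 2,000 lb is about 907,000 g

        # A contaminated combo carrying, say, 2 logs
        # (100 CFU) of 0157 in total:
        total_log_cfu = 2.0  # hypothetical value
        conc = total_log_cfu - math.log10(COMBO_G)
        print(round(conc, 2))  # about -3.96 logs CFU per gram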
        Here's the same graph with the upper and lower
bounds overlaid.  This graph also shows the log CFUs in
contaminated combo bins, but these combo bins are generated
from culled breeding cattle.
        This is the same graph then with the upper and
lower bounds overlaid.
        Combo bins and boxes of meat trimmings are
composed of different ratios of lean to fat.  During the
mixing and grinding of trim, different numbers of combo bins
and/or boxes are combined to generate grinder loads of
ground beef.  The mixing and grinding of trimmings occurs in
large commercial operations or smaller retail settings, and
there's a wide variability in how trimmings are combined.
        Overall, about 92 percent of ground beef is
generated from grinding combo bins of trim.  The other 8
percent is generated from grinding boxes of trim or retail
trim.  Many products are generated from the grinding of meat
trimmings.  These varied products are also handled in many
different ways during distribution and preparation.
        The output from the preparation model is an
exposure distribution.  The most likely exposure curve is
shown here.  In this graph, the x axis is in log CFUs per
contaminated serving, while the y axis is in log number of
servings.  The shape of the curve suggests that contaminated
servings are most frequently contaminated with small numbers
of organisms.
        This is the same exposure distribution with the
upper and lower bounds overlaid.  These boundaries suggest a
great deal of uncertainty regarding the true exposure
distribution.
        This is our last slide in this overview
presentation.  It summarizes average model output across the
three exposure segments.  It's a bit busy, so let me explain
it.
        All of the numbers here are averages.  We've
weighted breeding and feeding output by the production of
cattle and product generated by each of the types.
Furthermore, the concentration data is represented in all
cases on a per-gram basis.  Finally, these results reflect
the most likely scenario for the model's outputs.
        The bars show the prevalence at each stage.
Starting at the left, we see that an average of 11 percent
of all live cattle enter slaughter plants shedding 0157 in
their feces to some degree.  The average prevalence of
contaminated carcasses just after dehiding is 4 percent.  As
we aggregate trim from carcasses into combo bins, we see the
prevalence of combo bins with at least one CFU of 0157 in
them is 23 percent.  As we aggregate combo bins into grinder
loads, the average prevalence of contaminated grinders
generated from combo bins is 81 percent.
        Finally, after preparation and cooking of ground
beef meals, the model predicts that about 2 in every 100,000
servings contain one or more 0157 organisms.  The line in
this graph shows the average log CFUs per gram of
contaminated material.  Although we don't explicitly model
the number of 0157 organisms per gram of feces, we use an
average of 2.5 logs from published data here.
        On carcasses, the model predicts an average of
negative 1.5 logs per gram of trim generated from
contaminated carcasses.
        As trim from multiple cattle are aggregated into
combo bins, the average concentration per gram of combo bin
decreases to minus 4.5 logs.  Because there is some
possibility for multiplication of 0157 within combo bins,
the concentration increases slightly in grinder loads.
        Finally, because the average serving size is
around 100 grams, the concentration per gram of contaminated
serving increases to about minus 1 log, or about ten 0157
organisms per contaminated serving.
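        [The averages quoted in this summary, restated, with
a check on the final figure.]

        # Most likely average outputs from the overview.
        prevalence = {
            "live cattle shedding": 0.11,
            "carcasses after dehiding": 0.04,
            "combo bins with >= 1 CFU": 0.23,
            "grinder loads from combos": 0.81,
            "contaminated servings": 2e-5,
        }
        # Average log10 CFU per gram of contaminated material.
        log_conc = {"feces": 2.5, "trim": -1.5,
                    "combo bin": -4.5, "serving": -1.0}

        # Check: a 100-gram serving at -1 log CFU per gram.
        print(100 * 10 ** log_conc["serving"])  # ~10 organisms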
        Now, this finishes our overview of the model.
We'll proceed now directly then to the correlation of model
outputs.
        DR. SCHLOSSER:  I'm Wayne Schlosser from FSIS.
        Models should reflect the state of the world to
the extent data is available to describe it.  Consequently,
we attempt to correlate the model output with the state of
the world by either anchoring the model to real data within
the model or by validating the model with data external to
the model.  This correlation offers assurance that the model
does reflect the state of the world to the extent possible.
        Where possible, we've considered the implications
of surveillance data within the structure of our model
inputs.  In some cases, we needed to develop intermediary
empiric models to analyze the surveillance data.  These
empiric models then supply particular inputs for the final
model.  Comparison of the model output to real-world data is
known as validation.  In general, data used to validate the
model is not included during construction of the model.
This data thus provides an independent benchmark for
comparison.  In some cases, independent data is not
available for validation.
        Data for correlation purposes needs to be
representative.  Fortunately, FSIS has analyzed samples for
0157 from a cross-section of the slaughter and processing
industries.  For example, year-long baseline studies of
carcass contamination were conducted prior to implementing
HACCP.
        FSIS also routinely collects ground beef samples
for 0157 analysis.  Recently, a study in Canada was
published which surveyed cattle status at the slaughter
plant.  We compared the implications of these three sources
of data with our model outputs for the exposure segment of
the model.  And, of course, human case number estimates are
also available for comparison with our model's predictions.
Mark Powell will discuss those comparisons later today.
        Whatever the surveillance data might be, it
usually needs to be adjusted to account for uncertainty.
Point estimates of percent positive will not suffice in
describing our confidence in the results.  In some cases, we
need to account for the sensitivity of methods used.  We
must also recognize the effect of sample size, both number
of samples and the quantity of sample collected in these
surveillance data.  Therefore, surveillance data is
represented in our analysis with attendant uncertainty.
        As we mentioned previously, our model output
uncertainty is represented by lower and upper bounds.  For
comparison with surveillance data, we represent modeled
output as confidence bars extending from the lower to upper
bound, with the most likely output indicated between these
extremes.
        The first point in the model where data exists for
comparison is the frequency of live cattle that are fecal
shedders at the slaughter plant.  This Van Donkersgoed study
was conducted in a Canadian slaughter plant during a one-
year period.  Since we did not use this data in developing
our estimates for the production segment of the model, this
comparison can be considered strictly as validation.
        Overall, the study found 12 percent of steers and
heifers were 0157 positive at slaughter, while 2 percent of
culled cows were positive.  The study used very sensitive
fecal sampling and culturing methods, so little adjustment
for sensitivity was needed to compare these results with the
output of the production segment of the model.
        This graph compares the uncertainty distribution
for the Canadian study's culled cows to the model's output
for cows and bulls just before slaughter.  The red line
represents the range between the upper and lower bounds of
the model, with the green diamond representing the most
likely value.  The blue line is the likelihood distribution
for prevalence derived from the Canadian data.
        While there is some overlap between this
surveillance data and the modeled output, the model is
predicting slightly greater prevalence relative to the
Canadian study.  In this graphic, the Canadian data has been
adjusted for test sensitivity, and the relative likelihood
of prevalence has been calculated using the binomial
distribution.
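        [A sketch of the likelihood calculation described
here, applied to the Canadian culled-cow results (12
positives of 593, quoted later in the discussion) and the
96 percent test sensitivity Dr. Ebel mentions later.  This
is an illustration, not the team's code.]

        import numpy as np
        from scipy.stats import binom

        SENS = 0.96       # assumed test sensitivity
        POS, N = 12, 593  # Canadian culled-cow results

        # A shedding animal tests positive with probability
        # SENS, so observed positives follow
        # Binomial(N, p * SENS) for true prevalence p.
        p_grid = np.linspace(0.001, 0.10, 200)
        likelihood = binom.pmf(POS, N, p_grid * SENS)
        likelihood /= likelihood.max()  # relative likelihood

        # Peaks near 12 / (593 * 0.96), about 2.1 percent.
        print(p_grid[likelihood.argmax()])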
        This graphic shows how the Canadian data match up
with the modeled output for steers and heifers just before
slaughter.  In this case, the data and the model clearly
overlap.
        Moving on, we considered the FSIS baseline
sampling data collected prior to HACCP implementation.
Samples representing three separate areas of approximately
300 square centimeters were collected from carcasses of cows
and bulls and steers and heifers.  In the steer and heifer
baseline study, approximately 0.2 percent of carcasses were
positive for 0157.  Cow and bull carcasses yielded no
positive results.
        Enumeration of the positive samples revealed that
the most probable number of organisms on the positive
sampled areas ranged from 0.03 CFU per cm2 to 3 CFUs per
cm2.
        This sampling data was used to construct a simple
empiric intermediary model.  In this model, we assumed the
amount of carcass surface area contaminated could range from
300 square centimeters, the areas sampled in the baseline
study, to about 30,000 square centimeters, or the entire
surface area of the carcass.
        If we assume the entire surface area of the
carcass is contaminated, then we would expect that FSIS
sampling methods, given the number of bacteria present,
would identify 77 percent of all contaminated carcasses.  On
the other hand, if only 300 square centimeters were
contaminated, the sensitivity of the sampling procedure
drops to about 25 percent.  These bounds on sensitivity thus
allow us to predict the prevalence of positive carcasses to
be from about 0.25 percent to 0.75 percent.
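        [One plausible reading of how these bounds arise:
dividing the observed baseline rate by each sampling
sensitivity.  The quoted range of 0.25 to 0.75 percent is
close to, though not identical to, this simple division.]

        OBSERVED = 0.002   # 0.2 percent in the baseline study

        sens_whole = 0.77  # whole surface (~30,000 cm2)
        sens_small = 0.25  # only 300 cm2 contaminated

        # True prevalence = observed rate / detection chance.
        print(OBSERVED / sens_whole)  # ~0.0026, lower bound
        print(OBSERVED / sens_small)  # 0.008, near upper bound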
        We next constructed simulated combo bins, each
holding trim from 75 cattle.  The resultant frequency
distribution for contamination in combo bins allowed us to
predict the frequency and extent of contamination in grinder
loads.  The model then simulated ground beef sampling and
testing in accordance with the FSIS procedures.
        When we tested our upper bound assumption that the
entire surface area of the carcass was positive, the model
predicted that about 0.14 percent of 25-gram samples would
be positive and about 1.4 percent of 325-gram samples would
be positive.  FSIS ground beef sampling data for 1995
through 1997, however, yielded only 0.08 percent positive
25-gram ground beef samples.  In 1998, with a larger sample
size of 325 grams, FSIS found 0.33 percent of ground beef
samples positive, still well below the upper bound predicted
by the model.
        The lower bound assumption of 300 square
centimeters of contaminated area significantly
underpredicted the number of positive samples that would be
found.  Thus, a value for contaminated surface area
somewhere between these extremes seemed likely.
        When we assume a contaminated surface area of
3,000 square centimeters, which is the log midpoint between
assuming the entire surface area is contaminated, and
assuming only 300 square centimeters are contaminated, the
predicted number of positive ground beef samples is
consistent with both the 25-gram and 325-gram sample size
results reported by FSIS.  Thus, we anchor the contaminated
surface area in our full slaughter model at 3,000 square
centimeters.  So let's look at the output generated from
that slaughter model.
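        [As a check on the arithmetic before turning to that
output: the log midpoint of 300 and 30,000 cm2 is their
geometric mean.]

        import math

        print(math.sqrt(300 * 30_000))  # 3000.0 cm2, as anchored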
        This chart compares the prevalence of positive
carcasses from cows and bulls predicted by the model with
FSIS sampling data.  The dark blue line represents the
likelihood of different prevalence levels, given the
sampling data.  The model tends to slightly overpredict the
number of positive carcasses when compared to the sampling
data.  Please note that the range of uncertainty from the
model is due to the cumulative effect of all the uncertain
inputs that contribute to this output as well as the method
we are using to communicate our uncertainty.
        This chart is similar to the previous one, except
steers and heifers are compared, and as in the previous
chart, the model tends to slightly overpredict compared with
FSIS sampling data.
        As you saw earlier, we used FSIS ground beef
sampling data in constructing our intermediary models.  From
1995 through 1997, FSIS used a sample size of 25 grams to
represent a grinder load and found four positive samples out
of 4,999 collected.  In 1998, FSIS began using a sample size
of 325 grams and found 12 positive samples out of 3,597
collected.
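        [A sketch of why the larger sample size finds more
positives, under an assumption (mine, not stated in the
presentation) that organisms are Poisson-dispersed within a
contaminated grinder load.]

        import math

        # Probability that a sample of m grams from a load
        # contaminated at c CFU per gram contains at least
        # one organism, assuming Poisson dispersion.
        def p_positive(c: float, m: float) -> float:
            return 1.0 - math.exp(-c * m)

        c = 10 ** -4.0  # hypothetical: -4 logs CFU per gram
        print(p_positive(c, 25))   # ~0.0025 for 25-g samples
        print(p_positive(c, 325))  # ~0.032 for 325-g samples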
        This chart shows the overlap of ground beef
sampling predicted by the model with the actual likelihoods
calculated from FSIS testing of 25-gram samples.
        This chart shows the same overlap for 325-gram
samples.
        In conclusion, the model is anchored in observed
data as we look at live cattle, carcasses in the slaughter
plant, and samples of ground beef leaving the grinder.
Unfortunately, there is no data available that directly
measures the number of humans that are actually exposed to
0157 from ground beef.  Also, as we noted, the model output
boundaries tended to be wider than the confidence limits of
the data.  This is to be expected considering all of the
uncertain inputs to our model and the type of uncertainty
analysis performed.  This analysis propagates increasing
uncertainty as we progress from farm to table.
        We'd be glad to answer any questions you might
have regarding both the correlation analysis and the
overview.
        DR. GILL:  Colin Gill, Agriculture Canada.  Aren't
the observed data numbers so small that you can't really
make any correlation at all between your model and the
observed numbers?  I mean, if somebody licked their finger,
you would get--it would throw your correlations right out.
        DR. SCHLOSSER:  Well, we didn't think so.
        DR. EBEL:  Which data are you talking about?
        DR. GILL:  The number of positive samples in the
observed data are so small that I can't see how you can
correlate anything with your model.
        DR. EBEL:  Is that concerning carcasses or ground
beef or--
        DR. GILL:  The whole lot.
        DR. EBEL:  --or all of it?
        DR. GILL:  All of it.
        DR. EBEL:  Well, the data is what the data is.
Certainly, as data increases, your confidence in what the
data is saying narrows down, suggesting a higher
likelihood for the implied prevalence or
concentration, or whatever it is we're measuring, but it
certainly reflects what those results were and the
distributions in terms of the uncertainty are reflected, I
mean, as objectively as we can reflect them.  To add
increased uncertainty beyond what the data implies doesn't
seem warranted in this case.
        MS. OLIVER:  Dane?
        DR. BERNARD:  Thank you.  Dane Bernard.
        Your consideration was fed cattle, steers and
heifers, and culled breeders, and I'm not a professional in
the beef industry by any measure, but I expected some culled
dairy animals possibly to be included.  Is this not a
significant portion of meat that goes into ground beef comes
from culled dairy animals, or am I mistaken there?
        DR. SCHLOSSER:  We've included both culled dairy
and culled beef animals in the breeders.
        DR. BERNARD:  Okay.  And another question.  The
Canadian study that you referred to, that study also
included culled animals?  Was it targeted to culled animals?
Because I noticed you compared the outputs from the Canadian
study with the calculations that you'd made on culled
animals in the States.
        DR. EBEL:  They actually stratified their results
based on culled breeding cattle and feeding cattle.  So we
actually have those results summarized.  I don't know if we
can page up to that.  Maybe we can.
        Those results there at the bottom of that slide
are the reported results from the Canadian study, so 12 out
of 593 culled cattle were sampled and found positive.
        DR. BERNARD:  Not to roll back the clock two days,
but I just wanted to make sure we're comparing apples to
apples here.
        In addition, I'm assuming that the cattle that
would have been in the Canadian study would have come from a
climate somewhat more northern than most of the cattle that
be in the U.S. study.  I have seen papers that seemed to
relate geographic areas with prevalence of certain pathogens
and related to climate.  Is there an effect there that
should be compensated for or considered?  I notice we had,
you know, some uncomfortably large uncertainty bars there,
and, again, I'm not a professional with that, but I'm
wondering how much might be due to factors that may not have
been compensated for.
        Thanks.
        MS. OLIVER:  Jim?  And if I could ask the
presenters when you're speaking, since you have two of you,
to identify yourself before giving responses.  Thank you.
        DR. DICKSON:  Jim Dickson, Iowa State University.
I think it's a general question.  Is this information, are
these graphs available on your Web page?  Because I had some
specific questions on the data which I'd really--I'd like to
have the graphs in front of me rather than trying to watch
them on the screen as they go by.  Is there an opportunity
to see all this on your Web page or where would we get
copies of this?
        DR. POWELL:  When the draft report is produced,
we'll place that on the Web page, and a copy will be
submitted to the docket.  This is Mark Powell responding.
        DR. DICKSON:  But as we sit here today, there's
not an opportunity to get a hard copy of this, then?
        DR. EBEL:  Do we have a hard copy of this
presentation available?
        DR. POWELL:  Do we have the capability to do that,
Karen?
        DR. HULEBAK:  I think you do, yes.
        DR. POWELL:  Yes, we'll have the disk taken over
and get hard copies made.
        DR. DICKSON:  It doesn't necessarily have to be
today, but if we could get copies of it for--
        DR. HULEBAK:  Okay, sure.  Any one of you who
wants more information, which is absolutely available, about
any section of this discussion today, let us know and we'll
send it to you forthwith.
        DR. DICKSON:  Okay.  Thank you very much.
        DR. HULEBAK:  I'd also like to acknowledge the
recent arrival of Dr. Chuck Haas, Drexel University, and Dr.
John Kobayashi.
        MS. OLIVER:  Thank you.  Mike Doyle?
        DR. DOYLE:  Mike Doyle, University of Georgia.
I'm a bit unclear as to where you come up with some of these
numbers.  For example, you've got 11 percent of the cattle
shedding E. coli 0157.  What's the basis for that?
        DR. EBEL:  Well, I guess, as we started off, we
are going to be presenting each of the segments of the model
in sequence, but we wanted to give sort of the results up
front, and we hope that some of these distributions will
become clearer as the day goes on in terms of how they were
derived.
        DR. DOYLE:  All right.  I'll wait.  Thank you.
        MS. OLIVER:  Skip?
        MR. SEWARD:  Skip Seward, McDonald's.  If I read
your one slide correctly on the combos when you were
predicting contamination levels, then your comparison on
that was to data which you had for ground beef.  Is that
because--if I saw that correctly, is that because you just
didn't have data on combo contamination and that's why you
used that as a comparison?
        DR. EBEL:  Yes, we don't have any data available
that represents a good cross-section of combo bins.  The
comparison was actually at the grinder load levels, which,
you know, represents an aggregate, and each grinder
represents two or more combo bins that have been combined to
be ground.  So the actual comparison is at the grinder load
level.
        MR. SEWARD:  So your levels would be higher there
based on what you showed earlier.
        DR. EBEL:  Right.  The prevalence of contaminated
grinder loads is higher than the prevalence of contaminated
combo bins coming out of our model.  But the actual sample--
the comparison is really at a sample level, a sample taken
from a grinder load, what's the likelihood of it being
positive, which incorporates both the prevalence of
contaminated grinder loads, but also how many organisms are
in there that are even available to be detected.  So as you
see from this, the raw prevalence data suggests very low
frequencies of draws would be contaminated based on just
going out and randomly sampling contaminated--or across the
population of grinder loads.
        MR. SEWARD:  Thank you.
        MS. OLIVER:  Mike Jahncke?
        DR. JAHNCKE:  Mike Jahncke, Virginia Tech.
        Getting back to a comment that Dane made, is there
any attempt here to split out your culled dairy from your
culled cattle, or are they lumped together?
        MS. OLIVER:  Can you please identify yourself
again for the record?
        DR. EBEL:  Eric Ebel.  We'll go into that in the
production segment, but just to say that we did not separate
dairy from beef cow/calf cull animals.  We considered them
combined because that's typically how they're managed at the
slaughter plant level, and statistics are available at that
level of aggregation.  So we haven't separated dairy from
beef industry cull animal, but consider them together.
        DR. JAHNCKE:  Is there a possibility at some point
that you can split those out, or is it just a function of
insufficient data to be able to split them out?
        DR. EBEL:  I guess that's a justification at this
point.  We don't have very much evidence on the beef
cow/calf side.  As we get into the production segment,
hopefully some of that evidence will come out.
        DR. JAHNCKE:  Thank you.
        MS. OLIVER:  Swami?
        DR. SWAMINATHAN:  Bala Swaminathan, CDC.  I just
needed a clarification.
        On the comparison of the Canadian surveillance
data with the model output, the model prediction was higher
than what the surveillance data indicated, and you made a
statement--also the Canadian study apparently used a more
sensitive method.  You made a statement that the Canadian
data were adjusted for test sensitivity.  Could you clarify
that, please?
        DR. EBEL:  We adjusted for the sensitivity, but we
did want to point out that the methods that were used up
there represent more sensitive methods than have been used
maybe--I don't want to say "traditionally"--I'm sorry,
again, Eric Ebel--have been used traditionally, but we still
needed to adjust that data because what we're trying to
represent in the model would be what we would call a true
prevalence of, you know, cattle that are shedding 0157
organisms, and the data that is presented on the Canadian
research still is going to have some false negative results
in there.  So we wanted to adjust for that, and I believe we
used a sensitivity of 96 percent, so that 96 out of every
100 infected cattle would actually be detected in the
Canadian study based on that assumed test sensitivity.  But
we still needed to make that adjustment as we made the
comparison.
        DR. ROSE:  Bonnie Rose, FSIS.
        Wayne, for the 1992 to '94 steer heifer and
cow/bull baseline studies, did you indicate that the sample
size was 300 square centimeters, or did I hear that
correctly?
        DR. SCHLOSSER:  Three separate areas of 300 square
centimeters.
        DR. ROSE:  I believe the total area sampled was 60
square centimeters by the excision method, 20 square
centimeters at each site.
        DR. SCHLOSSER:  Okay.
        DR. ROSE:  The actual analytical unit was 60
square centimeters.
        DR. EBEL:  Yeah, sample area sampled.
        MS. OLIVER:  Mike?
        DR. ROBACH:  Mike Robach, Conti Group.
        My question also relates to the sampling, and I
was wondering when you were considering the 300 centimeters
versus the whole carcass assumptions, were you looking at
the 300 centimeters randomly or was this a site-directed
sample?
        DR. EBEL:  Eric Ebel.  We assumed basically a
random sample in our analytic approach.  We didn't use a
targeted approach, obviously.  In the baseline study there
were targeted areas that were actually samples, so this was
a simplification in our analysis that we made.
        DR. ROBACH:  Well, just so I understand, so the
300 square centimeter sample in the model would have been a
random 300-centimeter site; is that correct?
        DR. EBEL:  Right, times three.
        DR. ROBACH:  Times three.
        MS. OLIVER:  Are there any other questions?  Leon.
        DR. RUSSELL:  Leon Russell, Texas A&M University.
        You mentioned early prevalence of fecal shedders
per year.  How was that data collected?  Was that purely
prevalence or was that cumulative incidence?
        DR. EBEL:  I believe the context--this is Eric
Ebel--the context that was in the example of uncertainty,
that if we wanted to know the prevalence of fecal shedders
among cattle killed in a given year across all slaughter
plants in the US, you know, we would obviously need to sample
with
100 percent accuracy, but whatever estimate we get we're
going to have some uncertainty about that number.  We
haven't measured--and as far as I know, nobody has measured
prevalence of fecal shedding in cattle across all slaughter
plants in the US, but at some point, when that data becomes
available, there will be attendant uncertainty simply
because we can't sample all the cattle, and as a
consequence, any estimate has some measurement error in it.
        DR. RUSSELL:  Thank you.
        MS. OLIVER:  Peggy.
        DR. NEILL:  Peggy Neill.
        I'm not sure if I should direct the question to
you all or to the Committee at large.  Is cattle slaughter
equally distributed or equally frequently conducted across
all months of the year?  Because I think you can see where
I'm going, is that if cattle slaughter does not occur
equally across months of the year, and frequency of fecal
shedding is not equal across months of the year, then the
model might have to take that into account.
        DR. EBEL:  Well, we'd be glad for somebody else to
comment.  As far as we know, we believe that in general
there's a uniformity.  Certainly within the culled breeding
cattle part of it, there's some seasonality or cycles that
go on as a result of seasonal patterns in breeding and so
forth, but in general, we would believe that the numbers of
cattle
slaughtered on a monthly basis are relatively constant, as
far as we can remember.
        MS. OLIVER:  Dan, do you have a response to that?
        DR. ENGELJOHN:  Dan Engeljohn, USDA.
        Yes, I would say that it is reasonably uniform
throughout the year, and we certainly have access to that
information, so we can come up with it.
        Can I follow up with a question?
        MS. OLIVER:  Sure.
        DR. ENGELJOHN:  I think this is directed towards
Wayne.
        You had made a statement that with regard to the
combo bins, that contained 75 cattle or the product of 75
cattle.  I'm just curious.  Is that something that you knew
or is that an assumption that you had made?
        DR. SCHLOSSER:  That was an assumption just for
that basic model that we used, trying to correlate as we
were going along.  As we go into the slaughter model you'll
see that we have a range to that, varying from just a few
cattle if it's cows and bulls, to perhaps more cattle than
that if it's steers and heifers.
        DR. ENGELJOHN:  This is Dan Engeljohn, the follow
up.
        Just to let you know, we do have information, or
we have received information about what is expected to be in
a combo bin in terms of what that represents, so I'm curious
to hear what you have to say later.
        DR. POWELL:  This is Mark Powell.  And again, I'd
just like to follow up that the intermediate empiric model
that Wayne presented was simply designed to try and get our
best estimate for the surface area that would be
contaminated on a carcass between the bounding estimates.
Okay.  So it is an input into the slaughter model, and the
model that Wayne elaborated was simply designed to identify
the most likely within those bounds as an input, not as an
output to the model.
        MS. OLIVER:  Thank you.  With that, we'll take a
15-minute break and come back at about 9:35.  Thank you.
        [Break from 9:16 a.m. to 9:38 a.m.]
        MS. OLIVER:  The first thing I would like to
announce is that we didn't say anything about public comment
before, and the agenda didn't have anything, but if somebody
wants to sign up for public comment, we'll have an
opportunity for that later.  You can sign up at the desk
here or at the table outside, and we'll allow that.  We'll
see how many people sign up and find an opportunity for
that.
        Our next presenter is --
        DR. EBEL:  Eric Ebel.
        MS. OLIVER:  Right, Eric Ebel on production and
Karen Hulebak is going to introduce the section on the
questions that FSIS wants answered.
        DR. HULEBAK:  Thanks, Janice.
        Just to reiterate, as you listen to this section
on production, keep in mind the following two questions.
Can you, the Committee, recommend a better way to link live
cattle to contaminated carcasses?  And second:  Are there
data or methods currently available that would improve the
quantitative links among fecal, hide and carcass
contamination?
        DR. POWELL:  This is Mark Powell.
        I have a housekeeping comment, and that is that
there have been some typos and other modifications made to
the handouts that were sent out to you earlier.  We
apologize for that, but we will get a final set to you of
what is presented on the screen.  I don't think there should
be any problem following the rest--the remainder of today's
presentations, and we will be submitting the final copy to
the docket and distributing it to the Committee and the
invited experts.
        MS. OLIVER:  Thank you.
        I'd just like to make one more announcement, and
that is that Paul Mead and Nancy Strockbine from CDC have
now joined us as the invited experts, so welcome.
        And now we'll continue on with Eric Ebel.  Excuse
me.  Jim Dickson had a question.  I'm sorry.
        DR. DICKSON:  This is just a general comment.  Jim
Dickson at Iowa State.
        I don't know how the rest of the Committee feels,
but I would be more than happy to take these in electronic
format, as opposed to getting another stack of handouts to
take home or carry with me in the mail.
        MS. OLIVER:  Sure.
        DR. POWELL:  Thank you.  And we can accommodate
that.  We have it electronically available in PowerPoint
format.  For those of you that operate in another
environment, I apologize.  We would be more than glad to
send the data electronically.  It's much easier on us.  So,
yes, we'll do that, we'll send them electronically, and if
you could just let the Secretariat know if that's going to
be inconvenient for you, and we can make arrangements to
have them sent via hard copy.
        MS. OLIVER:  Right.  I think that's best.  We'll
send--our default will be to send electronically.  If you
want hard copy, please let me know.  Okay.
        DR. EBEL:  Thank you.
        Undoubtedly 0157 contaminated or infected cattle
entering the slaughter process influence the contamination
of ground beef.  Yet, our understanding of a quantitative
association between incoming status of slaughter cattle and
outgoing status of meat harvested from the cattle is
limited.
        At this point the quantitative link between pre-
harvest and post-harvest contamination is only established
for those cattle that are fecal shedders of 0157, and that
link is tentative.  Consequently, we will limit our modeling
of live cattle status to fecal shedding.  We expect that
data linking hide contamination to carcass contamination is
forthcoming, however.
        The production segment is the first part of a
farm-to-table model.  Its purpose is to simulate the
proportion of live cattle at slaughter that are 0157
infected.
        There's a lot of data pertaining to the occurrence
of 0157 in live cattle.  Therefore, our challenge in this
segment is to coalesce this sometimes conflicting evidence
into a cohesive picture of what we think the true occurrence
is.  I will present information on the development of the
production model and the data used to estimate its
variables.  I will also present some provisional results, as
well as discuss data gaps for this model that could be
filled through additional research.
        The 0157 Process Risk Model--doesn't want to stay
up, does it--they say the process risk model begins where
the production of beef begins, at the farm.  Most of the
information available on the occurrence and distribution of
this organism in US livestock has been collected during
surveys of farms and feedlots.
        Many risk factors hypothesized to influence 0157
status in cattle are factors that apply to whole herds.  An
important reason for incorporating the farm in the process
risk model is that reductions in the prevalence of affected
cattle entering slaughter plants will be accomplished
through actions on the farm or feedlot.
        The production segment separates culled breeding
cattle from feeding cattle.  We do this because the
slaughter, processing and distribution of meat from these
types of cattle are different.  Feeding cattle are defined as
cattle sent to slaughter from feedlots.  Typically, these
cattle are steers and heifers.  Steers and heifers comprise
about 80 percent of all cattle slaughtered in the US
annually.  Culled breeding cattle are defined as cattle sent
to slaughter from dairy or beef cow calf herds.  These
cattle are typically mature cows or bulls.  Cows and bulls
comprise about 20 percent of all cattle slaughtered in the
US annually.
        The three general stages of the production segment
are:  on-farm, transportation and slaughter plant.  The on-
farm stage estimates the within-herd prevalence of 0157
infected cattle and the herd prevalence of 0157 affected
herds.  In the transportation stage we considered the effect
of transit time and commingling on the transmission and
amplification of 0157 infections.  Yet, all the
observational evidence suggests that there is no substantial
difference in fecal prevalence between the farm or feedlot
and the slaughter plant.  Therefore, we model no change in
prevalence between the farm or feedlot and the slaughter
plant.  In the slaughter plant stage we consider the effect
of cattle clustering as they enter the slaughter plant.
        Whether they originate from feedlots or breeding
herds, cattle destined for slaughter must be shipped to a
slaughter plant.  During shipment transmission of 0157 may
theoretically occur.  Alternatively, some infected cattle
may clear their infection during shipment.  The available
evidence shown here does not imply there are dramatic
differences in fecal prevalence between the farm and
slaughter plant.  Transit between the farm and slaughter
plant may not affect prevalence of infected cattle, but it
may be important in causing changes in hide prevalence.
Studies of hide contamination with Salmonella suggest an
increase in prevalence of hide contaminated cattle between
the farm and slaughter.  Unfortunately, there is no data on
0157 hide contaminated cattle at the farm, and only limited
data concerning hide prevalence at the slaughter plant.
Therefore, inclusion of the effect of transit time on hide
contamination in this model awaits the availability of such
data.
        Culled dairy and beef cows and bulls arrive at the
slaughter plant from their farms of origin after transit on
trucks.  The majority of these cattle arrive after first
being shipped to one or more livestock markets, where they
are auctioned to the highest bidder, then shipped to
slaughter.  The combined average herd size for beef and
dairy herds is approximately 300 cows.  According to survey
statistics, approximately 25 percent of cows in dairy herds
and 11 percent in beef herds are culled each year.  These
culling percentages imply that the average herd would market
about 1 to 1-1/2 cattle per week.  Given the low number of
cattle contributed per herd and the commingling of cattle in
livestock markets, it is reasonable to assume random mixing
of culled breeding cattle at slaughter plants.  Such an
assumption implies that the probability of infection is
independent between cows at slaughter.
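        As a rough back-of-the-envelope check of that
marketing rate (a sketch only, using the herd size and
culling rates just quoted):

    # 300-cow average herd; 25 percent (dairy) and 11 percent (beef)
    # of cows culled per year, spread over 52 weeks
    for label, rate in (("dairy", 0.25), ("beef", 0.11)):
        print(label, round(300 * rate / 52, 2), "head marketed per week")

    # dairy -> about 1.44 head per week; beef -> about 0.63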
        Output from the production segment is generated
using Monte Carlo simulation techniques.  For culled
breeding cattle we simulate the number of infected cows and
bulls in a group of 40 such animals that would be presented
for slaughter.  We use 40 head as a convenient count because
that's the capacity of most trucks that are used to haul
cattle to slaughter.  Each cow and bull is simulated as an
individual.  The probability of infection is equal to the
product of herd prevalence and within-herd prevalence.
Within-herd prevalence varies according to an exponential
distribution.  The only parameter in the exponential
distribution  is the mean within-herd prevalence among all
infected herds.
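        A minimal Python sketch of the simulation just
described, assuming illustrative inputs (the 72 percent herd
prevalence echoes the most-likely estimate given later in
this talk; the mean within-herd prevalence here is a
placeholder, not a figure from the model):

    import numpy as np
    rng = np.random.default_rng(1)

    HERD_PREV = 0.72      # most-likely herd prevalence (from this talk)
    MEAN_WITHIN = 0.055   # assumed mean within-herd prevalence (placeholder)
    GROUP = 40            # one truckload of culled cows and bulls
    N_SIM = 10_000

    counts = np.empty(N_SIM, dtype=int)
    for i in range(N_SIM):
        # each animal is simulated individually; its herd's within-herd
        # prevalence is an exponential draw, capped at 1 to stay a probability
        p = HERD_PREV * np.minimum(rng.exponential(MEAN_WITHIN, GROUP), 1.0)
        counts[i] = rng.binomial(1, p).sum()

    print("average infected per 40 head:", counts.mean())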
        Simulation of the production segment, when herd
prevalence and within-herd prevalence are set at their
lower, most likely, and upper bounds, produces the three
output distributions shown here.  This output feeds into
the slaughter segment.  Each of these distributions explains
how the number of fecal-shedding cattle varies in any group
of 40 head.  Uncertainty regarding the true distribution is
reflected by the three different distributions.
        Looking at the most likely distribution, the
middle one of those three, the underlying true prevalence is
4 percent.  For the lower-bound distribution the underlying
true prevalence is 3 percent.  For the upper-bound
distribution the underlying true prevalence is 6 percent.
        Greater than 90 percent of steers and heifers are
shipped directly from feedlots to slaughter plants without
going through livestock markets.  Furthermore, these cattle
are usually slaughtered together in a lot, although they may
be mixed with one or more truckloads of cattle from the same
or another feedlot.  The manner by which feedlot cattle are
marketed suggests they are more likely to be processed at
the slaughter plant in a clustered pattern.  Clustering
implies that the infection status of a steer or heifer in a
slaughter plant is dependent on the lot it is in.
        If we simulate the number of infected cattle per
truckload using the equation shown here, each truckload is
independently determined to be from an affected or non-
affected feedlot based on the herd prevalence.  If the truck
is from an affected feedlot, then the number infected in the
truckload is determined based on the within-herd prevalence.
Again, within-herd prevalence varies according to the
exponential distribution.
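        A companion sketch for feeding cattle, where the
clustering happens at the truckload level (both parameter
values are placeholders patterned on figures from this
session, not the model's exact inputs):

    import numpy as np
    rng = np.random.default_rng(2)

    FEEDLOT_PREV = 0.88   # most-likely feedlot prevalence (from this talk)
    MEAN_WITHIN = 0.11    # assumed mean within-feedlot prevalence
    TRUCK = 40
    N_SIM = 10_000

    counts = np.empty(N_SIM, dtype=int)
    for i in range(N_SIM):
        if rng.random() < FEEDLOT_PREV:
            # affected feedlot: one within-feedlot prevalence applies to
            # the whole load, so infection status is clustered by truck
            p = min(rng.exponential(MEAN_WITHIN), 1.0)
            counts[i] = rng.binomial(TRUCK, p)
        else:
            counts[i] = 0  # non-affected feedlot: no infected animals

    print("share of truckloads with zero infected:", (counts == 0).mean())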
        This figure is the output for infected steers or
heifers in a truckload of 40 such animals presented for
slaughter.  Upper and lower bounds result from uncertainty
regarding within feedlot and feedlot prevalence.  In
contrast to the distribution for breeding cattle, this
distribution is skewed.  Its most-likely value is zero
infected cattle in a truckload.  Zero cattle can result
either because the truck originates from a non-affected
feedlot, or, given that the truck originates from an
affected feedlot, because the sample of 40 head from that
feedlot failed to contain any infected cattle.
        Looking at the most likely distribution in this
graph, the underlying true prevalence of fecal shedders is
13 percent.  For the lower and upper-bound distributions,
the underlying true prevalences of fecal shedders are 11 and
16 percent, respectively.
        Herd or feedlot prevalence is assumed to be a
fixed but uncertain input to the production segment.  In
other words, we assume there is some steady-state proportion
of herds at any given time that are affected.  The lack of
evidence suggesting there are changes in the proportion of
the affected herds in the US over time supports this
assumption.
        Seasonal changes in herd prevalence have been
reported, but these changes are probably the result of
seasonal changes in the within-herd prevalence for infected
herds.  Herd or feedlot prevalence is a function of herd
sensitivity and the sampling data.  Herd sensitivity is the
proportion of herds that test positive given the number of
samples collected per herd and the apparent within-herd
prevalence.
        We used 5 studies to estimate the prevalence of
infected breeding herds.  These studies were selected
because sampling was conducted across multiple states.
National studies on the occurrence of 0157 in breeding herds
have not shown any differences in prevalence between regions
of the country.  Therefore, inferences drawn from the
selected studies are thought to be representative of US
breeding herd prevalence.
        The Garber study is the largest study.  It was
part of a national survey of the US dairy industry conducted
by the USDA.  This survey collected fecal samples from 91
dairy herds across the US.  Sampling was stratified for herd
size.  To account for seasonal bias in sampling and
differences in sample size, we separately analyzed large and
small herd results from the survey.
        The Hancock studies sampled dairy herds across
three northwestern states.  In general, several monthly
sampling visits to each herd over 3, 6, or 12 months were
made in these studies.
        The final study sampled 15 cow/calf beef herds
across 5 midwestern states.  This study was completed by
USDA-ARS researchers.  In each herd 60 fecal samples from
weaned calves were collected.
        The prevalence of affected feedlots is estimated
using these three studies.  These studies include feedlots
that were sampled from multiple states.  Because the
occurrence of 0157 in feedlots is assumed to not be
geographically clustered, inferences drawn from these
studies are also considered representative of the US feedlot
prevalence.
        The largest study of 0157 occurrence in US
feedlots was conducted by USDA and reported by Dargatz.  In
this study 100 feedlots with greater than a thousand-head
capacity were randomly selected throughout the US.  In each
feedlot 120 fecal samples were collected for a determination
of apparent prevalence.
        Another survey of 6 feedlots in Idaho, Oregon and
Washington was completed by Hancock.  On average 180 samples
were collected from each feedlot.
        Smith has recently reported results from
intensively sampling 5 midwestern feedlots.  Over 600 fecal
samples were collected in each of these feedlots.
        Herd prevalence is dependent on herd sensitivity.
We calculated herd sensitivity based on apparent within-herd
prevalence and the number of samples collected per herd.  In
this figure p stands for the apparent within-herd prevalence
variable, and n stands for the average number of samples
collected in each herd.  As implied by this equation, herd
sensitivity equals 1 minus the average or expected value of
the probability that no infected cattle would be detected in
a sample of n cattle.  A herd sensitivity was calculated for
each of the studies we analyzed.
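        In plain notation that is HS = 1 - E[(1 - p)^n].  A
sketch of the same calculation by simulation, treating the
apparent within-herd prevalence p as an exponential draw
(the mean and sample count here are placeholders):

    import numpy as np
    rng = np.random.default_rng(3)

    def herd_sensitivity(mean_apparent_prev, n_samples, n_sim=100_000):
        # HS = 1 - E[(1 - p)^n], averaging over the distribution of the
        # apparent within-herd prevalence p across infected herds
        p = np.minimum(rng.exponential(mean_apparent_prev, n_sim), 1.0)
        return 1.0 - np.mean((1.0 - p) ** n_samples)

    print(herd_sensitivity(0.02, 60))  # e.g. 60 samples per herd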
        The herd and feedlot prevalence sampling evidence
implies apparent herd prevalence.  To estimate the true herd
prevalence we used Bayes' theorem, which is the top equation
shown here.  The likelihood function in Bayes' theorem is the
equation at the bottom of this slide.  It shows that the
likelihood of herd prevalence is a function of the sampling
evidence and the herd sensitivity.
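        A grid-based sketch of that update, using the Garber
dairy figures mentioned earlier (91 herds sampled, 22
positive) and an assumed herd sensitivity; the uniform prior
is also an assumption:

    import numpy as np

    def herd_prev_posterior(n_herds, n_pos, herd_sens):
        # a sampled herd tests positive with probability
        # (true herd prevalence) x (herd sensitivity)
        grid = np.linspace(0.001, 0.999, 999)
        p_pos = grid * herd_sens
        log_lik = n_pos * np.log(p_pos) + (n_herds - n_pos) * np.log(1 - p_pos)
        post = np.exp(log_lik - log_lik.max())
        return grid, post / post.sum()

    grid, post = herd_prev_posterior(91, 22, 0.76)
    print("posterior mean herd prevalence:", (grid * post).sum())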
        We derived these likelihood distributions for the
5 studies concerning herd prevalence.  The Garber study was
stratified, so there are actually 6 curves in this figure.
For each study the likelihood distribution reflects
uncertainty in true herd prevalence.  Again, this
uncertainty is driven by the number of herds sampled in each
study, the number found positive, and the herd sensitivity.
The broadest and therefore most uncertain likelihood
distribution is the one labeled "1a."
This broad distribution results because herd sensitivity in
this case was calculated at 21 percent, which is so low that
a wide range of true herd prevalence levels are nearly
equivalently feasible.  In contrast, the other curves
reflect increased certainty regarding herd prevalence levels
because herd sensitivity was much larger, ranging from 76 to
96 percent in these other studies.
        The 5 likelihood distributions from the previous
slide are combined using Bayes' theorem.  The resulting
distribution for herd prevalence is shown here.  For our
most likely scenario, herd prevalence was set equal to the
expected value or average of this distribution.  For the
upper and lower-bound scenarios, herd prevalence is equal to
the 90th or 10th percentiles of this distribution.
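        A sketch of that combination and summary step,
assuming each study's likelihood curve has been evaluated on
a common prevalence grid (the curves below are toy
stand-ins, not the studies' actual likelihoods):

    import numpy as np

    grid = np.linspace(0.001, 0.999, 999)
    # toy stand-ins for per-study likelihood curves on the common grid
    studies = [np.exp(-0.5 * ((grid - c) / w) ** 2)
               for c, w in [(0.60, 0.15), (0.75, 0.10), (0.70, 0.20)]]

    combined = np.ones_like(grid)
    for lik in studies:
        combined *= lik          # multiply likelihoods across studies
    combined /= combined.sum()

    cdf = np.cumsum(combined)
    print("most likely (mean):", (grid * combined).sum())
    print("lower bound (10th percentile):", grid[np.searchsorted(cdf, 0.10)])
    print("upper bound (90th percentile):", grid[np.searchsorted(cdf, 0.90)])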
        Likelihood distributions were also derived for the
three feedlot studies.  In this analysis the herd
sensitivity was calculated as 77 percent, 86 percent and 99
percent for the Dargatz, Hancock and Smith studies,
respectively.
        The Dargatz study's likelihood curve suggests the
most likely feedlot prevalence is somewhere around 85 to 90
percent.  The other two studies, which sampled only 5 or 6
feedlots, suggest that the feedlot prevalence is most likely
around 100 percent.
        The three likelihood distributions for feedlot
prevalence are also combined to form the distribution shown
here.  Again, the most likely feedlot prevalence was set
equal to the average of this distribution.  Upper and lower-
bound scenarios used feedlot prevalence levels equal to the
10th and 90th percentiles of this distribution.
        Cattle sent to slaughter represent a special
subset of their respective herd populations.  For example, a
cow culled from a dairy or beef herd may have a different
probability of being infected than a calf in that same herd,
or the prevalence of 0157-infected cattle about to be sent
to slaughter from a feedlot may be different from the
prevalence in cattle that have just been assembled to begin
feeding in that feedlot.  In general, the research suggests
that there is a declining prevalence of cattle infection
with increasing age of cattle.
        In our model we applied the within-herd prevalence
evidence that is most specific to cattle being sent to
slaughter.  True within-herd--or within-feedlot--prevalence
is a function of the sensitivity of the test used to
diagnose fecal shedding of individuals, as well as the
apparent prevalence observed in the studies presented.
        The average breeding herd size is about 300 cows
per herd.  Therefore, we assume a lower limit to within-herd
prevalence of one infected cow in 300.  As a conceptual
model of 0157 infection in cattle herds, we assumed infected
cattle are colonized for a defined period.  Research has
shown that a carrier state for 0157 in cattle is unlikely.
Nevertheless, there is evidence suggesting that cattle are
susceptible to reinfection following clearance of
colonization, and that cattle can be infected with one or
more strains of 0157 concurrently.
        The average capacity of US feedlots is about 6,000
cattle per feedlot.  Therefore, we assume the lower limit of
within-feedlot prevalence is one infected steer or heifer in
6,000.  Additional assumptions introduced for breeding herds
also apply to feeding herds.
        We used four studies to estimate the average
within-herd prevalence of infected breeding cattle in US
herds.  These studies varied in their design, sampling and
laboratory methods.  In combination, these studies' results
are assumed to represent a cross-sectional seasonally
average set of evidence for within-herd prevalence in US
breeding herds.
        The Garber study was the USDA survey of the dairy
industry in which 22 positive herds were detected.  The
Besser study was a year-long monitoring of 10 dairy herds in
Washington.  Sampling detected 3 cow herds as infected in
that study.  The Rice study took a convenience sample of cows
about to be culled from dairy herds enrolled in an Idaho,
Oregon and Washington survey.  And the last study was a
survey of 25 cow/calf herds conducted by Hancock in
Washington, of which four were positive.
        Four studies were used to estimate within-feedlot
prevalence.  The first study was conducted by USDA and 63
positive feedlots were detected.  That study was reported by
Dargatz.  In a study of fecal prevalence in steers and
heifers at four slaughter plants, Hancock reported finding
5.8 percent of 240 cattle positive when sampling was done
just after the cattle were stunned in the slaughter plant.
In another study of feedlots in three northwestern states,
Hancock found all 6 feedlots sampled to contain at least 1
positive steer or heifer.  These first three studies used
the same lab methods, and the most likely test sensitivity
was assumed to be 58 percent.
        In the final study, Smith evaluated 5 midwestern
feedlots and found them all to contain a high proportion of
infected cattle.  This study collected more samples in each
feedlot, collected larger samples of feces, and used a more
sensitive laboratory technique than the other three studies.
Test sensitivity was assumed to be 96 percent for this
study.
        Within-herd prevalence varies from one infected
herd to the next.  Furthermore, if we were to follow one
infected herd over the course of several months, we would
find that the prevalence of infected cattle within that herd
would vary.
        The top graph of this slide is a histogram of
apparent within-herd prevalence from a study of post-weaned
heifers in 36 dairy herds.  This graph implies an asymmetric
distribution for within-herd prevalence with the mode equal
to the lowest detectable level.  Such a distribution shape
is consistent with a variable that fits an exponential
distribution.  In the bottom figure the cumulative
probability distribution for this data and that predicted by
the exponential distribution are compared.  A chi-square
goodness-of-fit statistic supports the hypothesis that the
data conform to an exponential distribution.
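        A sketch of that goodness-of-fit check, with
simulated stand-in values in place of the 36-herd data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    prev = rng.exponential(0.05, 36)  # stand-in apparent prevalences

    mean_hat = prev.mean()            # the exponential's single parameter
    edges = np.concatenate(([0.0], np.quantile(prev, [0.25, 0.5, 0.75]), [np.inf]))
    observed, _ = np.histogram(prev, bins=edges)
    expected = len(prev) * np.diff(stats.expon(scale=mean_hat).cdf(edges))

    chi2 = ((observed - expected) ** 2 / expected).sum()
    # 4 bins - 1 - 1 estimated parameter = 2 degrees of freedom
    print("chi-square:", chi2, "p-value:", stats.chi2.sf(chi2, df=2))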
        In a national survey of milk cows and culled cows
in the US conducted by Garber, 22 infected herds were
detected.  The cumulative probability distribution for
within-herd prevalence is depicted in this graphic.  In this
case, goodness of fit analysis also supports the hypothesis
that these data fit an exponential distribution.  The
exponential distribution has only one parameter, the mean or
average.  By assuming that within-herd and within-feedlot
prevalence can be modeled using an exponential distribution,
we are left with the less difficult task of estimating the
average within-herd prevalence from the available data.
        The preceding tables of data report apparent
within-herd or within-feedlot prevalence.  To estimate a
distribution for the average, we used a method similar to
that already presented for herd prevalence.  The only
difference for within-herd prevalence is that we used test
sensitivity, rather than herd sensitivity in the likelihood
function.
        Lab methods varied between studies because of
different quantities of feces analyzed, different enrichment
broths, and different culture media used.  Sanderson
evaluated lab methods and presented relative sensitivities.
In this table we've interpolated and extended Sanderson's
results to incorporate methods not directly evaluated in
that study.
        We used these results to model test sensitivity in
our analysis.  Uncertainty regarding test sensitivity was
incorporated by inserting these data into a beta
distribution.  We used the 10th and 90th percentiles from
these beta distributions as the lower and upper-bounds of
test sensitivity for the corresponding boundary analysis.
We took the means of these distributions for our most-likely
estimate of test sensitivity.
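        A sketch of that treatment for a single study's
test-sensitivity counts, using the Beta(s+1, n-s+1) form
that a uniform prior implies (the counts are illustrative):

    from scipy import stats

    s, n = 19, 24                        # detected s of n truly positive samples
    sens = stats.beta(s + 1, n - s + 1)  # uncertainty about test sensitivity

    print("most likely (mean):", sens.mean())
    print("lower bound (10th percentile):", sens.ppf(0.10))
    print("upper bound (90th percentile):", sens.ppf(0.90))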
        Test sensitivity is a function of lab methods and
the quantity of sample collected.  To evaluate the Sanderson
sensitivity data further, we performed the analysis shown
here.  In the two left-hand columns are displayed fecal
concentrations and their estimated frequencies among known
infected cattle.  The three right-hand columns display the
probability that fecal samples of varying sizes would not
contain any organisms for the given fecal concentration.  At
the very bottom of each of these columns then is the
probability that a sample of a given size would contain one
or more 0157 organisms.
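        Under a random-dispersion (Poisson) assumption, the
probability that an m-gram sample at c organisms per gram
contains none is exp(-c*m).  A sketch with placeholder
concentrations and frequencies (not the table's values):

    import numpy as np

    conc = np.array([0.1, 1.0, 10.0, 100.0])  # organisms per gram (placeholder)
    freq = np.array([0.4, 0.3, 0.2, 0.1])     # share of infected cattle (placeholder)

    for m in (0.1, 1.0, 10.0):                # sample sizes in grams
        p_zero = np.exp(-conc * m)            # P(no organisms) at each concentration
        p_some = 1.0 - (freq * p_zero).sum()  # P(sample holds >= 1 organism)
        print(m, "gram sample: P(>= 1 organism) =", round(p_some, 3))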
        From the Sanderson results we can compare the
relative sensitivity for the 0.1 and the 10-gram samples,
where the enrichment and plating media--the same enrichment
and plating media were used.  For 10-gram samples 79 percent
of 24 positive cattle were found positive.  Yet from the
analysis shown here, we expect 95.7 percent of samples from
infected cattle would contain at least one organism if 10-
gram samples were collected.
        But by dividing 79 percent by 95.7 percent, we
find that this enrichment and plating media correctly found
83 percent of the samples containing at least one organism
to be positive.  Similarly, for 0.1-gram samples, Sanderson
says that 58 percent of positive cattle were detected using
that sample size.  Yet only 73.3 percent of positive fecal
samples would contain one or more organisms with that
sample.  Therefore, 79 percent of the samples with one or
more organisms were in fact detected.
        The reported sensitivity for this culturing system
is 80 percent for experimentally inoculated samples.  That
the relative sensitivities measured for 0.1-gram and 10-gram
samples are consistent with this sensitivity after
adjustment for the probability that a sample contains at
least one organism is reassuring, in that it suggests that
the differences in relative sensitivity reported by
Sanderson for naturally-infected cattle incorporate the
effect of some samples not containing any organisms.
Therefore, we determined that no adjustments to the
Sanderson data seem necessary.
        We derived these likelihood distributions for the
four studies of within-breeding-herd prevalence.  The
likelihood distributions displayed here assume that the test
sensitivity is the average for the size of sample and lab
methods used in each study.
        The likelihood distributions for the Garber and
Besser study are not much different.  The Rice and Hancock
studies represent small data sets, and their likelihood
distribution suggests higher average within-herd prevalence.
The Hancock study cited here used the least sensitive
sampling methods, which increases the likelihood that many
test-negative cattle were theoretically infected.
        The middle curve in this graph shows the
uncertainty regarding average within-breeding-herd
prevalence for culled breeding cattle in the most-likely
case.  It was derived by combining the four likelihood
distributions from the previous slide.  Lower and upper-
bound distributions were constructed similarly by changing
the test sensitivity for each study.  The expected values of
these three distributions were used as the average within-
herd prevalence in the three scenarios we modeled.
        We derived these likelihood distributions for true
within-feedlot prevalence for each of the four feedlot
studies.  The outlier here is the Smith study.  This study
includes a substantial amount of data.  Consequently, this
likelihood distribution strongly influences our estimated
distribution for average within-feedlot prevalence.
        The middle curve here is the most likely
distribution for average within-feedlot prevalence.  It was
derived by combining the four likelihood distributions on
the previous slide.  The upper and lower-bound
distributions were similarly derived after changing the
test sensitivity.  The expected values for each of these
distributions were used as the most likely and lower and
upper-bounds for average within-feedlot prevalence.
        Statistics concerning the uncertain parameters of
this model are then summarized here.  We estimate that the
great majority of breeding herds and feedlots contain at
least one 0157-infected animal.  As you can see, for herd
prevalence, our most likely estimate is 72 percent.  For
feeding-herd prevalence, it's 88 percent.
        DR. HULEBAK:  Excuse me.  If you're having trouble
following, it should be on page 19 of your handout.
        DR. EBEL:  We estimate that the great majority of
breeding herds contain at least 1 infected animal.  Also,
average within-feedlot prevalence is over twice as great as
average within-breeding-herd prevalence, a result which may
support the idea that as cattle age, their likelihood of
infection declines.
        A quantitative link between prevalence of 0157 in
live cattle and the occurrence of contamination on carcasses
or in ground beef is limited.  We are aware of only one
study, conducted in Great Britain, which has managed to show
an association between live cattle status and carcass status
for 0157.  This study involved a limited number of animals
and much uncertainty attends its results.
        Therefore, we believe quantifying the connection
between live cattle and carcass status is a critical
research need.  The necessary research will serve to clarify
the importance of pre-harvest control in this food safety
problem.
        The available evidence on the occurrence of 0157
in US cattle is substantial, but still limited.  Moreover,
the results of studies on the occurrence and distribution of
this organism are in some cases different.  The approach
we've used in handling this data is to incorporate
uncertainty about prevalence within each individual study
and between different studies.  Additional uncertainty
regarding herd prevalence enters our model through
sensitivity parameters.  These three elements of
uncertainty--within-herd, between-study, and sensitivity--
combine to demonstrate our lack of complete comprehension of
0157 occurrence in US cattle populations.
        Uncertainty regarding prevalence could be reduced
through additional large surveys of dairy, cow/calf and
feeding herds.  These additional surveys could improve on
those cited here by increasing sample sizes to account for
the low within-herd prevalence levels, and by quantifying
concentrations of 0157 in positive samples to explain the
levels of shedding detected.
it is expected there will always be some uncertainty
regarding prevalence because definitive field surveys are
expensive and difficult to perform.
        A great deal of speculation surrounds the role of
contaminated hides in the contamination of carcasses with
0157.  Very little data is available on the proportion of
cattle whose hides are 0157-contaminated, and the
concentration of organisms on those hides.  The reliability
and sensitivity of hide-testing methods needs to be
researched.  Studies should also explore possible changes in
hide prevalence during transportation from the farm to
slaughter.  Research on Salmonella has suggested that
prevalence increases dramatically during transportation.
Research is also needed on possible risk factors associated
with hide contamination.  Pen and/or housing design,
environmental sanitation practices and feed management are
all possible correlates.
        There's considerable uncertainty regarding the
prevalence of cattle whose hides are contaminated with 0157.
In one study 1.7 percent of 240 feedlot cattle at four
slaughter plants had hair samples that were 0157 positive.
Paired fecal samples were collected from the animals in this
study, and no correspondence between fecal and hide status
was found.
        Some researchers have hypothesized that the degree
of visible soiling of cattle hides or hair with mud, manure,
and/or bedding is correlated with microbial contamination of
carcasses, but this research has shown that the
concentration of generic E. coli organisms on carcasses
changes very little, whether the lot was composed of cattle
that had substantial hide soiling or the cattle were
relatively clean.  The implication of this research is that
0157 hide contamination and carcass contamination may not be
correlated with visible cues.
Nevertheless, there is some indication in the research that
wetter cattle may result in carcasses with greater levels of
contamination.
        Many studies of 0157 have tested the association
of hypothetical risk factors with the occurrence of 0157-
infected cattle.  These studies have furthered our
understanding of the epidemiology of 0157 in cattle.
Nevertheless, there are still gaps in our knowledge.  For
instance, factors which explain why some herds do not
contain 0157 await discovery.  Risk factors that explain
seasonal patterns in 0157 prevalence are still being
investigated.  Also, the role of feed and water
contamination needs further study to be clarified.
        Because risk factors will typically affect either
the herd or the within-herd prevalence of 0157, their
influence can be modeled by adjusting the prevalence
variables in this model relative to the baseline
distributions after we account for the frequency of the risk
factor among the population of herds or cattle.
        There is a substantial amount of evidence
concerning the occurrence of 0157 in live cattle.  In this
model our challenge was to coalesce this data into estimates
of herd and within-herd prevalence.  As it's developed, the
model allows separation of variability from uncertainty.
Such a treatment is a significant improvement.  As the
variability and uncertainty in this model's outputs are
propagated through the other segments of the risk
assessment, we will be capable of evaluating the importance
of the production segment and the occurrence of human
illnesses within the context of this uncertainty.
        This is the end of my presentation.  I'll be glad
to answer questions.
        MS. OLIVER:  Does the Committee have any questions
or comments, and all the experts too?
        DR. HANCOCK:  This is Dale Hancock, Washington
State University.
        I wanted to ask a question about the herd
prevalence, particularly looking at the feedlot level.  Just
on theoretical grounds, if the feedlot prevalence were--in
this estimate--80 something percent, as I recall, how would
there--since feedlots get cattle from a large number of
sources and feed, a number of loads of feed, that would be
the two logical primary ways of getting E. coli 0157 into a
feedlot, how would there be negative feedlots?  Couldn't we
assume that the feedlot-herd prevalence is 100 percent, on
theoretical grounds?
        DR. EBEL:  I think that's reasonable as an
assumption.  Empirically, that's more difficult to argue,
but theoretically that would be a reasonable argument or
hypothesis.
        DR. HANCOCK:  This is again Dale Hancock,
Washington State University.
        Maybe the heterogeneity within feedlots that
wasn't modeled--and maybe it was, but you tell me--in the
cattle on feed study, the largest study that you reported
that had a 63 percent within-herd prevalence--or 63 percent
feedlot prevalence, excuse me--those cattle were clustered.
Those samples were clustered, because there were four pens
in each feedlot with 30 samples per pen, and at least the
empirical distribution in that study of within-pen prevalence
was strongly skewed to the right, suggesting a big-pen
effect.
        Could that account for the empirical estimate from
your models of the feedlot prevalence?  I mean, was that
adequately modeled?
        DR. EBEL:  Well, I actually think it probably was
because we are looking at--we handled herds the same way--I
should say all the feedlots the same way, the distribution
of the sampling.  Our determinant in estimating herd
prevalence is what was the apparent within-herd prevalence,
and of course, that's a sort of weighted estimate based on
using the results of both those pens that were shortest on
feed, the two that had a random draw from sort of the
middle, and then another sample from those that were on the
longest feed.  So that if there's a bias in there it would
be in our inability to say that the estimate of within-herd
prevalence or apparent within-herd prevalence from those
feedlots is not weighted correctly, and to some degree, that
might be in the data because of the higher within-herd
prevalence in those pens that were shortest on feed.  But
again, they represented one of the pens, and then we had two
that represented random draws, and then another from the
largest, so possibly--I should say the longest--so possibly
the longest and shortest had some canceling effect in terms
of our estimate of apparent within-herd prevalence, but to
some extent that might be true.
        We did not explicitly try to account for that
clustering because our argument was or our assumption was
that across the four pens that were sampled on each herd, we
probably had a good estimate of apparent prevalence across
the entire feedlot.
        DR. HANCOCK:  This is Dale Hancock again.
        To me, that's a decision that needs to be made, is
whether or not this very high--all the studies there
estimated very high feedlot prevalence, a very high percent
of feedlots had it, and to me, it is justifiable to assume
that the feedlot prevalence is 100 percent, but that's just
something to think about.
        Before I quit here, I wanted to ask a question also
about the breeding herd prevalence, or the percent of
breeding herds that had it.   There's an extreme
heterogeneity there, and I just want to make sure that we're
modeling that adequately.  Just to give you a sense of that,
in that year-long study, '94, using relatively insensitive
methods, admittedly, over half of all of the positives
detected in that year-long study where they were sampled
monthly were detected on the single sampling date with the
most positives, and over 80 percent in the two sampling
dates, out of the roughly 12 per herd, with the most
positives, and generally in the warm months of the year.
        And so it's extremely temporally clustered in
these herds, and in fact, over two-thirds of the sampling
dates in positive herds were associated with no positive
samples--herds that were eventually positive.  Presumably it
was missed, or in the environment, or not in the cattle
there.
        So is there adequate modeling for this very
extreme level of temporal heterogeneity within these
positive herds?
        DR. EBEL:  Well, as we pointed out in the scope
presentation, at this point we are not incorporating
seasonality into the model, and our rationale is that
although there is some evidence in the live cattle research
concerning a seasonal pattern, we don't have the
corresponding evidence right now at the detail to sort of
link it up and evaluate its importance in the subsequent
segments: slaughter, preparation.  So that's our
justification at this point.  It's basically a simplifying
assumption.
        To that extent then, the results from, say, a
year-long study, of course, represent our apparent look at
what the prevalence in those herds might be.  Having made
the adjustment we have for sensitivity, we feel like we've
got a good picture, at least of average within-herd
prevalence on a seasonally average basis, but I think we
would all like to incorporate and feel like it is very
feasible to incorporate seasonality into the model.  Our
precaution at this point has been basically that we don't
have data downstream of live cattle to really establish that
there in fact is a correlation, and as you will see as we
model into slaughter, we have sort of a proportionality
constant between live--incoming live prevalence and carcass
prevalence.  And that if that's constant and isn't adjusted
for any sort of seasonal issues, it clearly is going to push
through a seasonal pattern into ground beef contamination
which may or may not be something we can empirically
demonstrate.
        So until we get that data, that's been our reason
for being cautious and operating on sort of a seasonally-
average basis.
        MS. OLIVER:  Dane?
        DR. HANCOCK:  And can I make one final comment?
        And this is for the record.  I think what you've
shown here is accurate on the breeding-herd prevalence
versus feedlot prevalence, and your reasons for that, in my
view, are accurate, the age difference between those
animals.  But it's important, I think, to note for the
record that it would be inaccurate to automatically assume
from that prevalence data--the feedlots had a two to
three-fold higher prevalence, as I recall, in your
estimates--that something about feedlot management is
causing that higher prevalence.  Obviously, that's a good
hypothesis that
needs to be looked at, but it has been looked at to a
certain degree, and there are several levels at which you
can look at it, but within a dairy herd, for example, we
have all age animals, and the--although the overall within-
herd prevalence is low because we have mostly older animals,
the prevalence within young stock within those herds is very
similar to prevalence within feedlots.  And when we have
looked at dairy heifers in a dry-lot setting, because many
of the western dairies basically raise them in a feedlot
setting, their dairy heifers, compared to those that put
them on pasture, the prevalences are extremely similar.  And
so we need to make certain that we don't infer that that two
to three-fold higher prevalence in feedlots is an effect of
feedlots rather than age, because it's very similar to the
age differences within dairy herds.
        MS. OLIVER:  Thank you.  Dane.
        DR. BERNARD:  Thanks.  Dane Bernard.
        I'm glad Dale asked all those questions because
those were confusing to me as well, but I'm sure I'm the
only one in the group that is not a modeler, but in your
summary comments you mentioned that variability and
uncertainty would be propagated throughout the model.
        For my benefit, can you enlarge on what that means
and what its effect is?
        DR. EBEL:  Okay, thanks.  What we've tried to do,
because this whole issue of variability and uncertainty is a
real large issue within the risk assessment community, but
isn't necessarily a similarly important issue for those
outside, is to try to find a compromise that we think works
for us, but basically we're taking and running throughout
the model three scenarios.
        The first scenario is the most likely scenario,
and it's based on our best estimates of elements of--I
should say--yes, variability in the various segments of the
model.  So the output we showed here for the most likely
scenario represents the output based on our best estimates
of what the average within-herd prevalence is, what we think
the best estimate is with regard to herd prevalence in the
corresponding parameters for feedlots.  And we generate that
output as a distribution, which, as we showed, is the number
of infected animals in, say, a group of 40, and that's the
output that then goes into slaughter for the most likely
scenario.
        Then correspondingly we run two other scenarios
which we'll also put into slaughter.  One is the lower-bound
and the other is the upper-bound, and they correspondingly
have higher or lower estimated numbers of infected cattle
per 40 head, depending on the bound.
        From the production, we are going to take those
lower bounds and put them into corresponding lower-bounds
for slaughter and upper-bounds, so that we'll end up having
three scenarios that sort of trail out and demonstrate
increasing uncertainty as we move progressively through the
model.
        At the end we'll describe the upper and lower
bounds probabilistically of what we might expect based on our
uncertainty in those inputs.  And, again, that's the intent
of it.
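        A toy sketch of that propagation, using the
production segment's 3, 4, and 6 percent prevalence
scenarios and a purely hypothetical slaughter factor to show
how the bounds spread:

    def production(scenario):
        # lower, most-likely, and upper-bound prevalence from this session
        return {"lower": 0.03, "most_likely": 0.04, "upper": 0.06}[scenario]

    def slaughter(prev, scenario):
        # placeholder proportionality factor, not the model's actual value
        k = {"lower": 0.5, "most_likely": 1.0, "upper": 2.0}[scenario]
        return prev * k

    for s in ("lower", "most_likely", "upper"):
        print(s, slaughter(production(s), s))
    # the lower/upper spread widens at each segment it passes through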
        DR. BERNARD:  Dane Bernard, again.  In layman's
terms, the greater the uncertainty at the beginning of the
model, the more that's going to affect the next analysis,
and the uncertainty there is factored into the total
uncertainty at that point and right on through the model.
        DR. EBEL:  Right.
        DR. BERNARD:  Thank you.
        DR. EBEL:  It appears to increase as we go along.
        MS. OLIVER:  Mike Doyle?
        DR. DOYLE:  Thank you.  This is Mike Doyle,
University of Georgia.
        Eric, are we going to see any more about
production data?
        DR. EBEL:  Today, probably not.
        DR. DOYLE:  Okay.  Well, back to my original
question.  You came up with 11 percent shedding, and I
haven't seen any numbers that come up to 11 percent in this
presentation.  So how do we get to 11 percent and a 10^2 to
10^3 per gram number of E. coli being shed?
        DR. EBEL:  Okay.  Well, again, as we pointed out,
the 10^2 or 10^3 was just actually taken from sort of an
expected value from I think the work you had done in calves
long ago.  But it was just a place holder.  We aren't
actually modeling contamination load per gram out of these
cattle.  We're primarily interested in what's the prevalence
of cattle shedding.
        But to get back to your first question about 11
percent, let's go to Slide 26 to show you the data that's
driving our estimates for feedlot cattle.
        Anyway, the Smith data is certainly in excess of
11 percent.  It turns out that as we combine this evidence,
make the adjustments for sensitivity, as you see the column
there listing average apparent prevalence, those would be
without making any adjustments for sensitivity of the test.
So those are all going to go up in addition to the Smith
data.  As we go through the algorithm that we are obviously
just briefly touching on, that's the data that generates an
11 percent average within-herd prevalence.
        Because we are modeling within-herd prevalence as
an exponentially distributed variable, however, that 11
percent is actually greater than what the median or the 50th
percentile of that distribution would be, because an
exponential is going to have a higher frequency at the lower
within-herd prevalence levels.  That's just a function of
that distribution.
        So I caution you against assuming that that's the
50 percent break point--that 50 percent are greater than 11
percent and 50 percent are less than.  Actually, 50 percent
are going to be greater than some number less than 11
percent.
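        The arithmetic behind that caution:  an exponential
distribution's median is its mean times ln 2, so an 11
percent mean within-herd prevalence puts the 50 percent
break point near 7.6 percent.

    import math
    print(0.11 * math.log(2))  # about 0.076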
        DR. DOYLE:  Have you included the data that have
been reported in the press recently from USDA which has
these very high levels of carriage of 0157 by cattle?
        DR. EBEL:  Well, yes, I think we have to some
extent, although we're never quite sure what, you know, is
being referenced to what.  But the Smith data is some very
recent data, and it is part of the information that's coming
out that's demonstrating much higher than previously
reported prevalences.
        Also, the Lagreid study out of ARS is a recently
published study, and their work continues.  As they complete
things, we try to get that information.  But as yet, some of
the information is not yet incorporated.
        DR. DOYLE:  Thank you.
        DR. EBEL:  Thanks.
        MS. OLIVER:  I'll take one more question now, and
that'll be from Mel Eklund, and then after that we'll go to
the next presentation.  If there are still more questions
after that and the Committee wants to raise them during
discussion, you can ask them then.
        DR. EKLUND:  This is Mel Eklund from Seattle,
Washington.
        Most of the questions I had have already been
answered by Dr. Hancock, but I have one other one that I
would like to ask.  Since Dr. Hancock is here, maybe he
could answer it.
        Have studies been done on cattle from rangelands,
like in Montana, where it takes--I grew up on a cattle ranch
there, and it takes 10 acres to raise one cow, and you have
a very widespread--and most of these animals in these areas
are--the breeding stock stays there, except sometimes bulls
are brought in, so you don't have a lot of influx of other
animals from these--have studies been done in these areas?
And there are feedlots that come from--in the Montana area
that come from these herds.  I was just kind of curious what
the incidence might be in this type of environment.
        DR. EBEL:  Yes, as a matter of fact, the Lagreid
study, which, again, was recently reported--and I wanted to
flip to that to see if I can--there were 15 cow/calf herds.
Those were primarily range-type cow/calf herds.
They didn't do any sampling of cows, fecal sampling of cows
in that study, so the data weren't appropriate for us to
bring into the within-herd prevalence estimate, but they do
show a fairly high prevalence of 13 out of the 15 herds that
they sampled--and, again, that was across five Midwestern
States, I believe--were found to contain at least one 0157
infected animal.  But they sampled at-weaning calves and
that's the basis of their sample in that study.
        DR. EKLUND:  This is Mel Eklund again.  Sometimes
you get into the Midwest areas, these are smaller acreages.
Some of the farms in Montana, you can drive 18 square miles
on them.  I was just kind of curious what the incidence
might be in this environment.
        DR. EBEL:  When I say Midwestern, I mean--I grew
up in Illinois, and I call that Midwestern.  But I guess I'm
thinking west of there.  But they didn't--
        DR. EKLUND:  But that's small farms compared to
Montana.
        DR. EBEL:  Right, right.  And yet the ones that
Lagreid worked in, they were in the Nebraska, Kansas type
area.  But I don't know that they incorporated any Montana
herds in that study.  Do you?
        DR. HANCOCK:  This is Dale Hancock.  I don't know
about that.  We've really only done one study where we
looked at cattle on range, and that was our earliest study
where our methods were the most primitive.  But we did find
a really quite similar prevalence in range herds as in
cattle herds, and we reported on one instance actually in
West Texas where--and that's certainly an extensive type
system--where cattle and deer shared common sub-types of E.
coli 0157.  And there's some recent work from Kansas, I
believe, on surface water transmission, and certainly we're
working on water trough transmission.  And so there are
opportunities for transmission in that setting, it appears,
but there is a need for more data in the range setting.
        MS. OLIVER:  Thank you.
        Our next presenter is Dr. Tanya Roberts, and she
will talk on the issues of slaughter, and Karen Hulebak will
discuss the questions that FSIS wants you to take into
consideration.
        DR. HULEBAK:  All right.  When you listen, as you
listen to Dr. Roberts, please keep in your mind the
following questions:
        What evidence would be necessary to satisfactorily
quantify the link between hide and carcass contamination?
        And, second, we have attempted to develop a
mechanistic model that follows product through the slaughter
plant.  Would it be preferable to develop a strictly data-
anchored model that does not attempt to model processes
between monitoring points?  If so, what data would be
required to develop such a model?
        Excuse me.  We're also going to try to help you
track along in your handouts with the overheads that we use
in these presentations.  It's clear they don't track exactly
point to point, but we'll give you some guidance on where to
find a handout that more or less matches the projected
figure.
        DR. ROBERTS:  Actually, I have a few extras.  A
lot of them have to do with some of the results we were able
to put in at the last minute.
        It's a pleasure to be here to talk about the
slaughter segment of the E. coli 0157:H7 model.  This slide
identifies who my other collaborators have been on the team
over the year and a half we've been working on it:  Clare
Narrod, Scott Malcolm, Jennifer Kuzma, Bob Brewer, and Peter
Cowen.  And we've also had comments from the other members
of the E. coli team that were working on other segments of
the model.
        The outline is that I will discuss first the
overview of the model structure, that we're looking at what
kind of processes actually occur in slaughter plants.
Second, we'll go into a description of the kinds of pathways
that occur in the slaughter plant for 0157 contamination.
        I'm going to discuss the event tree model
assumptions and the data that we used, and let me just take
a brief aside here that we tried to use in-plant data
wherever possible and not the laboratory studies, because we
were concerned that they wouldn't reflect actual operating
conditions.  Whenever possible, we used national data, but
we did use some international data.  We preferred E. coli
0157:H7 data rather than generic E. coli.  And then, last,
I'm going to give you some final conclusions about the model
results and what the output is then to the next segment,
preparation.
        In the slaughterhouse, as most of you know--but
not all of you work for the meat industry--live cattle enter
the slaughter plant from the farm.  They go to the knock box
where they're stunned and bled, and they're hung on an
overhead rail.  They go to the next part of the main floor
of the plant where the hide is removed, both mechanically
and manually.  Then they go through the first
decontamination procedure to remove large fecal spots that
are on the carcass, and sometimes they have a carcass wash.
        Evisceration is the next step in the procedure
where the gastrointestinal tract is removed.  The next step
is the carcass is split with a large chain saw.  You'll
notice that both this box and the knock box and stunning are
not color-coded.  That's because we did not include them in
the model because the limited data that are available in the
literature show that they were relatively low risk.  That's
something on which we would welcome further data, and we
would be happy to add them.
        The next step in the process is the second
decontamination procedure.  This is after the carcasses are
coming off the line and ready to go into the chiller, and in
the U.S., two processes are used.  Mostly the larger plants
use a steam pasteurizer, and the smaller plants tend to use
various kinds of hot water carcass washes, with or without
the addition of various compounds.
        Then the carcass goes into the chiller for one to
two days.  It's taken out to the fabrication room where it's
cut up into steaks and roasts and chops, and the trim is put
into a combo bin or boxes, which then becomes the output to
the preparation segment.
        You don't have this--no, I want to talk about
this.  You don't have this slide in your handout, but I
thought maybe it would be useful to give you a little bit of
an overview of the kind of a structure that we used in the
slaughterhouse.  We're using an event tree model, and we're
building it for each step in the slaughter process I showed
you on the previous slide.  And we're looking at--each step
we ask:  Can contamination occur during this procedure?  And
this is a yes or no.  If it can occur, then what are the
possible levels of contamination?
        For each one of these events where you have
contamination and the levels of contamination, we ask what's
the probability that this will occur, and we use a
probability distribution to capture the variability and
uncertainty associated with that.
        Then, finally, we use a Monte Carlo simulation to
take a random draw for each event in the tree, and we do
5,000 to 10,000 simulations depending on when you start to
get stable results.
        So what we want to end up with is being able to
identify what the risk level is associated with different
pathways that we are developing in our event tree model, and
I'll discuss some of those pathways toward the end.
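        A compact sketch of such an event-tree simulation,
with two hypothetical steps and entirely placeholder
probabilities and contamination levels:

    import numpy as np
    rng = np.random.default_rng(5)

    N_SIM = 10_000
    # at each step contamination either occurs or not (yes/no branch);
    # if yes, a level is drawn from a distribution (all values placeholders)
    P_CONTAM = {"dehiding": 0.20, "evisceration": 0.05}
    LOG_LEVEL = {"dehiding": (1.0, 0.5), "evisceration": (0.5, 0.5)}

    totals = np.zeros(N_SIM)
    for step, p in P_CONTAM.items():
        occurs = rng.random(N_SIM) < p               # yes/no for each carcass
        mu, sd = LOG_LEVEL[step]
        level = 10 ** rng.normal(mu, sd, N_SIM)      # organisms if contaminated
        totals += np.where(occurs, level, 0.0)

    contaminated = totals > 0
    print("fraction of carcasses contaminated:", contaminated.mean())
    print("mean load when contaminated:", totals[contaminated].mean())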
        Next slide?
        You also don't have this slide, but the point of
this slide is to give you sort of an overview of the kinds
of things that can go wrong in the slaughter process.  These
are the things that we're trying to capture in our event
tree.
        You could have a procedural failure, just a flawed
plan for a process.  There is some new evidence that hasn't
been taken into account in an old operating procedure or
just an oversight.  There could be an operator failure, and
those generally are of two kinds.  One is the error of
commission, you do the wrong thing.  You don't clean your
knife when you slit open the hide.  Or it could be an error
of omission, you just forgot to do something.  You
overlooked maybe one piece of equipment that you were
cleaning the night before in the sanitation procedure.
        You could have equipment failure.  An example would
be a compressor that fails in the chiller; normally you have
a back-up, but maybe the back-up failed too.  Or, as you
heard in the previous talk,
you could have contaminated incoming product.
        You do have this slide in your handout.  For each
step in the slaughter plant, we model the process and the
pathway that could contribute to the risk, the sources of
data for the input, and then the model.  So this is the
structure that we're going to be talking about for each one
of the segments as we go through it.
        As cattle enter the slaughter plant, they're
trucked in, and as you heard in the reports, there's a possibility
that they could have gastrointestinal contamination.  They
could be a fecal shedder.  So the model is broken into two
segments.  We have one for steers and heifers and one for
breeding cattle.  The steers and heifers, the fed
cattle, are modeled as one to five truckloads of 40 animals
that come from one feedlot, and they have similar GI tract
status on that feedlot.  The breeding cattle, cows and
bulls, are modeled as independent animals with GI tract
status that's randomly picked from a national distribution
of the prevalence.
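        A hedged sketch of the two entry schemes just
described follows; the 40-animal truckload structure is from
the talk, while the prevalence distributions themselves are
placeholders.

    import random

    def steer_heifer_truckloads():
        # One to five truckloads of 40 animals from a single feedlot, so
        # one shared prevalence draw governs every animal's GI status.
        feedlot_prev = random.betavariate(2, 15)   # placeholder distribution
        return [[random.random() < feedlot_prev for _ in range(40)]
                for _ in range(random.randint(1, 5))]

    def breeding_truckload():
        # Cows and bulls are independent animals, each with GI status drawn
        # from a national prevalence distribution (placeholder here).
        return [random.random() < random.betavariate(1, 24) for _ in range(40)]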
        These are the slides that Eric showed you.  This
is the one for steers and heifers, and looking at a
truckload of them, what's the probability that they'll be--
you know, how many fecal shedders are they likely to have in
that truckload.  And he had the most likely, the lower-
bound, and upper-bound scenario.  And this is the slide
you've seen before for a truckload of cows and bulls.
        This summarizes the data on the previous two
slides, and it's the percent of cattle that are likely to be
infected by cattle type:  the breeding herd with a 4 percent
most likely prevalence, and these are the upper and lower
bounds, and steer/heifer, 13 percent most likely prevalence.
We've got to get together on this, Eric, because I think you
said 11 was the most likely.  We need to make some minor
adjustment in these numbers.
        DR. HULEBAK:  Tanya, excuse me.  Is this in your
slide?
        DR. ROBERTS:  No.  I don't think so, at any rate.
No.  You do have this next one, though.
        DR. POWELL:  Mark Powell.  This was an effort to
go back, review, and summarize the production segment output
that was being fed into the slaughter model.
        DR. HULEBAK:  So we don't have this slide.
        DR. POWELL:  It's already been shown in the
previous segment.
        DR. HULEBAK:  To the extent you can make note of
that, it would be helpful.
        DR. ROBERTS:  For each one?  Okay.
        This is the event tree for steers and heifers.
You have this truckload of 40 steers and heifers coming into
the slaughter plant.  They could come from a contaminated
herd, so the animals on that truck have some possibility of
being contaminated; then you get down to the individual
animal basis, where each animal has some probability of
being contaminated and going up this part of the event tree,
or, if the particular animal that's being slaughtered isn't
contaminated, it continues down this track.  If the truck
comes from an uncontaminated herd,
then no animals on the truck will be contaminated, and it
continues down this part of the event tree.
        Once the animal is in the slaughter plant, then,
the first part that we include in the model is the dehiding,
and this is where the animal who has already been stunned
and bled and is now dead enters the main part of the plant.
It's upside down hanging from an overhead rail.  Its hocks,
or feet, are removed.  The bung, or rear end, is tied off.
The hide is cut down the midline, and the hide is pulled off
manually and mechanically with a variety of side pullers, up
pullers, and down pullers.
        Contamination can occur via contact as the hide is
removed: the contaminated hide itself can slap back on the
carcass, the worker's gloves or knives can contaminate the
carcass, or aerosol contamination can be created, especially
if the hide puller moves rapidly and jerks the animal
around.
        You have this slide, but it's been changed a
little bit.  In the model part of the dehiding, we're going
to be looking at three things: the area that's contaminated,
the level of contamination, and, on the next slide, the
probability of contamination.
        The most likely scenario is that there are 3,000
cm2 of the carcass that can be contaminated during the
dehiding process, and this was the distribution--then we
used a distribution to characterize our uncertainty about
the exact size, and we have upper-bound and lower-bound
scenarios.
        The level of contamination is 1 to 3 logs of
colony-forming units per carcass, and a Poisson distribution
was used to characterize the uncertainty.
        The data that this is based on comes from the
combination of the FSIS carcass monitoring data that was
discussed earlier and the FSIS ground beef sampling data.
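        A rough sketch of the dehiding draws just described;
the triangular bounds assumed for the area and the exact way
the Poisson is applied are guesses at the team's
construction, not taken from it.

    import numpy as np
    rng = np.random.default_rng()

    # Contaminated area: 3,000 cm2 most likely, with assumed bounds.
    area_cm2 = rng.triangular(1000, 3000, 10000)   # (left, mode, right)

    # Level: 1 to 3 logs of CFU per carcass, with a Poisson draw to
    # capture uncertainty in the realized count.
    mean_cfu = 10 ** rng.uniform(1, 3)
    cfu = rng.poisson(mean_cfu)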
        Next slide, please?
        The third component of this dehiding model is this
probability of contamination, and here we relied on two
English studies.  The one that we relied on the most is
Chapman--that was 1993--where they looked at a cattle
slaughter plant in South Yorkshire, and they were looking,
as I said, at cattle.  But there was also an earlier study
by Howe et al. which looked at a calf operation where they
got similar contamination rates.  Chapman was 30 percent;
the Howe et al. was 33 percent.  So we thought the Howe was
sort of corroboration.  And then we used this Chapman data,
and they found seven carcass positives out of 23 fecal
positives, so we put this into a beta distribution to
capture our uncertainty about the exact number that would be
contaminated.
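        One standard way to turn 7 carcass positives out of
23 fecal positives into an uncertainty distribution is
Beta(k+1, n-k+1); whether the team parameterized the beta
exactly this way is an assumption.

    import numpy as np
    rng = np.random.default_rng()

    k, n = 7, 23                                     # Chapman (1993)
    p_self_contaminate = rng.beta(k + 1, n - k + 1)  # mean near 0.30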
        The second part of the probability is to look at
the possibility that subsequent carcasses following a
fecally contaminated carcass could also be cross-
contaminated.  In the Chapman study, this was 8 percent.
Of the 25 fecal negatives that they tested, two actually
turned out to have positive carcasses.  Since
they didn't contaminate themselves, they must have gotten
the contamination from someplace else, from one of the other
carcasses.
        We used a geometric progression to capture this;
the first animal that follows a fecally contaminated animal
has a little over a 7 percent probability of being
contaminated, and the second animal a little less than 1
percent.
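        A sketch of that geometric progression; the two
constants are chosen to reproduce the figures just quoted (a
little over 7 percent, then a little under 1 percent,
summing to about Chapman's 8 percent) and are assumptions.

    # probs[k] is the chance the (k+1)-th carcass after a fecally
    # contaminated one is cross-contaminated.
    p, r = 0.074, 0.08
    probs = [p * r ** k for k in range(3)]         # [0.074, 0.0059, 0.0005]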
        Yes, you have this slide.  So these are the event
trees.  You have a GI-positive animal coming in.  You have,
on average, a 30 percent chance that it will contaminate
itself and a 70 percent chance that it will not
contaminate its carcass as the hide is removed.
        If you have a GI-negative animal that comes in,
we're looking at--if it does not follow a positive animal,
it stays negative, it has a negative carcass.  If it follows
a positive animal, as I mentioned, we have--the next two
adjacent ones have some probability of becoming cross-
contaminated, but most of them will not be cross-
contaminated.
        The next step in the slaughter model is the first
decontamination where we have knife trimming or spot steam
vacuuming that remove visible fecal contamination.
Sometimes this is also followed by a carcass rinse.
        The pathway is that you can have removal of
0157:H7 if these procedures are effective, or you can just
redistribute it over the carcass.  If the knife is not
cleaned in between cuts, it can transfer it from one
location on the animal to another.  Or the water rinse
coming over can actually just move it physically down the
carcass rather than actually get it all the way off the
carcass.  So we have both possibilities.
        The model is based on data from two studies, Gill
and Dorsa.  Gill found a 0.32-log reduction, and the Dorsa
study found a 0.7-log reduction, as their most likely values.  So
what we did was we built a trapezoidal distribution around
this with a reduction of 0 to 1 log as being the whole
range.
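        Sampling from a trapezoidal distribution is not
built into the common libraries, so here is a minimal
sketch, assuming the plateau runs between the two study
values.

    import random

    def trapezoidal(a=0.0, b=0.32, c=0.70, d=1.0):
        # Pick a segment with probability proportional to its area, then
        # sample within it: rising edge, flat top, or falling edge.
        weights = [(b - a) / 2, c - b, (d - c) / 2]
        seg = random.choices([0, 1, 2], weights=weights)[0]
        u = random.random()
        if seg == 0:
            return a + (b - a) * u ** 0.5          # rising edge
        if seg == 1:
            return b + (c - b) * u                 # flat top
        return d - (d - c) * u ** 0.5              # falling edge

    log_reduction = trapezoidal()                  # 0 to 1 log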
 
        Again, there are only a few studies that were
done, and it would be useful if we had more data here.
        This shows you what the tree looks like.  We have
this contaminated carcass that comes along, and it has a
possibility from a 0- to 1-log reduction with 0.3 and 0.7
being the most likely points here.
        During carcass evisceration, which is the next
step that's modeled in the slaughter plant model, the
process is that the GI tract and the rest of the organs are
removed.  The possible pathway for contamination is that you
can have a rupture.  You could have a knife nick, or there
could be some weakness in the GI tract because of maybe some
kind of an infection and it could rupture and come apart.
        Now, it doesn't appear as though E. coli 0157:H7
is particularly likely to cause this.  It's other organisms
that could cause this kind of a rupture, so whether the
animal's contaminated with 0157 is not likely to contribute
to the probability of a rupture.
        The basis of our model actually comes from Bob
Brewer, one of our team members, who has extensive service
in FSIS in investigating slaughterhouses, and he suggested
that this self-contamination, this nick, could occur maybe
one in 100 times.  The contamination level is assumed to be
equivalent to what we had in the dehiding earlier, and the
area contaminated is smaller.  It just ranges from 1 to 100
cm2 with 25 cm2 being the most likely value.
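        A small sketch of that evisceration draw; treating
the 1-to-100-cm2 area as triangular with a mode of 25 cm2 is
an assumption consistent with the description.

    import random

    def evisceration_contaminated_area():
        if random.random() >= 0.01:                # no nick 99 times in 100
            return 0.0
        return random.triangular(1, 100, 25)       # contaminated area, cm2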
        So here you have this possibility of a positive
animal either rupturing or not rupturing.  If the animal
didn't have any GI--any 0157 in its GI tract, it's going to
continue negative.  Even if it had a rupture, it would not
cause contamination.
        Next slide, please?
        The next step in the model is to look at the
second decontamination procedure, and as I mentioned
earlier--oh, I see.  We have carcass splitting in here,
don't we, in you guys' handouts?  Well, we didn't model
that, so I left it out of these slides.
        So moving on to Slide 16 in your handout, the
Carcass Decontamination II, the process here is that
decontamination methods are used to remove or kill 0157 from
the carcass exterior.  At this point you have sides of beef
because it's already been sawed in half.  The two most
common techniques used by U.S. industry are the steam
pasteurizer and the hot water wash.  In the steam
pasteurizer, the railed carcasses enter four at a time, and
a stainless steel clamshell shuts around them.  Air is blown
in to blow the water off the exterior of the carcass so that
the steam can penetrate, and steam of 180 to 210 degrees
Fahrenheit is applied for 5 to 15 seconds.
        Most small and medium plants use a hot water wash,
although there are a few large plants that also use the hot
water wash instead of steam pasteurization.  And here you're
using the heat as well as the volume of the water coming
over the carcass as methods of either dislodging or killing
the 0157.  You also have the possible addition of organic
acids or trisodium phosphate.  And the efficacy is going to
depend on the heat and the volume of water used.
        The pathway where you can have a change in the risk
status is that the carcass wash is going to either reduce or
redistribute the organisms, and the steam pasteurization can
significantly reduce contamination.  However, use at too low
a temperature is not effective.
        The data that we actually put into the model for
steam pasteurization, we used a triangular distribution with
a range of 0 to 2 logs--this is based on Gill's work--and
with 1-log reduction the most likely.
        For the hot water wash, what we did was we modeled
this the same way, that trapezoidal distribution, as we did
in the first decontamination procedure.
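        A sketch of the second-decontamination draws as
stated, with the hot water wash reusing the trapezoidal form
sketched earlier for the first decontamination.

    import random

    def steam_pasteurizer_reduction():
        # Triangular 0-to-2-log reduction, 1 log most likely (per Gill).
        return random.triangular(0.0, 2.0, 1.0)

    # Hot water wash: same trapezoidal() form as sketched for the
    # first decontamination step.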
        This shows the event tree pathways, then, for the
second decontamination.  This shows the steam pasteurizer.
You have from a 0- to 2-log reduction with 1 log being the
most likely.  And so you take a random draw from this if it
went through the pasteurizer to see what level of reduction
you actually got for the particular Monte Carlo simulation.
And, again, if it's an uncontaminated carcass, it goes
through this decontamination procedure, it's going to remain
uncontaminated.
        The next step in the model is carcass chilling,
and the process here is that you have--sides of beef are
blast air chilled for 18 to 48 hours.  The pathway is that
you can get growth or decline of E. coli 0157:H7 on the
carcass surface, and that's going to be a function of both
the time and the temperature.  And you can also have cross-
contamination from other carcasses, and that's going to be
more likely the more crowded the chiller is.
        In the model, we pooled data from three slaughter
plant studies, from Dorsa and from Gill and Bryant, to come
up with a common distribution, with a normal distribution,
where the mean is 0 and a standard deviation of 1.
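        That pooled chilling result reduces to a single
draw; a sketch, with positive values read as growth and
negative values as decline.

    import random

    log_change_during_chilling = random.gauss(0.0, 1.0)   # mean 0, s.d. 1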
        This is the event tree.  You can hardly see these
things.  What we did was we assumed--and this is something
that we're thinking of perhaps changing.  We have this
contaminated carcass that comes in, and we're assuming that
the truckload is all going to either go into a chill--well,
it will go into the same chiller.  But in that same chiller,
you'll either get growth or you'll get decline.  It depends
on the efficacy of the chiller.
        I was recently in New Zealand, and there they were
suggesting that there is so much variability that even on a
single carcass there can be a 5 degree Centigrade difference
between the temperature of the air coming onto the carcass
and the temperature of the air going off the carcass.
        So I had been thinking about this in terms of the
physical chiller: how it's laid out, how close the carcass
is to the door that opens and closes, or how close it is to
the blast air coming out, where it would be cooler, or how
crowded it is.  So I was thinking of a fixed room with sort
of the geographical flow of air in that room.
        But they were also suggesting, too, that you need
to look at the differences in temperature even on one
carcass, which was a whole new concept, and they have some
data that they're willing to share with us, so we might try
to complicate this part of the model.
        But, again, if you have an uncontaminated carcass,
we're not really modeling the cross-contamination that
possibly could occur.  We're assuming it stays negative.
        What this tree does--and you do have one of these
in your slides.  It doesn't have--
        DR. POWELL:  This is Slide 28 in your handout.
        DR. ROBERTS:  Oh, it's in a different location.
I'll be coming back to talk about fabrication later.
        What this event tree does here--and it's labeled
slaughter event tree in black, so it's a little hard to read
up here--is it summarizes all the steps in the slaughter
process we've discussed so far.  So we have the incoming
fecal status of the animal.  What happens is the hide is
taken off, what is the carcass status, what happens during
the decontamination, evisceration, second decontamination,
and the chiller.  And at the end over here, you're coming up
with, for each pathway in the event tree--so here's one
pathway, and, you know, here's another one.  You have these
19 pathways, and each pathway in the event tree has a
probability and a level of carcass contamination associated
with that pathway.  Then you want to know how many pounds of
meat actually end up going on these different pathways.
        These boxes in black here indicate negative
pathways, so you don't have to be worried about those.
Fortunately, most of them are negative.  However, there are
five positive pathways, and I thought I'd give you a little
bit of discussion on each one.
        Starting at the top, S1 ends up with a positive
carcass.  You have some contamination here.  And that's
because you had a positive animal come in and it
contaminated its carcass.  It's F-positive and C-positive.
And then, in fact, it even went on to contaminate itself
during evisceration, too.  So that's not going to be a very
likely event because evisceration is a low probability
event; only 1 in 100 times does it happen that you get any
contamination there.
        The next pathway, S4, is a little more common, and
that's this one right here.  You have a positive animal
coming in.  It contaminates its own carcass, and then it
gets some reduction during decontamination, but it's not zero.
So it ends up being contaminated.
        S7, you have this carcass that comes in--I mean
that's fecally positive, the carcass is positive, and it
goes through decontamination so that it now becomes negative
during the first decontamination procedure, but then it
contaminates itself during evisceration.
        S15 is where you have a negative carcass that
comes in--I mean, a negative animal that comes in that gets
contaminated by a preceding carcass, so it becomes C+ in the
dehiding and that causes the contamination.  And S15--oh,
that was that one.
        DR. HULEBAK:  S11?
        DR. ROBERTS:  Yes, I guess I was talking about 11.
No, wait a minute.  No, that was right.  Then S11, yes, is
the one I've left out.  Thank you.  That's where you had a
positive animal come in, it did not contaminate its carcass,
but it got contaminated in evisceration.  And that's also
going to be a low probability pathway.
        So the pathways that are most likely in this
scenario are S14 and S15.  They're going to be the most
likely contributors to contamination.
        DR. HULEBAK:  S11 and S15?
        DR. ROBERTS:  S4 and S15.  Yes, S4 and S15.
        This slide you don't have in your handouts, and
what it does is it summarizes the data in the model looking
at the probability of a carcass being contaminated.  And,
again, it's broken down by cattle type.  For the cow/bull
herds, you have a 0.3 percent most likely prevalence, so
only 0.3 percent of the carcasses will be contaminated.  The
upper bound is 2 percent and the lower bound is 0.14
percent.
        For the steer/heifers, you have about a three-fold
greater probability, with the most likely prevalence of the
carcass being contaminated at 0.98 percent.  So it's less than
1 percent.  Upper bound 5 percent, lower bound 0.5 percent.
        The next step in the process--
        DR. POWELL:  If I might, this is Slide 21 in your
handouts.
        DR. ROBERTS:  Thank you, Mark.
        DR. HULEBAK:  Page 11.
        DR. POWELL:  Page 11, No. 21.
        DR. ROBERTS:  This is the last step in the
slaughter model.  In fabrication, you have the carcass
coming in on a rail, and it's cut into steaks and roasts and
other cuts, and the trim is put into vacuum package pieces,
boxes--well, the trim is put into the boxes and the combo
bins.
        For your explanation--you may not know what these
combo bins look like.  They're these enormous cardboard
boxes that are either round or hexagonal-shaped, and they're
lined with an enormous plastic sack, which they then tie off
on the top once it's full.  It will hold 2,000 pounds of
meat, more or less, and they put it on a forklift before
they even load it, you know, when it's empty, and then--I
mean, they put it on a wooden pallet that a forklift can
then lift up and put it right onto the truck.  And you end
up then with--some of these trucks, you know, have up to 20
combo bins that will be on them, and then they'll take them
off to the grinder from the slaughter plant.
        Or if they don't have an immediate shipment going
out, it may be put into the chiller, or they may actually
grind some at the plant and it'll go into a room, be
refrigerated, waiting until they grind it on the premises.
So you have this product coming in from the chiller, and
it's cut into trim and put into these boxes or combo bins,
and some of the product is sent off site, et cetera.
        There was a question earlier about the ratios that
we used, and here in this fabrication process for a
steer/heifer plant, you have--trim from one steer/heifer may
go into five combos, or there may be 30 to 100 animals per
combo bin.  You'll see on the next slide or the one a couple
slides later that 18 percent on average of the meat from a
steer/heifer ends up as trim, and the other 82 percent goes
into roasts and steaks and chops.
        For a cow/bull plant, typically the trim from one
animal goes into two combo bins, but there may be up to 20
animals per combo.  And it depends partly on what kind of
lines they have set up, what it is they're trying to take
off the animals, as to how many they have.  And from a
typical cow, on average, 54 percent of the product ends up
in the combo bin.  These are older, tougher
animals, but they do increasingly, with the improvements in
tenderization, take off more of the roasts and other cuts to
use in other products, whereas almost all of the bull meat
ends up in the combo bin because it is even tougher and I
guess has a stronger flavor as well.
        The pathway where you can have potential
contamination is that detritus can get stuck onto the
equipment, and earlier contaminated meat can contaminate the
fabrication line; that contamination can then be transferred
onto subsequent pieces of meat that come down that conveyor
belt, or contaminate the knives or whatever else.
        You can also have growth of E. coli 0157 if the
fabrication room temperature is not controlled.  Typically,
it's at 50 degrees or less.
        Next slide?
        The level and probability of contamination during
fabrication is dependent on--
        DR. POWELL:  This is Slide 25, page 13 of your
handout.
        DR. ROBERTS:  --is a function of the plant level
quality, and Scott Malcolm on our team developed this index.
And on the x axis you have the level of contamination CFUs
per cm2.  On the y axis you have the probability of
contamination.
        For a plant of good quality, you're going to have
low levels of contamination, and it's going to be under
strict control.  You're going to have a very narrow
variance.  For plants that have not as good control over the
quality, you're going to have higher levels of contamination
on average for the pieces that come through there, and
you're going to have a greater variability associated with
that.  So that's the kind of distribution you get.
        In the model we have roughly 50 percent of the
plants have no change in contamination, no increase, one-
third have a 1-log increase, and 16 percent have a 2-log
increase.
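        A sketch of that plant-quality split; treating the
quoted shares as exact probabilities is an assumption.

    import random

    # Roughly half the plants add nothing, a third add 1 log, and 16
    # percent add 2 logs during fabrication.
    log_increase = random.choices([0, 1, 2], weights=[0.50, 0.34, 0.16])[0]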
        Next slide?
        So in the fabrication model, then, you have the
probability and level of contamination that is going to vary
from plant to plant, depending on the plant quality.  You're
also going to have a probability and level of contamination
that will vary by cattle type.  As you've seen in earlier
slides, we talked about the differences in the prevalence of
contamination depending on the incoming cattle, depending on
what type they were, steers and heifers versus cows and
bulls.
        You also get a more minor effect due to the
differences in the carcass surface contamination in the
combo bins that varies by the cattle type, and this is shown
on the next slide, which is your Slide 26.  It's slightly
different.  We have an extra column added on here.  And what
this column does is this shows you the difference in carcass
weight depending on the animals.  You know, the male animals
are slightly larger.  For the percent of meat, we're
assuming that once you have a carcass and you take out the
bone, the 70 percent that's left is your meat and 30 percent
of it is bone.  The percent of the carcass that's going to
end up in the combo bin is going to vary, as I mentioned
before: steers and heifers, it's 18 percent; cows, it's 54
percent.
        But you also have this difference in the percent
of the contamination that's on the surface of the carcass
that's going to actually end up in the trim, and we're
thankful to Todd McAlewn (ph) for giving us the information
for the steers and heifers which we extrapolated to cows and
bulls.
        What this means, then, is that when you look at
the ratio of the contamination to the trim percentage, you
get very different ratios, with the steers and heifers
actually having a higher probability of having any
contamination that was on the carcass actually ending up in
the trim than you do for the cows and bulls, which have a
different ratio because they have more sterile meat from the
interior of the carcass versus the exterior.
        Next slide?
        These next two slides you don't have, and it's the
summary of the data of the model so far, and what I want to
emphasize here is that the "nc" means not contaminated.
This shows you for the cows and bulls the number of combo
bins that are not contaminated, and if they are
contaminated, what the log contamination ratio is.  So for
the most likely scenario, which is in the middle here, 95
percent of the combo bins will not be contaminated.  The
upper and lower bounds are 98 percent and 77 percent.  And
then you can see that you're getting very low levels of
contamination in this 2,000-pound combo bin.
        Next slide, please?
        The steers and heifers are slightly more
contaminated because they're coming in with more
contaminated animals and you have this ratio of this trim,
the exterior-to-interior effect that I showed you two slides
ago.  The most likely scenario is that 72 percent of the
carcasses--of the combo bins, excuse me, for steers and
heifers will not be contaminated, with the upper and lower
bounds of 83 percent and 34 percent.
        You can see that the log levels are also slightly
greater for steers and heifers than it is for cows and
bulls, but it's still at very low levels.
        This is a summary of the data on the previous two
slides, and, again, you do not have this, but we'll send it
to you.  This shows you the most likely prevalence of E.
coli 0157 levels in the contaminated combo bins.  So we've
taken out the uncontaminated ones, and we're just looking--
no, wait a minute.
 
        I don't know if this is levels or percentages.
Eric, does this make--what is this?
        DR. EBEL:  Those are the prevalences of the
contaminated combo bins on this slide.
        DR. ROBERTS:  Okay.  So this would be the average
of the whole table shown on the previous slide, the
contaminated and the uncontaminated?
        DR. EBEL: The portion of all combos that are
contaminated.
        DR. ROBERTS:  Okay.  So it says then 4.8 percent
of the cow/bull combo bins have some level of contamination,
and 28.1 percent of the steer/heifer combo bins have some
level of contamination, although it's low levels.  And this
shows you the upper and the lower bounds.
        I forgot; it's these next three slides that show
the levels.  I was jumping ahead of myself here and getting
confused.
        So you've seen these slides before in Eric's part
of the talk where we're looking at the distribution and with
the most likely and upper and lower bounds of contamination.
This is the cow/bull scenario, and this one is the
steer/heifer scenario for the uncertainty about the 0157
levels in the combo bins.
        This slide is a summary slide, which shows you the
levels in those combo bins that are positive.  This is the
slide I thought I was looking at three slides ago.  And by
cattle type, you can see what the levels are on average in
these contaminated--you have that whole distribution before,
but in the cow/bull combo bin, you're only talking about
1.15-log CFUs per 2,000 pounds of meat.  And in the
steer/heifer, it's 1.44-log CFUs, slightly higher, per the whole
combo bin of 2,000 pounds of meat.
        Just a couple of wrap-up comments about modeling
variability and uncertainty.  As Eric mentioned, variability
is a state of the world, and in developing this model, I was
really impressed with how, when you're trying to look at the
impact of a whole industry, you're going to have quite a bit
of variability in these models, much more so than in models
of individual plants, because you have so many
different kinds of processes.  Different kinds of things can
go wrong in plants with different procedures, and you're
trying to capture all of these different events in your
model.
        Next slide?
        The uncertainty in our model is a function of the
limited data on plant processes, the limited data on the
performance of these processes, and the problem of measuring
cross-contamination.  I'd also like to point out that we can
reduce our uncertainty greatly by collecting better data on
each one of those three points and by also improving our
modeling of the physical process, and I talked about what we
might be doing in the chiller as an example of this.
        And I'd like to close then with this last slide,
that some of the future modeling scenarios we're thinking of
looking at would be to see what impact reducing the levels
of 0157 in the incoming cattle has on the probability and
levels in the beef trim, and then also to explore various
kinds of changes, either the worst or the best practices in
the plants, and what impact these have on 0157
contamination.  This would be during all
steps that we've modeled: the dehiding, evisceration,
decontamination, steam pasteurization, chilling, and
fabrication practices.
        And we did have some data that was submitted to
the docket from Foodmaker, and they have a rather extensive
program for testing, and we would like to see, if we went to
some of our best practices and compared them with the
Foodmaker combo bin data, what our results would be.
        Thank you for your attention, and I'm ready for
questions.
        MS. OLIVER:  Does the Committee have any
questions, and the invited experts?  Art?
        DR. LIANG:  Art Liang, CDC.  I'm going to probably
get gasps from the audience.  I'm going to ask a stupid
question.  The models by their very nature are
simplifications over reality.
        DR. ROBERTS:  Yes.
        DR. LIANG:  So I was wondering if you or anyone on
the team could discuss by what criteria you choose to
simplify or collapse steps versus increase your precision in
describing a given step in the model?
        DR. ROBERTS:  Well, so far what we've done is
we've just looked at the literature and whether the data
seems to indicate that it's a very risky step or not.  Now,
it could be that it's just not in the literature yet, that
nobody's chosen to study that particular thing, whether
there is a high-risk practice going on.  So there could be
some ignorance on our part here.
        On my last slide, I talked about how we're going
to be looking at changes in practice and how they affect the
model.  Well, they sometimes call that significance analysis
or importance analysis, and that will show us how robust our
model is to the various things that we've assumed, you know,
we put into the model, and then we could possibly be making
some adjustments at that stage if we find out that things
that we thought were important aren't really important.
        I don't know.  Do any of the team members want
to add some more comments to that?
        MS. OLIVER:  Dan?
        DR. ENGELJOHN:  On that issue?
        MS. OLIVER:  No.
        DR. ENGELJOHN:  Dan Engeljohn.  Tanya, on the--
it's Slide 22 in our handout, but it's 29, I think, that was
on the screen, going back to the issue of carcasses
represented in a combo.
        DR. ROBERTS:  Yes.
        DR. ENGELJOHN:  In the comment period that we had
out on 0157, we got information that the combo bin would
represent 300 carcasses.  So--
        DR. ROBERTS:  Would represent how many?
        DR. ENGELJOHN:  Three hundred.
        DR. ROBERTS:  For the steer/heifer plants?
        DR. ENGELJOHN:  I don't--we didn't get a
distinction between steer/heifer or cow/bulls, but I'm just
curious as to where you got your information, and that may
be something we need to follow up on.
        DR. ROBERTS:  Yes, I actually thought that--on the
team, Clare was actually handling the fabrication part, and
I thought that she had followed up on that and made any
changes.  So, frankly, I'm sorry, I can't say.  But she had
originally talked to several people in the
industry in developing that part of the model.
        But each plant is going to be a little bit
different, too, in the way they operate and the size of the
animal they get, the kind of breed that they get in.  So,
you know, I think--I don't know that it would be any fewer
than 300, so maybe you're just saying we ought to raise the
upper end of that range so it would be 30 to 300 rather than
30 to 100.
        MS. OLIVER:  Roberta?
        DR. MORALES:  Roberta Morales, Research Triangle
Institute.  Tanya, I was curious.  When you were talking
about fabrication and you were talking about how they were
starting to fill these combo bins--
        DR. ROBERTS:  Right.
        DR. MORALES:  --that some of them were directly
trucked out and others were stuck in the chiller.  Are all
of them kind of--I would assume--you said there was a fairly
large number of them that went into the truck.  Are they
loaded--
        DR. ROBERTS:  Right, 20.
        DR. MORALES:  Twenty?  Are they loaded directly
onto the truck or--
        DR. ROBERTS:  Yes, with this forklift thing.  They
just--
        DR. MORALES:  Okay.  Is that going to affect the
temperature at which they're stored while they're waiting
for the truck to be loaded versus if they were chilled?  And
is that going to affect growth?
        DR. ROBERTS:  Well, Wayne actually has that part
in his model, but, you know, they actually keep these
fabrication rooms at 50 degrees, and the trucks are backed
up and they're opened to it so the temperature in the truck
is also 50 degrees or less, too.
        DR. MORALES:  Okay.
        DR. ROBERTS:  If that's your question.
        DR. MORALES:  And so when they go into the combo
bins, they're already pretty much at 50 degrees temperature.
        DR. ROBERTS:  Well, they've been chilled for 18 to
48 hours.
        DR. MORALES:  Okay.  So they are pretty much--the
other question I had--
        DR. POWELL:  Before we leave that point--this is
Mark Powell--I think Wayne will respond also to that
comment.  We have dealt with growth primarily in the
preparation segment.
        DR. SCHLOSSER:  I'll just cover that briefly.  I'm
Wayne Schlosser--
        MS. OLIVER:  Can you say your name?
        DR. SCHLOSSER:  Wayne Schlosser.  We actually have
a range of variability of storage practices that we handle
before grinding and after grinding and then on through
preparation.
        DR. MORALES:  Okay.  I had one other question.
When you were describing the incoming steer/heifer
contamination, I don't know much about how cattle are
transported, but in thinking about poultry, when you have a
flock that's--you know, they may or may not be positive, but
the cages in which they're transported can affect whether or
not they end up in the slaughterhouses positive or negative.
        I was wondering, when you were looking at that,
whether or not you considered separating out in your
decision tree there looking at animals that are contaminated
versus not contaminated, and then thinking about whether or
not the transport--the truck was contaminated or not
contaminated, because that would ultimately affect what your
proportion of contaminated animals would be.  And I was just
thinking about this quickly, the way you have this model,
you would have one in three scenarios in which the animal
would be contaminated, whereas if you looked at animals
first and then trucks as contaminated or not, you could
potentially have three out of four scenarios in which they
would come up contaminated, which would be a substantial
difference.
        DR. ROBERTS:  Eric, would you like to answer that
since you handled that in your part of the model?  Nice to
be able to do a hand-off.
        MS. OLIVER:  You need to speak into the
microphone.
        DR. EBEL:  Eric Ebel again.  I think the point you
raise is a real good one, and that's why we've tried to
emphasize the need for evaluation of hide contamination,
because really we think that that environmental source of
0157 is going to relate more to hide than intestinal
carriage.  Again, the reason we're limiting our incoming
depiction of 0157 in live cattle right now to fecal shedding
is that's what data we have to link it to the carcass.  It's
clearly a more complex process than we're currently
modeling, but that's where we're limited right now: the
data linking the live animal to the carcass.
        Thanks.
        MR. SEWARD:  Skip Seward, McDonald's Corporation.
        Just a couple of questions, Tanya.  On the carcass
evisceration discussion that you had relative to the self-
contamination.  Are you referring there to like puncture of
the intestinal tract?
        DR. ROBERTS:  Yes.
        MR. SEWARD:  And you mentioned that that occurred
or you were given information that that occurred 1 in 100
times.
        DR. ROBERTS:  Right.
        MR. SEWARD:  And I'm just curious--it seems like
there would be real good information on that from
inspectors, because I think if that event occurs in a
processing facility, that has to be documented.  And I just
don't--it seems like that's a much higher frequency than at
least what I've been told actually occurs in a production
facility, but I would think that that would be documented
and very easily obtained from inspection reports in a
facility.
        So you said you got that from somebody who worked
in your group and I don't know--
        DR. ROBERTS:  Yeah, Bob Brewer.
        MR. SEWARD:  Maybe that's where they got that, but
that seems--
        DR. ROBERTS:  I'll ask him to double-check on
that, because, frankly, I'm not familiar with it exactly.
        MR. SEWARD:  Thank you.  The second question.  In
regard to steam pasteurization, on our slide 17, where it
talks about you had a triangular distribution range of 0 to
2 logs.
        DR. ROBERTS:  Right.
        MR. SEWARD:  I guess what I'm curious about there
is, does that suggest that you could run a carcass
through a steam pasteurizer and have zero impact on the
microbiological load?
        DR. ROBERTS:  Right.
        MR. SEWARD:  And, again, in talking to everyone I
know in the industry who uses steam pasteurizers, all the
big processors, I doubt if they would agree that if you run
a carcass through a pasteurizer that there's a likelihood
that--any likelihood at all that you would have no impact
whatsoever on--
        DR. ROBERTS:  Well, it's a very low probability
event, because on the triangle, that is just the final
endpoint, and the most likely is that you'll get 1 log, and
then you have up to 2 logs.
        Now, maybe Colin Gill, since I've used your data,
maybe you would like to discuss what you found in the
plants.
        DR. GILL:  Well, I think the--Colin Gill.
        If the steam pasteurizer is operated properly,
then you'll get a 2-log reduction, but there's a tendency in
plants to screw down the temperature and reduce the time so
as to not affect the appearance of the carcass, and the
literature suggests that in at least some plants, these things
are being operated at ineffective times and ineffective
temperatures.  So the zero effect is probably quite
reasonable in some circumstances.
        MR. SEWARD:  Well, that's something that you might
want to check out because all of the raw material suppliers
I know, having made that kind of investment, are not
cheating on the operation of those pieces of equipment.  So
I would--if there's someone out there making that kind of
investment, and then trying to cheat on the equipment, I've
never heard of that, and I think before you just accept that
as fact, you'd want to have some real good hard facts to
support that.
        DR. ROBERTS:  Well, maybe McDonald's would like to
share some information with us, submit it to the docket.
        MR. SEWARD:  I'll certainly talk to the people who
are operating that equipment, and let them know that someone
is indicating that, you know, that those are not being
operated up to performance, because that's certainly not the
experience that I've seen.
        DR. GILL:  Could I just mention that I'm not
saying that I have knowledge myself--
        MS. OLIVER:  Can you identify yourself again,
please?
        DR. GILL:  Sorry.  Colin Gill--that I have
knowledge of anybody who's not operating it.  There's very
little in the literature, but what is published in the
literature, there is one case where the equipment apparently
was not being operated as an appropriate--for an appropriate
time and at its appropriate temperature, and in which they
were recovering substantial numbers of E. coli from the
treated product.  So one can only assume that in some cases
this is happening, because this was apparently a commercial
processor.
        It would definitely be very worthwhile
finding out what was really going on with the use of this
sort of equipment, because I'm sure that some people do not
understand how it operates.
        MR. SEWARD:  A couple more questions if I may.
One on carcass chilling.  Wouldn't a third possibility be
that there would be no change?  You indicated that you would
get--potentially you could get growth or a decline.
Wouldn't a third possibility be just simply no change, or maybe
that's captured and I just missed it?
        DR. ROBERTS:  Yeah.  Maybe I didn't point it out
very well either.  It's the slide that looks like this.
        [Laughter.]
        DR. ROBERTS:  You can't see these distributions
very well, but it actually is a normal distribution with the
most likely value being zero.  And it's divided into halves,
so that you're either going to get growth or a decline, but,
see, most of it, the greatest percentage, actually is at
zero.
        MR. SEWARD:  Yeah, sure, okay.  That's my problem.
Thanks.
        On the level of contamination during fabrication,
I think you mentioned that there were some decisions made on
plant performance, a certain percentage were good, a certain
percentage were bad, if you will, and a certain percentage
were--
        DR. ROBERTS:  Right.
        MR. SEWARD:  Where did those numbers come from?
If you can help me understand how--I didn't quite get the
numbers because I didn't see them in here, but I was just
curious how you arrived at--how the plant performance--
        DR. ROBERTS:  Well, we had three studies, but you
know, it doesn't seem to be mentioned.  The data doesn't
seem to be mentioned on my slide, so that's an oversight.
But Scott took--pulled the data from these three different
studies, and put it together to build this plant quality
index.  We'll have to provide that to you.
        MR. SEWARD:  Thank you.
        I just have one more question, and that is, if
I interpret your model output baseline results correctly,
does that suggest that the model indicates that if you're
using steer heifer meat, that 28 percent of the time you're
using meat that is adulterated, and that if you put that
into ground beef, based on some earlier slides, that that's
going to be multiplied or doubled at least, and so
potentially something like over 50 percent of--according to
the model--over 50 percent of the ground beef coming from
steer heifer beef would be adulterated?
        DR. ROBERTS:  It says that 28 percent of the time
you will have one organism or more in the combo bin.  So
when you think of it on a per-patty basis, if it's
quarter-pound patties, you're going to have 8,000 patties,
so, you know, if it's only one organism, 7,999 will be
uncontaminated.  But the combo bin
itself will have 1 organism or more.
        So Wayne will talk a little bit more about how
that replicates throughout the model.
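        The per-patty arithmetic in that answer works out as
follows.

    combo_lb, patty_lb = 2000, 0.25
    patties = int(combo_lb / patty_lb)   # 8,000 quarter-pound patties
    uncontaminated = patties - 1         # 7,999, with exactly one organism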
        MR. SEWARD:  But for an answer to my question, I'm
sort of--because on an adulteration basis it's on a lot
basis, and it wouldn't matter whether you had one patty that
potentially contained 0157:H7 or all of them.
        DR. ROBERTS:  If you're asking about the policy, I
don't know how to answer it.
        MR. SEWARD:  No.  I'm just trying to interpret the
data.  Is that what the data is saying, is that the
prevalence is that if--at least in raw materials--that 28
percent of the time you're going to have adulterated
materials or it's going to contain E. coli--don't use the
word "adulterated"--it's going to contain 0157:H7.
        DR. SCHLOSSER:  Hi.  This is Wayne Schlosser
again.  Yes.
        MR. SEWARD:  Okay, thank you.
        [Laughter.]
        MS. OLIVER:  Dale?
        DR. HANCOCK:  I have a couple of comments.
        MS. OLIVER:  And can you identify yourself again?
        DR. HANCOCK:  Excuse me.  Dale Hancock, Washington
State University.
        You're validating this to some extent with MPN
counts from FSIS sampling; am I right?  Well, I should
probably start out saying I'm an epidemiologist who's been
forced into some microbiology, so maybe I don't understand
this totally, but to me that should be called an MPN index
rather than an MPN count, because basically we make a
dilution tube series, and it's the MPN count, only under the
assumption that at that endpoint dilution we can detect
those tubes with one and only one organism in them, and I
would go on record as saying that I doubt that the 50
percent detection endpoint for a single tube is as low as 1
or even as low as 10.  And so I think that number is
probably at least ten-fold lower than reality.  At least,
that's--I think that should be considered, and maybe
people who know more microbiology than I do would comment on
that.
        I have one other point.  Should I go ahead--
        MS. OLIVER:  I don't know if Eric and Wayne would
like to say anything in response to Dale on that?   Since
Eric talked about the testing.  Well, maybe Bill would like
to comment on that too.
        DR. EBEL:  This is Eric Ebel.  I guess our
response is that we used the most probable number estimate
as our most likely scenario, but we do have boundaries that
are intended to incorporate both the uncertainty in the MPN
method and just measurement error in the general
sense.  Again, we only have four observations on
concentration estimates anyway.  But your point is well
taken.  You know, the way we've tried to adjust for that is
in our uncertainty about what we think the distribution
looks like.
        DR. HANCOCK:  And it will relate at the retail or
consumer level too, because when we hear--like in the 1993
outbreak--a certain number of CFU per gram, I'm not certain
if that's adjusted for the analytical sensitivity of that
MPN procedure, and it certainly should be, just from an
epidemiologist's viewpoint, because almost certainly the
measured MPN count is much lower, perhaps by 10-fold, than
the actual count.
        The other point I wanted to make--I think Dr.
Ebel--this is Dale Hancock by the way; I didn't say that.
Dr. Ebel mentioned it, and I just wanted to reiterate.  That
is, the hide thing is probably more important than we're
seeing.  He cited a little piece of work we did that
suggested 1.7 percent hide prevalence, but that was one
little dung lock as the carcass was--or the animal was
swinging by, and almost certainly the whole--if you had some
way of measuring the whole thing, it would be higher.  And
actually there's data in--somewhere in Meat Animal Research
Center, not readily available, I guess, that indicates the
hide prevalence is much, much higher than the 1.6 percent,
and that is an area that I think we're going to have to
focus on more.
        MS. OLIVER:  Colin?
        DR. GILL:  Yes, thank you.  Colin Gill,
Agriculture, Canada.
        I'd just like to make a few comments on the
presentation.  You refer to UK data--data from a UK plant as
to cross-contamination.  I'd suggest you should handle that
with extreme caution because that plant is unlikely to be
anything like the high-speed plants in which most of the
beef carcasses are dressed in North America.
        Trimming, steam vacuuming, washing: my own
conclusions were that none of these are effective at all for
removing bacteria from carcasses, although they're useful
for removing visible contamination.
        One thing that sort of puzzles me about both this
presentation and the previous one is that nobody considered
contamination from the head of the animal.  I know that
there's a keen veterinary interest in the other end, but at
head removal, the head meats are heavily contaminated with
generic E. coli.  The head can be handled extensively during
its removal, and presumably 0157 would be spread along with
that.
        I'd also be interested to know from the veterinary
people present what would be the relationship between E.
coli carried in the stomach and E. coli in the feces?  Is
there any necessary relationship between the numbers
involved there?  Could there be a situation where you've got
some in the stomach and none in the feces and vice versa?
        Modeling chillers, I wouldn't try it if I were
you.  The air flows in these things are perturbed greatly by
the way the chillers are loaded, by the size of the
carcasses, by all sorts of things; it's very, very difficult
to get detailed results.  You can, however, get gross
results, and it does appear that in almost any chiller a
fraction of the carcasses will be improperly--will be
inadequately cooled simply because the air flow has been
perturbed and they're not being affected by it.
        On the other hand, you can see the gross effects of
chillers quite easily, and in some, particularly those which
do not employ spray chilling, you do tend to get a
substantial reduction in E. coli numbers.  But it does not
appear to be additive with treatments like steam
pasteurizing: where the steam pasteurizing treatment has
been effective, the subsequent reduction due to the chilling
process will be modest.
        The main point I wanted to make was that some
recent work we've been doing during the last year or two has
indicated that the majority of the generic E. coli that are
found on manufacturing meat emerging from slaughtering
plants are deposited on the meat during the cutting
processes.  The source of this contamination appears to be
inadequately cleaned equipment used in the cutting process.
This is not to say that people aren't trying to clean it;
they just don't realize that there are areas in their
equipment that they can't get at, that they can't see, and
when you get in them and have a look at them and dig the
stuff out, you can find that this is carrying E. coli.
        The increase in numbers can amount to average
increases of more than 4 logs, so it appears that in talking
about the bacteria on the carcass, you're talking about
something that is a vanishingly small fraction of the
total load that goes out on the manufacturing meat, and most
of that is in fact coming from improperly cleaned carcass
breaking equipment.  And if you want to do something about
the problem, that would be the place to start, because it
appears not to be widely recognized, but this is happening.
Thank you.
        MS. OLIVER:  Dane?
        DR. BERNARD:  Thanks.  Dane Bernard.
        I'm not sure I have anything to compare with what
Dr. Gill just shared with us.  However, Tanya, your outputs,
Dr. Seward had called attention to the numbers in the
baseline results, and even in your best case scenario we had
17.1 percent of combo bins with 1 or more 0157 in them.  And
then I glance down at the Foodmaker data and notice,
obviously, a substantial difference.  The obvious answer, I
suppose, is because Foodmaker's is actually based on
testing, and you're not going to test the whole combo,
whereas yours predicts contamination in a combo of 1 cell.
But is there anything
else we can glean from that?  The numbers are strikingly
different.
        DR. ROBERTS:  We haven't really integrated the
Foodmaker data into our analysis yet, so I can't really
comment fully on that.
        MS. OLIVER:  Mike?
        DR. ROBACH:  Mike Robach.
        I just wanted a point of clarification just for my
own mind.  In your model assumptions, I just want to make
sure I understood this, of the carcasses that enter a plant
that are contaminated, are fecally contaminated, visibly
contaminated, am I to understand that your model is assuming
that 30 to 33 percent of these carcasses will be positive
for 0157?
        DR. ROBERTS:  No.  It's saying that, of those that
have 0157 and are shedding it as they enter the plant, 30
percent will contaminate their carcass as their hide is
removed.  But the actual levels come from Eric's data on the
incoming cattle, the numbers that are positive for 0157.  Is
that your question?
        DR. ROBACH:  Well, I guess I'm a little confused,
because I thought when Eric--this is Mike Robach again--when
Eric was concluding his presentation, I thought he said that
visible soiling was not a good predictor of carcass status.
        DR. ROBERTS:  Right, it's not.  So these are
actual--these are estimates in his model that coming into
the slaughterhouse, of what percent of those animals
actually have 0157 in their gastrointestinal tract and are
shedding the organism.  That's the number we're using.
We're not using whether they look fecally contaminated or
not.
        DR. ROBACH:  I also thought that he said there was
no correlation between fecal and hide status.  I'm just a
little confused, you know, how this is all flowing.  Maybe
Eric could enlighten me.
        DR. EBEL:  This is Eric Ebel.
        The critical, I guess, term in those things is
"visible", and our study that we were referencing there
basically just used gross indicators of degrees of
dirtiness, if you will, of the hide.  Actually, the other
study that we mentioned was the one that Dale just talked a
little bit more about, where he actually got paired samples
of feces and hide, or again, one dung lock from the hide of
an animal, and in that data he wasn't able to demonstrate a
correlation between those two statuses.  But in fact, as
Dale pointed out, the sensitivity of the hide sampling in
that case was so low as to make the failure to demonstrate
correlation not unexpected.
        But we are--to make it clear--we are modeling
simply those cattle coming in that have 0157 in their
intestinal tracts, and we're using that as the indication
then of their likelihood of becoming initially a
contaminated carcass.
        And I guess--let me also just comment a little bit
on the combo bin prevalence issue.  I think you, Dane, have
identified the main difference there, which is that we're
talking about surveillance data in that case that needs to
be substantially adjusted for the sample size collected from
each of those combos that the prevalence was estimated from.
I think the same thing applies to our ground beef sampling
evidence, and that's what we've attempted to do in our
comparison between what the model would predict from taking
a similar sample, to what the FSIS sample size is for ground
beef as well, because, as we've pointed out, maybe over 80
percent of the grinder loads are contaminated.  If you take
a 25-gram or 325-gram sample from that grinder load at the
levels of contamination that we are modeling, we can
demonstrate that we would get about the same number of
positive samples as what FSIS has been getting, because of
the low likelihood of actually getting an organism in the
sample.  So the data have to be adjusted for that phenomenon
of sample size and sensitivity of the tests that are used.
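        A hedged sketch of the sample-size effect being
described: if organisms are randomly dispersed at a
concentration of c per gram, a Poisson argument gives the
chance that a sample of m grams contains at least one; the
concentration below is illustrative only.

    import math

    c = 1 / 50000.0                   # e.g., one organism per 50 kg (assumed)
    for m in (25, 325):               # FSIS sample sizes, in grams
        p_detect = 1 - math.exp(-c * m)
        print(m, round(p_detect, 5))  # 25 g: ~0.0005; 325 g: ~0.0065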
        Basically, the representativeness of that one
sample is an indication of what the status of that whole
grinder or combo bin might be.
        Did I help you on your question then?
        DR. ROBACH:  If I could just--Mike Robach again.
Just one more point here.  We've seen a lot of numbers this
morning and a lot of flow charts, and so I am easily
confused about these things.  But from what I understand,
and let's just take, for example, feedlot animals coming in,
the most likely scenario is that you've got 13 percent of
your animals that are going to be presented to the plant
that are going to be shedding 0157.  And of those 13 percent
then entering, between 30 and 33 percent will self-
contaminate during the process; is that correct?
        DR. EBEL:  Yeah, that is correct.  And if you
summarize the other routes of contamination, we get about 40
percent of the incoming prevalence becoming initially
contaminated at dehiding.
        DR. ROBACH:  Because we have 8 percent.  Of those
that may not be shedding, 8 percent will be contaminated by
adjoining animals?
        DR. EBEL:  Right.
        DR. ROBACH:  Thank you.
        MS. OLIVER:  Bill?
        DR. SPERBER:  Just a couple of quick observations.
I'm Bill Sperber from Cargill.
        I don't think we should get too excited about the
fact that combos might have 0157 in them.  If you look back
to the surveillance data from the past 5 years, in the first
3 years of FSIS's survey, 16,500 samples, they had an
incidence rate of about 0.1 percent, 1 sample in 1,000
contaminated.  These were 25-gram samples.  If you assume
one organism in the positive sample, that calculates out to
one 0157 cell per 100 pounds of ground beef.  Scale that
back up to the 2,000 pounds in the combo, and that's 20
cells in the combo.  So we can't go very far down that road
before we run into policy decisions and that sort of thing.
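        For reference, that arithmetic can be made
explicit; note that the exact unit conversion gives a
somewhat higher per-combo count than the rounded one-cell-
per-100-pounds figure quoted:

```python
GRAMS_PER_POUND = 453.6

# One positive per 1,000 samples of 25 g, one organism per positive sample:
grams_per_cell = 25 * 1_000                   # one cell per 25,000 g
pounds_per_cell = grams_per_cell / GRAMS_PER_POUND
print(round(pounds_per_cell))                 # ~55 lb per cell

combo_pounds = 2_000
print(round(combo_pounds / pounds_per_cell))  # ~36 cells per combo (exact)
print(combo_pounds // 100)                    # 20 cells using the rounded figure
```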
        True, 0157 is an adulterant, but it's an
adulterant in the sample size.  It's not an adulterant in
the combo or in 1 million metric tons of beef produced a
year per plant, that sort of thing.
        One brief comment on Dr. Hancock's observation on
most probable numbers.  In my experience I've done some
direct comparisons on MPNs versus petri dish methods for
coliforms, and CFUs on petri dishes are within a factor of 2
from the MPN geometric means.  The trouble with MPNs is
that you have much greater variability.  The 95 percent
confidence limits are very broad compared to direct plating.
So with coliforms it's within a factor of 2, with CFUs on
petri dishes equal or higher, and part of that is due to the
fact that using VRB, you can recover some coliforms that you
don't recover in the AOAC-MPN method, which uses lauryl
sulfate broth that's inhibitory to some coliforms.
        So in principle I think MPNs and other types of
quantitation are fairly close together when you look at
geometric means on many observations.
        MS. OLIVER:  Thanks.
        DR. HANCOCK:  Can I respond to that?  Could I
respond to that briefly?
        MS. OLIVER:  Go ahead, but do it briefly, because
we have several other questions to cover.
        DR. HANCOCK:  While I do tend to agree that it
might be so for something like coliforms, where they're the
dominant organisms, I doubt that it's true for something
that you are trying to detect amongst a huge competing flora
that outnumbers it 10,000 to 1.  In fact, the only way to
test it would be to do a dilution series of known
concentration in a background flora, and see if your
theoretical and your observed agree fairly well, and I don't
know of anybody that's done that.
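        The theoretical side of the test Dr. Hancock
proposes is straightforward; here is a minimal sketch,
assuming Poisson-distributed cells in each tube, with the
tube volume, spike concentration, and recovery parameter
all illustrative:

```python
import math

def p_tube_positive(conc_per_ml, tube_ml, recovery=1.0):
    # Expected cells per tube under Poisson dispersion; 'recovery' is a
    # stand-in for suppression by competing flora, the effect in question.
    return 1.0 - math.exp(-conc_per_ml * tube_ml * recovery)

# Theoretical positive-tube fractions for a 10-fold dilution series
# spiked at a known 100 cells/ml; compare against observed fractions.
for dilution in (1e-1, 1e-2, 1e-3, 1e-4):
    print(dilution, round(p_tube_positive(100 * dilution, tube_ml=1.0), 3))
```

If the observed positive-tube fractions fall well below
these theoretical values, the competing flora is suppressing
recovery, which is the discrepancy Dr. Hancock suspects.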
        MS. OLIVER:  Jim, and then--
        DR. ANDERS:  Jim Anders, North Dakota Health
Department.
        Someone just said there were lots of numbers given
out, and that is clearly one thing.  Just before I came down
here I had gotten an e-mail with a report--I didn't bring it
with me, but it's confusing to me.  It said they were now
saying that over 50 percent of all cattle had 0157:H7.  I
come down here now, and we're talking about only 11 percent
of the animals going into these slaughterhouses being
contaminated.  That in itself is confusing to me, but I also
have a question about sampling size that maybe is--
        DR. ROBERTS:  Well, let me just answer that first
question first.  That was data looking within a particular
herd in the highest season of shedding, where it could be as
high as the number that you quoted, whereas we're looking at
annual averages.
        DR. POWELL:  Is that hide data--Mark Powell--the
report that you're referring to--
        DR. ROBERTS:  I don't think so.
        DR. POWELL:  --is that hide prevalence?
        DR. ANDERS:  You know, I really don't have it with
me, but clearly the way I read it was that all of these
animals were contaminated.
        DR. POWELL:  The difference is important in that
we're not measuring or incorporating the link between hide
prevalence and carcass.  The only link that we have, and one
that we've asked the Committee to focus on, is how can we
better establish the link among the GI, the hide, and the
carcass status?  This right now is the link that we are
using because it's the only one specific to 0157 in the
published literature that we are aware of.  It is from
another country, and we are worried about using it for that
reason.  We would wish that there were data available that
could help us improve those linkages, but if it is hide
prevalence that you're discussing, we are not modeling the
linkage between the hide and the carcass status.  We modeled
the linkage solely between the GI status of the incoming
animals and the carcass status.
        DR. ANDERS:  Thank you.  I do have another
question, though--and thank you; hopefully that cleared that
up.  I'll take a look at that when I get back and see
exactly what they were talking about.
        But I have a question about sampling.  In this
model, at the point where we're actually checking to see if
the meat or the carcass is contaminated, I understand that
most of these studies have been done with 13 samples at 25
grams each, or 325 grams.  Is that correct?  And I guess my
question is this:  if we could show that by testing more
samples, or even more grams, the detection rate was higher,
would it affect this system?  That's an important issue
here, so let me give you a little background.
        Your own laboratory, the USDA laboratory, I believe
in Athens, Georgia--I was at a seminar about a year ago, and
they were telling us, for instance--now we're talking about
foodborne outbreaks, of course--that in sampling the meat
there was a significant difference between 15 samples of 50
grams and 30 samples of 50 grams if they were to try to
isolate Salmonella or 0157:H7.  If we're basing everything
on 13 samples at 25 grams, and they're saying that to really
get the right numbers you should be doing as many as 30
samples at 50 grams, I guess I'm questioning whether that
would make a difference in this model?
        DR. POWELL:  We'll be getting into the samples
that are taken from ground product in the afternoon, but in
all cases we're making the distinction between the apparent
prevalence, based on the nominal rates from the reports, and
the adjusted prevalence where we have taken into account
test sensitivity and sample size.  So when we're talking
about true prevalence, we have taken into account
sensitivity and sample size as opposed to the nominally
reported rates.
        MS. OLIVER:  We'll go to Isabel and then Paul, and
then after that we're going to break for lunch.  Isabel?
        DR. WALLS:  My name is Isabel Walls, from the
National Food Processors Association.
        And this question I think is really for Eric Ebel.
I want to go back to the previous presentation and pick up
on a comment that Dale Hancock made about the sampling.  I
think you indicated that the sampling was seasonal, that
half of all the positives occurred at one sampling time in
the warmer months.  And I want to know if seasonality can be
built into this model?
        DR. EBEL:  Well, yeah.  This is Eric Ebel.
        As I responded to Dale, we would like to do that.
We do have data that we can stratify, to some extent, by
season, and we think that the seasonality is in the within-
herd prevalence.  Again, we think that probably the
proportion of all herds that have 0157 within them stays
pretty constant through the year, but we can see seasonal
fluctuations in the within-herd prevalence.  And, of course,
the issue is how do we model that from one season to the
next?  But it can certainly be done.
        DR. POWELL:  Mark Powell.
        Just as an add-on, again, we have seasonality at
the beginning.  We had data that would allow us to model
seasonality at the beginning of the process, on the farm,
and at the end, in the epidemiologic data.  But unless we
can model seasonality between there, we would not be able to
incorporate that in the full model, because there may be
seasonality in preparation, transportation, distribution,
and slaughter, and those right now are treated as annual
national averages.  So even where we have data that we might
be able to model at a seasonal or a regional scale, we have
to go to the lowest common denominator for the full model.
        MS. OLIVER:  Isabel, did you have any other
questions?
        DR. WALLS:  No.
        MS. OLIVER:  Okay.  Paul?
        DR. MEAD:  Paul Mead with CDC.  I just have a--I
hope a very straightforward question concerning your
assumptions about carcass evisceration.  If I understand
this correctly, your assumption is that when the intestine
is ruptured in the process of slaughtering, the level of
contamination is similar to the level you get with dehiding,
and that the area contaminated is really just, on average,
an inch or two in diameter.  And not being one who has been
in a lot of slaughter plants, I nevertheless have this
notion that somehow splitting open the gut is a bit more
catastrophic an event than that--
        DR. ROBERTS:  Well, they try not to split it open.
        DR. MEAD:  Right.  Well, I understand that, and
that gets to the issue of how commonly this happens.  But I
think the reason I bring it up is, I guess, there are some
who are concerned that it's--that these rare, slightly more
catastrophic events may really be quite important in terms
of introducing high levels of contamination that may in fact
ultimately be more likely to lead to human illness at the
far end of this model.
        And so I guess my question is just how good are
the data leading to these assumptions, because it seems to
me they may have a big influence on your model, speaking
from a completely naive standpoint.
        DR. ROBERTS:  I agree; that's a point well taken.
You try to put all the important events that you can
plausibly have any sort of data for into your model, and
maybe it's only 1 in 1,000 or 1 in 10,000 times that you
would have a more global breakdown of the gut--rather than
just a little knife nick, you'd maybe have the whole
esophagus cut into, or more of an evacuation.
        And then you'd also need to model--since that
would be a noticeable event that you would definitely have
clean-up procedures for, then what would be the impact of
the clean-up procedures as well.  So we would definitely
welcome any data that anybody knows of that they could
submit to the docket on this particular issue.
        MS. OLIVER:  Paul, did you have any other
questions?
        DR. MEAD:  No.
        MS. OLIVER:  Okay, thank you.
        Before we break for lunch, I have two
announcements and that is, there were a number of people
over the last two days who had asked questions about
reimbursement either from the last meeting or from this
meeting.  Right before lunch Karen Hulebak and Kathy DeRover
will be available--Karen for questions and any reimbursement
for the last meeting, and Kathy for questions on this
meeting--so you can stop by.
        And then we're going to break for lunch for an
hour, come back about 1:10.  Thank you.
        [Whereupon, at 12:07 p.m., a luncheon recess was
taken to reconvene at 1:10 p.m., this same day.]
 
 
A F T E R N O O N   S E S S I O N
(1:16 p.m.)
        MS. OLIVER:  I'd like to get started.  We're going
to take 10 minutes of questions after each of the sessions
this afternoon, rather than the 15 minutes for this morning.
Then we'll go into the questions for the Advisory Committee,
and Mark Powell will lead that session in the afternoon.
        And we know that the time will be limited and cut
short a little bit.  I know some of you will have flights,
but for those of you who want to have input, we'll go till
5:30 for those who want to stay and have additional input
for the afternoon.  And then I'll introduce the session
again later.
        The first session this afternoon will be Wayne
Schlosser on preparation, and Karen Hulebak will once again
give the introduction on the questions to keep in mind
during this presentation.  Thank you.
        DR. HULEBAK:  Okay.  For preparation, keep in mind
the following two questions.  First, rather than modeling
beyond the last point where validation is currently possible
in raw ground beef, would it be preferable to consider
simply a proportional relationship between the prevalence of
0157 in raw ground beef and the incidence of illness due to
raw ground beef?
        Second.  How do we define a plausible frequency
distribution for extreme time/temperature handling
conditions in the absence of data?
        DR. SCHLOSSER:  Hi.  I'm Wayne Schlosser.
In your handout I have removed two of the slides, Slide 45
on page 23, and Slide 62 on page 31.  And it does seem like
a lot of slides, but I can assure everyone that we will get
through these in 45 minutes.
        In previous presentations we have seen how we
model the presence of E. coli 0157:H7 in cattle on farms,
during transportation, in markets, and during the conversion
of cattle to beef carcasses.  In the preparation segment,
either trimmings or whole carcasses are converted to ground
beef.
        This is an outline of the subjects we'll be
covering today.  As with earlier segments, we will simply
use 0157 as an abbreviation for E. coli 0157:H7.
        The purpose of the preparation segment is to
determine the number and extent of human exposures to 0157
from prepared ground beef products.  This segment models
beef from slaughter through grinding and distribution to
preparation.
        As outlined in the initial presentation on the
project, the scope is limited to assessing consumer exposure
to 0157 in ground beef.  Although there are many other
sources of 0157 other than ground beef to which consumers
are exposed, these sources will not be modeled.
Furthermore, simulation of how contaminated ground beef may
lead to contamination of other products is beyond the scope
of this model.
        The preparation segment models the growth, decline,
and dispersion of 0157 for each of four types of ground beef
products:  ground beef that's intended for use as hamburger
in homes or within institutions, and ground beef intended
for use as an ingredient in beef-based products such as meat
balls or meat loaf within homes or within institutions.
Ground beef intended for use as an ingredient in dishes that
require intensive cooking and granulation of the ground
beef, such as chili or spaghetti sauce, is not specifically
modeled.
        The output from the preparation segment consists
of the number of contaminated servings and the distribution
of bacteria within those servings.  These are national
estimates.  The range of values returned reflects our
uncertainty about the actual number of contaminated servings
and the concentration in those servings.
        The preparation segment is a multi-path model that
simulates grinding, distribution and preparation of ground
beef for particular product types and locations.  A complete
model simulation consists of all combinations of product
types and locations.  This approach allows the entire model
to calculate total exposures in the population, as well as
allowing for more rapid evaluation of possible mitigation
strategies.
        This segment is designed to separate our
uncertainty about values and distributions from the
variability inherent in any biological system.  As such it
consists of three separate models:  growth, cooking, and
consumption and exposure.  These three models are then
processed sequentially to provide a distribution of
exposures.  Because of the complex structure of the model,
which requires summarizing sub-module outputs before
simulating the next sub-module, and the large number of
iterations needed to accurately model ground beef
consumption, we employ Visual Basic for Applications within
the Excel and @Risk computing environment.
        This is a simplified diagram of the preparation
segment.  Input from slaughter and data on consumption
determine the initial number of organisms in a serving.
The effect of growth and cooking is determined by additional
factors and added to the initial number to arrive at the
final exposure dose.  Multiple iterations through a single
simulation give us the frequency of different exposures for
a given set of uncertainty inputs.
        The preparation segment relies on two types of
input variables.  Product fraction inputs determine the
amount of product that goes into each pathway, and
concentration inputs then determine the amount of bacteria
present in the product.
        These are some examples of product fraction
inputs.  All product fraction inputs reflect uncertainty
only.  For example, there is a certain proportion of
hamburger that gets used in the home, but we don't know
exactly what that proportion is.
        These are some examples of concentration inputs.
Concentration inputs generally reflect both uncertainty and
variability.  For instance, we know that the time hamburger
is stored in the home varies from minutes to days.  This
represents the variability of storage practices.
Additionally, we are uncertain as to the proportions of
hamburger that are stored for the various times.
        As we have seen, some of our inputs will
incorporate uncertainty only, while others will incorporate
both uncertainty and variability.  Our final distributions
are reflective of both the uncertainty and variability of
the underlying distributions.
        The growth process of the preparation segment
simulates the effect of times and temperatures on the
numbers of 0157 bacteria in hamburgers and ground-beef based
products in homes and institutions.  The output is a
frequency distribution that describes the variation of logs
of growth expected for various combinations of times and
temperatures.
        Additional frequency distributions describe the
uncertainty attendant with the estimate of the original
distribution by illustrating the effect of assuming less
compliant and then more compliant processes.
        Beef trim and the subsequent ground beef are
subjected to a variety of storage conditions in the
continuum.  Ground beef may be stored under ideal conditions
in one part of the continuum and subjected to extremes of
time and temperature in another part.  The amount of growth
that takes place depends on the storage temperature, the
length of time the product is stored, and the thermodynamics
of the product, which influence the internal product
temperature.  We model the temperature and time of storage.
For modeling purposes, we have assumed
that the storage temperature of the product is the same as
the internal product temperature.  This is consistent with
the Food Code published by FDA and adopted by many states,
which bases correct storage temperature on internal product
temperature.  The percent of non-compliance and the extent
of non-compliance with the Food Code represent elements of
uncertainty in the model.
        Growth equations have been developed to predict
growth of 0157 given parameters of time, temperature, and
possibly pH, sodium chloride content, and other variables.
One set of equations was developed by Buchanan, and it was
later incorporated into the pathogen modeling program
available from ARS.  Another set of equations was
subsequently developed by Marks.
        Walls conducted a comparison of predictions from
the pathogen modeling program with observations of growth of
0157 in ground beef, and concluded that the pathogen-
modeling program offers reasonably good predictions of
growth in raw ground beef.
        Since the Marks equations were developed after the
Walls comparisons, we compared the predictions from the
Marks equations with predictions from the pathogen-modeling
program and the Walls observations.  This chart of the
predicted lag period durations shows that the Marks
equations also gave reasonably good predictions.  Also, the
Marks equations use temperature as the only parameter.  This
is important, because such a parsimonious model can be used
in a wider variety of scenarios with less uncertainty
regarding unknown inputs.
        This chart shows the predicted generation times,
and this chart shows the predicted times for a 3-log
increase of organisms.
        Thus, the following sets of equations are used to
predict growth of 0157 in ground beef.  LPD here is the lag
period duration.  GT is generation time.  And MPD is maximum
population density.
        Ground beef is stored in a variety of ways.  The
growth response of 0157 suggests, though, that we're not
generally interested in modeling refrigerated storage.
Thus, the critical factor in determining the amount of
growth of 0157 in ground beef is not the time of storage,
but rather the time of storage at temperatures out of
compliance with the Food Code, that is, above 5 degrees
Centigrade or 41 degrees Fahrenheit.
        Modeling compliance with Food Code requirements
entails modeling both time and temperature as linked
variables.  Since the Food Code allows the product to be
above 5 C for up to 4 hours, it is possible for ground beef
to be stored at temperatures that would allow for growth of
0157 and still be stored in compliance with the Food Code.
Thus, we model ground beef stored in compliance at
temperatures from 5 C to 35 C, and at times from 0 to 4
hours.
        In addition to being uncertain about the
probability of ground beef being stored in compliance with
the Food Code time and temperature requirements, we are also
uncertain what form such compliance or non-compliance may
take.  There are obviously many combinations of time and
temperature to which a product can be exposed.  Even if we
knew the distribution of these combinations with certainty,
we would still be faced with a great deal of variability in
the storage conditions of ground beef.  Unfortunately, we
don't have data that suggest how ground beef that is in
compliance is stored.  Under such circumstances we would
normally model these variables with the least informed
distribution possible, which is a uniform distribution.
Nevertheless, we can make assumptions about how these
variables might be distributed, and evaluate the effect of
those assumptions on the model.
        This chart shows the different types of compliance
scenarios modeled for temperature.  Institutional ground
beef considered in compliance was modeled at temperatures
from 5 C to 35 C under three different scenarios.  In the
first scenario, the frequency distribution for the storage
temperature of ground beef was skewed toward 35 C and the
time was skewed up toward 4 hours.  We designated this
"Least Compliant."  In the last scenario the frequency
distribution for the storage temperature of ground beef was
skewed toward 5 C and the time was skewed toward 0 hours.
We designated this "Most Compliant."  The middle scenario
used uniform distributions.  Thus, any temperature from 5 C
to 35 C was considered equally likely, as was each time of
storage from 0 to 4 hours.  This chart shows the different
types of compliance scenarios modeled for time.  Time and
temperature scenarios are correlated within the model.  Less
compliant temperature scenarios correspond with less
compliant time scenarios.
        Ground beef stored out of compliance with the Food
Code would be stored at internal temperatures greater than 5
C for longer than 4 hours.  Again, we are uncertain as to
the distribution of storage times and temperatures in non-
compliant scenarios.  Therefore, we modeled three different
scenarios in a method similar to the one used for compliance
storage.
        This chart shows those different types of non-
compliant scenarios modeled for temperature.  Non-compliant
time scenarios were handled in a similar manner, with
possible times ranging from 4 to 10 hours.  Again, as with
the compliance scenarios, time and temperature are
correlated.  Less compliant temperature scenarios correspond
with less compliant time scenarios.
        Although the frequency of non-compliance in the
home is modeled differently than the frequency of non-
compliance in institutions, the distribution type of non-
compliance is modeled the same for both home and
institutional users for a given scenario.  Thus, when we
model the least compliant scenario for institutional users,
we also model the same scenario for home users.  The effect
of linking these two scenarios is to increase the final
uncertainty in the model.
        The growth portion of the preparation segment
assumes that ground beef is subjected to 6 opportunities for
time and temperature non-compliance.  At each of these
opportunities an individual time and temperature is modeled
for the product.  Growth is then modeled for 6 sets of 64
pathways for a total of 512 separate pathways.  A pathway
set consists of one pathway assuming compliance at all 6
stages, and 63 pathways assuming lack of compliance at each
combination of the 6 stages.  Each set is then replicated
using "Least Compliant", "Most Compliant" and "Uniform"
assumptions about the degree of compliance or non-
compliance.  These three sets are then replicated for homes
and institutions.
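        For the pathway bookkeeping just described, here is
a minimal enumeration sketch.  Note that the counts as
spoken do not quite multiply out:  6 sets of 64 pathways
gives 384 rather than 512.

```python
from itertools import product

STAGES = 6   # opportunities for time/temperature non-compliance

# One pathway per compliant/non-compliant combination across the stages:
pathways = list(product((True, False), repeat=STAGES))   # True = compliant
print(len(pathways))                                     # 64 = 2**6

# Each 64-pathway set is then replicated under three compliance
# assumptions and for two locations, as described:
scenarios = ("least_compliant", "uniform", "most_compliant")
locations = ("home", "institution")
print(len(pathways) * len(scenarios) * len(locations))   # 384 pathways in all
```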
        Since there's not sufficient data to construct a
completely accurate model of the growth of 0157 in ground
beef, it is necessary to make assumptions about how 0157
reacts to certain environments and how ground beef products
are handled.  The model assumes that as a product moves from
one stage to the next, the internal temperature of the
product is achieved immediately.  In reality the outside of
the product would reach temperature first and the inside of
the product last.  Constructing cooling curves, however,
would require knowledge of additional variables that we do
not have.  The result would be a much more complicated model
that would not be any more useful, because the underlying
assumptions would be arbitrary.
        The model assumes that all 0157 strains exhibit
the same growth characteristics regardless of the ground
beef product therein.  It further assumes that temperature
is the only significant variable that predicts growth.  We
do know that factors other than temperature also influence
the growth of 0157.  Nevertheless, the simplification is
necessary for modeling.
        It is reasonable to assume that 0157 bacteria
exposed to significantly different storage conditions would
need additional time to adjust to those conditions and enter
into a rapid growth phase.  Nevertheless, we have chosen to
model the lag period duration as a cumulative percentage
that begins at 100 percent and decreases as product is
subjected to varying temperatures at the different stages
along the continuum.  This is a simplifying assumption that
keeps us from needing to make additional assumptions about
when to restart calculations for lag period duration.
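        A minimal sketch of the lag-consumption bookkeeping
described here.  The lpd_hours and gt_hours functions are
placeholders for the fitted temperature-only equations, and
the maximum population density cap is omitted for brevity:

```python
def log10_growth(stages, lpd_hours, gt_hours):
    """stages: list of (temperature_C, hours) storage steps.
    lpd_hours/gt_hours: functions of temperature returning lag period
    duration and generation time -- placeholders, not the fitted forms."""
    lag_remaining = 1.0      # lag tracked as a fraction, starting at 100%
    growth = 0.0
    for temp_c, hours in stages:
        if temp_c <= 5:      # treat refrigerated storage as no growth
            continue
        lag = lpd_hours(temp_c)
        lag_used = min(hours, lag_remaining * lag)   # hours spent consuming lag
        lag_remaining -= lag_used / lag
        if hours > lag_used:                         # growth once lag is consumed
            growth += (hours - lag_used) / gt_hours(temp_c) * 0.301  # log10(2)
    return growth
```

The point of tracking lag as a cumulative fraction is that
lag consumed at one stage carries over to the next, so the
calculation never has to restart.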
        Gill reported that frozen patties from
manufacturing and retail plants generally had lower log mean
numbers of E. coli bacteria than chilled patties.  It was
also noted, however, that the process for production of
chilled patties was distinct from the process for frozen
patties, and that the chilled patties may have had
opportunities for bacterial growth not experienced by the
frozen patties.  He also noted in his discussion that
freezing is likely to produce only small reductions in the
number of bacteria.  We have thus made the assumption that
freezing has no effect on bacterial numbers.
        Gill reported on increases of total bacteria,
coliforms and E. coli bacteria in beef trimmings at
slaughter plants, and the subsequent ground beef in retail
establishments.  Using those results, we calculated expected
values for E. coli bacteria in beef trimmings collected at
the slaughter plant and for ground beef on display at a
retail outlet.  The difference in expected values between
these two sites was 0.25 logs.
        Gill also reported on increases of total bacteria,
coliforms and E. coli bacteria in hamburger patties from
patty manufacturing plants and from retail outlets.  Using
these results, we calculated the difference in the mean logs
for the manufacturing plants and for the retail outlets, and
the difference between these two sites was 0.57 logs.  This
was most consistent with a compliance scenario that was
skewed toward the left, or toward the more compliant.
        We thus chose to model compliant times and
temperatures as truncated exponential distributions.  An
exponential distribution requires only a single parameter,
the expected mean.  For storage time we set the mean at 1
hour with a lower bound of 0 and an upper bound of 4 hours.
For temperature we set the mean at 5 C with a lower bound of
0 and an upper bound of 35 C.  We further modified the
distribution so temperatures below 5 C would equal 5 C.
This was necessary to avoid calculation errors as the
temperature reached 0.
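        A minimal sampling sketch for the truncated
exponential distributions just described, using the
inverse-CDF method; the parameter values are the ones
quoted, but the function itself is illustrative:

```python
import math, random

def truncated_exponential(mean, low, high):
    # Inverse-CDF draw from an exponential distribution restricted
    # to [low, high].
    f_low = 1 - math.exp(-low / mean)
    f_high = 1 - math.exp(-high / mean)
    u = f_low + random.random() * (f_high - f_low)
    return -mean * math.log(1 - u)

# Compliant storage as described: time with mean 1 h on [0, 4 h];
# temperature with mean 5 C on [0, 35 C], then floored at 5 C.
hours = truncated_exponential(1.0, 0.0, 4.0)
temp_c = max(5.0, truncated_exponential(5.0, 0.0, 35.0))
print(hours, temp_c)
```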
        Representative surveys of actual storage practices
of ground beef and ground beef products at all stages of the
continuum in the US are needed to validate assumptions
regarding frequency and degree of non-compliance.  Also,
enumeration of 0157 bacteria grown in a variety of ground
beef products under varying conditions will allow
construction of better predictive models.  Such research may
also identify high-risk items that can then be more closely
monitored.
        The cooking process of the preparation segment
simulates the effect of cooking in hamburgers and ground
beef based products in homes and institutions.  The output
is a frequency distribution that describes the variation of
log kill expected for various cooking temperatures.  Two
additional frequency distributions describe the uncertainty
attendant with this estimate by illustrating the effect of
assuming less compliant and more compliant processes.
        Nearly all ground beef is consumed cooked.
Effective cooking is dependent on the cooking temperature,
the storage temperature prior to cooking, and again, the
thermodynamics of the product.  We model the effects of both
the cooking temperature and pre-cooking storage.  Rather
than modeling the thermodynamics of the product, we have
assumed that certain processes will lead to certain internal
temperatures of the product.  The temperature to which
ground beef products are cooked is dependent on a variety of
factors.  We model cooking temperature based on the degree
of compliance with the Food Code for institutional users.
For home users we estimate cooking temperature from results
of surveys that capture consumer cooking habits which are
based on visual cues.  These visual cues then correspond to
a range of actual temperatures.
        Juneja determined the number of surviving 0157
versus the internal temperature of hamburgers inoculated
with an initial load of 6.6 logs of bacteria.  Internal
temperature of the hamburgers ranged from 56 to 74 C.  The
log of the surviving 0157 was then measured, and resulted in
the linear regression equation shown here.  Juneja noted
that 73 percent lean ground beef patties of 100 grams,
cooked to an internal temperature of 68, would have a 4-log
reduction of a 5-strain cocktail of 0157.
        This is consistent with a report by Jackson that
78 percent lean ground beef patties of 114 grams, inoculated
with 6 logs of bacteria and cooked to an internal
temperature of 68, would have about a 4.1-log reduction with
a standard deviation of 0.5 logs.
        Semanchek reported variability in heat resistance
among 3 strains of 0157, and concluded that exposure to
different environments may select for resistance to sub-
optimum conditions or subsequent stress.  Also, Jackson
reported that the response of 0157 to cooking appeared to
be related to original storage temperatures.
        Juneja has demonstrated the linear relationship
between cooking temperature and the log reduction of 0157 in
ground beef.  Jackson also demonstrated this linear
relationship, and the data also includes the effect of
storage conditions on product before cooking.
        Now, Jackson did not report on the effect of
cooking at temperatures greater than 68.3.  We extrapolated
the effect to these higher temperatures in the following
manner.  To predict a reduction of 0157 at temperatures
above 68, we assume a linear relationship in accordance with
the report from Juneja.  Using this assumption, we conduct
bootstrap sampling for each of the 9 Jackson pre-treatments,
using the mean log reduction and standard deviation to
create simulated data points.  Since Jackson reported
results based on 6 data points for each of the 27 pre-
treatment and cooking temperature combinations, we created 6
points for each simulated pre-treatment and cooking
temperature.
        From these data points, 18 for each pre-treatment,
we estimate the linear regression parameters, the y-
intercept, the slope and the standard error of the y.  Each
iteration resulted in new linear regression equations for
each of the pre-treatments, depending on the 18 simulated
data points.  These different equations, with their expected
values and standard errors, were used to predict the log
reduction for temperatures up to 77, which was the highest
temperature at which log reduction was calculable by the
Juneja equation.
        This chart shows the comparison of the predictions
of the Juneja linear regression equation with the output of
the bootstrap model, including 95 percent confidence limits
for storage at 3 C for 9 hours.
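        A minimal sketch of the bootstrap step just
described.  The temperatures, means, and standard deviations
below are illustrative stand-ins for Jackson's reported
values, and numpy's least-squares fit stands in for the
team's regression step:

```python
import numpy as np

rng = np.random.default_rng(0)
temps = np.array([60.0, 64.0, 68.0])   # cooking temperatures (illustrative)

def bootstrap_regression(mean_logred, sd_logred, n_points=6):
    # Simulate 6 data points per cooking temperature from the reported
    # mean and SD of log reduction, then refit a line on all 18 points.
    xs, ys = [], []
    for t, m, s in zip(temps, mean_logred, sd_logred):
        xs.extend([t] * n_points)
        ys.extend(rng.normal(m, s, n_points))
    xs, ys = np.array(xs), np.array(ys)
    slope, intercept = np.polyfit(xs, ys, 1)
    se = (ys - (intercept + slope * xs)).std(ddof=2)   # standard error of y
    return intercept, slope, se

# Each iteration yields a new equation, which is then used to predict
# log reductions at temperatures up to 77 C, as described.
b0, b1, se = bootstrap_regression([1.5, 2.8, 4.1], [0.6, 0.5, 0.5])
print(b0 + b1 * 77.0, se)
```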
        We determined the internal product temperature of
hamburgers prepared in institutional settings to be a
function of compliance with the Food Code.  The Food Code
requires that hamburgers or other products containing ground
beef be cooked to an internal temperature of 68.  We
therefore constructed the model to simulate the effect of
cooking in cases of compliance, 68 and above, and in cases
of non-compliance, below 68.
        As with temperatures in the growth model, though,
it is considered that some non-compliant hamburgers may
have reached temperatures close to 68, while other non-
compliant hamburgers may have reached much lower
temperatures.  And some hamburgers that are in compliance
may have only just reached 68, while others may have reached
much higher temperatures.  The actual frequency distribution
of cooking temperatures may have a significant effect on the
log reductions observed in the model.  So we again determine
both the uncertainty and variability of log reductions
through cooking.
        Non-compliance with cooking requirements in
institutions is based on consumption information from the
1995 Continuing Survey of Food Intakes by Individuals.
Institutional hamburgers considered in compliance were
modeled at temperatures from 68 to 77, with three different
scenarios.
        In the first, the frequency distribution for the
pre-treatment of hamburgers was skewed toward those pre-
treatments that were considered most abusive.  The frequency
distribution for temperature was skewed toward 68.  This was
designated least compliant.
        In the second scenario, the frequency distribution
of pre-treatment of hamburgers was skewed toward those pre-
treatments considered least abusive.  The frequency
distribution for temperature was skewed toward 77, and the
scenario was designated most compliant.  The third one used
uniform distributions.  This chart shows the three
cumulative distributions for each of the scenarios for
compliant institutional hamburgers.
        As with compliant hamburgers, those considered not
in compliance were modeled with three different scenarios.
In the non-compliant scenarios, temperatures ranged from 54
to 68.  This chart shows the three cumulative distributions
for each of the scenarios for non-compliant institutional
hamburgers.
        We consider it less likely that hamburgers will be
cooked to a given internal temperature in the home than in
institutions.  Institutional cooking is subjected to
regulation regarding product temperature, and institutional
cooks are more likely to have access to accurate measurement
devices.  Thus, in the home another method of determining
internal product temperature for the purpose of modeling is
used.
        Klontz distinguished between two categories of
hamburgers, rare and medium, defined as at least some pink
in the middle, and medium well and well, defined as no pink.
We used information from the 1995 Continuing Survey of Food
Intakes by Individuals to model the fraction of hamburgers
served rare or medium.  Unfortunately, there is not as good
a correspondence between these designations and internal
product temperature as we would like.
        Lu reported on a study in which two replicates of
five previously frozen hamburger patties were cooked to
internal temperatures of 68, 71 and 74 degrees.  In one
replication, patties cooked to 68, 71 and 74 would have been
considered as cooked medium.  In the other replication,
hamburgers cooked to 68 would have been considered as cooked
medium.  Thus, there is considerable variability in the
final appearance of cooked hamburgers given the same
formulation and the same internal temperature.
        Hamburgers considered to have been cooked to a
medium degree of doneness may have reached internal
temperatures of 68, or even as high as 74.  Hamburgers
considered rare or medium are thus modeled as having reached
internal product temperatures of anywhere from 54 to 74 C.
        Furthermore, for the purposes of this model,
hamburgers considered medium well or well-done are
considered to have reached temperatures from 65 to 77.  As
with institutional cooking, it is probable that the actual
frequency distribution of cooking temperatures within these
two categories would have a significant effect on our
ability to predict log reductions.
        As with institutional cooking, home-cooked
hamburgers considered medium-well or well-done--what we
consider as in compliance--were modeled with three different
scenarios.  Temperatures ranged from 65 to 77 in each of
those, with weights toward least compliant, most compliant,
and uniform.  Outputs for each of these three scenarios, for
both rare-medium and medium-well or well-done hamburgers,
were captured and compared to determine if the underlying
frequency distribution would have an effect on the log
reduction predicted.
        This chart shows the three cumulative
distributions for each of the scenarios for medium-well,
well-done home-cooked hamburgers.  Home-cooked hamburgers
considered rare or medium were modeled at temperatures from
54 to 74 with the three different compliance scenarios.
        We have assumed that ground beef used as an
ingredient in products such as chili, spaghetti, soup and
other such products will be thoroughly cooked to an extent
that would kill all 0157 present.  This is because the
ground beef is pre-cooked in a granular form and then
subjected to further cooking.
        In products that use ground beef as a major
ingredient, we have assumed that cooking practices will
parallel cooking practices for hamburgers.  On the one hand,
we may think that consumers would be less likely to eat rare
hamburger than rare meatloaf.  On the other hand, we do not
have data describing the distribution of cooking practices
for other ground beef-containing foods.
        It is reasonable to assume that many individuals
cook hamburgers to higher temperatures than this model
assumes.  Jackson, however, did not study cooking beyond 68,
and Juneja did not study cooking beyond 74.  The linear
relationship between cooking and reduction of the number of
0157 organisms is based on an initial concentration of 6.6
logs.  This relationship predicts elimination of all 6.6
logs of organisms at around 77 C.
        It may be reasonable to assume that a higher
initial load of 0157 may be affected not in direct
correspondence to this relationship, but rather
proportionally to it.  In other words, if a product was
originally contaminated with 10 logs of 0157, it would also
achieve complete elimination of all microorganisms at about
77.  Although this assumption is intuitively appealing,
there are no data to support it.  Therefore, we have chosen
to model the reduction of 0157 in direct correspondence with
results of experiments that had lower inocula than those
predicted in the model.
        The purpose of the exposure process of the
preparation segment is to combine input from the slaughter
segment with the output from the growth and the cooking
processes to determine the frequency of contaminated
servings and the distribution of bacteria within those
contaminated servings.
        The majority of ground beef is used in hotels,
restaurants and institutions.  Ninety-eight percent of this
product comes directly from grinders.  Retail establishments
use coarse ground beef and mix it with trimmings produced
in-house.  Retail establishments also buy case-ready chubs,
which are plastic tubes filled with 5 to 10 pounds of ground
beef.  About 22 percent of retail ground beef includes at
least some retail trimmings.  A commercial lot of ground
beef is modeled as a uniform distribution from 2 to 15 combo
bins of 2,000 pounds each.  We model a retail lot of ground
beef as a uniform distribution from 50 to 400 pounds.
        The Food Code specifies a holding temperature of 5
C or below for ground beef and products made with ground
beef.  The proportion of product considered in non-
compliance is based on assuming that the likelihood of
trained individuals in the food service industry allowing
product to remain above 5 C for longer than 4 hours could be
as high as 1 out of 100 or as low as 1 out of 10,000.  We
have assumed that the likelihood of untrained individuals in
the home allowing product to remain at temperature above 5 C
for longer than 4 hours could be as high as 1 out of 10 or
as low as 1 out of 1,000.
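        The likelihood ranges just quoted can be sampled
directly.  The presentation doesn't specify the distribution
between the bounds; the log-uniform choice below is an
assumption for illustration, not the team's method:

```python
import math, random

def draw_noncompliance_rate(low, high):
    # Log-uniform draw between bounds that span orders of magnitude
    # (an assumed form; the actual distribution isn't stated).
    return 10 ** random.uniform(math.log10(low), math.log10(high))

p_institution = draw_noncompliance_rate(1 / 10_000, 1 / 100)  # trained handlers
p_home = draw_noncompliance_rate(1 / 1_000, 1 / 10)           # untrained handlers
print(p_institution, p_home)
```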
        The Code also requires that hamburgers or other
products containing ground beef be cooked to an internal
temperature of 68 C.  Uncertainty about the level of non-
compliance with this requirement for institutional cooking
is based on the Continuing Survey of Food Intakes by
Individuals, and it is modeled as the beta distribution
shown here.  Modeling of non-compliant cooking within the
home is also based on the CSFII, and modeled as the beta
distribution shown.
        Ralston conducted an analysis of the 1995 CSFII,
which was based on reports of about 15,000 individuals and
covered about 30,000 days of observations.  This table shows
consumption of hamburgers at home and away from home for
four different age categories.  This table shows
consumption of ground-beef products at home and away from
home for the four age categories.
        The presence or absence of clustering of 0157 in
ground beef is an important but unknown factor.  If CFUs
tend to be clustered, we would find fewer exposures but
larger doses.  And we'll make the assumption that clusters
of CFUs would be randomly distributed in contaminated ground
beef.
        Although there are no data to support the presence
or absence of clustering of 0157 in ground beef, we assume
that clustering follows a binomial process.  In other words,
the probability of a CFU of 0157 being clustered with
another CFU of 0157 is fixed but unknown.  The number of
0157 CFUs in a cluster will then vary and is directly
calculable if the probability of clustering is known.
        In the model, the number of clusters is calculated
by dividing the number of CFUs by the modeled mean cluster
size.  The mean cluster size is equal to the mean of the
negative binomial distribution plus 1.  The negative
binomial distribution returns the number of clustered CFUs
before a non-cluster event.  The number of 0157 CFUs in each
cluster is then simulated and summed using a negative
binomial distribution.
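        A minimal simulation sketch of the clustering
calculation as described; the function name and the example
clustering probability are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_cfus(total_cfus, p_cluster):
    # Mean cluster size = mean of the negative binomial + 1, where the
    # negative binomial counts clustered CFUs before a non-cluster event.
    mean_size = p_cluster / (1 - p_cluster) + 1
    n_clusters = max(1, round(total_cfus / mean_size))
    # Simulate each cluster's size: failures (clustered CFUs, probability
    # p_cluster each) before the first non-cluster event, plus the seed CFU.
    return rng.negative_binomial(1, 1 - p_cluster, n_clusters) + 1

sizes = cluster_cfus(100, p_cluster=0.5)
print(len(sizes), sizes.sum())   # ~50 clusters totaling roughly 100 CFUs
```

With p_cluster near 0, almost every CFU is its own cluster;
near 1, CFUs concentrate into a few large clusters--fewer
exposures but larger doses, as noted above.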
        This again shows the structure of the preparation
segment.  Remember that outputs from growth, cooking, and
slaughter are combined to develop a frequency distribution
of exposure doses.  One simulation of the model gives
results for a given set of uncertainty inputs.
        Output from the grinder section of the preparation
model suggests that there are about 85 million potentially
contaminated servings produced in the U.S. annually.  About
four-tenths of a percent of these, or 375,000 servings, are
predicted to have at least 1 organism present at
consumption.
        This chart shows the log of the dose along the x
axis and the log of the number of exposures along the y
axis.  About 40 percent of the exposures are to 1 organism.
About 10 percent of the exposures are to doses of 1,000 or
more organisms.
        Initial uncertainty analysis in the model has been
accomplished by first identifying the uncertain inputs.
These were generally the proportion of non-compliant events
and the shapes of frequency distributions.  The model was
then run with uncertain inputs set at the most likely values
and then with the uncertain inputs set at upper and lower
bounds.
        This chart shows the most likely exposure curve we
saw a couple of slides back, along with the exposure curves
resulting from simulations at the upper and the lower bounds
of our uncertainty distributions.  As you can see, there is
a considerable amount of uncertainty within the model.
        The preparation segment of the process risk model
is complex and resource-intensive.  Nevertheless, this
segment can represent only a simple view of reality.  As we
fill in data gaps, we get closer to modeling reality.
Obviously, if we had perfect data for every input, we
wouldn't need to do a risk assessment; we would know the
risk.  But one of the products of the risk assessment will
be to help us better identify where the data gaps are and
how we can fill them.
        Thank you for your time.  Questions?
        MS. OLIVER:  Okay.  We'll take 10 minutes of
questions now, so if the Committee or any of the invited
experts has--Dane?
        DR. BERNARD:  Thank you.  I thank you for your
presentation.  There are a couple of things I need
personally a little clarification on, though, if you don't
mind.
        As I remember the growth data--and Dr. Walls can
help me out with this--0157 doesn't grow at all below
somewhere between 7 and 8 degrees C.  Was that accounted for
in your predictive models?  Your tables here show 12 C, but
it is not linear down to 5 C, which is the Food Code-
recommended storage.  So when you talk about non-compliant
storage with Food Code provisions, and even at 12 C, what's
the lag time?  15 hours.
        At 8 C, what would the lag time be?  I think it
would be something probably much longer than that.  So in
your modeling, was that accounted for as you predicted what
the population might have been?  Did you account for the
fact that there is no growth at all below somewhere between
7 and 8 C?
        DR. SCHLOSSER:  What we do is we draw from a
distribution that goes continuously from basically 5 C up to
35.  So as we draw from 7 or 8 or 6, I think it actually
predicts some growth, but the lag period and the generation
time are very long--basically no growth.
        DR. BERNARD:  You know, just my--I'm not a
modeler, but personally I think it's not a fair assumption
to look at the Food Code as your null hypothesis in terms of
storage conditions, where you can begin to have a problem.
I think you have to look at what the model says, what the
actual observed growth says, and begin from there.
        In addition, while you've presented some very
interesting data in terms of survival from cooking
conditions, as I remember the CDC data--and Paul is better
to comment on this than I--pink hamburgers still come up as
a risk factor, whereas restaurant-cooked hamburgers dropped
off the last data set as a significant risk factor.
        In addition, some of the outbreaks that have
happened--and possibly Dr. Kobayashi can comment on this as
well--reports that we got back on the degree of undercooking
indicated that we didn't miss it by 1 or 2 degrees C; we
missed it by a mile.  So those slightly undercooked
situations don't appear to be showing up as a risk factor.
It's the drastic undercooked situations that appear to be
showing up as a risk factor.
        So I'm wondering if that's consistent with the
predictions that you've made here in terms of, A,
potentially contaminated servings, and, B, potential
exposures from those contaminated servings.
        Thanks.
        DR. SCHLOSSER:  As we look at individual servings
of hamburger, we see a relationship.  As we cook hamburgers
only to 54, we get a very low reduction of 0157.  As we
begin moving that up higher and higher, we get more.  What
the model does is draw from a continuous distribution of
hamburgers on each iteration, things that might occur in the
population.  When we're up around 68, we get pretty good
reduction.  As we drop down to 54, we get very poor
reduction.  So, basically, the rarer or the pinker it is,
the more problems you would have.
        DR. POWELL:  Mark Powell.  I wanted to respond to
Dane.  I'm not sure if Wayne responded to all of the parts
of your question, as I understood it.  Wayne discussed how
he is modeling the consumption essentially of the lag phase
duration, so that if a product is at a temperature at which
growth would occur, even if it is in compliance with the
Code, it would have to be held at that temperature for a
sufficient time for 100 percent of the lag phase duration to
be consumed prior to the initiation of growth.
        DR. BERNARD:  May I?
        MS. OLIVER:  Yes.
        DR. BERNARD:  I guess what I'm saying--Dane
Bernard--I'm not exactly in agreement right at the moment
with the assumptions that you started from to make your
calculations.  In terms of where you started, I think it's a
great leap in logic to say that lag period is shortened by
any storage outside the Food Code-recommended conditions,
which I think is what you indicated that you were modeling.
        There are a lot of assumptions here that I think
need additional study and might be providing maybe much more
uncertainty in your predictions than we would actually need.
If you even look at the information we have today in terms
of growth characteristics of the organism, I think we could
be a little more accurate in the predictions that we're
getting.  I mean, you may be exactly accurate now.  It's
just that I have some questions about the ongoing
assumptions and I'm personally not in agreement with some of
them.
        DR. SCHLOSSER:  One of the things the model will
allow us to do is set temperatures at particular settings
and then let us see what the effect is that the model is
predicting.  We haven't done it for those particular
temperatures and run it through to see, but we can do that.
        MS. OLIVER:  Bruce?
        DR. TOMPKIN:  Bruce Tompkin.  In your estimates of
the potential for growth, are you considering the fact that
there's a substantial quantity of ground beef that is sold
frozen and cooked from frozen, so there is no possibility of
growth, or at least the risk of growth is minimal?
        DR. SCHLOSSER:  Yes.  The effect of actual storage
conditions gets introduced in the survivability of E. coli
as we cook it.  But we also consider a great deal of product
as having no opportunity at all for growth.
        DR. TOMPKIN:  But are you coming up with an
estimate for the percent of ground beef that is sold and
cooked from frozen?
        DR. SCHLOSSER:  No, we don't have that, and if you
have that, we could use that in the model.
        DR. TOMPKIN:  Okay.  Can I ask one more question?
        MS. OLIVER:  Sure.
        DR. TOMPKIN:  With regard to the next-to-the-last
slide--perhaps it's a matter of how I read it, and, of
course, with statistics you can go anywhere you want--but
you've got some high numbers of E. coli 0157 there, and it's
a question of at what point the product would be spoiled.
There is a practical limit, I think, in terms of how high
you can go with 0157 as a result of growth and the product
still be acceptable and not actually be rejected and not
cooked or consumed.  So there is a practical limit in there
somewhere.
        MS. OLIVER:  Chuck Haas?
        DR. HAAS:  Yes.  I'm not clear how you got your
probability of not clustering.
        DR. SCHLOSSER:  We assume that the probability of
not clustering is somewhere between zero and 1, and we--
        DR. HAAS:  So you sampled from a uniform
distribution?
        DR. SCHLOSSER:  No.  We do one simulation assuming
it's 0.5, another assuming it's 0.1, and the last one
assuming it's 0.9.
        DR. HAAS:  Okay.
        MS. OLIVER:  John Kvenberg?
        DR. KVENBERG:  Thank you.  John Kvenberg, Food and
Drug Administration.  I guess I'm going to ask basically
points of clarification, but I would like to remark that I
have certain reservations about the assumptions, as I think
were expressed by Mr. Bernard.
        The first question:  under the scope of this study,
cross-contamination was not included, and I assume the basis
was that this model was basically assessing the risk of
hamburger, per se; is that correct?
        DR. SCHLOSSER:  Right.
        DR. KVENBERG:  Well, what's the reason that cross-
contamination is not included within the scope of your
model?
        DR. SCHLOSSER:  For instance, if we start looking
at the possibility that a food service worker is infected
with E. coli and from that contaminates some hamburger, we
just considered that beyond the scope of the model.  It's
much more complex than we're able to do in what we're doing
here.
        DR. KVENBERG:  Thank you, okay.
        DR. POWELL:  Mark Powell.  I just want to clarify,
too, that when we correlate the model with the epidemiologic
estimate, we have made an effort to partition out secondary
transmissions and other sources of 0157 infection, so that
with the epidemiologic estimate, which is derived
independently of the model, our effort is essentially to
scrub those cases out.
        DR. KVENBERG:  Thank you.  The second question I
have--and I ask your forgiveness for not comprehending
this--is under growth factors, on slide number 17:  storage
temperature is equal to internal product temperature.  Would
you go through that one more time?  I fail to understand the
temperature of storage as it relates to this four-hour
period and to the internal temperature of the product,
because not only is there a lag phase involved, but there's
a temperature differential in the product versus room
temperature.  What's the rationale under growth factors,
please?
        DR. SCHLOSSER:  You mean why do we use that rather
than trying to model the cooling curve and see what the
product internal temperature might be, given a certain
ambient temperature?
        DR. KVENBERG:  Why did you make the assumption
that the storage temperature equals the internal product
temperature?  If I understood your remarks correctly, you're
basing this on the '97 Food Code, with a time out of
temperature of four hours.  Assuming that's the ambient
temperature that the product is in, how does the storage
temperature relate to the internal temperature of the
product outside of storage, if that's the point--or is this
the storage temperature within refrigerated storage you're
referring to here?  I'm confused.
        DR. SCHLOSSER:  It's basically the storage
temperature.  We say the internal product temperature is
equal to whatever the ambient temperature is.
        DR. KVENBERG:  Within cold storage or prior to
preparation during that four hours, both?
        DR. SCHLOSSER:  Correct, yes.
        DR. KVENBERG:  Well, I think that's a flawed
assumption because the internal temperature of the product
certainly can't equilibrate to room temperature immediately.
        DR. SCHLOSSER:  Right, and when it goes into the
refrigerator, it can't equilibrate to the refrigerated
temperature immediately.  And we thought again that that was
an important simplifying assumption to make in the model.
        MS. OLIVER:  Okay, we'll take one more question
before we move on to the next session.
        Colin?
        DR. GILL:  The modeling of lag times is
notoriously inaccurate because you'll get a different result
depending on how the lag was induced.  Do you know what were
the conditions applying for your models?  Particularly, are
you talking about aerobic or anaerobic conditions?  Are the
cells--are you considering pH-adapted cells, cells adapted to
acid conditions, or unadapted cells?  And do you know what
the circumstances were of inducing cessation of growth?
        DR. SCHLOSSER:  I don't think I can answer any of
those questions.  Is there anyone here that could answer
those for us?
        DR. WALLS:  I can give the information for our
model, and I can give it to you later, probably.
        MS. OLIVER:  Okay.  Skip, if you have a quick
question, we could take that and move right on.
        MR. SEWARD:  No.
        MS. OLIVER:  No, okay.  We'll move on to the next
session, then, on dose-response, and Peg Coleman will give
that section and Karen Hulebak will introduce it.
        DR. HULEBAK:  Okay.  During Peg's presentation,
think about the following two questions.  Are there
sufficient data and methods available to develop a separate
dose-response relationship for the susceptible sub-
population?  How might we validate such a curve?  Is the
basic envelope approach sound?  Peg will describe that
approach.  Is it appropriate to anchor the most likely value
for the dose-response; i.e., the beta-Poisson envelope?
        MS. COLEMAN:  I hope you have new hand-outs.
There should be six per page.  I was told that they were
delivered during the lunch break, or perhaps in the morning,
and it would have this title on it.
        Well, I am pleased to represent the team, and
present to you our ideas and concerns about the dose-
response assessment for this project.  I should credit Chuck
Haas for starting us off.  I think it was in '95 or '96 that
we brought him down from Drexel to start us thinking about
dose-response modeling in the Food Safety and Inspection
Service, and we credit his expertise.
        And some of the Committee members know, then, that
I have a longstanding interest in dose-response assessment,
and I'm pleased to have been elected just this week as
President of the Dose-Response Specialty Group for the
Society of Risk Analysis.  And I look forward to some
beneficial cross-fertilization in the coming year with our
colleagues who address similar issues for chemical and
physical hazards.
        The team is requesting your input on dose-response
assessment, and in order to assist us with incorporating
your input we're eager to learn your perspective of our
interpretations of the available data.  Perhaps you might
suggest different interpretations or additional studies that
we should consider.  My goal with you is to be brutally
transparent about what we think we know and what we don't
know, and our confidence in alternative interpretations.
        In the next hour, our focus on the dose-response
assessment for 0157 is two-fold.  First, transparency
between science and judgment.  One perspective of the
available science is that no human data, from either
epidemiologic investigations or controlled clinical studies,
are available to directly develop a dose-response
relationship for E. coli 0157:H7.  Of course, some data are
available from which we can make inferences about the dose-
response relationships for 0157:H7, and one perspective of
the dose-response assessment is that of a mixture of science
and judgment because we are making inferences and
extrapolating from less than ideal definitive data sets.
        The second focus of this talk is on uncertainty.
In our judgment, uncertainty predominates the dose-response
assessment for a number of reasons, including the lack of
data from controlled human clinical studies and the lack of
data on the dose-response relationship for more susceptible
human sub-populations.  However, we do recognize the
importance of variability in dose-response assessment.
        Variability in each aspect of the epidemiologic
triangle--the host, the pathogen, and environment--and also
their interactions, is important to realistically describe
the complex, multi-stage process of pathogenesis.  And this
idea is not new to you, since the FDA has mentioned these
issues to you already in describing their risk assessment
work.
        However, our judgment is that the available data
are insufficient to permit us to explicitly model
variability of host, pathogen, environment, or interactions
at present.  And we will turn our attention, then, to
uncertainty and selection of plausible surrogate dose-
response models.
        I expect that if we in this room voted on the
first question, are children more susceptible, that we might
have consensus.  Although we may believe that children are
more susceptible, the judgment of our team is that the data
are really insufficient to estimate how much more
susceptible children might be.  The next three slides
highlight some relevant data on differential host
susceptibility.
        A note, though, on immune function.  A linkage
between immunity and disease effects has been established
since the early 1950s.  And a recent paper reported
decreased severity of illness in children with
administration of bovine colostrum enriched in antibody
against 0157 and hemolysin.  So we know
that immunological effects are probably a large portion of
age dependency and susceptibility, but how much so is
difficult to say.
        These unpublished data are from the first two
years of FoodNet, and they were kindly provided by Fred
Angulo.  Differences do exist in the rates of illness for
children and adults, but we don't have information on the
dose dependency for these data.  If you don't have overheads
or the hand-outs, then I'll try and describe the axes for
you.
        These are age intervals from infants under 1
through the elderly; it looks like perhaps 85 or 86 and over
at this end.  And these are the numbers of cases, I think,
per 100,000, and you'll notice that the 1- to 2-year-olds--1
year was nearly 20 cases per 100,000 for this age interval,
and if you look at those in the range of 40 to 50, it's less
than 0.5 for some of these years.  And, remarkably,
variability increases for those over 60, perhaps even over
50, but they are certainly not drawn from the same
susceptible sub-population as the 1- to 2-year-olds.
        Could you back up?  Sorry.  I got a little too
wordy.
        We could assume that the 1- to 2-year-olds appear
40-fold more susceptible than these adults down here, and
that the elderly over 65 might be 8-fold more susceptible.
But these factors are confounded by our uncertainty about
the doses that actually make them ill, and the frequency of
exposure which we'll address.
        A number of recent Japanese studies have been
published that appear consistent with this FoodNet data on
age dependency of attack rate.  However, the apparent
resistance to illness noted in these adults was very
surprising to us.  The scale of 16- to 39-year-olds, the
asymptomatic cases--it looks like 164 isolations--sorry;
those were symptomatic.  And then the asymptomatics in that
same age group were 195 cases.  So we're looking at less
than 45 percent illness, given colonization.
        This is surprising to us because we haven't
documented asymptomatic cases very much in this country.
And it seems to me that the Japanese only detected these
large numbers because they were investigating contacts of
their symptomatic cases.  The cases may not all be food-
borne, but the Japanese data, though they do indicate some
greater susceptibility of children, they also suffer from
the same uncertainties about the ingested doses.
        This graph illustrates differential susceptibility
in an animal clinical trial.  We have log dose on this
scale, and response.  The response here is actually
colonization or infection, and these animals were followed
over 11 days, so that seems pretty well-established.
        Note the shift in the dose-response curves is not
strictly a linear shift from the normal animal to the more
susceptible animals, but that the shifts reflect changes
both in the shape and the position of the curves.  This red
line is fitted to the normal animals, and this black line is
fitted to animals that received a pre-treatment with
antibiotics to knock out their indigenous flora.  And their
ID50s or the infectious dose for 50 percent of these animals
shifted five orders of magnitude.
        The intermediate lines represent fits for pre-
treatment with antibiotics for two days, and then challenge
with the pathogen three days, four days, five days.  So by
five days, the indigenous flora appears to have recovered
enough that the animals are fairly similar to the normal
animals in their dose-response relationship.  These data
illustrate that one true infectious dose does not exist, and that
the shape and the position of the dose-response curves for
normal and susceptible animals differ, the ID50 in this case
shifting five orders of magnitude.
        In recent months, studies in the peer-reviewed
literature, like those Japanese studies, came to our
attention and caused us to reconsider some of our
assumptions about this pathogen in the older literature.
For example, we expect that as you increase dose, the
likelihood of illness would increase.  There was also a
Stanford study that was published in Science that
demonstrated increased severity of illness with increasing
dose.
        Many data sets from the clinical feeding trials
fit a variety of sigmoidal curves, and a manuscript
appearing this month in Risk Analysis, with Dr. Doyle's name
on the author line, explores a number of empirical model
forms for dose-response assessment.  We'll
present some evidence for the importance of depicting
uncertainty about the true model form, the shape and the
position of the dose-response curve.
        Before we ask if our modeling is defensible, we
ask you as an Advisory Committee about the interpretation of
the available body of evidence from the scientific
perspective.  Clearly, a model that is based on weak science
is a weak model.  As a team, we're seeking your input so
that our model is transparent and is based on the most
defensible scientific data available and on well-reasoned
judgments.
        Four possible criteria are posed in this slide for
selection of surrogate models.  We're going to focus pretty
much on this last category which we think is probably the
strongest, where we're looking at the similarities in the
expression of specific virulence genes.
        Other Escherichia strains are potential
surrogates, obviously.  But, in addition, several other
related bacterial strains might be considered as potential
surrogates.  The genus Escherichia seemed to diverge from
Salmonella perhaps 150 million years ago, and then to
diverge again from Shigella only 80 million years ago.  And
you may be aware that the full genomes of both the commensal
and 0157:H7 strains have been sequenced recently.  And
evolutionarily speaking, the pathogen seems to have acquired
over a million new base pairs that are not detected in the
commensal strain.  This work may greatly impact our
understanding of pathogenesis, and also our ability to
inhibit pathogenesis.
        Data are available from human clinical trials for
four possible surrogate pathogens--Shigella dysenteriae;
Shigella flexneri; enterotoxigenic E. coli, or ETECs; and
enteropathogenic E. coli, or EPECs.
        Will the selection of plausible surrogates affect
our risk assessment?  Most definitely, yes.  This slide
might help to convince you.  Here are the ID50 estimates for
fitted models, and these are beta-Poisson models.  Shigella
dysenteriae appears to be the most virulent, in that half of
the animals dosed would become ill at 1 times 10 to the 2nd,
200 organisms, whereas the EPECs require 7 times 10 to the
8th organisms to make 50 percent of the human volunteers
ill.
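        For reference, the approximate beta-Poisson form
commonly used for such fits is P(d) = 1 - (1 + d/beta)^(-alpha),
which gives ID50 = beta(2^(1/alpha) - 1).  The sketch below
uses hypothetical alpha and beta values, chosen only so the
ID50 lands near the 2-log figure quoted for Shigella
dysenteriae; they are not the team's fitted parameters.

    # Approximate beta-Poisson dose-response model.
    # alpha and beta are hypothetical placeholders.

    def p_ill(dose, alpha, beta):
        """Probability of illness at a given ingested dose."""
        return 1.0 - (1.0 + dose / beta) ** (-alpha)

    def id50(alpha, beta):
        """Dose at which half of the subjects respond."""
        return beta * (2.0 ** (1.0 / alpha) - 1.0)

    alpha, beta = 0.2, 6.5
    print(f"ID50 ~= {id50(alpha, beta):.0f} organisms")
    for d in (1, 10, 100, 1e4, 1e6):
        print(f"dose {d:>9.0f}: P(ill) = {p_ill(d, alpha, beta):.4f}")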
        The remaining surrogates, including one animal
study that--actually, Chuck Haas just gave us the paper.  It
will be appearing in the International Journal of Food
Micro, is that right?
        DR. HAAS:  Yes.
        MS. COLEMAN:  But he had also shared his last
student's thesis work with us, so we had seen these data and
some others, and I'm sure that will come up in the
discussion.  But for this point, I guess I'll just close
this slide out and say that since Wayne's output from the
preparation and cooking module predicts 40 percent of the
positive servings with only one surviving cell, when you're
extrapolating from these two extremes to the low dose
region, dose-response assessment is going to be very
important.
        This graph depicts the data by strain from human
clinical trials conducted at the University of Maryland in
the 1970s, and at Stanford in the 1990s.  In our judgment,
these two potential surrogates represent most closely the
range of possibilities based on the common virulence genes.
        Shigella dysenteriae, in blue, is the only
potential surrogate that routinely expresses Shiga toxin,
but it differs mechanistically in invasiveness.  The EPECs,
in green, are the only potential surrogate that shares with
0157:H7 the locus of enterocyte effacement, or LEE,
pathogenicity island, and also the associated pathology of
the attaching and effacing lesions in the host.  However,
most EPECs lack the Shiga toxin genes, and so they may be
less virulent than 0157:H7.
        So we're reasoning that those two extremes might
be plausible upper and lower bounds for 0157:H7.  And those
data were sparse--2, 3 or 4 dose groups in each of the
experiments, and 40 individuals in the Shigella dysenteriae
volunteer trials, and 37 in the sum of the 3 EPEC trials.
        Our focus for this final leg of the risk
assessment is on three basic approaches, and each one is
sensitive to assumptions.  Six of us have contributed most
of the work in the dose-response assessment to date, and
half of us favor a simplistic bounding approach and I think
all of us are hopeful that we'll generate more epidemiologic
data to be able to anchor a bounding approach to outbreak
and FoodNet data.  But we'll describe some of those details
in further slides.
        Before we get into the evidence and the outputs of
our models, I'll spend a little time discussing the
epidemiologic evidence.  There are two basic outbreaks in
the U.S. that have caught our attention, the 1993
northwestern states outbreak associated with undercooked
fast-food hamburgers and the '95 venison jerky outbreak.
        Most of the details on this slide are probably
known to the Committee.  Although a direct count of the
persons who consumed hamburgers in this outbreak was not
available, we do have an estimate of the non-recalled
patties that we presume were consumed.  So though that's not
a definitive estimate of consumption, it does allow us to
calculate an attack rate.  And it is a very low attack rate,
.00037 per serving, if we're assuming that each of these
million-some non-recalled hamburgers were consumed.
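        As a worked check on that figure (the case count
here is a hypothetical placeholder, chosen only to be
consistent with the quoted rate):

    # Attack rate per serving = cases / servings presumed consumed.
    # Both values below are illustrative, not the slide's actual data.
    cases = 398          # hypothetical case count
    servings = 1.07e6    # "million-some" non-recalled patties
    print(f"attack rate = {cases / servings:.5f} per serving")  # ~.00037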
        There were some severe endpoints with this
outbreak, and I'll just focus attention for a minute on
these results.  We've already had discussions about the MPN
method.  Is it an underestimate, an overestimate?  There
were a couple of interesting papers at the Society for Risk
Analysis that suggest that MPN may actually measure colonies
or clusters and not cells, which is an interesting idea.
        I also have heard through the grapevine that when
these analyses were conducted, there wasn't the pre-
enrichment step that we might do today to recover injured
organisms.  So we're not all that confident in the levels
even in the raw frozen product, and the next slides will
raise some additional questions.
        I think the idea of dose reconstruction is
probably more well-grounded in risk assessments for
radioisotopes, but we certainly can look at dose
reconstruction in a more formal way.  We have our MPN counts
in the raw hamburger.  There are actually conflicting
studies.  Some seem to suggest that there is a decline in
recovery from the frozen state, and others not.  We'll have
to look more closely at that.
        We know from a study by Bell that undercooking was
a risk factor, and that undercooking was actually fairly
severe.  In the Bell study, some burgers cooked under the
processes that the fast-food restaurants were using were
only cooked to 108 degrees F, where USDA currently
recommends cooking to 160, so very light cooking.  But the
bottom line really is we're uncertain about what the
ingested doses were in that outbreak.
        So here's a slide of what we don't know.  We're
not certain about how good our enumeration methods are, what
our recoveries are from food samples, what is the extent of
clustering.  And as a result, we have fairly low confidence
in the estimates of ingested dose from these six MPN values.
I already mentioned this.  That's enough.
        So this is the second outbreak, the venison jerky
outbreak, and there are actually a couple of you in the
room, I think, whose names are on this study.  And if you
don't mind my saying, the report was a little sketchy.
        [Laughter.]
        MS. COLEMAN:  So you might be able to enlighten us
a little bit, so I'm hoping that you'll do that.
        For example, we had 6 confirmed cases and 5
presumptive cases mentioned, 13 individuals on a weekend
retreat eating venison jerky.  And the presumptive
positives--there was some delay in actually analyzing the
fecal samples, or perhaps in getting the samples.  But 8 to
16 days after the onset of illness, you're probably not
going to detect that pathogen, even if it was the causative
agent of your illness.  So you might include the 6 confirmed
and the 5 presumptives, and estimate a pretty high attack
rate for that outbreak.
        Just a couple of notes.  There was a 3-year-old
that was the only case that sought medical attention.  There
was a 9-month-old that had diarrhea, but was not known to
consume the jerky.  Now, here's a little bit about what we
don't know.  The information in the report was pretty
anecdotal about consumption, and this is a quote that some
individuals, some hardy soul, consumed 500 grams over the
next few days.  That's a lot of jerky.  So, that's really
all the evidence that is reported in that outbreak as far as
consumption.
        Now, we have two estimates of MPN, 3 per gram and
93 per gram.
        He's telling me to hurry up.
        You could think about these as two observations
and take the mean and assume that a mean serving tells you
something.  But it's also possible that you really do have
some samples that are low and some samples that are high,
and so an interval analysis approach may be worth looking at
in addition to taking a mean and looking at what that tells
us about the doses that might have made people ill.  So
neither the last outbreak nor this one utilized a pre-
enrichment step, so we're not sure if we've really accounted
for all the injured cells in that count.
        And these are kind of my musings about what might
a 3-year-old child have actually ingested, and I guess the
bottom line is we really don't know, but probably 500 grams
in thousands of counts over the weekend is probably not
realistic.  But we really don't know if that child only had
one serving comparable to a Slim Jim pack and that that was
enough to make them ill.
        Some common uncertainties of epidemiologic data.
We generally don't know the doses of the pathogen that
caused illness or those that didn't cause illness.  We often
don't know the serving sizes, and the attack rates as they
are normally calculated don't account for variability and
uncertainty, though Mark will show us some ways to do that
in the next talk.
        So because of that, we have low confidence for the
estimation of the infective dose or minimum infective dose
from these two outbreaks.  And I don't think I need to make
a plug to this audience for improving the multi-disciplinary
nature of our outbreak investigations, but clearly we could
do some case-controlled studies and bring in some more food
microbiology and try and generate some data that will
enhance our understanding of dose-response.
        Our judgment so far is that the available epi data
don't permit development of a dose-response relationship for
0157:H7.  Originally, the team had expected to provide
separate estimates of exposure by age interval, but we still
have adults over 16 as our input for this model.  The input
from the preparation and cooking module comes in here.  We
put that value into our surrogate dose-response model and we
generate as output a number of illnesses per year.
        The first two examples here will be illustrated in
subsequent slides, and I refer you to the JIFSAN Web page
for the third example.  But I thought it was worth saying in
an audience where risk management is of interest that some
of the importance of dose-response modeling might depend on
the goal of the assessment.  If a risk manager isn't really
interested in the true number of illnesses, with its
attendant uncertainty, you might think that the dose-
response model form
wouldn't be important.
        But even if you're just interested in finding
relative risk reductions when you intervene in the process,
model form can give you different answers.  So you'll see an
interesting example showing parameter uncertainty, and maybe
a less interesting one about model uncertainty, but let's go
ahead.
        Clark Carrington came up with some of this
philosophy for us.  Parameter uncertainty is pretty well-
known to the group, I think, that you can account for
sampling error and measuring error in the numerical
estimation procedures for fitting your model.
        Scientific uncertainty is a little more
philosophical.  Model form is one aspect of scientific
uncertainty, but this one--I think Clark coined a new term
here.  Analogical uncertainty refers to how good our
analogy is between 0157:H7 and Shigella, or 0157:H7 and the
EPECs.  And so that kind of analogy is a big part of our
scientific uncertainty for this process.
        Bootstrapping is a common technique, and you've
heard some mentions of it already so I won't go into it in
great detail.  But this is it, this is the data set for
Shigella dysenteriae.  We have 4 doses, we have 6
observations, and only 40 individuals that were sampled.
But go ahead to the next slide.
        Now, Mark is going to load a data file, and this
is the same data set we just saw on the screen.  And we're
going to put in a parameter file, so this file was generated
with a C-plus-plus object.  Clark Carrington had worked this
up first in visual basic and then hired a contractor at
Maryland to help him bring it in as a C-plus-plus object.
Very shortly, this will be posted on the Web for others to
use in looking at model and parameter uncertainty.
        When this is done cycling, maybe you can go down
and change the log scale and just step through a few of
these, but these are all models that fit that data.  This is
the low-dose region, this is 10 organisms, and we do have
one observation from the Shigella data set there.  And you
imagine, when this curve flattens out, what that is doing to
your prediction of illness at a dose of 1 organism.  These
all fit the data.  It's pretty amazing, so this is parameter
uncertainty, and the model form is beta-Poisson.  It's
pretty amazing.
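        A minimal sketch of that bootstrap procedure
follows.  The dose groups and responses are hypothetical
stand-ins for the sparse Shigella dysenteriae feeding-trial
data (4 doses, about 40 subjects in all), and the fitting is
an ordinary binomial maximum-likelihood fit of the
approximate beta-Poisson form, not the C-plus-plus object
itself.

    # Bootstrap parameter uncertainty for a beta-Poisson fit.
    import numpy as np
    from scipy.optimize import minimize

    doses    = np.array([10.0, 1e2, 1e4, 1e8])  # hypothetical doses
    subjects = np.array([10, 10, 10, 10])       # hypothetical group sizes
    ill      = np.array([1, 5, 8, 10])          # hypothetical responses

    def neg_log_lik(params, d, n, y):
        alpha, beta = np.exp(params)            # keep parameters positive
        p = np.clip(1.0 - (1.0 + d / beta) ** (-alpha), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (n - y) * np.log(1.0 - p))

    def fit(d, n, y):
        res = minimize(neg_log_lik, x0=np.log([0.3, 100.0]),
                       args=(d, n, y), method="Nelder-Mead")
        return np.exp(res.x)

    rng = np.random.default_rng(1)
    boots = [fit(doses, subjects, rng.binomial(subjects, ill / subjects))
             for _ in range(200)]
    alphas, betas = np.array(boots).T
    print("alpha 95% interval:", np.percentile(alphas, [2.5, 97.5]))
    print("beta  95% interval:", np.percentile(betas, [2.5, 97.5]))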
        Okay, next.  Now, we have a couple of flat
pictures to show you about model uncertainty.  Here, we have
five different model forms, and this is the observed region
and they all fit the observed region well.  I just changed
the scale here so that you can see the divergence of the
predictions when you go to the low-dose region.
        He's making me hurry.  Okay, that's all right.
        So if we didn't account for model uncertainty, we
might be overstating our confidence about what proportion of
illness we would really be predicting at the low doses.
        I'm sorry.  I had an extra slide in there.  No
wonder I was confused.
        There are a number of published papers, two that
are mentioned here, that have looked at shigellosis as a
surrogate for 0157:H7.  And we'll show you some results, but
we're going to spend more of our time thinking about the
approach of bounding the human data with an upper bound of
Shigella dysenteriae and a lower bound of the EPECs.
        First, we entertain an assumption of a uniform
distribution between those two to reflect our uncertainty
about how analogous 0157:H7 really is to the upper-bound and
lower-bound strains.
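        One way to implement that uniform assumption, as a
sketch: draw u from Uniform(0,1) at each uncertainty
iteration and interpolate between the EPEC (lower) and
Shigella dysenteriae (upper) curves.  The beta-Poisson
parameters below are hypothetical placeholders, not the
team's fitted values.

    import numpy as np

    def p_ill(dose, alpha, beta):
        return 1.0 - (1.0 + dose / beta) ** (-alpha)

    UPPER = dict(alpha=0.2, beta=6.5)    # hypothetical S. dysenteriae fit
    LOWER = dict(alpha=0.2, beta=2.3e7)  # hypothetical EPEC fit

    rng = np.random.default_rng(7)
    dose = 10.0
    for _ in range(3):
        u = rng.uniform()                # uncertainty about the analogy
        p = (1 - u) * p_ill(dose, **LOWER) + u * p_ill(dose, **UPPER)
        print(f"u={u:.2f}  P(ill | dose={dose:g}) = {p:.2e}")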
        We're proposing this as a validation strategy, but
also a future form to generate dose-response models, and
that builds on the previous idea that we--actually, this
idea came first, so credit Mark Powell for this, but the
upper bound of Shigella dysenteriae, lower bound of the
EPECs.  But then we're looking at a "most likely," and this
is a very creative and interesting approach where we have
the data from those two outbreaks and FoodNet predictions
that anchor that "most likely."  We could also validate with
a rabbit clinical trial dose-response model, and there are
other assumptions that we might be able to test.
        Okay, so this is the data.  This is our one slide
of results for the dose-response modeling.  This is your
average Shigella uncertainty, and you'll notice that it is
the broadest peak out there.  This does account for
variability and uncertainty, and it's the only model besides
maybe the epi data that does that.
        This red model is the upper and lower bound, with
a uniform assumption.  This bright blue curve is the beta-
Poisson envelope approach, the bounding approach with the
anchor to the epi.  And then these data--Mark will describe
to you the derivation later, but these are based on FoodNet.
Oh, I didn't tell you the axes--thousands of cases and the
probability of illness.
        I'll go through this briefly because I'm sure
there will be some questions.  But you could make different
assumptions about exposure for the dose and the response for
the venison jerky outbreak.  And you can generate a cloud of
points and fit a line through it, and that is exactly what
this blue line is.  So this is what we're saying is our most
likely, and it's convenient that it has fallen right between
the upper bound and the lower bound.
        And if you'll go to the next slide, you'll see
that this point here is the estimate of attack rate for the
western states outbreak.  And this is based on data from
Washington State, and I think if you look at the full four-
state outbreak the attack rate might be a little lower.  But
still you're between the upper and lower bound, and that's a
pretty good anchor for epi data.
        Okay, my summary slide.  I probably won't get any
argument that we have great uncertainty associated with
surrogate dose-response models.  In a sense, our most
compelling bounded approaches have attempted to account for
uncertainty as fully as possible.  We don't really know if
these models are really bounded in this way, but we also
wonder whether this approach might include the dose-response
models that might be envisioned for more susceptible sub-
populations.  I'm sure that will come up in discussion.
        So I had a few slides just kind of to get you
thinking about what kinds of things we'd like to hear you
bring up with us, and I'm sure you have others.  But what do
the available data tell us about predicting illness for
0157:H7?  Should we be limiting our modeling to healthy
adults, since we're basing our models on the human clinical
trials, and could we go farther?
        If the Committee felt strongly that we really
should model more susceptible sub-populations, how might we
think about doing that?  Should we treat the 1- to 2-year-
olds separately, since they seem from the FoodNet data to be
so different from all the other groups?  And how do we deal
with the immunocompromise in the elderly?
        If we did decide to expand our family of dose-
response models to incorporate all these, of course, as Mark
has already pointed out, that means that our whole exposure
assessment has to link in the same way--that we would
have to have consumption data for the immunocompromised.
And so it's not such a simple thing to say, okay, just give
us another model and go with it.
        And then is a 10-fold factor enough?  Do we need
to think about more orders of magnitude than that, or do we
put some funding into research studies to try and address
the biology of the issues?
        Does the approach for surrogate dose-response
modeling convey attendant uncertainties sufficiently and
transparently?  Is the scientific evidence supporting each
approach for dose-response assessment plausible?
        And I want to thank especially Clark Carrington
for technical assistance and use of the C-plus-plus object.
And I thought I'd end with a nice view of Cranberry Lake at
Christmastime.
        Thank you.
        MS. OLIVER:  Thank you very much.
        We'll take questions now for ten minutes.
        Peggy?
        DR. NEILL:  From one Peg to another, I have a
couple of points, Peg, none of which are probably going to
really directly answer some of those latter questions, but
which I think are some important inputs, probably more
proximal in your presentation.
        The first has to do with the issues surrounding
demonstration of asymptomatically colonized individuals.  In
the data from Japan, as you noted, most of those appear to
have been picked up by cultures done on contacts, some
households, sometimes classrooms, business associated,
whatever, where there has been an indexed case.  They were
not always stratified by either antecedent or subsequent
illness.
        In other words, some of these putatively
asymptomatically infected individuals may have had
antecedent diarrhea and were simply convalescent excretors.
Conversely, they may have been picked up in the incubation
period.  It is a fairly widespread practice within Japan to
treat any person picked up with, quote, "asymptomatic
colonization."  And so I'm not sure we're going to be to
dissect out which of those two possibilities occurred.
        There is a paper that appeared this past year that
looked at household contacts of children with HUS in the
Netherlands, in which the predominance was 0157-associated.
There's a little bit of data in there, tantalizing, speaking
somewhat again to the same issues.  It does not tell you
whether you just picked up a person who has previously had
an illness.  In the parents, there was a suggestion that
that was true.
        On another point, in terms of the genetic
similarities, I think this is very, very problematic in
terms of looking at EHEC, namely 0157, and the EPEC
connection.  While a number of genetic loci have been
defined that are of great similarity, there has been more
data that has been coming in recently suggesting that they
may be organized differently or under different control.
        A paper this past year showed that when you take
the LEE out of EPEC and put it into a K12, you can confer
the entire attaching and effacing phenotype.  So it seemed
to make sense, if you just did the same thing with the EHEC,
or 0157, LEE, that it should occur.  It did not.
        Although the overall organization of the 0157 LEE
crudely looks very, very similar to the EPEC one, the
greatest degree of genetic variability is in the genes that
control the interaction with the host.  So kind of with
these two pieces of information converging, I have a
feeling--and it's only that, it's an intuition--that it may
not be correct to be thinking about S. dysenteriae and EPEC
as upper and lower bounds purely on the basis of their
genetic complementation.  It's possible that that, within
the EHEC connotation, may place them outside of that range.
        The last two are just very quick.  The third point
is--somebody help me here.  I thought there was data on--a
little bit on quantitation in the salami outbreak, West
Coast salami outbreak.  I cannot recall how much data was
there for consumption, but I thought there was a little.
Anne Marie McNamara and several other people in the room, I
think, are on the paper from one of the food--it's not in
the epi-oriented paper; it's in a second one.
        MS. COLEMAN:  Mark just reminded me we didn't have
an attack rate for that outbreak.  So we had a little food
microdata, I believe, but perhaps not the case description.
        DR. NEILL:  Okay, because it is within that data
that is the suggestion for some of the lowest possible
exposures in terms of CFUs that have been demonstrated, to
my knowledge, and I think Paul is indicating kind of the
same.
        Those are my major points.  I have some other
ones, but I'll communicate them to you later.
        MS. COLEMAN:  Thank you.
        MS. OLIVER:  John?
        DR. KOBAYASHI:  John Kobayashi, Washington State
Health Department.  With regard to the salami outbreak,
that's my recollection that there was quantitation, but the
number of cases was very small and may have been too small
to generate attack rates.
        At any rate, with regard to the 1993 outbreak, I
think it may be that you know this already, but at least my
recollection of the attack rate per number of hamburgers
served in Washington State was about 1 per 1,000, which is
more than that .00037, I think, that I saw there.
        On the other hand, I think in one of your graphs I
saw 1 per 1,000, so maybe you were correcting for Washington
State versus outside of Washington State.  The basis for
that, I think, is ascertainment--especially at that time,
ascertainment for E. coli 0157 cases in Washington State was
a lot better than a number of the other states involved.
        The other things are just a few tidbits with
regard to the details of those burgers.  My recollection is
that they were one-tenth-pound burgers that were cooked in
two minutes from the frozen state to a presumably cooked
state.  These burgers were cooked on an open grill without a
weight above it, so that if there was buckling or bowing of
the hamburger that there might not have been adequate
contact with the grill.  These were marketed as children's
burgers.  They had a slightly heavier burger that was
marketed for adults, so the bulk of the people exposed, I
think, were children in that circumstance.
        At any rate, the other thing is I believe that
there were a number of temperature measurements that were
taken during that time.  As I'm saying that, I think you
know that already with regard to the temperatures.
        DR. POWELL:  I just wanted to comment regarding
the analysis that was shown here on the Pacific Northwest
outbreak.  This point that is shown on figure 39 is derived
just from the Washington-based write-up, and rather than
inferring or implying a degree of undercooking, this is the
estimate of approximately 1 log without any cooking.  And
the most likely curve was derived, in part, from the venison
jerky outbreak, and then our effort was then to use this as
kind of a ground-truthing--on that curve, you see the
venison jerky outbreak--and then just to see whether in the
low-dose region it seemed to make any sense.
        So, that is intended as a validation on the curve
that is derived from the venison jerky outbreak, and we can
impute from that a degree of undercooking which turns out to
be somewhat less than 2 logs.  Rather than making an
assumption about the degree of undercooking, we can impute
it: to force that point onto the most likely curve, you
would impute, I think, about a 1.5-log reduction from the
frozen state.
        I also wanted to--I'm sorry--just make a
clarification that the output from the preparation segment
is for all ground beef servings consumed by all age groups,
not just for those 16 and over, but that we have not
distinguished the dose-response relationship for the
different age groups.  So all of the servings consumed by
all age groups is output by the preparation segment.
        MS. OLIVER:  John, did you have any other
questions?
        DR. KOBAYASHI:  No.
        MS. OLIVER:  Bill?
        DR. SPERBER:  Thank you.  Bill Sperber, Cargill.
In discussing the 1993 hamburger outbreak, you pointed out
the six MPN results and suggested that perhaps these weren't
accurate and you didn't know if this was really a good way
to estimate the infective dose.  For these six numbers, the
median count is about 2 per gram, and I would suggest that
they should be treated as accurate estimates of an
infectious dose because for this whole century we've had to
use similar procedures for estimating infectious doses of
other pathogens like Listeria monocytogenes and salmonella.
        And at the very least, these methods, inaccurate
as they may be, are showing us that some organisms like 0157
probably have a very low infectious dose, and other
organisms like LM probably have a very high infectious dose.
And until we have better methods that can more accurately
identify CFUs, individual cells versus clumps, that sort of
thing, we have to make the best use of these data, and I
think it's legitimate to do that with these data.
        MS. COLEMAN:  Thank you.  Yes, we really do intend
to use them in a more formal dose reconstruction.  But given
that those are counts in a raw frozen product, somehow we
have to get to what people eat, and so there are adjustments
that have to be made.
        MS. OLIVER:  Chuck?
        DR. HAAS:  Yes.  First of all, Peg--
        MS. OLIVER:  Can you give your name again?
        DR. HAAS:  Chuck Haas.
        I want to indicate that I find myself in, you
know, pretty good agreement with what you've done, and so I
think it's a nice approach.  And I just wanted to add that
our analysis of the animal 0157 data probably leads to a
dose-response curve that is reasonably on top of what you've
indicated as your median.  We should compare those curves
because I think it's going to be close to that.
        MS. OLIVER:  Paul?
        DR. MEAD:  Paul Mead.  Just briefly to follow up
on what Peggy said, I do think that while the salami
outbreak, you cannot establish an attack rate because you
don't know the number of people exposed, and so it perhaps
doesn't give you the infectious dose, I do think it provides
some of the really pretty good data in terms of the minimum
infectious dose, in that I think there were on the order of
seven or so samples tested, all of which were positive and
contaminated at pretty low levels, and that there was fairly
good consumption history at least for culture-confirmed
cases.
        Now, you don't know that that dose is the ID10 or
the ID50 or the ID90, but it does give you a hint that
somewhere on your curve it should come down to somewhere
below 50 organisms, which that seems to support.  And unlike
the ground beef and those items, there's no additional
cooking and factoring in that you have to worry about.  So I
think that paper--if there is some way to incorporate that
data, it would be helpful.
        MS. COLEMAN:  Thanks.
        MS. OLIVER:  Are there any other questions and
comments before we take a ten-minute break?
        [No response.]
        MS. OLIVER:  Okay.  Why don't we take a ten-minute
break and then come back?  Thank you.
        [Recess.]
        MS. OLIVER:  We'll get started again.  Mark Powell
is going to give us a model summary and an epidemiological
validation, and then we'll take some questions on that.
        Mark?
        DR. POWELL:  Before moving to consider the
epidemiologic data, I just want to try and recap very
briefly.
        Next slide.
        Again, the scope of the risk assessment is limited
to 0157 in ground beef.
        Next.
        For the production segments, our best estimate of
the prevalence of 0157 in live cattle destined for ground
beef production is 11 percent.  The uncertainty range is on
the order of 5 percent to 15 percent.
        Next.
        In the slaughter segment, our best estimate of the
prevalence of positive combo bins is 23 percent, and the
uncertainty there, depending on the class, ranges from 2 to
66 percent.  Our best estimate of the concentration per gram
in the combo bins is negative 4.5 logs.
        And there's a lot of uncertainty regarding these
outputs.  For example, for steer/heifer plants, the
estimated relative frequency of contamination in combo bins
containing a 1-log load of 0157 ranges from less than 10
percent to approximately 25 percent.
        I'm recapping essentially what Eric and Wayne
presented this morning.
        For the preparation segment, our best estimate is
that 81 percent of grinder loads contain at least 1 CFU, but
again the loads tend to be very low and there's a great deal
of uncertainty attendant to that estimate.
        Our best estimate of the annual number of
contaminated servings post-cooking is 375,000.  That
translates to a prevalence of about .002 percent, and again
low levels, about 40 percent of the exposures in post-cooked
servings being 1 organism, only about 10 percent of the
exposures being doses of greater than 3 logs; a lot of
uncertainty, again, attendant to those estimates.
        We considered a number of alternative dose-
response relationships, and these fall into two broad
categories.  The first category consists of models based on
Shigella as a surrogate pathogen.  Within this category, two
model forms have been published in the literature, the beta-
Poisson and the beta-binomial.  The second category is the
envelope that uses the dose-response relationship for
Shigella and EPECs as bounding estimates.  Within this
category, one alternative considers a variety of statistical
dose-response model forms, and that was illustrated with the
C-plus-plus object that we ran during Peg's talk.  And the other
alternative developed uses the beta-Poisson curve and
anchors the most likely value in outbreak in epidemiologic
data.
        So, that's a sum of where we've been.  Next, I'll
address the epidemiologic validation.  In the presentation
that I'll give you, I have deleted the second slide on the
hand-outs.  And the third slide, Epidemiologic Risk Factors,
has been moved to the discussion of the proportion
attributable to ground beef, which begins in your hand-outs
on page 15.  So those are the only changes that have been
made.  So I'll proceed then to get right into the analysis
of the FoodNet data, page 4 of your hand-outs.
        Our analysis begins with the reported population
base rate in the five original FoodNet catchment areas
during '96 to '98.  These rates of illness per 100,000
person-years are then weighted by the population of the
state in which the FoodNet catchment area occurs for the
purposes of extrapolating to the national level.  Thus, the
rate reported in a large state like California is given
greater weight than the rate in a small state like
Minnesota.
        Next.
        To represent the annual variability in the number
of reported cases, we placed the cluster-weighted rates from
the three years of surveillance into a discrete uniform
distribution.  And as we perform our simulation with each
iteration of the model, the rates are drawn at random with
equal probability from this distribution.  And then to
extrapolate to the national level, we simply multiply this
distribution by the estimated U.S. population in '98.
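        A minimal sketch of that extrapolation step; the
weighted rates below are hypothetical placeholders, not the
actual FoodNet figures.

    import numpy as np

    rates_per_100k = np.array([2.3, 2.8, 2.1])  # hypothetical weighted rates
    us_pop_1998 = 270e6                         # approximate U.S. population

    rng = np.random.default_rng(0)
    draws = rng.choice(rates_per_100k, size=10_000)  # discrete uniform
    national_cases = draws / 1e5 * us_pop_1998
    print(f"mean reported cases: {national_cases.mean():,.0f}")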
        Next.
        These estimates need to be adjusted, however, for
the recognized sources of underreporting because some ill
persons do not seek medical care, physicians do not obtain
stool specimens from all patients, laboratories don't
culture all stool samples for 0157, and at some proportion
of labs the results are false negatives.  With the exception
of test sensitivity, each of these proportions is treated as
dependent on whether the infected person presents a bloody
or non-bloody diarrhea, so we first estimate the proportion
of bloody and non-bloody cases.
        We then characterize the uncertainty about the
proportions of cases at each node in the pathway that leads
to a successfully reported case.  These proportions feed
into a sequence of negative binomial distributions that are
used to estimate the number of cases missed at each step.
Then we sum the resultant two uncertainty distributions
about the number of cases, both bloody and non-bloody, to
estimate the total annual number of cases nationally.  For
the severe cases, defined as bloody diarrhea for which the
person seeks medical care, we will subsequently estimate
progression of illness to more severe health outcomes, such
as hospitalization, HUS, or death.
        Next.
        We proceed by observing that 409 of 480 reported
cases presented with bloody diarrhea.  These data provide
the parameters for a beta distribution that characterizes
our uncertainty about this proportion that is centered about
85 percent.  The data come from the first year of statewide
surveillance in Washington, reported by Ostroff and
colleagues, and the first year of FoodNet--that is, 1996--
that was reported by Hedberg and colleagues.
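        Under one common parameterization (adding one to the
successes and one to the failures), those data map to a
Beta(410, 72) distribution; that convention is an assumption
here, since the talk did not give the exact parameters.

    import numpy as np

    rng = np.random.default_rng(0)
    p_bloody = rng.beta(409 + 1, (480 - 409) + 1, size=10_000)
    print(f"mean proportion bloody = {p_bloody.mean():.3f}")  # ~0.85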
        Next.
        In conjunction with the active surveillance
system, FoodNet has conducted a number of companion surveys
to estimate the degree of underreporting in the sentinel
sites.  Here, we observe the results of the FoodNet
laboratory survey in which 79 percent of labs reported
testing bloody stool samples for 0157, but only 47 percent
of the labs reported testing all stool samples for 0157.
These data feed into a beta distribution characterizing the
uncertainty about the proportion of labs that cultured for
0157 in bloody and non-bloody stool specimens, respectively.
        Next.
        The sensitivity of the sorbitol McConkey agar, or
SMAC test that is used by the labs to identify 0157 in stool
samples is assumed to be 75 percent.
        Next.
        In a survey conducted in the FoodNet catchment
area, 78 percent of responding physicians reported that they
obtained stool specimens from patients presenting with
bloody diarrhea, and 36 percent reported obtaining specimens
from patients with non-bloody diarrhea.  These data feed
into yet another beta distribution characterizing the
uncertainty about the proportion of the physicians that
obtained specimens from patients presenting with bloody and
non-bloody symptoms, respectively.
        Next.
        Regarding the proportion of ill seeking medical
care, Cieslak and colleagues found that 55 percent of bloody
diarrhea cases in an 0157 outbreak in Las Vegas reported
seeking medical care.  We used these data to characterize
our uncertainty about the proportion of bloody cases seeking
medical care.  For the non-bloody cases, we used the results
of a FoodNet population survey in which 8 percent of the
respondents who had had a recent bout of diarrhea reported
seeking medical attention.
        Next.
        From this point, the negative binomial
distribution is employed in a step-wise fashion to add to
the reported number of cases those that are missed by the
surveillance system due to test insensitivity, laboratories
not culturing stool samples for 0157, physicians not
obtaining stool samples from patients, and ill persons not
seeking medical care.
        The highlighted figures in this table represent
the expected value of the annual number of severe cases,
approximately 7,500, and the total annual number of bloody
and non-bloody cases.  Taken together, the expected value of
all cases is approximately 73,000, and in just a moment I'll
present the uncertainty that is attendant to this estimate.
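        A minimal sketch of that step-wise adjustment for
the bloody pathway follows.  Each step adds a negative
binomial number of missed cases to the cases that survived
it.  The proportions are the point values quoted in this
talk, though in the model itself they are drawn from the
beta distributions described above, and the starting count
is a hypothetical placeholder.

    import numpy as np

    rng = np.random.default_rng(3)

    def scale_up(n_observed, p_detect, rng):
        """Observed cases plus the cases missed at this step."""
        return n_observed + rng.negative_binomial(n_observed, p_detect)

    steps = [0.75,  # SMAC test sensitivity
             0.79,  # labs culturing bloody stools for 0157
             0.78,  # physicians obtaining specimens (bloody)
             0.55]  # ill persons seeking medical care (bloody)

    n = 500         # hypothetical reported bloody case count
    for p in steps:
        n = scale_up(n, p, rng)
    print(f"estimated true bloody cases: {n:,}")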
        Next.
        Proceeding from the estimated number of severe
cases, we characterized the uncertainty regarding the
proportion of such cases that progressed to more severe
health outcomes--hospitalization, HUS or TTP, and death.
These uncertainty distributions are based on CDC data on
disaggregated health outcomes from 203 outbreaks reported
during 1982 to '98.  I would note that applying these attack
rates to all cases involves an assumption that the severity
of outbreak and sporadic 0157 strains is similar.
        Next.
        We then generate the estimated total number of
cases of 0157 using Monte Carlo simulation methods.  As you
can see, there is considerable uncertainty in these
estimates.
        Next.
        We proceed by characterizing our uncertainty
regarding the etiologic fraction of 0157 cases due to ground
beef.  During 1994 to '98, ground beef was identified as the
likely vehicle of infection in 31 percent of reported
outbreaks where a likely vehicle of infection was
identified.  Eighteen percent of these outbreaks were
attributed to ground beef.  These limited data do not
provide a precise estimate of the proportion of total 0157
illnesses--I'm sorry--18 percent of the illnesses in the
outbreaks were attributed to ground beef.
        These limited data do not provide a precise
estimate of the proportion of total 0157 illnesses due to
ground beef consumption.  And, in general, we are wary of
relying too heavily on outbreak information to characterize
the proportion of cases attributed to ground beef.  But the
outbreak data do help bound our uncertainty about this
etiologic fraction.
        Next.
        While ground beef is the most frequently
identified source of outbreaks, most cases of 0157 are
estimated to be sporadic.  In the first nationwide case
control study of 0157 conducted in '90 to '92 by Slutsker
and colleagues, consumption of pink ground beef was the only
dietary risk factor independently associated with diarrhea
in multivariate analysis.  The population attributable risk
for this behavior at that time was 34 percent.
        More recently, Kassenborg and colleagues also
found that consumption of pink ground beef was a
statistically significant risk factor in a case control
study conducted at five FoodNet sites during '96 to '97.  A
preliminary multivariate estimate of the population
attributable risk from consuming pink hamburger or ground
beef was 19 percent.  We used the most recent case control
findings to anchor our uncertainty about the etiologic
fraction of cases due to ground beef.
        Next.
        An estimate derived just on the basis of the
outbreak-associated illnesses due to ground beef is
consistent with case control findings, but it seems to us
over-confident.  An estimate derived from the proportion of
outbreaks due to ground beef is less confident but seems
biased in light of the case control findings.
        Therefore, we characterize our uncertainty about
the etiologic fraction as a pert distribution with a minimum
equal to the 2.5th percentile of distribution A, based on
the outbreak illnesses, a most likely value equal to the
median of distribution A, and the maximum equal to the
97.5th percentile of distribution B, which is derived from
the occurrence of outbreaks.  The expected value of this
pert distribution is 21 percent, approximately.
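        A sketch of sampling from such a pert distribution,
implemented as a rescaled beta; the minimum, most likely,
and maximum values below are hypothetical placeholders,
chosen only to reproduce the roughly 21-percent expected
value.

    import numpy as np

    def pert(rng, a, m, b, size):
        """PERT(a, m, b) as a Beta distribution rescaled to [a, b]."""
        alpha = 1 + 4 * (m - a) / (b - a)
        beta = 1 + 4 * (b - m) / (b - a)
        return a + (b - a) * rng.beta(alpha, beta, size)

    rng = np.random.default_rng(0)
    frac = pert(rng, a=0.12, m=0.18, b=0.42, size=10_000)
    print(f"expected etiologic fraction: {frac.mean():.2f}")  # ~0.21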
        Next.
        This figure presents the three different sources
of information that could be used to depict the uncertainty
regarding the proportion of illnesses due to ground beef.
The tight green distribution is the one that we felt was
over-confident, derived from the proportion of outbreak-
associated illnesses due to ground beef, and it's centered
about 18 percent.
        The broad black curve, which we felt was less
over-confident but perhaps biased, is derived from the
proportion of outbreaks due to ground beef.  It's centered
around 32 percent.  The intermediate brown distribution is
the pert distribution that we've defined and used to
characterize the uncertain proportion of illnesses due to
ground beef.  Again, it's anchored with the case control
data and bounded by the outbreak data.
        Next.
        To estimate the number of cases of 0157 due to
ground beef, the estimated total national annual number of
cases of 0157 is multiplied by the preceding pert
distribution.  The resultant distribution of the total
number of cases of 0157 due to ground beef has a median of
approximately 16,000, and a 95-percent confidence interval
of approximately 9,500 to 29,000.  Approximately 10 percent
of the cases meet the severe case definition.  The estimated
annual number of deaths due to 0157 in ground beef ranges
from 5 to 20.
        Next.
        So, again, this is showing the curve that Peg
presented earlier.  This figure compares the epidemiologic
estimate, which is the furthest to the right--this is the
epidemiologic estimate of the number of cases of 0157 due to
ground beef--and the results predicted by three of the dose-
response models discussed during Peg's presentation.
        Here, we have integrated the dose-response models
over the most likely exposure distribution that is outputted
by the preparation segment.  We have not yet pushed through
the bounds of the preparation segment, so the full
uncertainty in the three model curves, which are those to
the right, is not fully reflected.  But this gives us some
means of relative comparison of the three dose-response
alternatives.
        All of the models pictured here overlap to some
extent with the epidemiologic estimate, but the extent of
the overlap is greatest for the beta-Poisson envelope.  Now,
this is not particularly surprising, since the most likely
value for this dose-response model has been anchored by the
epidemiologic estimate.  So it, in a sense, has been given
an advantage over the other models which do not use the
epidemiologic data.  Those are set aside independently for
validation.  Nevertheless, even with the beta-Poisson
envelope that is anchored, the overlap is not complete
because the upper and lower bounds of the envelope are
independent of the epidemiologic estimate.
        Next.
        This figure presents in descending order the rank
correlations of various factors in the model for the total
number of cases of 0157 due to ground beef.  The pattern
that emerges is that if we seek to reduce our uncertainty in
the overall number of cases, then we should focus on
enhancing the data on the non-bloody cases, beginning with
those that don't seek medical care.
        Toward the other end of the range, it seems that
if we seek to have a more precise estimate of the overall
number of cases, then we may gain relatively little from
decreasing our uncertainty about the proportion of 0157
cases that are bloody.  As is often the case, however, the
results of the sensitivity analysis depend on what question
you're trying to answer.
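        A minimal sketch of such a rank-correlation
sensitivity analysis; the inputs and the output here are
synthetic, constructed only to show the mechanics.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    inputs = {
        "non-bloody, no medical care": rng.beta(2, 8, 5_000),
        "proportion bloody":           rng.beta(40, 7, 5_000),
    }
    # Synthetic output that depends much more on the first input.
    output = (1.0 / inputs["non-bloody, no medical care"]
              + 0.1 * inputs["proportion bloody"])

    # Report Spearman rank correlations in descending magnitude.
    for name, x in sorted(inputs.items(),
                          key=lambda kv: -abs(spearmanr(kv[1], output)[0])):
        print(f"{name:30s} rho = {spearmanr(x, output)[0]:+.2f}")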
        Next, and this is my final one.
        If, rather than trying to reduce the uncertainty
associated with the estimated total number of cases due to
ground beef, you are instead seeking to reduce the
uncertainty in the estimated number of deaths due to 0157
from all sources, then the pattern that emerges from the
sensitivity analysis is that you're more keenly interested
in improving your state of knowledge about the disposition
of the bloody cases than about the overall rate of 0157 in the
population.  And these results help to underscore the
importance of knowing what question you're trying to answer
in any sort of analysis.
        And that will be the end.  Now, are there any
questions specific to the epidemiologic validation or our
efforts to correlate the model outputs with the
epidemiologic estimate?
        MS. OLIVER:  Do the Committee or the experts have
any questions or comments?
        John?
        DR. KOBAYASHI:  John Kobayashi, Washington State.
Not a question, but a comment.  There was an old study
authored by McDonald and O'Leary in JAMA in 1986, I think it
was.
        DR. POWELL:  Eight per 100,000?
        DR. KOBAYASHI:  Right.  At any rate, I'm not sure
if it adds much to Cieslak's case control study, national
case control study, but I just wanted to make sure you're
aware of that.  And I think an advantage of the study we did
back in the '80s was that it was a very defined population
with an HMO, with a very clear population base.
        Basically, all of the individuals who sought
medical care and had a stool culture were tested for 0157 at
that time for one year.  And an association was found with
hamburger, so you can get an estimate of the burden of
illness in the absence of outbreaks for a one-year period.
        DR. POWELL:  Right.  Originally, we had used that
as a bounding estimate, and used the raw FoodNet data as our
lower bound, using the McDonald report as the upper bound.
But we were, I think, convinced that it would be more
appropriate to use the approach that CDC has used and take
the active surveillance data that is more current and then
apply these uncertain proportions in this step-wise fashion
for underreporting to arrive at a bounding estimate, given
that uncertainty about the active surveillance data.  The
bottom line was that our results didn't change a whole lot
from what had been used prior to that with using the
McDonald study as our upper bound estimate.
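        [Illustration: a minimal Python sketch of the stepwise
underreporting adjustment Dr. Powell describes, scaling an active
surveillance rate up through a chain of uncertain proportions.  All
rates and proportions below are hypothetical placeholders.]

import numpy as np

rng = np.random.default_rng(1)
n = 10_000

observed_rate = 2.3   # hypothetical culture-confirmed cases per 100,000
population = 270e6    # roughly the late-1990s U.S. population

# Uncertain proportions at each surveillance step (illustrative only).
p_seek_care  = rng.uniform(0.20, 0.60, n)  # ill person seeks medical care
p_stool      = rng.uniform(0.30, 0.70, n)  # stool specimen obtained
p_lab_detect = rng.uniform(0.50, 0.95, n)  # lab tests for and finds 0157

total_cases = (observed_rate / 100_000 * population
               / (p_seek_care * p_stool * p_lab_detect))

lo, med, hi = np.percentile(total_cases, [5, 50, 95])
print(f"5th/50th/95th percentile total cases: "
      f"{lo:,.0f} / {med:,.0f} / {hi:,.0f}")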
        DR. HULEBAK:  This is Karen Hulebak.  Mark, could
we go back to the third to the last slide, Comparison of Epi
Estimates with Other Model Predictions, and talk a little
bit more about what you're showing there--
        DR. POWELL:  Sure.
        DR. HULEBAK:  --the epi data being the curve
farthest to the left, and then our best model prediction?
        DR. POWELL:  Right.  Well, I guess I'm hesitant to
say that we have a clear winner.  I think it's obvious--
well, let me get into answering your question.  This blue
distribution is the epi-based estimate that's--
        DR. HULEBAK:  It's a little hard to see color.
        DR. POWELL:  Oh, okay.  Yes, that was--which is
furthest to the left; I guess that was my right, your left,
distribution.  I apologize.  Your other left, your other
left--distribution is the epi estimate, derived totally
independently from the model, okay.
        This broad distribution of the number of cases,
okay, is that which was developed by Harry Marks and is
based just on Shigella.  And we insert this as kind of a
place-holder for all of the models that have been developed
and published in the literature based on Shigella as a
surrogate for 0157, suggesting that using Shigella as a
surrogate--
        DR. HULEBAK:  Doesn't really match up very well.
        DR. POWELL:  --may overstate the cases, although
given the level of uncertainty about that data, there is
still some overlap, although, you know, not a terrible lot.
        This curve is that which was developed by Clark
Carrington.  These two are envelope methods using Shigella
as the upper bound and EPECs as the lower bound, and this
curve was not anchored to the epidemiologic data, nor derived
as a most likely value utilizing our best estimates of the
exposure distribution.
        I would also add there isn't a whole lot of
overlap with the epi distribution, but this particular
implementation of the object that has been developed by FDA
reflects only model uncertainty and not within-model
uncertainty.  So that distribution one would expect to be
somewhat broader.  When we run that again, there would
be a little bit more overlap, but still the central tendency
is, you know, somewhat high for that curve, okay.
        Now, it's true that the beta-Poisson envelope has
the most overlap with the observed data, okay.  But one of
the questions that we pose to the Committee is, you know, is
it, you know, legitimate to utilize this curve which has
been anchored, okay, as opposed to these which take a more
neutral or uninformed sort of an approach, at least for
specifying the most likely value within the envelope.
        Now, I would say that again I'd just repeat that
the only value that was anchored in the beta-Poisson
envelope is the most likely value within the bounds, okay.
That was derived from initial estimates of our uncertainty
about the venison jerky outbreak, okay, and then using
concepts that are sort of maximum likelihood estimation
concepts saying what beta-Poisson parameters, given the best
estimate of the exposure output and our best estimate of the
occurrences, the epidemiologic data, would be most likely to
be observed under those conditions.  So that's the anchoring
that took place with the beta-Poisson envelope.
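        [Illustration: a minimal Python sketch of the anchoring idea
just described: choose the beta-Poisson parameters whose predicted
case count, given a sampled exposure distribution, is most consistent
with the epidemiologic estimate.  The exposure distribution, serving
count, and target are hypothetical placeholders.]

import numpy as np

rng = np.random.default_rng(2)

def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson probability of illness at a given dose."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Hypothetical exposure output: doses (CFU) in contaminated servings.
doses = rng.lognormal(mean=1.0, sigma=2.0, size=50_000)
servings = 2e8      # hypothetical annual contaminated servings
epi_estimate = 9e4  # hypothetical epi-based case count to anchor on

# Grid search for the parameter pair most consistent with the target.
best = None
for alpha in np.logspace(-2, 0, 40):
    for beta in np.logspace(0, 4, 40):
        predicted = servings * beta_poisson(doses, alpha, beta).mean()
        err = abs(np.log(predicted) - np.log(epi_estimate))
        if best is None or err < best[0]:
            best = (err, alpha, beta, predicted)

err, alpha, beta, predicted = best
print(f"anchored alpha={alpha:.3g}, beta={beta:.3g}, "
      f"predicted cases={predicted:,.0f}")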
        DR. HULEBAK:  And the beta-Poisson envelope is our
own prediction.  I mean, it's--
        DR. POWELL:  That was one of two that was
developed by the team.
        DR. HULEBAK:  Right, right.
        DR. POWELL:  So we have an unanchored envelope and
an anchored envelope.
        MS. OLIVER:  Paul?
        DR. MEAD:  Paul Mead, CDC.  I have to confess--I'm
not sure if it's late in the day or something in those
cookies, but my head is kind of spinning at this point.
        DR. ROBACH:  Mine, too.
        DR. MEAD:  A couple of quick questions, and I'm
not sure how it influences--or comments--I'm not sure how it
would really work out in this model, but to the extent that
you use the outbreak data, I think it might be appropriate
to not include outbreaks due to unknown etiology.
        DR. POWELL:  We have.
        DR. MEAD:  Okay, to only use those where the
etiology is known because otherwise--I don't know that that
makes a major difference.
        DR. POWELL:  Yes.
        DR. HULEBAK:  But we've done that.
        DR. MEAD:  Okay.  On this table, the unknowns are
in here, so I thought perhaps that influences those
percentages.  The percentages given in the table, you'll see
under 1, 25 percent of outbreaks are due to--
        DR. POWELL:  That's number 2 in your figure?  Yes,
that was deleted from the presentation and that was
initially intended to serve as a means of kind of
background, laying out, and is not incorporated in the
analysis.  It's only based on outbreaks.  I think it's slide
13.  That was the information that was used in just number
13, and there we've limited--actually, I think about 40-some
outbreaks with unknown etiology occurred between '94 and
'98, and those were omitted from that data set.
        DR. MEAD:  Okay, great, thanks.  The other
question, I guess, is if I understand this, your results are
very much linked to sort of the attributable risk associated
with the consumption of pink hamburger.  And I guess this
opens up the broader question and it kind of gets back to
one of the earlier talks about cross-contamination and its
role.
        I mean, in one investigation we did of sporadic
illness we didn't find any association, or consequently any
attributable risk with eating undercooked hamburgers, eating
pink hamburgers.  We did, however, find an association with
not washing your hands after handling those.  Now, I guess
that gets into some sort of philosophic issues about what's
the error there.  Is it hand-washing or is it the fact that
it was in the hamburger in the first place?
        But I'm concerned that the sort of attributable
risk to pink hamburgers, although it has been statistically
significant in some settings, really underestimates the role
of ground beef in terms of bringing this into the household,
and even in undercooked hamburgers which may or may not be
pink.
        DR. POWELL:  It may be, and this is why we would
feel that simply utilizing the confidence interval around
that population, attributable risk from the case control
study, would probably understate our uncertainty about that.
And that's one of the reasons we've tried to use kind of
bounding estimates that utilize other information to
characterize the proportion, you know, that etiologic
fraction.
        Certainly, some proportion of the outbreaks that
are identified as, you know, a non-meat source or something
else--the origins of that infection were ground beef, and
that's an uncertain proportion.  But I think what we tried
to do was not hypothesize, but treat the data that we have
in a way as to try and acknowledge the uncertainty that is
attendant with it and not just rely on the confidence
interval about the case control results.
        MS. OLIVER:  Paul, did you have any other?
        DR. MEAD:  No.  Thank you.
        MS. OLIVER:  John?
        DR. KOBAYASHI:  A couple of comments.
        MS. OLIVER:  Can you identify yourself?
        DR. KOBAYASHI:  John Kobayashi, Washington State
Health Department.  A couple of comments.  You may know
this, but in recent years, I think, since the beginning of
'98 in the international classification of disease coding
for hospital discharge data and deaths, there has been a
code for hemolytic uremic syndrome that wasn't present
previously.
        While that's not a lot of time, maybe if you look
at national data, you could get an idea as to the burden of
HUS, which I think in the United States is almost all E.
coli.  At least that will give you some information about
the total burden of E. coli, not necessarily that related to
hamburger.
        The other comment, sort of following on Paul's
issue of secondary cases, it's worth remembering in the '93
outbreak that two of the three deaths that occurred were
secondary cases.  And it may be worth it to factor in some
sort of occurrence of secondary cases, such as looking at
the proportion of secondary cases in that outbreak.  In that
particular outbreak, we made a lot of efforts to reduce
secondary transmission, and I doubt that those kind of
efforts are done in response to sporadic cases.  So that
might be a conservative estimate of how much secondary
spread occurs related to contaminated hamburger and 0157.
        DR. POWELL:  Well, your point is well taken.  I
would presume that some unknown proportion of the cases in
outbreaks where the likely vehicle of infection is
identified as ground beef are secondary cases, and so to
some extent that's built into the outbreak data.
        Am I mistaken?
        DR. KOBAYASHI:  I'm not sure.
        DR. POWELL:  The number of cases are not
partitioned out this many from an outbreak where the likely
vehicle of infection was ground beef.  We'll put these in
the secondary transmission, then we'll put these in the
ground beef bin.  My understanding is that--
        DR. KOBAYASHI:  They are all included.
        DR. POWELL:  So you've already got some of that in
the outbreak estimates.
        DR. KOBAYASHI:  Maybe so.
        DR. POWELL:  So that proportion is the observed
proportion.
        DR. KOBAYASHI:  Right.
        DR. POWELL:  If you do have some data that would
help us estimate the proportion of secondary cases from the
primary cases, that would be very helpful.
        DR. KOBAYASHI:  Yes.  I think your point is well
taken that at least in our write-up in the '93 outbreak we
did include both secondary and primary cases.
        DR. POWELL:  Sure, sure.
        MS. OLIVER:  Does anyone else have any questions
or comments for Mark on the model summary and
epidemiological validation?
        [No response.]
        MS. OLIVER:  If not, why don't we move into the
next section, Mark?
        DR. POWELL:  Great, so I'll ask the panel members
to all come up to the table now.  And at this point, we'd
like to return to the initial points simply as a point of
departure for a broader discussion.  Again, we don't want to
limit the discussion to these questions, but we felt that it
would be helpful for us to pose some questions as a way of
initiating some discussion.
        So the first set of questions--if we have time, I
think we could return to this resolution question, since I'm
sure some people are going to be heading for the airport
soon.  Let's deal with the second of these.
        Is the panel aware of any evidence that would help
us to, or permit us to adjust for the specificity of the
microbial analysis?  There's been a lot of questions about
the test sensitivity, and a lot of our effort was devoted to
coming up with means of adjusting the apparent prevalence of
surveillance and other data for test sensitivity, both in
terms of the sample size, the microbial methods that were
used, and making it also in some cases conditional upon the
concentration on the incoming product.
        But specificity is an issue that we've not made
any adjustments for, and so we would ask if the panel is
aware of any evidence that would enable us to make that
adjustment.
        MS. OLIVER:  Does anyone on the panel have any
information, any comments on what he's asking?
        DR. POWELL:  The methods would be very similar to
what we had done for sensitivity.  So it's not a
methodological limitation, it's a data limitation.
        MS. OLIVER:  Art?
        DR. LIANG:  Art Liang, CDC.  I don't have an
answer.  I have another question, and that is I think I've
heard the term "sensitivity" used in two different ways.
You know, I'm used to thinking of sensitivity as, you know,
not as a positive predictive value, which is, I think, the
other use of sensitivity that I've heard earlier in the day.
I don't know how you used it.
        DR. POWELL:  Eric?
        DR. EBEL:  Well, I think we just want to define
specificity in this case primarily in terms of the probability of
false positive results.  I'm sorry; the probability of a false
positive would be one minus the specificity, but we're
interested in trying to understand if any of the positive
results that we're getting are, in fact, false.
        Obviously, there's a whole series of explanations
for why you might get a false positive sample.  We haven't
accounted for that in our analysis.  We don't know that it's
an important issue, but if the Committee feels that we
should adjust for that, we'd like to know that.
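        [Illustration: a minimal Python sketch of how a specificity
adjustment could work if data became available, using the standard
Rogan-Gladen correction from apparent to true prevalence.  The
numbers below are hypothetical placeholders.]

def true_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen: p_true = (p_apparent + spec - 1) / (sens + spec - 1)."""
    return (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)

# With perfect specificity, only sensitivity inflates the estimate.
print(true_prevalence(apparent=0.02, sensitivity=0.60, specificity=1.00))
# Imperfect specificity pulls some "positives" back out.
print(true_prevalence(apparent=0.02, sensitivity=0.60, specificity=0.99))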
        MR. SEWARD:  This is Skip Seward, with McDonald's.
We talked a little bit before about this, but I don't think
it's a very large issue that deserves much additional
attention as long as when you get the information, in the
methods it's clear that the people have taken the
identification all the way out to the very endpoint and not
relied on some previous endpoint prior to full elucidation
of what the microorganism is to say that they have a
positive.
        A lot of times, in industry, for example, they
will only go part way and just stop the test.  And in some
data, that may be reported as a presumptive positive which
eventually gets lumped into some estimate of a number of
positives.  So I think that's the only risk involved in
that, but if you go all the way, I would say it's not an
issue worth adding to your deliberation.
        MS. OLIVER:  John?
        DR. KOBAYASHI:  John Kobayashi.  I don't have any
advice for you with regard to this question, but I wanted to
mention that among the CDC and state health departments
there's a significant amount of discussion with one of the
major national laboratories which does testing on clinical
specimens.  And apparently there are plans for this national
laboratory to stop testing for E. coli 0157 and they would
simply rely on the SLT test.
        And although they are apparently willing to send
isolates and what not to state laboratories for further
identification, this may significantly affect sensitivity
and specificity of measuring the burden of E. coli 0157 in
the long run.
        MS. OLIVER:  Any other comments on this question?
        [No response.]
        DR. POWELL:  Next, then, we'll proceed to the
production segment.  We would gladly entertain the Committee's
recommendations if they can think of a better way than what
we have presented to link live cattle to contaminated
carcasses.  We're aware of the limitations of the data that
were currently used to quantify that link, but are unaware
of a better way to establish that link.  And a related
question is are there data or methods currently available
that would improve the quantitative links among fecal, hide
and carcass contamination.
        MS. OLIVER:  Dale?
        DR. HANCOCK:  Dale Hancock, Washington State.  I
think that the amount of data here is really limited.  That
one study--
        DR. POWELL:  Chapman?
        DR. HANCOCK:  --Dr. Chapman's study, and that's a
tiny little study.  And so, you know, the obvious answer is
a U.S. study with lots of animals, but I guess a less
obvious answer is it would be better tied at the group
level, it seems to me, because at least from available data
it is really clustered at the pen level, so that our high
prevalence pens--do they have high prevalence carcasses, and
what's the function going on there at the group level?
        And then I guess the nice thing would be to extend
that to the ground beef level, although I don't have any--or
at least the boxed beef or the combo level, although that
might be super hard to do.
        DR. POWELL:  Are you aware of any efforts to
gather that sort of data that would take into consideration
the group-level effect?
        DR. HANCOCK:  Yes.  I hear MARC, the Meat Animal
Research Center, has done something on that, but I don't
really know.  They kind of play their cards a little close.
        MS. OLIVER:  Isabel?
        DR. WALLS:  I'd like to see more data on
contamination of the hide.  You know, maybe a study could be
set up, and I would also like them to consider seasonality,
if they could.
        MS. OLIVER:  John?
        DR. KOBAYASHI:  John Kobayashi, Washington State
Health Department.  Of course, I'm not a food scientist, a
modeler, or a member of the beef industry, and I think my
comment needs to be taken with a grain of salt.  But at
least my two cents' worth is I don't understand why it's not
possible when cattle are slaughtered to tie on a sheaf of
bar codes onto that carcass, and as that carcass gets
separated from the hide and dismembered and what not various
bar codes accompany those products along the production
line, because it seems to me that there are many, many
questions as to relationships and probabilities that are
pretty uncertain at this time.  And I think that those could
be resolved with, you know, more detailed tracing of
products and seeing where contamination goes.
        MS. OLIVER:  Thank you.  Does anyone else have
comments on this question?
        [No response.]
        DR. POWELL:  Moving then to--
        DR. POWELL:  Dane?
        DR. BERNARD:  I'm sorry for the delay.  Mine is
actually a question--Dane Bernard--for you all.
        MS. OLIVER:  Can you identify yourself?  Oh, I'm
sorry.
        DR. BERNARD:  I did.
        MS. OLIVER:  I know.  I apologize.
        DR. BERNARD:  Have you asked for studies that
address your second question there?  If so, obviously there
are many techniques available to us today to mark organisms
that could be included in a feeding regimen, for example, in
feedlot that you could then track their eventual
translocation to a finished carcass.
        And to address John's intervention regarding
tagging, I'm sure that the experience here has been the same
that I have had when this subject comes up because it goes
in so many different directions when a carcass doesn't go
into one type of thing.  It goes into sausage.  Parts go to
sausage, parts go to this, parts go to that.  It just goes
phht.
        DR. POWELL:  Was that the technical term for--
        DR. BERNARD:  Yes.
        DR. POWELL:  Yes.
        [Laughter.]
        DR. BERNARD:  At least at NFPA, it is.  We have a
definition on my wall.  So it's tough.  It's very expensive
to do that.
        But back to the central question, there are
techniques, and I think it would be interesting to do that
kind of study.  But I don't know whether you have asked for
that or whether you need a recommendation from here, but I
agree with Isabel's earlier intervention that it would be
nice to have some of that work not only to look at hide, but
to start out with fed cattle or a marked strain and then
follow where that might lead to and in what quantities.
        DR. POWELL:  The only answer on the part of the
team would be that part of our effort through the Federal
Register, you know, notice and comment procedure and the
public meeting that was held in October '98 was that we
requested that all relevant data be submitted to the docket.
        MS. OLIVER:  Mike?
        MR. ROBACH:  Mike Robach.  Has there been a
request for a sister USDA group such as ARS to look at the
ecology of this organism as it relates to cattle production
and cattle slaughter, something similar to something we've
just completed in the poultry industry looking at the
ecology and modes of transmission in reservoirs of
Campylobacter, both in live production and following those
flocks through the processing plant?
        It seems to me that we generated a tremendous
amount of valuable information from hatcheries, from feed
mills, from grow-out facilities into processing plants that
have allowed us to begin targeting intervention strategies
to reduce the incidence of campy in poultry.  And it seems
to me that this was a good combination of government-focused
research at a problem with strong industry cooperation.
        Has anybody approached that thought with ARS?
        DR. POWELL:  Well, my only piece of information
that I would add is that I know that in the CSREES
extramural grant solicitations in the last couple of years,
that sort of information, or those sorts of proposals have
been invited through the RFP process.  And that is really,
you know, a large chunk of money relative to the intramural
research monies.  I'm unaware of what sort of success rate
there has been for proposals to look at those issues.
        MS. OLIVER:  Dale?
        DR. HANCOCK:  Yes, just to address Dr. Kobayashi's
point--
        MS. OLIVER:  Can you identify yourself for the
record?  I'm sorry.
        DR. HANCOCK:  What did you say?
        MS. OLIVER:  Can you identify yourself for the--
        DR. HANCOCK:  Dale Hancock, Washington State.
        MS. OLIVER:  Can you speak in the microphone,
also?
        DR. HANCOCK:  To address the issue of animal
identification, while I agree that's a great idea and we
should do that at some point, it's not required to answer
this question.  All we need for this question is cooperating
feedlots, which are no problem; cooperating slaughter
plants, which are a little bit more of a problem; and then
cooperating government agencies that agree not to use
surveillance data for regulatory purposes, which admittedly
is a little bit of a problem.  So it should not be that
overwhelming to collect data to answer these questions, it
seems to me.  And with regard to the point about the
ecological studies, there are a number of groups that have
studies underway at the farm level, although we can always
use more.
        MS. OLIVER:  Colin?
        DR. GILL:  Some of my colleagues have a scheme
afoot to follow generic E. coli contamination through the
packing plant using molecular techniques of strain
differentiation.  They assure me it can be done and I have
to believe them.  As far as cattle identification is
concerned, there's a whole industry concerned with carcass
identification which is moving ahead quite rapidly, I
believe.
        MS. OLIVER:  Jim?
        DR. ANDERS:  Jim Anders, North Dakota Health
Department.  I did ask a question this morning.  I still
feel that the hide and the carcass contamination--any
numbers that we currently have on those have been with
methods that may not be standardized methods.  And so down
the line I guess I agree that we need studies on those.
        And when we get to the carcass contamination, we
have to be very careful because, as I mentioned this
morning, some of those studies may be done on very small
numbers of grams and very small numbers of samples and may
not be reliable, which it seems to me would make a
difference in the dose-response.
        DR. POWELL:  It certainly would, and again we've
incorporated not only the microbial test sensitivity and
what we know about that in the adjustment from apparent to
true prevalence, but incorporated in that uncertainty is
also the size of the sample, okay, and the concentration
because obviously at a lower concentration any given test
method is going to--at a lower concentration, it will be
less likely to detect the organism.  And so those sorts of
considerations have been factored in, and we'll try and make
that more clear, I think, about how we've gone about that.
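        [Illustration: a minimal Python sketch of making detection
conditional on concentration and sample size, as just described:
Poisson sampling gives the chance that a sample of m grams contains
at least one organism, and a per-sample method sensitivity is applied
on top.  The sample size, method sensitivity, and concentrations are
hypothetical placeholders.]

import math

def p_detect(conc_cfu_per_g, sample_g, method_sensitivity):
    """P(at least one CFU in the sample) times P(method detects it)."""
    p_present = 1.0 - math.exp(-conc_cfu_per_g * sample_g)
    return p_present * method_sensitivity

# Low concentrations dominate the miss rate even for a decent method.
for conc in (0.001, 0.01, 0.1, 1.0):  # CFU per gram
    print(f"{conc:6.3f} CFU/g -> P(detect) = {p_detect(conc, 25, 0.8):.3f}")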
        MS. OLIVER:  Any other comments on this question?
        [No response.]
        MS. OLIVER:  I don't see any.
        DR. POWELL:  So moving on to the slaughter segment,
we would ask what sort of evidence would the Committee feel
might be necessary to satisfactorily quantify the link between
hide and carcass contamination.  We're not aware of any such
data, but we would invite your comments as to what sort of
data would be adequate.
        MS. OLIVER:  Does anyone on the Committee or any
of the experts have any comments, thoughts on this?
        Dale?
        DR. HANCOCK:  This is Dale Hancock from Washington
State University.  Here again, I think the group-level data
becomes quite important to look at prevalence on hides and
in carcasses at the group level, and the same sort of study
that looked at fecal prevalence versus carcass prevalence
could look at this.  And actually I think there is a study
that is addressing this issue to some degree going on at
MARC.
        DR. POWELL:  At the group level?
        DR. HANCOCK:  That's what I understood, but that's
not real authoritative.
        DR. POWELL:  So if I interpret your comment
correctly, a study that would simply be a random study, say
1 in 100, 1 in 300, 1 in 500, would not be able to get at an
important factor that would affect that relationship?
        DR. HANCOCK:  It would not get at it from the
standpoint that these things vary at a group level.  I mean,
at least that's what the data suggests, so far as there are
groups with high prevalence.  And presumably their high
prevalence is much higher because it represents maybe
several weeks' contamination.  And so to my way of thinking,
that would be much better done at the group level.
        You could look at individuals within groups to see
if it made any difference whether that animal was positive
on its hide versus its carcass.  But then are groups with a
lot of hide contamination--are they more likely to have a
contaminated carcass, and what's the function that describes
that relation?
        MS. OLIVER:  Art?
        DR. LIANG:  Art Liang, CDC.  I just have more
questions.  I'm sure this is a stupid question, but I
thought one of the wonderful things about models is that you
didn't need any data and that you--
        [Laughter.]
        DR. LIANG:  So why is there a concern about this
particular step versus other places where you simplified the
model?  Maybe you've done some sensitivity analysis and this
turns out to be a critical node, and so maybe that's the
answer.  I don't know.
        DR. ROBERTS:  Well, actually, you're right.
        MS. OLIVER:  Can you identify yourself, please,
for the record?
        DR. ROBERTS:  Tanya Roberts.  Clare Scott and I
did another paper where we looked at some of the procedures.
We didn't include fabrication, which Colin Gill suggests
could be very critically important.  But when we looked at
dehiding, chilling, evisceration, and decontamination in a
hot water wash or steam pasteurizer, the dehiding was by far
and away the most significant factor.
        Now, this is a very simple model that we kind of
abstracted out here.  We took, you know, good, improved
versus not quite so good levels that you might expect in two
different kinds of systems based on rather limited data we
could find and ran it through the model.  So I don't want to
say that this is the last word here, but it's suggestive
that that would be a very important place to collect more
data.
        DR. POWELL:  Mark Powell.  As Dr. Gill suggested,
it's kind of a prima facie case of an important data gap
because it's a small study and it's not national.  It's not
U.S.-based, but it's the only one.
        MS. OLIVER:  Jim, did you have any other
questions?
        DR. ANDERS:  Oh, I'm sorry.  Let me put that down.
        MS. OLIVER:  Mike?
        DR. ROBACH:  Mike Robach.  I think the answer to
some of these questions--you know, what evidence would be
necessary--I think simply you need more evidence.  And I
think you're absolutely right; what you have are very small
numbers and maybe not indicative of what happens in this
country.  We also have to understand that we've got two
different systems, one that is taking care of feedlot
animals and the other one that is taking care of culled
animals.  And I think you've clearly identified differences
in those two rearing systems.
        And so there simply has to be more information
generated under the conditions in which we're currently
operating to base this very important node on.  And I agree
with you, I think it is an extremely important node, and a
lot of assumptions made here, you know, drastically impact
the numbers that you're seeing at the end of the model.  So
this is extremely critical, in my opinion.
        DR. POWELL:  Mark Powell.  Well, I guess my
response would be that more information would reduce our
uncertainty.  Our uncertainty about the model predictions
was shown by those error bars that essentially, you know,
reached from one end of the scale to the other--
        DR. ROBACH:  Right.
        DR. POWELL:  --are considerable.  And so I have,
you know, greater confidence that it would tighten our
estimates.  I think arguably the results of the baseline
model are reasonably consistent with the epidemiologic
evidence, at least within an order of magnitude, which for
models is pretty close to dead-on.  And so I think that the
additional information would be extremely helpful in
tightening our uncertainty and being able to focus, you
know, on where the key points in the process might be.
        I think Eric has another comment.
        DR. EBEL:  No.  I was pushing my microphone away.
        [Laughter.]
        DR. POWELL:  I think Eric will hold his powder
dry.
        MS. OLIVER:  Any other comments on that point?
Colin?
        DR. GILL:  Just one thing.  Could I just ask--
        MS. OLIVER:  Can you identify yourself again?
        DR. GILL:  Colin Gill.  These links between hide
and carcass contamination--do you consider evidence from
generic E. coli equivalent to evidence of potential
contamination with 0157, or do you actually want data on
0157, because data on 0157 is just about impossible to
collect because there's not enough of it?  So we can get
strings of zeroes anytime, but how do you view the data on
generic E. coli?
        DR. ROBERTS:  Tanya Roberts.  In an earlier
version of the model, I actually did use generic E. coli,
and we adjusted it down, though, by 1 log based on the Zhao
and Doyle study looking at fecal shedding, in that the 0157
were 1 log less.  But that tended to give counts that were--
that was based on Graham Bell's work, where he was taking a
piece of the hide and putting it upside down on an agar
plate to mimic what happened if the hide would slap back on
the carcass.
        And the counts tended to be 1 log too high based
on the FSIS data.  So it suggests that maybe we either--we
need to make sure that the aerosol contamination is
included, too.  So maybe we just don't want to look at hide
slaps; we want to look at the transfer from gloves, which
may be less, and the aerosol contamination.  And so there's
maybe a lot of adjustment factors we might have to take into
account for the generic E. coli.
        If the ARS data says that it's very high in some
herds in summer months and early fall months and you're
getting, you know, 25, up to 50 percent of the animals in a
herd going to a slaughterhouse infected, then maybe it's not
such a rare event in those seasonal things.  And maybe if
you could collect the 0157 data in the right season, you
could find a good relationship between a contaminated animal
with its hide and its feces and what happens to the carcass
level.
        DR. GILL:  So if I understand, you do want 0157,
not generic?
        DR. ROBERTS:  Well, we would prefer that.  I don't
know how to--maybe Eric and Wayne have some insight on this,
too, or Mark.
        DR. EBEL:  This is Eric Ebel.  Yes, we do want
0157 data.  I guess the one concern--well, one of the
concerns with generic E. coli data would be the ubiquitous
nature of, you know, E. coli in or on cattle coming into
slaughter.  And then trying to develop a correlation with
the prevalence and concentration on carcasses would just be
a more difficult correlation to demonstrate because of the
fact that, you know, we might have 100 percent contamination
in the live animal.  Then anything else on the carcass makes
the correlation a little less defensible, whereas something
that's less common than 100 percent in the live animal would
allow us to have a pretty good comfort level in the
correlation we can develop for 0157.
        DR. POWELL:  And I would add that it's probably--
Mark Powell--presumably, it has to do with the levels as
well, not only that about 100 percent would be positive for
generic E. coli, but presumably the levels, given that it's
positive for generic E. coli, would be high, and that
could affect the link in terms of prevalence.
        Now, for generic E. coli, as Tanya was suggesting,
we have used generic E. coli information in terms of the
direction and the magnitude of changes once 0157 is on the
carcass, okay, making the presumption that, say, a treatment
that reduced generic E. coli, a physical treatment that
reduced generic E. coli by one log would also reduce 0157 by
1 log.  So we would invite more generic E. coli data and
have used it, but not for the link between the status of the
live animal and the prevalence of contaminated carcasses.
        MS. OLIVER:  There were a couple of others with
comments.  Dane, do you want to do a quick one, and then
John, and then we could move on to the next question.
        DR. BERNARD:  Yes, thank you.  Dane Bernard.  Very
quickly, 0157, yes, and I would agree with Dr. Gill's
previous intervention about if you want to look at where we
get contamination, further processing seems
to be a good source.  But the central question is where does
the 0157 come from, and that does not, according to the
correlation studies that we've seen--and I recall the one by
Acuff, et al, where they didn't find any statistical
correlation between occurrence of anything and 0157.  So we
don't have an effective marker.
        Further, there was an intervention much earlier in
the meeting that I'd just like to bring up once again, and
that has to do with outbreaks and how they are--we have
clustered cases.  Your output from what you've done is a
distribution over the entire population of outputs of 0157
in ground beef.  But recall that most of our problems seem
to be centered around outbreaks, even sporadic cases.
        We're getting better at linking those together and
saying that they had a common source.  And I think that as
you go forward, maybe you should consider that unique event
because what you're looking at is what happens normally.
That's my impression of what you've got here and the
distributions that happen normally, whereas there may be an
event, a catastrophic event, a punctured gut, a dung ball, a
hide slap has been brought up, that then carries through the
system and results in your outbreak.
        So there are a couple of scenarios here and I
think as you go forward, you may have to consider that.  I
don't know how you do that, but I'd just like to get that on
the table for further thinking.
        MS. OLIVER:  John, did you have a quick comment?
        DR. KOBAYASHI:  John Kobayashi, Washington State
Health Department.  Just a quick comment.  In investigating
hospital infections, like in surgical suites and what not,
it's not uncommon to spray an ultraviolet dye or something
like that on the field or some other area, do a mock
operation, and then see who glows at the end of this
procedure.
        And one would think that that would be an easy way
to gather some data on contamination of the carcass from the
hide and, you know, spray some hides and various parts of
the anatomy and all that sort of stuff, and then see where
the carcasses glow in various situations.
        MS. OLIVER:  Did you want to move on to the next
question?
        DR. POWELL:  Yes, we'll move on to the next
question.  I'm sorry.  Eric wanted to make a final comment.
        DR. EBEL:  I don't know if it's a final comment.
This is Eric Ebel.  I just wanted to mention that another
source of data is salmonella sampling data.  And we have
looked at some of that and it's kind of surprising, but at a
very crude level we found similar--at least at the level
we're analyzing things, about 30 percent of animals coming
in were at least intestinally colonized with salmonella.
And that was actually done at the group level.  There was
about a 30-percent correspondence, then, with contaminated
carcasses.
        One of the problems with hide sampling and with
carcass sampling is the methods of doing those samplings,
and better understanding of how sensitive those methods are,
how many are we missing, is needed before we get too fired
up about doing a lot more of the sampling.  So I think
targeted research in that area of, you know, how confident
we can be in the results of standardized carcass and hide
sampling, will be useful.
        DR. POWELL:  Okay, I'd like to move on then to the
second question that we'd like to pose regarding the
slaughter segment, and that is that we've attempted to
develop a mechanistic model that follows product through the
slaughter plant, in large part due to our efforts to satisfy
one of the objectives, which is to try and identify
potential critical control points in the farm-to-table
continuum.
        But we don't have anchors everywhere, obviously,
real data.  And so we ask would it be preferable to develop
a strictly data-anchored model which doesn't attempt to
model processes between the monitoring points, and if so,
what data would be required to develop this sort of an
empirical model rather than one that is more predictive.
        MS. OLIVER:  Does anyone have comments on that?
        Chuck?
        DR. HAAS:  Chuck Haas.  Well, as somebody who is
more of a modeler and less of a food person, I guess my bias
is mechanistic models are preferable, in that if your
overall objective presumably is to look at possible
regulatory scenarios or interventions and if you go strictly
the data-anchored route, that simply describes the state of
practice at present and gives you no ability to estimate
what would happen.
        MS. OLIVER:  Dane?
        DR. BERNARD:  Dane Bernard.  As a non-modeler, I
agree with the modeler.  I think if your ultimate purpose is
to go back and develop some interventions, then that's what
you have to do.  There was one element in that discussion,
though, that came up that I'd like to bring up again, and
that's the modeling in the cooler.
        I think Dr. Gill made an intervention that was
very important.  He has done some carcass mapping; others
have done carcass mapping.  You made an assumption of
general distribution over the carcass that I think is not
supportable by what we know about localization of bugs on
the carcasses.  Head meats, cheek meats are very highly
contaminated.  Based on the way carcasses are slaughtered, I
think you need to consider where they would be and how the
chilling is delivered first to the surface, which is where
they are likely to be, and how that might affect the growth
of your model.
        Thank you.
        DR. POWELL:  Mark Powell.  If you're aware of any
enumeration data on head and cheek meat, we'd love to--
        DR. BERNARD:  Probably not in the public domain.
        DR. POWELL:  That's what I'm getting at.  I'll
flash the FSIS document number again.
        Any other comments regarding this?  If not, we'd
like to move on to the preparation questions.  A similar
sort of question, and maybe we can already guess at what you
might respond, but has to do with modeling, again, outside
of the anchored zone.  Rather than modeling beyond the last
point where validation is currently possible, that is in raw
ground beef where we don't have another independent
validation point until we get to human illnesses, would it
be preferable to simply consider a proportional relationship
at that point between the prevalence of 0157 in raw ground
beef and the incidence of 0157 illnesses due to ground beef?
        MS. OLIVER:  Any comments on that?
        Dale?
        DR. HANCOCK:  Dale Hancock, Washington State.  It
seems to me that it would have to incorporate more than just
simple prevalence in ground beef, have some sort of
concentration distribution, or at least something that is a
proxy for that because it seems possible to me that as time
goes on that we'll have interventions that maybe reduce or
shift the distribution down for ground beef.  And that might
happen at the same time we develop more sensitive methods,
for example, and so it might look like we're not making that
much progress unless we have some way of inferring
concentration or level of contamination.
        MS. OLIVER:  John?
        DR. KOBAYASHI:  John Kobayashi.  Maybe I'm missing
something in your question, but it seems to me that the
occurrence of illness due to ground beef when consuming
contaminated ground beef is highly dependent on the extent
to which it's cooked.  And consequently I think if you
extrapolate from ground beef to some proportion, you're
making an assumption on the level of undercooking or
adequacy of cooking.  I'm not sure how you can do that for a
long-term model.
        DR. POWELL:  Mark Powell.  The proportionate
relationship that currently exists between the prevalence in
raw ground beef and the number of illnesses is kind of the
net result of all those things under current practices.  But
it would not necessarily be amenable to being able to
predict with a great deal of certainty about what the impact
would be of interventions that might change the shape of the
underlying distribution curve.
        If it were merely to shift it without changing the
shape, then it may be reasonably useful.  But if it were to
change the underlying shape of the exposure distribution,
such as something that would trim the tail, then perhaps
not.
        Chuck, why don't you weigh in?
        DR. HAAS:  Chuck Haas.  I think by the time you go
through that litany that you just ran through, Mark, you're
probably adding more assumptions than you would be saving by
putting the model to the state where it is now.
        DR. POWELL:  Mark Powell.  I think the model as it
currently exists involves a lot of assumptions going on
between raw ground beef and consumption of ground beef.  So
I think we would be replacing one set of assumptions for
another.
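        [Illustration: a minimal Python sketch of the trade-off Dr.
Powell and Dr. Haas describe.  A proportional model calibrated to the
current baseline predicts the same relative change for any
intervention that lowers mean exposure by a given amount, whereas a
dose-response calculation distinguishes a uniform shift from trimming
the tail.  The exposure distribution and dose-response curve are toy
placeholders.]

import numpy as np

rng = np.random.default_rng(3)
doses = rng.lognormal(0.0, 2.0, 100_000)  # hypothetical exposure distribution

def p_ill(d):
    return 1.0 - np.exp(-0.001 * d)       # toy dose-response curve

baseline = p_ill(doses).mean()
k = baseline / doses.mean()               # proportional calibration constant

shifted = doses * 0.5                                   # uniform 2x reduction
trimmed = np.minimum(doses, np.quantile(doses, 0.99))   # trim the top tail

for name, d in [("shift", shifted), ("trim tail", trimmed)]:
    mech = p_ill(d).mean() / baseline     # dose-response prediction
    prop = (k * d.mean()) / baseline      # proportional prediction
    print(f"{name:9s} mechanistic: {mech:.2f}x  proportional: {prop:.2f}x")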
        MS. OLIVER:  Any other comments?
        [No response.]
        MS. OLIVER:  Move on to the next question.
        DR. POWELL:  Okay.
        MS. OLIVER:  Skip, did you have something?
        MR. SEWARD:  I agree with everything.
        DR. POWELL:  The second question is how might we
define a plausible frequency distribution for extreme
time/temperature handling conditions in the absence of data?
Wayne has elaborated the assumptions regarding least
compliant, uniformly compliant, most compliant, in the
absence of data.  And I think it's an approach that reflects
our uncertainty, but we'd entertain or invite your comments
about how we might improve that approach.
        MS. OLIVER:  John?
        DR. KVENBERG:  I could offer two possible
suggestions that may be useful to you.  One is that states
do two things.  They do food inspections and they do
outbreak investigations at the local level.  Relative to
outbreak investigational studies, I know of places like New
York where they will go to the root cause of the
investigation and make determinations on what conditions
existed that contributed as a factor to the outbreak.
        Secondly, at least some key states are tracking
compliance with their requirements through certain critical
factors that are addressed in the Food Code, to include
refrigeration, cooking temperatures, temperature abuse, hot-
holding, et cetera.  It may be possible through their
databases to get some information relative to developing a
plausible frequency of extreme conditions you're seeking on
a limited basis and then make some assumptions from that.
        DR. POWELL:  Thank you, good suggestion.
        MS. OLIVER:  Dane?
        DR. BERNARD:  Dane Bernard.  I think you had kind
of part of my message before.  I apologize for maybe being
too abrupt at that point in time.  However, I think you have
to look at what we do know about the growth of 0157 as a
starting point.  If it doesn't grow below 8 C, then let's
not model below that.  It just doesn't make sense.  And I
think you need to look at your assumptions on lag time in
terms of when you begin to lop off lag time and count that
toward when the organism might start to grow.  I think there
are some assumptions that you made there that I personally
wouldn't agree with.
        In terms of your assumption on instant temperature
equilibration, I think what you might do--I think, as you
said, the rationale was while it goes into a refrigerator,
it has got to cool down.  So, you know, you think it might
null out.  But once the beef is chilled, the quality
concerns of the industry are to keep it cold.  Even when it
gets out and it gets fabbed, we're doing that in cold rooms.
        So I think if you look even within the agency at
what kind of temperature profiles the agency allows, trim,
for example--and while practices vary, the custom in most of
the larger operations which produce the bulk of our beef is
to put trim in, put a layer of CO2 ice in, put more trim,
more ice, more trim, and they keep it cold and it goes back
into a 36-degree room.  So it just simply struck me as maybe
a bit out of the ordinary to assume instant equilibration
when we're storing at 36 in a 2,000-pound combo, assuming
that it goes instantly to room temperature as it affects
growth.
        So I think if you look at what the ordinary
practices are--and there's plenty of information in the
agency to give you, I think, a better, more plausible
distribution in terms of time/temperature handling.
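        [Illustration: a minimal Python sketch of a growth step that
respects the points just raised: no growth below a minimum growth
temperature (8 C here, per the discussion), and no growth counted
until the lag phase is consumed.  The square-root (Ratkowsky-type)
rate constant is a hypothetical placeholder, not a fitted value.]

def log10_growth(temp_c, hours, lag_hours, t_min=8.0, b=0.02):
    """Return log10 increase over a constant-temperature interval."""
    if temp_c <= t_min:
        return 0.0                       # below minimum growth temperature
    grow_hours = max(0.0, hours - lag_hours)
    sqrt_rate = b * (temp_c - t_min)     # Ratkowsky: sqrt(mu) = b * (T - Tmin)
    return (sqrt_rate ** 2) * grow_hours

print(log10_growth(temp_c=4.0, hours=48, lag_hours=6))   # refrigerated: 0.0
print(log10_growth(temp_c=20.0, hours=12, lag_hours=6))  # abused: some growth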
        MS. OLIVER:  John?
        DR. KOBAYASHI:  John Kobayashi.  I'm not sure if
this is what you're looking for because it's related to time
and temperature violations in foodborne disease in general,
not specifically to meat and not specifically related to
0157.  But there are a couple of studies out there that
relate to time and temperature violations and other sort of
critical item violations in the risk of foodborne disease.
        One was published around 1987 in the American
Journal of Public Health.  The first author was Irwin, and
what we did is we looked at restaurant inspections before
the outbreak occurred--we were able to do that
retrospectively because of computerized records--and the
occurrence of foodborne outbreaks in Seattle, Washington,
compared with restaurant inspections that were done on
unaffected restaurants.  And there were odds ratios and that
sort of stuff that occurred if inadequate refrigeration--
your risk of a foodborne outbreak increased by thus and
such.
        Since that time, there's an EIS officer named
Bucholtz in L.A. County who did the same study, except with
L.A. County data, which is considerably more abundant.  And
I haven't seen his data, but I've heard he came up with
basically the same conclusions.  But, again, this isn't
related specifically to 0157.  This is foodborne in general,
but maybe you can extrapolate if you're needing that type of
stuff.
        MS. OLIVER:  Dick?
        DR. POWELL:  Pardon me.  I just had a follow-up
question.  Would that help us get at the prevalence of
abusive conditions or the risk associated with abusive
conditions?
        DR. KOBAYASHI:  Well, yes.  I don't know.  You'll
have to look at some of the data and think about it.  I
mean, basically we were able to get odds ratios in
attributable risks involved, and I think they did the same
thing in L.A.
        DR. POWELL:  Thank you.
        DR. KOBAYASHI:  But, again, it's restaurants and
not slaughterhouses, and so forth.
        MS. OLIVER:  Dick?
        DR. WHITING:  Dick Whiting, FDA.  I would echo
some of the comments I've heard recently on the use of lag
times in some of this.  Maybe you've got organisms that some
might be adjusted to the intestinal tract and then you've
got other organisms that have dried out on the skin.  I
think the conservative approach is to sort of disregard the
lag phase and just assume the organisms can grow.
        Another comment.  I'm not sure just exactly how
you did model temperature, and so on, but we did come across
some data in our studies on the temperature in home
refrigerators.  And if that would be of use for you, I can
supply that.
        MS. OLIVER:  Dale?
        DR. HANCOCK:  Dale Hancock, Washington State.  It
seems to me to estimate how common those extreme things are,
couldn't bacteriological profiles at the retail level give
you some information about what fraction of ground beef has
been seriously temperature-abused?  Maybe Dr. Gill can
provide more insight on that?
        MS. OLIVER:  Colin?
        DR. GILL:  We're just in the process of completing
a rather extensive study of the cold chain in Canada.  It
turns out that all the products have reached temperatures
below 6 degrees centigrade by about 7 days, but it takes 7
days to get down there.  After that, we found no product
above 6 degrees centigrade until it got to the retail level,
on display, when 4 percent of the products at any time are
above 7 degrees centigrade.
        So the only time that you see to be getting
temperature abuse to any extent in the general distribution
is in the cooling down phase, after the carcass is broken
up, and when it's returned to the retail shelf.  We've also
looked at combo bins and those are, of course, brought down,
as you say, with dry ice.  And they were uniformly below 6
degrees centigrade.  So, basically, until it gets to the
retail level or to perhaps the restaurant level, you haven't
got much of a problem as far as temperature is concerned,
apparently.
        MS. OLIVER:  Isabel?
        DR. WALLS:  Isabel Walls, NFPA.  I'd like to agree
with what Dane said, and also just to urge caution using
modeling data.  There are so many unknowns here.  I think
it's helpful to use modeling data for "what if" scenarios,
but we don't really know how long products are temperature-
abused, although there's some evidence now coming out.  We
don't have a lot of data on how many people are abusing
these products.
        So there's a lot of unknowns, a lot of assumptions
that could adversely affect the model, so I'd just urge caution in
doing it.  It may be helpful or interesting to do some "what
if" scenarios.  What if it is abused?  But, you know, unless
we have really good data on how much is abused and at what
temperatures, you may not want to use it in the model.
        MS. OLIVER:  Any other comments on this point?
        [No response.]
        MS. OLIVER:  Do you want to move on to the next
question?
        DR. POWELL:  Thanks.  Finally, we'll turn to the
set of questions regarding dose-response, and we would ask
whether the Committee feels that there are sufficient data
and methods currently available to develop a separate dose-
response relationship for the susceptible population and how
we might validate such a dose-response curve.
        MS. OLIVER:  Any comments?
        Chuck?
        DR. HAAS:  Chuck Haas.  There is not data that I'm
aware of, and I think the only approach to getting such data
will be to develop animal models and to do the test on an
animal model susceptible sub-population.
        MS. OLIVER:  John?
        DR. KOBAYASHI:  John Kobayashi.  If by a
susceptible population you're talking about children, I
agree.  I'm not sure that there's that much data out there,
but a couple of things that come to mind is that maybe you
can tease something out of the '93 outbreak, making the
assumption that those were children's burgers and most of
the people who ate them were children, as opposed to small
adults or, you know, adults who were eating small amounts of
food, and then break apart--there should be an age
distribution, and so forth, with regard to the cases, and so
forth.
        The other thing you might want to look at is that
there were two waterborne outbreaks, one in Alpine, Wyoming,
related to 0157:H7, and another one in Missouri many years
ago.  And in that case, you had a whole community, young and
old, et cetera, that were exposed to contaminated water.
And maybe there's some way of looking at the differential
occurrence with regard to the illness by age.  I think one
of the problems with just looking at raw surveillance data
is how much of an influence exposure has on the age of
occurrence of the cases.
        MS. OLIVER:  Chuck?
        DR. HAAS:  Chuck Haas.  I'm familiar with both of
those waterborne outbreaks.  In Cabool, that actually, as far
as I understand, preceded the ability to measure 0157 in
water samples, and so there simply are no water data
available.  For Alpine, the people I've talked to say
they've been unable to isolate 0157 from the water sample,
so again we lack exposure data.
        MS. OLIVER:  Can you identify yourself and talk
into the mike, please?
        MS. COLEMAN:  Peg Coleman.  Just a follow-up
question on waterborne outbreaks.  Wasn't there an outbreak
in New York this year?
        DR. HAAS:  There was, and I haven't heard any
indication as to whether or not they've got exposure data
for that.
        DR. KOBAYASHI:  This is John Kobayashi.  I do know
that they got the 0157 out of the water, and I assume they
quantitated it.  But then the question is the population
exposed and how well they were able to define that because
it was a big state fair or something like that.
        DR. POWELL:  A lot.
        MS. OLIVER:  Any other comments on this?
        [No response.]
        DR. POWELL:  So moving on to the next--
        DR. HULEBAK:  Just one point.  I did talk to Nancy
Strockbine and Paul Mead before they left and they've
promised us that they will be submitting some comments by e-
mail, anyway, in written form, to help address some of these
questions.  And I should reiterate that that invitation
stands for all members of the Committee and invited experts
to give us your thinking, any other thoughts that you have
in the next couple of months.
        MS. OLIVER:  Do you want to move on, then?
        DR. POWELL:  Thank you, yes.  The next question--
I'm sorry that Marguerite seems to have left because she had a
comment earlier regarding the applicability of the EHEC to
define the lower bound of the envelope.  But we'd ask
whether the Committee thinks that the bounding approach used
in the envelope methods, using Shigella dysenteriae as the
upper bound and the EPECs as the lower bound--whether that
approach is sound and is reasonably likely to capture 0157
somewhere within those bounds?
        MS. OLIVER:  Does anyone on the Committee or any
of the invited experts have any comment on that?
        Chuck?
        DR. HAAS:  Chuck Haas.  I'm starting to think,
although it certainly was reasonable to begin with, that the
Shigella may simply be much more potent.  That
may be over-conservative in terms of estimating the upper
bound.
        MS. OLIVER:  Any other thoughts or comments?
        [No response.]
        MS. OLIVER:  I don't see any.
        DR. POWELL:  And then finally the one dose-
response curve that did have the greatest extent of overlap
with the epidemiologic curve obviously is anchored, and so
it has kind of a leg up on a more uninformed approach.  But
on the other hand, we think that if you've got data, you
ought to use it, is another argument.  And it flows from
ideas about, you know, kind of most likely estimation sorts
of procedures, where you want to--or maximum likelihood
procedures where you want to say what values for the
parameters are most likely, given the available data.  So we
would ask your comments as to whether you think that that
sort of an anchoring approach was appropriate.
        MS. OLIVER:  Does the Committee have any comments
on that?
        DR. POWELL:  Well, if Chuck Haas shrugs, we'll
take that as a--
        DR. HAAS:  Chuck Haas.  You know, I'm comfortable
with it because as I mentioned to you before one-on-one, it
looks like that dose-response curve probably overlaps the
animal dose-response curve that exists.  So, you know, I'm
comfortable that it's giving reasonable-looking results.
        DR. POWELL:  Dane?
        MS. OLIVER:  Dane?
        DR. BERNARD:  Dane Bernard.  This is obviously a
question more appropriate to the modelers than the non-
modeler food technologists.  The only thing I would ask is
that you, in your descriptive terms, do some ground-proofing
on it by looking at what has happened with outbreaks, what
we do know about outbreaks in terms of what may have been
there even though there are methodology differences and
there are gaps and there are holes.
        I go back to what I said earlier:  from what we
know, the problems come from outbreaks and clusters, and we
have probably been presented with a good deal more O157 than
we have problems.  So just try to take what is there, look
at it practically, and see whether the anchor that you've
picked makes sense in terms of what we're observing in
practice.  And you look puzzled, and I don't know how to go
any further to answer the puzzled look on your face.
        DR. POWELL:  Mark Powell.  Well, the active
surveillance data suggest that most of the cases of O157 are
sporadic rather than associated with outbreaks.  And clearly
I think we would do well to follow your advice by trying to
explore more fully the kinds of extreme events that could
lead to big doses or a large population being exposed, and
what sorts of steps in the process lead to those extreme
outcomes.
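        [Editor's note:  One way to explore the extreme
events Dr. Powell describes is a simple Monte Carlo over a
few multiplicative process steps, examining the far upper
tail of the resulting dose distribution.  Every distribution
and parameter below is an assumption chosen for illustration
only.]

        # Monte Carlo sketch of extreme-dose events; all inputs assumed.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        log_dose = (rng.normal(-2.0, 1.5, n)     # initial log10 CFU/g
                    + rng.normal(0.5, 1.0, n)    # growth during temperature abuse
                    - rng.normal(4.0, 1.0, n))   # log10 reduction from cooking
        p999 = np.quantile(log_dose, 0.999)
        print(f"99.9th percentile of log10 dose ~ {p999:.2f}")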
        MS. OLIVER:  Any other comments, Chuck?
        DR. HAAS:  Yes.  Chuck Haas.  I'm not quite
comfortable with the argument that most of the case burden
is from outbreaks.  And let me throw one other piece of data
on the table.  In England and Wales, they report O157
confirmed outbreak cases and total laboratory reports, and
they show a ratio of about 10-fold between them.
        I think most people believe that England and Wales
probably capture a greater proportion of outbreaks in their
reporting system than we do.  So I would use that, possibly,
as a lower bound for the ratio.  I think there are a lot
more sporadic cases or unreported outbreak cases than
reported outbreaks.
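        [Editor's note:  The arithmetic behind the ratio Dr.
Haas cites, as a one-line check.  The 10-fold figure is his;
the total count is hypothetical.]

        # If confirmed outbreak cases are about one-tenth of total
        # laboratory reports, roughly 90 percent of reported cases
        # are sporadic or not linked to a recognized outbreak.
        total_lab_reports = 1_000                # hypothetical count
        outbreak_cases = total_lab_reports / 10  # the cited 10-fold ratio
        sporadic_share = 1 - outbreak_cases / total_lab_reports
        print(f"sporadic/unlinked share ~ {sporadic_share:.0%}")  # 90%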
        MS. OLIVER:  Dane?
        DR. BERNARD:  Dane Bernard.  I'm not going to get
into a war with Chuck over this.  I would agree.  You know,
my intuition says that a lot of the sporadic cases are just
outbreaks that we haven't linked up.  So I guess that's kind
of where I was coming from with that.
        But, you know, with the ratio that you've developed
in terms of the total number of cases that are due to ground
beef--that was, what, 17 or 18 percent--if you look at that,
it narrows down the total case burden.  And I do think that
we're going to be getting better at linking things together.
Still, it leaves me uncomfortable to look at this just as a
general problem while leaving out that hump out there, that
unusual event that contributed to those outbreaks.  That was
my only point.
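        [Editor's note:  For scale on the 17-to-18-percent
attribution figure:  taking CDC's then-current estimate of
roughly 73,000 annual U.S. O157:H7 illnesses (Mead et al.,
1999) purely for illustration, the ground-beef-attributable
case burden works out as follows.  The 73,000 total is not a
figure from this meeting.]

        # Illustrative attribution arithmetic only.
        total_cases = 73_000   # assumed total annual O157:H7 illnesses
        for share in (0.17, 0.18):
            print(f"{share:.0%} of {total_cases:,} ~ {share * total_cases:,.0f} cases")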
        MS. OLIVER:  Any other thoughts?
        Colin?
        DR. GILL:  If the distribution of cases is sporadic,
aren't you just seeing the upper end of the distribution of
an organism that's present at very low numbers?  You're just
seeing a normal distribution with a large standard
deviation, and you're seeing only the top 0.1 percent.
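        [Editor's note:  Dr. Gill's tail argument, sketched
numerically:  if log contamination per serving is roughly
normal with a large standard deviation, the servings capable
of causing illness can be a very small fraction, on the
order of the top 0.1 percent.  The mean, standard deviation,
and threshold below are illustrative assumptions.]

        # Tail probability of a wide (log-)normal contamination
        # distribution; all values are assumptions.
        from scipy.stats import norm

        mu, sigma = -1.0, 2.0   # assumed mean and SD of log10 CFU per serving
        threshold = 5.0         # assumed log10 dose of concern
        tail = norm.sf(threshold, loc=mu, scale=sigma)
        print(f"fraction of servings above threshold ~ {tail:.1e}")  # ~1.3e-03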
        MS. OLIVER:  Jim?
        DR. ANDERS:  Jim Anders, North Dakota Health
Department.  I'm from the laboratories, by the way, so I'll
speak from the point of view of a public health laboratory.
North Dakota is not very heavily populated, but from the
numbers of O157:H7 that we're getting, almost all of the
cases cannot be traced to outbreaks, per se.
        So I don't know what that means, other than that's
what we're seeing.  And I can't speak for some of the other
states, but basically we get them here and there, and when
they check them out, they do not seem to be related to
outbreaks.
        MS. OLIVER:  Any other comments?
        [No response.]
        MS. OLIVER:  Okay.  Did you have any other
questions for the group?
        DR. POWELL:  None at this time, no--
        MS. OLIVER:  Okay.
        DR. POWELL:  --although we reserve the right to
come back to you with more questions.
        MS. OLIVER:  Fine.
        I'd really like to thank the Committee and the
invited experts for all of your input.  It has been very
beneficial to both agencies, and we really appreciate it.
We also appreciate the long days that you've had to put in;
some of you have endured three full days.  We appreciate
that.
        We will be having a meeting of the Advisory
Committee in the spring.  We haven't quite gotten the agenda
and the topics together yet, and Dr. Wachsmuth and I need to
talk about that.  I'd like to wish you all a safe trip home,
and enjoy your holidays.
        [Whereupon, at 4:59 p.m., the meeting of the
Advisory Committee was concluded.]