
NIOSH Publication No. 2005-106:

Mixed Exposures Research Agenda - A Report by the NORA Mixed Exposures Team

December 2004


Biomarkers

The term biomarker is used quite broadly to refer to indicators of exposure, effect, or sensitivity that are measured in biological samples or systems. Biomarker measurements have high potential value because they are made with human samples; whether that value is realized depends on how well the biomarker is understood. For example, some people interpret the presence of a chemical in blood as an indication of an adverse effect. For some well-studied chemicals, such as lead or carbon monoxide, blood levels can be equated to different degrees of health risk. However, for most chemicals of interest, the methods to accurately measure the biomarker and the relationships of a biomarker to effects and to sources of the chemical are unknown. Some measurable biomarkers may be indicators of a health effect, while others are simply indications of past or present exposure. Similar examples could be given for biomonitors of health, whether they are relatively simple (for example, a symptom questionnaire) or complex (for example, a full medical exam). Deoxyribonucleic acid (DNA) adducts have been studied for years, but what does the presence of a certain level of DNA adducts mean?

Genomics is rapidly enabling the development of more information about a person’s genetics, but it will still be necessary to determine, for example, whether a particular genetic array of metabolizing enzymes places a worker at higher risk. Genomics, proteomics, and other related technologies could also be useful to screen for changes in gene expression in persons working in one environment versus another. Although the literature on promising biomarkers is growing, the ability to interpret them in terms of health risk and prevention is not nearly commensurate; and there are significant ethical issues related to obtaining and applying genetic data.

Applying the concept of biomarkers to studies of exposures to complex mixtures can greatly aid in understanding the consequences of such exposures and may help identify the active components. These issues can be difficult when studying specific agents but become even more complex as the exposure complexity increases. However, the advantage of applying biomarker data is that a finite number of responses or health outcomes may be categorized. By carefully working backward from the response to the exposure, it may be possible to identify events, markers, or changes that can be monitored or used to predict outcome, or used to design prevention plans. Exposures for which identical metabolic intermediates are produced could provide useful mechanistic information.

Most risk characterization approaches for mixtures rely on estimations of risk of a few components of the mixture. This creates significant uncertainty and variability. By directly measuring adequately characterized biomarkers, it is possible to more effectively predict, measure, and intervene in adverse health events.

Risk Assessment Methods

A pre-eminent need in the field of mixed-exposure research is to develop scientifically valid risk assessment strategies that can facilitate establishing protective regulatory standards and risk management procedures. Ideally, to determine the toxicity of any chemical mixture, one would determine the range of possible exposures and test the complete mixture for these exposures. Traditional risk assessment of individual agents or stressors has relied on toxicological tests such as the 2-year rodent bioassay, a laborious and expensive procedure. With the infinitely large number of chemical mixtures in the environment, risk assessment methods that rely on the conventional methodologies and approaches are not feasible because of the immense resources required. The challenge for mixed-exposures risk assessment is to develop alternative methods that can take the data available for chemicals or mixtures and make scientifically valid predictions for priority mixtures of relevance to occupational and environmental exposures.

Risk assessment for mixed exposures is limited by the availability of data. To meet the mandates and limitations imposed by various laws, individual chemicals, stressors, or biological agents have been tested, and one rarely finds studies with data evaluating multiple health effects in the same organism. Thus, studies are needed in which multiple health effects have been assessed in the same animal or human populations, as in real-life epidemiological studies.

One approach that could be applied to complex mixtures of varying composition is to identify all the chemicals in a mixture, determine the toxicity of each, and have available complete information regarding the possible interactions of all components of the mixture over the expected exposure ranges. For complex mixtures, data are typically inadequate to make predictions with certainty: not all components of the mixture may be identified, the proportions of components may not be known, information about possible interactions between components will be rare, and epidemiologic data on human health effects are often missing. Risk assessment in the area of chemical mixtures, therefore, is characterized by making judgments in the face of multiple unknown factors.

Whole-Mixture Approach (Mixture Treated as a Single Toxic Agent)

Whole-mixture testing considers the mixture as a single entity and conducts a standard health risk assessment for the chemical mixture in the same way that one is conducted for a single chemical. It is the simplest way to study the effects of a mixture, because the sole information needed to apply this method is the dose-response curve of the whole mixture in the organism of interest.

Dose-response data on the whole mixture as a single entity are ideal for risk assessment because the extrapolations are minimal. This method has been used for the risk assessment of cigarette smoke, diesel exhaust, and mixtures of groundwater contaminants. Influences of possible interactions among the components of the mixture are included because the whole mixture has been tested. However, this approach cannot identify any toxicologic interactions or the causal mechanisms for the observed toxicity. Whole-mixture procedures are best for mixtures that maintain fairly constant composition and exposure concentration throughout the expected timeframe of the exposure. Dose-response data for whole mixtures, however, are rarely available, in part because most legislative promulgations have been oriented toward single-chemical exposures. Even if such data are available, extrapolation across routes or from high to low dose may be required, introducing uncertainties. No information about the identity of the mixture components is obtained, and testing of all relevant potential mixtures (for variations in dose or proportions) is impossible.
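To make the idea concrete, the following sketch (in Python, with invented whole-mixture bioassay data and a simple two-parameter log-logistic model chosen purely for illustration, not taken from this report) fits a dose-response curve to whole-mixture data and estimates the dose associated with a 10% response:

    # Hypothetical sketch: fitting a dose-response curve to whole-mixture bioassay
    # data and estimating a 10% effect dose (ED10). Data and values are invented
    # for illustration only.
    import numpy as np
    from scipy.optimize import curve_fit

    def log_logistic(dose, ed50, slope):
        """Two-parameter log-logistic model: fraction responding at a given dose."""
        return 1.0 / (1.0 + (ed50 / dose) ** slope)

    # Hypothetical whole-mixture bioassay results (dose in mg/m3, fraction responding).
    doses = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
    responses = np.array([0.02, 0.08, 0.30, 0.65, 0.90])

    params, _ = curve_fit(log_logistic, doses, responses, p0=[20.0, 1.0])
    ed50, slope = params

    # Invert the fitted model to find the dose giving a 10% response (ED10).
    ed10 = ed50 * (0.10 / (1.0 - 0.10)) ** (1.0 / slope)
    print(f"ED50 = {ed50:.1f} mg/m3, slope = {slope:.2f}, ED10 = {ed10:.1f} mg/m3")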

Similar-Mixture Approach

The similar-mixture approach uses data on a well-studied surrogate mixture that is toxicologically similar to the mixture of interest to estimate its risk. Mixtures are usually judged to be toxicologically similar based on composition or observed toxicological properties. Mixtures may have the same components at different ratios, some common components but some unique ones, or one or more additional components when compared with the original [51 Fed. Reg. 34014 (1986); EPA 2000]. The main toxic effects should be the same for the surrogate and the mixture of interest.

Federal Register. See Fed. Reg. in references.

Group of Similar-Mixtures Approach

A different kind of similar-mixture approach is called the comparative-potency method [Lewtas 1985]. In this approach, the human toxicity of the mixture is estimated from that mixture’s toxicity in a nonhuman study by multiplying by a proportionality constant that is estimated from data on the other mixtures in the similarity set. This approach is empirical, requiring human dose-response information for the main adverse health effect for a mixture that is toxicologically similar to the one in question.
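A minimal sketch of the comparative-potency arithmetic, using invented potency values and assuming (for illustration only) that the proportionality constant is estimated as the average human-to-bioassay potency ratio across the similarity set:

    # Hypothetical sketch of the comparative-potency idea: scale a new mixture's
    # bioassay potency by a proportionality constant estimated from similar
    # mixtures that have both human and bioassay potency data. All numbers are
    # invented for illustration.
    import numpy as np

    # Similar mixtures with both a human potency estimate and a bioassay potency
    # (arbitrary but consistent units).
    human_potency = np.array([2.0e-4, 5.0e-4, 1.2e-3])
    bioassay_potency = np.array([0.8, 2.1, 4.6])

    # Proportionality constant k: average ratio across the similarity set.
    k = np.mean(human_potency / bioassay_potency)

    # Mixture of interest: only a bioassay potency is available.
    new_bioassay_potency = 3.0
    estimated_human_potency = k * new_bioassay_potency
    print(f"k = {k:.2e}; estimated human potency = {estimated_human_potency:.2e}")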

Although this approach includes interaction effects, it focuses on only one health effect, so it may not provide an adequate evaluation of the overall health risk. In addition, full information about similar mixtures is rare, so this approach may not be readily useful in most risk assessment situations. This method has not been used extensively, so its general validity and applicability are still undetermined. It has primarily been used with carcinogens.

Component-Based Mixture Approaches

A single component of a chemical mixture may be a relevant index of toxicity when that component is suspected to account, qualitatively and quantitatively, for most of the toxicity. For example, photochemical pollution is typically indexed by the level of ozone, the principal oxidant. The concentration of ozone is used for both health and regulatory purposes with regard to the mixture, and ozone levels are widely used in toxicologic investigations as a surrogate for the mixture sample. Another example is the use of benzo[a]pyrene as an indicator for the carcinogenic potential of mixtures of polycyclic aromatic hydrocarbons (PAHs). This approach is useful, under the appropriate conditions, because only the dose-response information for the indicator is required. Ideally, a marker of exposure to a complex mixture should be (1) unique to the mixture in context, (2) present at a consistent ratio to other components, (3) readily detectable at low concentrations, and (4) measurable with good accuracy at a reasonable cost. Obviously, the disadvantage of this approach is that the potential toxicity of other mixture components is ignored.
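As a rough illustration, and assuming an indicator whose ratio to the rest of the mixture is already known and constant (criterion 2 above), estimating the whole-mixture concentration from an indicator measurement reduces to a simple ratio; the values below are invented:

    # Hypothetical sketch of the indicator-chemical idea: estimate exposure to a
    # complex mixture from a measured indicator concentration and an assumed,
    # previously characterized indicator-to-mixture mass ratio. Values invented.

    measured_indicator_ug_m3 = 0.5        # measured airborne indicator concentration
    indicator_fraction_of_mixture = 0.02  # assumed constant ratio (indicator / whole mixture)

    estimated_mixture_ug_m3 = measured_indicator_ug_m3 / indicator_fraction_of_mixture
    print(f"Estimated whole-mixture concentration: {estimated_mixture_ug_m3:.1f} ug/m3")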

Mathematical Models of Joint Toxicity

Component-based risk assessment of simple, identified mixtures can be improved by using physiologically based models reflecting the pharmacokinetics and pharmacodynamics (PK/PD) of the component chemicals. Risk estimates can be tailored to exposure situations and worker characteristics by incorporating time-dependent exposure patterns and physiological factors appropriate to the situation. For such modeling methods to become standard practice, a set of central and health-protective default parameters must be available for use in the absence of chemical-specific data.
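For illustration only, the sketch below simulates two hypothetical co-exposed solvents in a single compartment with saturable (Michaelis-Menten) metabolism, where each chemical competitively inhibits the metabolism of the other; all parameter names and values are invented, and a true physiologically based model would include multiple tissue compartments and chemical-specific data:

    # Hypothetical sketch: a one-compartment toxicokinetic model for two co-exposed
    # solvents whose saturable metabolism is mutually competitively inhibited.
    import numpy as np
    from scipy.integrate import solve_ivp

    # Invented parameters (concentrations in mg/L, time in hours).
    UPTAKE = (2.0, 1.5)   # constant uptake rates during exposure, mg/L per hour
    VMAX = (5.0, 4.0)     # maximum metabolic rates, mg/L per hour
    KM = (0.5, 0.8)       # Michaelis constants, mg/L
    KI = (0.6, 0.9)       # competitive inhibition constants, mg/L

    def rates(t, c):
        c1, c2 = c
        # Each chemical raises the apparent Km of the other (competitive inhibition).
        met1 = VMAX[0] * c1 / (KM[0] * (1.0 + c2 / KI[1]) + c1)
        met2 = VMAX[1] * c2 / (KM[1] * (1.0 + c1 / KI[0]) + c2)
        return [UPTAKE[0] - met1, UPTAKE[1] - met2]

    sol = solve_ivp(rates, t_span=(0.0, 8.0), y0=[0.0, 0.0], dense_output=True)
    c_end = sol.y[:, -1]
    print(f"Concentrations after an 8-h co-exposure: {c_end[0]:.2f}, {c_end[1]:.2f} mg/L")

Because the inhibition terms link the two equations, simulated concentrations during co-exposure exceed what either chemical would reach alone at the same uptake rate, which is the kind of interaction a default additivity assumption would miss.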

Risk characterization from PK/PD models should include a description of the uncertainties. Parameters in these models are rarely measured independently, so that biologically based models usually include default or estimated parameters. Uncertainties in the risk application should then address model fit, judgments of relevance of supporting data (if extrapolation is used), and biases (for example, use of protective assumptions or confidence limits).

Toxic Equivalency Factors

This approach is appropriate when the components of a mixture are all congeners or isomers of the same chemical; it has been used extensively for risk assessment with PAHs, halogenated aromatic hydrocarbons, and endocrine disruptors. It provides an estimate of the potency of less well studied components in a mixture relative to the potency of a component that has undergone more extensive testing, termed the index chemical. Each component's exposure is converted into the toxicologically equivalent exposure of the index chemical by scaling by the relative potency. The scaling factor is called the toxic equivalency factor (TEF). The mixture exposure is then calculated by summing these equivalent exposures to obtain the mixture exposure in terms of the equivalent index chemical exposure [EPA 2000].
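A minimal sketch of the TEF arithmetic, using invented congener names, concentrations, and TEF values:

    # Hypothetical sketch of the toxic equivalency factor (TEF) calculation:
    # each component's concentration is scaled by its TEF (potency relative to an
    # index chemical) and the scaled values are summed to give the exposure in
    # index-chemical equivalents. Names, TEFs, and concentrations are invented.

    components = {
        # name: (concentration in ng/m3, TEF relative to the index chemical)
        "index_chemical": (1.2, 1.0),
        "congener_A": (8.0, 0.1),
        "congener_B": (15.0, 0.01),
        "congener_C": (40.0, 0.001),
    }

    toxic_equivalents = sum(conc * tef for conc, tef in components.values())
    print(f"Mixture exposure: {toxic_equivalents:.2f} ng/m3 index-chemical equivalents")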

This approach requires complete knowledge of the mixture composition and assumes that all components act through the same biologic pathway, the exposure concentrations of the individual chemicals are additive, the dose-response curves for different congeners are parallel, and the chemicals act on the same organs over the doses studied. Using EPA’s definitions of interaction, this approach then assumes no interactions between isomers and that a single TEF is valid for all types of toxic effect. The principal use of this approach in occupational exposures has been in investigations of mixtures of volatile organic compounds, often a suspected culprit in incidents of sick-building syndrome, and for mixtures of dioxins.

Hazard Index (HI)

Risk assessment of mixed exposures often combines pieces of information that differ widely from each other. Exposure data for some stressors may be available only as time-weighted averages, while others reflect daily activity patterns. Toxicity data for some chemicals may allow estimation of probabilistic risk for one endpoint while providing only qualitative descriptions of other endpoints. It is possible to develop the risk characterization using the original information in a high-dimensional matrix, but such a summary will be difficult to evaluate and communicate. One approach to summarizing such diverse multivariate information is the decision index, used as an action level for regulatory action or occupational health intervention. The advantage of a decision index is the simplicity of converting highly multivariate technical information into a single number. The most common example used for health risk is the HI for mixture risk.

Although specific to a single affected target organ, each HI is based on multiple studies of multiple chemicals, often involving multiple test animal species, multiple test exposure concentrations, and highly varied measures of toxicity. The HI is a rough implementation of dose addition, in which all component chemicals are assumed to be toxicologically similar. The HI is the sum of the single-chemical exposure concentrations, each scaled by its relative toxic potency; this scaling is most often implemented by using the ratio of the exposure concentration to the corresponding acceptable concentration (for example, threshold limit values [TLVs] for long-term exposures or short-term exposure limits [STELs] for short-term exposures), a ratio commonly called the hazard quotient.
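A minimal sketch of this additive HI calculation, with invented chemical names, exposure measurements, and acceptable concentrations:

    # Sketch of the additive hazard index (HI) described above: the sum of hazard
    # quotients, each the ratio of a component's exposure concentration to its
    # acceptable concentration (for example, a TLV). All values are invented.

    exposures_ppm = {"solvent_A": 20.0, "solvent_B": 5.0, "solvent_C": 0.5}
    limits_ppm = {"solvent_A": 100.0, "solvent_B": 25.0, "solvent_C": 2.0}

    hazard_quotients = {name: exposures_ppm[name] / limits_ppm[name] for name in exposures_ppm}
    hazard_index = sum(hazard_quotients.values())

    print("Hazard quotients:", {k: round(v, 2) for k, v in hazard_quotients.items()})
    print(f"Hazard index = {hazard_index:.2f} (values above 1 indicate the mixture "
          f"exceeds the combined acceptable level)")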

This summation of scaled component concentrations has its regulatory origin in the formula adopted by the American Conference of Governmental Industrial Hygienists (ACGIH) in 1963, which was incorporated into OSHA regulations [29 CFR 1910.1000(d)(2)] shortly after passage of the Occupational Safety and Health Act of 1970. By the ACGIH formulation, the additive hazard index approach is used only when substances act on the same organ system [ACGIH 2003]. Note again that the ACGIH criterion of same organ system is somewhat different from the criterion of the Food Quality Protection Act, which requires that the components have the same mechanism of toxicity.

The version of the HI developed by the EPA Superfund Program Office is by far the most common approach used in conducting mixture risk assessments in the field, aided in part by the ready availability of reference doses for oral exposures and reference concentrations for inhalation exposures, which are used for the scaling factors. When the calculated HI value for a mixture exceeds 1, it reflects a health risk similar to that involved if an individual chemical exceeded its concentration limit by the same extent. The EPA recommends that a separate HI be calculated for each toxic effect concerned [EPA 2000]. In addition, Feron et al. [1995a] proposed combining a hazard hierarchy scheme with the HI approach by selecting the top 10 chemicals in a complex mixture with regard to toxicity, and then combining the relative toxicities of each into a single measure of relative risk for the mixture.

The main disadvantage of a simple index is that the uncertainties in its calculation are largely hidden. Another key disadvantage is that the index quantifies what are often scientific judgments. For example, the HI implemented under Superfund is a number whose decision threshold is usually given as 1.0, so that when HI>1, additional action is indicated. A numerical estimate of the uncertainty in the HI value would help interpret the need for additional action.
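One way to make that hidden uncertainty visible is to propagate exposure variability through the HI calculation; the sketch below does so with a simple Monte Carlo simulation, using invented lognormal exposure distributions and acceptable concentrations:

    # Hypothetical sketch: a Monte Carlo estimate of uncertainty in the hazard
    # index, treating each measured exposure as lognormally distributed.
    # Distributions, limits, and parameters are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n_draws = 10_000

    # Invented exposure distributions (geometric mean, geometric standard deviation)
    # and acceptable concentrations, all in ppm.
    components = {
        "solvent_A": {"gm": 20.0, "gsd": 1.8, "limit": 100.0},
        "solvent_B": {"gm": 5.0, "gsd": 2.2, "limit": 25.0},
        "solvent_C": {"gm": 0.5, "gsd": 1.5, "limit": 2.0},
    }

    hi_draws = np.zeros(n_draws)
    for spec in components.values():
        exposures = rng.lognormal(mean=np.log(spec["gm"]), sigma=np.log(spec["gsd"]), size=n_draws)
        hi_draws += exposures / spec["limit"]

    print(f"Median HI = {np.median(hi_draws):.2f}; "
          f"95th percentile HI = {np.percentile(hi_draws, 95):.2f}; "
          f"P(HI > 1) = {np.mean(hi_draws > 1):.2%}")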

Target Organ Toxicity Doses

The use of an acceptable level in the relative toxicity scaling factor (for example, 1/TLV or 1/reference dose [RfD]) may be overly protective in that the RfD (or reference concentration [RfC]) is based on the critical effect, defined as the toxic effect occurring at the lowest dose. When the HI is calculated for a different, less sensitive effect, the RfD will be too low, so the factor (1/RfD) will overestimate the relative toxicity, and the HI will be too large. One alternative that avoids this critical effect conservatism is to use a toxicity-based exposure concentration that is specific to the target organ and is derived similarly to an RfD (or RfC). For oral exposures, this value is called the target organ toxicity dose (TTD) [Mumtaz et al. 1994]. The formula for the HI would be identical, with the TTD replacing the RfD. For inhalation exposures, a similarly defined target organ toxicity concentration (TTC) could be used. This same approach can be applied to HIs for shorter exposures by using the effect-specific data appropriate to the shorter exposure period of concern.
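A minimal sketch of an effect-specific HI in which TTDs replace RfDs where they are available; chemical names, intakes, and reference values are invented:

    # Hypothetical sketch of an effect-specific hazard index in which a
    # target-organ toxicity dose (TTD) replaces the RfD when one has been derived.
    # All names, intakes, RfDs, and TTDs are invented for illustration.

    # Daily intakes and reference values in mg/kg-day for a renal-effects HI.
    components = {
        # name: (intake, RfD based on critical effect, renal TTD or None)
        "chemical_A": (0.010, 0.005, 0.020),   # critical effect not renal; renal TTD is higher
        "chemical_B": (0.002, 0.004, None),    # no renal TTD derived; fall back to the RfD
        "chemical_C": (0.001, 0.001, 0.001),   # critical effect is renal; TTD equals RfD
    }

    hi_renal = 0.0
    for name, (intake, rfd, ttd) in components.items():
        denominator = ttd if ttd is not None else rfd  # the RfD fallback may overestimate the HI
        hi_renal += intake / denominator

    print(f"Renal-effects hazard index = {hi_renal:.2f}")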

The TTD is not a commonly derived measure, and there is no official EPA activity deriving these values as there is for the RfD and RfC. This alternative should be considered when there is sufficient reason to believe that the overestimation of the HI caused by use of RfDs is significant to the interpretation of the mixture assessment. In that case, TTDs can be derived for the mixture components by following the scientific steps used in deriving an RfD. The evaluation of the quality of candidate toxicity studies and the choice of uncertainty factors should parallel those steps in the RfD process. One difference in the uncertainty factors concerns the factor for completeness of the database used for RfD development. For example, if no two-generation study existed for a chemical, an additional uncertainty factor could be applied in deriving the RfD, because the RfD must protect against all toxic effects. However, when a renal TTD is developed, no such additional factor would be needed, because the data need only address renal effects.

Any TTDs derived for a mixture assessment must be clearly documented, including the array of studies considered, the study and dose selected for calculation, and the uncertainty factors chosen. When the critical effect of a chemical is the effect being described by the HI, the RfD and TTD will apply to the same target organ and so should be the same unless the TTD is based on newer information. When data for one or more components are not sufficient for deriving their organ-specific TTDs, their RfDs should be used and noted as a source of possible overestimation of the HI. These recommendations and discussions also apply to HIs for shorter exposures and to TTCs as replacements for RfCs in an HI for inhalation exposures.

Estimation of Interactions

This approach attempts to characterize synergism or antagonism in a mixed exposure based on putative interactions between the components. Effect modification is considered to have occurred when the combined effect of two or more exposures is larger or smaller than the effect predicted from the exposures individually. Strict criteria for using and evaluating this approach have yet to be developed, and demonstrating statistically significant interaction will require very large sample sizes. The approach has, however, been used effectively to study the combined effects of agents known to be independent risk factors for disease, for example, cigarette smoking and asbestos.
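A minimal sketch of this kind of comparison, using invented relative risks and the common additive (excess-risk) and multiplicative reference models for the expected joint effect:

    # Hypothetical sketch: comparing an observed joint relative risk with the
    # values expected under additive and multiplicative models of joint effect.
    # The relative risks below are invented for illustration.

    rr_a = 5.0       # relative risk for exposure A alone
    rr_b = 10.0      # relative risk for exposure B alone
    rr_joint = 40.0  # observed relative risk for the combined exposure

    expected_additive = rr_a + rr_b - 1.0  # additivity of excess risks
    expected_multiplicative = rr_a * rr_b  # multiplicativity of relative risks

    print(f"Expected (additive): {expected_additive:.1f}")
    print(f"Expected (multiplicative): {expected_multiplicative:.1f}")
    print(f"Observed joint RR: {rr_joint:.1f} -> "
          f"{'greater' if rr_joint > expected_additive else 'not greater'} than additive")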

Better laboratory and analysis tools for identifying synergism and antagonism are needed, as is testing of more contemporary chemicals. The combined synergistic effect of environmental chemicals with regard to endocrine disruption has not been adequately studied. There is also a need to better define the concepts of synergism and potentiation and to raise awareness of the varying mathematical concepts of additivity [Simmons 1995].

Interaction-Based HI and the Weight of Evidence (WOE)

The HI approach does not account for interactions that may occur within the mixture. Toxicologic interactions have been mostly studied with binary mixtures. One way to include interactions in a mixture assessment is to modify the noninteractive assessment by knowledge of these binary interactions; a tacit assumption is then that higher order interactions are relatively minor compared with binary interactions. Although some mixture data exist [Lof and Johnson 1998], few studies quantify interaction, and even fewer quantitatively describe the dose-dependence of the interaction. Consequently, for an approach to be able to use available data, some qualitative procedure is needed for judging the impact of the potential toxicologic interactions. The WOE approaches used by the EPA and the Agency for Toxic Substances and Disease Registry (ATSDR) are an attempt to qualitatively combine empirical observations and mechanistic considerations [Mumtaz and Durkin 1992].

The WOE approach determines the most plausible influence of one chemical component on the toxicity of another chemical in the mixture for a given exposure scenario. Factors in the determination include the direction of the interaction on the adverse effect (greater than, less than, or equal to additive), mechanistic support (how an interaction might have occurred), toxicological observations (directly demonstrated, inferred from a related compound, or unclear), modifying factors (exposure sequence and/or duration), limitations, uncertainties, and references [Hansen et al. 1998]. Confidence in the prediction is greater when the mechanism by which the interaction may occur is well characterized. A rating matrix can also be developed when the other determinations are made. Binary mixtures will have comparatively simpler ratings than a mixture with more components and a complex rating matrix. The WOE approach is potentially useful for a variety of chemical mixtures, but it has not been adequately validated by experimental or epidemiological data.

The interaction-based HI uses the WOE approach to modify the HI calculation. The EPA procedure modifies an earlier method [Mumtaz and Durkin 1992]. The WOE determinations are converted into numerical scores and then combined with functions of the component exposure concentrations and the component hazard quotients to give an HI that incorporates the pair-wise interactions. The formula is modular, so improved dose-interaction relationships can be easily incorporated. Published interaction information, however, usually gives only the direction of the interaction (for example, synergism) rather than its magnitude, making the accuracy of the method difficult to evaluate. Because the HI approach rests on the principle of additivity, which is recommended for toxicants with similar mechanisms of action, the HI-WOE approach likewise applies only to toxicants with similar mechanisms of action and not to those with dissimilar mechanisms [Mumtaz et al. 1998].
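The sketch below illustrates the modular idea in simplified form, scaling each hazard quotient by factors derived from directional pair-wise WOE scores. It is not the EPA formula; the scores, the interaction magnitude, and the weighting scheme are all invented for illustration:

    # Simplified, hypothetical sketch of an interaction-modified hazard index:
    # each component's hazard quotient is adjusted by factors derived from
    # qualitative pair-wise weight-of-evidence (WOE) scores. This mirrors only
    # the modular structure described above; it is not the EPA procedure.

    hazard_quotients = {"A": 0.4, "B": 0.3, "C": 0.2}

    # Pair-wise WOE scores: +1 suggests greater-than-additive, -1 less-than-additive,
    # 0 no evidence of interaction (direction only, as in most published data).
    woe = {("A", "B"): +1, ("A", "C"): 0, ("B", "C"): -1}

    def interaction_factor(i, j, magnitude=1.5):
        """Convert a directional WOE score into a multiplicative adjustment."""
        score = woe.get((i, j)) if (i, j) in woe else woe.get((j, i), 0)
        return magnitude ** score  # >1 amplifies, <1 attenuates, 1 leaves unchanged

    hi_interaction = 0.0
    for i, hq_i in hazard_quotients.items():
        # Weight each pair-wise adjustment by the relative size of the partner's HQ.
        others = {j: hq for j, hq in hazard_quotients.items() if j != i}
        total_other = sum(others.values())
        adjustment = sum((hq_j / total_other) * interaction_factor(i, j)
                         for j, hq_j in others.items())
        hi_interaction += hq_i * adjustment

    print(f"Additive HI = {sum(hazard_quotients.values()):.2f}; "
          f"interaction-modified HI = {hi_interaction:.2f}")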

Occupational Exposure Limits

Few mixed-exposure regulatory standards have been established because assessment methods for mixed exposures have been based on extrapolation rather than direct toxicological data [Mumtaz et al. 1995]. The current challenge for environmental and occupational scientists is to provide a sound scientific basis that enables policymakers to replace current simplistic single-chemical standard setting with real-life, mixture-oriented standard setting [Feron et al. 1995a]. The European Maximum Workplace Concentration (Maximale Arbeitsplatz Konzentration, or MAK) Commission refrains from setting scientifically based occupational exposure concentrations for complex mixtures because it considers that meaningful standards are not possible given the inadequacy of the available data [Bartch et al. 1998].

Another approach for regulating exposure to mixtures is contained in the OSHA and MSHA Hazard Communication Standards [29 CFR 1910.1200 and 30 CFR 47, respectively]. These standards prescribe labeling and worker training for substances with recognized potential for producing health effects in workers and set out specific logic for dealing with mixtures.

Controls

The preferred methods for protecting workers from hazards are engineering controls (such as ventilation, isolation, and substitution); administrative controls (such as rotating workers' tasks) are also recommended. Generally, an engineering control will be equally effective for all components of a mixed exposure. However, in instances such as substitution, the choice of the substitute material could introduce a new hazard, creating a new exposure situation and a multiple, nonsimultaneous mixed exposure with the original agent. Administrative controls, such as rotating workers to different jobs or tasks, can create the same scenario of multiple nonsimultaneous exposures. Mixed exposures can also defeat filtration controls: for example, if an electrostatic filter is simultaneously exposed to humidity, gases, and vapors that affect its electrostatic charge, its particle-filtration efficiency may be reduced. Mixed exposures often occur in agriculture, construction, and other workplaces where engineering controls are not feasible. In these workplaces, personal protective equipment (PPE) is often used.

The use of respirators and biological and chemical protective clothing for protection against mixed exposures presents a unique challenge. First, mixtures can affect the efficiency of air-purifying respirators. Gases and vapors can degrade electrostatic filter media, decreasing filter efficiency. Gases and vapors trapped on the surface of a particle can off-gas from the filter media where they are captured. Combinations of gases and vapors can decrease the service life of a cartridge or, even in cases for which adsorption of one gas or vapor is preferential, render the cartridge useless against other gases or vapors in the mixture. Studies of chemical protective clothing have indicated that breakthrough times at one contaminant concentration of a mixture are not predictive of the service life of the clothing when different concentrations of the component are present. Also, studies of chemical mixtures on biological and chemical protective clothing can show a synergistic effect on breakthrough times [Mickelsen and Hall 1987; Mickelsen et al. 1986].

Additionally, protective equipment itself can create a mixed exposure. Respirators and biological and chemical protective clothing can both cause physiological and psychological stress for the wearer. In fact, the most protective gear may present the greatest set of stressors. A self-contained breathing apparatus can weigh up to 35 pounds and increase workloads up to 20 percent. Heavy protective clothing, such as firefighter ensembles, can increase heat stress and the worker's cardiac demand. Psychological responses to respirators and full-body ensembles include phobias such as claustrophobia.

In 1998, the Control Technology and Personal Protective Equipment NORA Team held a workshop in Chicago, Control of Workplace Hazards for the 21st Century. Several of the knowledge needs identified there apply to mixed exposures. The workshop recommended the following:

  1. Perform research to determine change-out schedules for cartridges and filters. This is particularly important for mixtures in which the presence of one contaminant will affect the ability of respirators to protect against other contaminants in the air. Four methods should be explored: laboratory testing, workplace testing, sensors (for example, end-of-service-life indicators), and mathematical models.

  2. Evaluate physiological and psychological responses to workplace tasks and the wearing of respirators in order to minimize stressors. The evaluation should consider interaction with oxygen uptake, dead space carbon dioxide concentrations, comfort, effects on hearing and communication, thermal stress, and phobias.

  3. Develop state-of-the-art monitoring equipment and technologies that can be used in conjunction with PPE for mixtures. For example, improve laboratory and field testing methodologies, including establishing end-of-service-life indicators and advanced real-time biological and chemical monitoring technologies such as microsensors, colorimetric techniques, analytical techniques, and field detectors that are effective in mixed exposures.

  4. Investigate physiological and psychological factors associated with the PPE and the environment to reduce the effect of mixed stressors. For example, investigate practices for reducing the impact of heat stress or extreme cold.

  5. Explore decontamination procedures that are effective against mixtures.


