
CHAPTER 20

Uncertainty Analysis

Maxine Dakins and Carol Griffin

© 2001 by CRC Press LLC

CONTENTS

I. Introduction
II. Uncertainty in Risk Assessment
III. Technical Aspects of Uncertainty Analysis
   A. Uncertainty and Variability
   B. Sources of Uncertainty
   C. Describing and Summarizing Data
   D. Sensitivity Analysis
IV. Uncertainty Analysis
   A. Monte Carlo Methods
V. Communication of Uncertainty
VI. Conclusion
References

I. INTRODUCTION

Uncertainty in risk assessment refers to the lack of definiteness that exists about the procedures, quantities, and data used and, therefore, to the lack of sureness about the resulting values and conclusions. Uncertainties exist in risk assessments whether or not they are acknowledged, incorporated into the analysis, or used by the risk manager in decision making. Ignoring or mishandling uncertainty may paralyze decision makers or generate controversy in risk assessment and management.
Instead, uncertainty can be explicitly modeled, discussed, and incorporated into decision making through a quantitative uncertainty analysis, resulting in decisions that are more thorough and, hopefully, less contentious.

This chapter discusses technical aspects of analyzing uncertainty, including the difference between uncertainty and variability; sources of uncertainty in risk assessments; describing and summarizing data; sensitivity analysis; and quantitative uncertainty analysis. Communication of uncertainty and the use of uncertainty information are also discussed.

II. UNCERTAINTY IN RISK ASSESSMENT

The processes of exposure and effect are analyzed and modeled in a risk assessment. Uncertainty can enter the risk assessment through both analyses. In an exposure process, subjects are exposed to the possibility of some change, usually negative. The effects process is the change that the subject or process undergoes as a result of an exposure. For example, children are exposed to lead through ingestion (paint chips, dirt, dust, and contaminated food and water), inhalation (dust particles), and dermal contact. Possible effects of lead poisoning in children include decreased intelligence, impaired neurobehavioral development, decreased stature and, in severe cases, death.

Normally, an exposure model is developed, single-value estimates of model coefficients are selected, and calculations are performed to generate base-case (nominal) predictions of exposure levels. Risk assessors input these predictions into an effects model, again using nominal values for model coefficients, to arrive at estimates of effects. Risk managers then base management decisions on these predictions.
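The nominal, single-value calculation just described can be illustrated with a short sketch. The chronic daily intake formula below is a standard ingestion-exposure form, not a formula taken from this chapter, and every numeric input is a hypothetical placeholder.

```python
# Illustrative base-case (nominal) exposure calculation: each model
# coefficient is a single-value estimate, so the output is a single
# point prediction with no accompanying uncertainty information.
# All input values below are hypothetical placeholders.

def chronic_daily_intake(c, ir, ef, ed, bw, at):
    """CDI (mg/kg-day) = (C * IR * EF * ED) / (BW * AT).

    c  -- contaminant concentration in soil (mg/kg)
    ir -- soil ingestion rate (kg/day)
    ef -- exposure frequency (days/year)
    ed -- exposure duration (years)
    bw -- body weight (kg)
    at -- averaging time (days)
    """
    return (c * ir * ef * ed) / (bw * at)

# Nominal (base-case) inputs for a hypothetical child soil-ingestion
# scenario:
cdi = chronic_daily_intake(c=100.0, ir=1e-4, ef=350, ed=6, bw=15.0,
                           at=6 * 365)
print(f"Base-case CDI: {cdi:.2e} mg/kg-day")
```

A quantitative uncertainty analysis would replace these single values with probability distributions; a sensitivity analysis would vary them one at a time to see which most affects the prediction.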
Some risk assessments include a calibration step, which is the adjustment of coefficient values to obtain a good fit between predictions and observations, or a sensitivity analysis, which is the determination of the effects of changes in model input values, coefficients, or assumptions on risk predictions. A quantitative uncertainty analysis, the computation of the total uncertainty induced in the output of the risk assessment by quantifying the uncertainty in the inputs, coefficients, or model structure, is less frequently performed. The formal assessment of uncertainty and the determination of its effect on a risk management decision can be useful in assessing the reliability of predicted values, exposing areas of controversy or disagreement, making underlying assumptions explicit, combining information from multiple sources, documenting the details of the analysis, identifying whether additional information should be collected, and determining how a data collection or research program should be structured.

Uncertainties and unknowns pervade situations where quantitative risk assessments are used. They undermine the quality of risk management decisions that rely on single-value risk assessment predictions. Nevertheless, these predictions are rarely accompanied by information about their reliability. Instead, an uncertainty factor is often used to adjust the risk estimate to account for uncertainty. Uncertainty factors, often set at ten, are applied to reflect the uncertainty of extrapolating from animals to humans and from acute to chronic exposure, and to protect sensitive subgroups. The problem with using uncertainty factors is that the factors themselves are uncertain; thus, their use makes the degree of conservatism of the final decision unknown and controversial.

Another way to avoid underestimating risks is to use conservative values for some, or all, model inputs and coefficients.
Use of reasonable maximum exposure values is an example of this approach. Using conservative values poses some problems, however, including a lack of consistency of results from one analysis to another, controversy about the degree of conservatism in the final result, and unquantified social costs of conservatism. Again, the solution may lie in using uncertainty analysis to quantitatively assess the uncertainty in the model output, and then formally incorporating this information into the decision making process.

III. TECHNICAL ASPECTS OF UNCERTAINTY ANALYSIS

A. Uncertainty and Variability

Uncertainty can arise from a lack of knowledge or from natural variability. Uncertainty arising from ignorance, the first situation, can be reduced through scientific research and information gathering. Examples of variables posing ignorance-based uncertainty are concentrations at a source, average daily exposure at a particular place, and average uptake efficiency. Uncertainty arising from variability, the second situation, has an irreducible component. This inherent variability exists regardless of the amount of information obtained. Variables possessing inherent variability include characteristics of the exposed population (such as age and body weight) or the natural environment (such as temperature, wind speed, and rainfall). Many variables in environmental risk models have both ignorance-based uncertainty and inherent variability.

B. Sources of Uncertainty

In models of exposure and effects processes, uncertainties arise in several areas. Major sources of uncertainty are limited scientific understanding of fundamental biological mechanisms and of environmental fate-and-transport phenomena, as well as inadequate mathematical representations. Since the relationships between variables, which serve as the basis for risk assessments, are often unknown, incorrect, or incomplete, uncertainty can arise from the model structure.
For example, model structure uncertainty includes choices about which aspects of a system to include in the model, selection of equations to represent relationships between variables, and choices of appropriate surrogate variables if relevant characteristics cannot be directly measured.

In addition to the uncertainty arising from limited knowledge or inadequate models, the values of the coefficients used in risk assessments are often unknown and must be estimated. Uncertainty can arise due to our limited ability to measure model inputs and coefficients; sampling error due to the need to draw inferences about a population characteristic from sample data; and disagreement between data gathered at different times or in different laboratories because of differences in procedures, personnel, or materials.

Yet another source of uncertainty in risk assessments is extrapolation. Risk assessors extrapolate from high to low doses, from species to species, from acute to chronic exposures, or from laboratory data to field situations. Uncertainty associated with extrapolations may relate to the model structure uncertainty, discussed above, as in the case of a dose-response model, or may involve extrapolations for which no reliable model exists.

Systematic error or bias in measurements is another source of uncertainty. Systematic errors can arise from incorrectly calibrated equipment, poor laboratory procedures, or inaccurate assumptions used in calculating inferred quantities from observations. Systematic error cannot be reduced by additional observations, but careful design of equipment, procedures, and calculations can prevent it.

Some aspects of the exposure and effects processes are inherently variable. Variability occurs when a quantity that could be modeled as a single value consists, in reality, of multiple values depending on time, space, or other factors.
This type of variability may reflect biological differences between individual organisms, differences in activity and behavior patterns, seasonal differences, year-to-year variations, differences due to spatial variations across a geographic area, or random fluctuations.

Finally, value judgements are often treated as if they are constants instead of decision variables. An example is the choice of the population to model in a risk assessment for a household chemical. The most sensitive group (fetuses, infants, the immune-suppressed, or the elderly) could be targeted even though it represents a small percentage of the population. Alternatively, a typical individual representing the majority of the population could be modeled. These value judgements can influence the outcome of the assessment as well as subsequent risk management decisions.

C. Describing and Summarizing Data

The sources of uncertainty already discussed result in collections of data points that must be analyzed and summarized to facilitate their use in risk assessments and uncertainty analysis. Data points can be envisioned as having been sampled from a probability distribution that has a location, spread, and shape. A probability distribution function describes the way measured values are expected to vary.

A data set can be summarized in several ways to provide information about a variable. Measures of location provide information about the center of the distribution, measures of spread provide information about the different plausible quantities the variable could take on, percentiles provide information about high or low values, and graphs provide an estimate of the likely shape of the probability distribution (see Figure 1).

The simplest summary statistics calculated from data are point estimates of location. Point estimates representing the middle or center of the data include the sample mean and the sample median.
The sample mean, or average, is the sum of the data points divided by the number of points. The sample median is the data value which is greater than half of the data points and less than the other half. When the underlying distribution of the data is symmetric, the sample mean and sample median should be similar. Examples of symmetric distributions include the uniform distribution (see Figure 1A) and the normal distribution (see Figure 1B). When the underlying distribution of the data is not symmetric, the sample mean and sample median will differ. For a distribution with a long tail to the right, for example, the mean will be larger than the median. Examples of distributions that may be asymmetric are the triangular distribution (see Figure 1C) and the lognormal distribution (see Figure 1D).

[Figure 1: Probability distribution functions.]

A second type of summary statistic describes data spread or variability. The simplest measure of variability is the sample range, the difference between the largest and smallest values. A better, more commonly used, measure of spread is the sample standard deviation. The sample standard deviation uses information from all the data points in the set and, as such, is a more powerful measure of variability than the range. Calculations of confidence intervals rely on estimates of the sample standard deviation.

Point estimates are sometimes used to estimate a characteristic of the probability distribution other than the center. For example, a value at the upper end of the distribution may be used to ensure that the risk for the population is kept below an allowable level. If the maximum value of the distribution is known, as might be the case if the underlying distribution is uniform or triangular, it can be used. More commonly, the maximum possible value would be so far from the bulk of the data that it would not be useful.
If so, percentiles of the distribution, usually the 90th, 95th, or 99th, are used instead. A percentile is the value that has a specified percent of the probability below it; for example, the 95th percentile is the value with 95% of the probable data values below it. However, estimates of upper percentiles made from small data sets can be highly variable.
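The location, spread, and percentile statistics described in this section can be sketched with Python's standard library. The distributions and sample sizes below are arbitrary choices for illustration, chosen to contrast a symmetric distribution with a right-skewed one.

```python
# Sketch of the summary statistics discussed above, computed from
# samples drawn from a symmetric (normal) and a right-skewed
# (lognormal) distribution. Distribution parameters are arbitrary.
import math
import random
import statistics

random.seed(1)  # reproducible draws

normal_data = [random.gauss(10.0, 2.0) for _ in range(10_000)]
lognormal_data = [math.exp(random.gauss(0.0, 1.0)) for _ in range(10_000)]

# Location: for a symmetric distribution the mean and median agree;
# a long right tail pulls the mean above the median.
print("normal    mean/median:",
      statistics.mean(normal_data), statistics.median(normal_data))
print("lognormal mean/median:",
      statistics.mean(lognormal_data), statistics.median(lognormal_data))

# Spread: the range uses only the two extreme points, while the
# standard deviation uses every point and underlies confidence
# interval calculations.
sample_range = max(normal_data) - min(normal_data)
std_dev = statistics.stdev(normal_data)

# Upper percentile: the 95th percentile has 95% of the values
# below it.
p95 = statistics.quantiles(lognormal_data, n=100)[94]
frac_below = sum(x < p95 for x in lognormal_data) / len(lognormal_data)
```

Re-running the percentile estimate with a small sample (say, 20 points) in place of 10,000 shows the instability the chapter warns about: the estimated 95th percentile swings widely from one random sample to the next.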