CHAPTER 3 - MEASUREMENT ACCURACY

2. Definitions of Terms Related to Accuracy

Precision is the ability to produce the same value within given accuracy bounds when successive readings of a specific quantity are measured. Precision represents the maximum departure of all readings from the mean value of the readings. Thus, a measurement cannot be more accurate than the combined precision of the primary and secondary elements.

Error is the deviation of a measurement, observation, or calculation from the truth. The deviation can be small, inherent in the structure and functioning of the system, and within the specified bounds or limits. Lack of care and mistakes during fabrication, installation, and use can often cause large errors well outside expected performance bounds. Since the true value is seldom known, some investigators prefer the term uncertainty. Uncertainty describes the possible error, or range of error, which may exist. Investigators often classify errors and uncertainties into spurious, systematic, and random types.

Spurious errors are commonly caused by accident, resulting in false data. Misreading and intermittent mechanical malfunction can cause discharge readings well outside of expected random statistical distribution about the mean. A hurried operator might incorrectly estimate discharge. Spurious errors can be minimized by good supervision, maintenance, inspection, and training. Experienced, well-trained operators are more likely to recognize readings that are significantly out of the expected range of deviation. Unexpected spiral flow and blockages of flow in the approach or in the device itself can cause spurious errors. Repeating measurements does not provide any information on spurious error unless repetitions occur before and after the introduction of the error. On a statistical basis, spurious errors confound evaluation of accuracy performance.

Systematic errors are errors that persist and cannot be considered entirely random; they are caused by such things as deviations from standard device dimensions. Systematic errors cannot be detected by repeated measurements, and they usually cause persistent error on one side of the true value. For example, an error in determining the crest elevation when setting staff or recorder chart gage zeros relative to the actual elevation of a weir crest causes systematic error. The error in this case can be corrected, when discovered, by adjusting to accurate dimensional measurements. Worn, broken, and defective flowmeter parts, such as a permanently deformed, over-stretched spring, can also cause systematic errors; this kind of error is corrected by maintenance or by replacement of parts or of the entire meter. Fabrication error comes from the dimensional deviations allowed during fabrication or construction because of the limited ability to reproduce exactly the standard dimensions that govern pressures or heads in measuring devices. These allowable tolerances produce small systematic errors whose limits should be specified.

Calibration equations can have systematic errors, depending on the quality of their derivation and selection of form. Equation errors are introduced by selection of equation forms that usually only approximate calibration data. These errors can be reduced by finding better equations or by using more than one equation to cover specific ranges of measurement. In some cases, tables and plotted curves are the only way to present calibration data.
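To illustrate equation error, the Python sketch below fits the common rating form Q = C·h^n to calibration data by least squares in log-log space and reports the residuals. The head and discharge values are invented for the example; they do not come from any actual calibration.

```python
import numpy as np

# Hypothetical calibration data: heads (ft) and discharges (ft3/s).
heads = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
discharges = np.array([0.30, 0.86, 1.60, 2.49, 3.50, 4.63])

# Fit the common rating form Q = C * h^n by least squares in log-log space,
# where the exponent n is the slope and ln(C) is the intercept.
n, log_c = np.polyfit(np.log(heads), np.log(discharges), 1)
c = np.exp(log_c)

predicted = c * heads**n
residual_pct = 100 * (predicted - discharges) / discharges
print(f"Q = {c:.3f} * h^{n:.3f}")
print("residuals (%):", np.round(residual_pct, 2))
```

If the residuals showed a consistent trend rather than random scatter, a different equation form, or a second equation covering only part of the range, would reduce the equation error.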

Random errors are caused by such things as the estimation required between the smallest divisions on a head measurement scale and by water surface waves at a head measuring device. Loose linkages between parts of flowmeters provide room for random movement of the parts relative to each other, causing random output errors. Repeating readings decreases the average random error by a factor of the square root of the number of readings.
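The square-root effect of repeated readings can be checked numerically. The sketch below uses invented numbers (a true head of 1.000 ft and purely random reading error with a standard deviation of 0.01 ft) and simulates averaging N readings:

```python
import numpy as np

rng = np.random.default_rng(42)

true_head, sigma = 1.000, 0.01  # ft; hypothetical values for the simulation
for n_readings in (1, 4, 16, 64):
    # 100,000 trials, each averaging n_readings simulated gage readings.
    means = rng.normal(true_head, sigma, size=(100_000, n_readings)).mean(axis=1)
    print(f"N = {n_readings:2d}: std of averaged reading = {means.std():.5f} "
          f"(theory: {sigma / np.sqrt(n_readings):.5f})")
```

Each fourfold increase in the number of readings halves the random error of the average, matching the square-root rule.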

Total error of a measurement is the result of systematic and random errors caused by component parts and by factors related to the entire system. Sometimes, the error limits of all component factors are well known; in this case, the total limits of simpler systems can be determined by computation (Bos et al., 1991). In more complicated cases, different investigators may not agree on how to combine the limits, and only a thorough calibration of the entire system as a unit will resolve the difference. In any case, it is better to do error analysis with data gathered while all parts of the system are operating simultaneously and to compare the discharge measurements against an adequate discharge comparison standard.
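For the simpler systems mentioned above, one common convention, described by Bos et al. (1991) among others, is to combine independent component error limits in quadrature, that is, as the square root of the sum of their squares. The sketch below applies that convention; the component names and percentages are illustrative only:

```python
import math

# Hypothetical independent component error limits, in percent of discharge.
component_limits = {
    "discharge coefficient": 2.0,
    "head measurement": 1.5,
    "gage-zero setting": 1.0,
}

# Independent limits are combined in quadrature (root sum of squares)
# rather than added directly.
total = math.sqrt(sum(e**2 for e in component_limits.values()))
direct_sum = sum(component_limits.values())
print(f"combined limit: ±{total:.2f}%  (direct sum would give ±{direct_sum:.1f}%)")
```

The quadrature total is smaller than the direct sum because independent errors rarely reach their limits in the same direction at the same time.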

Calibration is the process used to check or adjust the output of a measuring device in convenient units of gradations. During calibration, manufacturers also determine robustness of equation forms and coefficients and collect sufficient data to statistically define accuracy performance limits. In the case of long-throated flumes and weirs, calibration can be done by computers using hydraulic theory. Users often do less rigorous calibration of devices in the field to check and help correct for problems of incorrect use and installation of devices or structural settlement. A calibration is no better than the comparison standards used during calibration.

Comparison standards for water measurement are systems or devices capable of measuring discharge to within limits at least equal to the desired limits for the device being calibrated. Outside of the functioning capability of the primary and secondary elements, the quality of the comparison standard governs the quality of calibration.

Discrepancy is simply the difference of two measurements of the same quantity. Even if measured in two different ways, discrepancy does not indicate error with any confidence unless the accuracy capability of one of the measurement techniques is fully known and can be considered a working standard or better. Statistical deviation is the difference or departure of a set of measured values from the arithmetic mean.

Standard Deviation Estimate is the measure of dispersion of a set of data in its distribution about the mean of the set. Arithmetically, it is the square root of the mean of the square of deviations, but sometimes it is called the root mean square deviation. In equation form, the estimate of standard deviation is:

$$S = \sqrt{\frac{\sum (X_i - \bar{X})^2}{N - 1}} \qquad (3\text{-}1)$$

where:

S = estimate of standard deviation

Xi = an individual reading

X̄ = arithmetic mean of the N readings

N = number of readings

The variable X can be replaced with data related to water measurement such as discharge coefficients, measuring heads, and forms of differences of discharge.

The sample number, N, is used to calculate the mean of all the individual deviations, and (N - 1) is used to calculate the estimate of standard deviation. This is done because when you know the mean of the set of N values and any subset of (N - 1) values, the one excluded value can be calculated. Using (N-1) in the calculation is important for a small number of readings.
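Equation 3-1 can be applied directly, as in the sketch below. The repeated discharge readings are hypothetical; the mean uses N, the standard deviation estimate uses (N - 1), and the result is checked against Python's statistics.stdev, which uses the same (N - 1) divisor:

```python
import statistics

# Hypothetical repeated readings (ft3/s) of the same steady discharge.
readings = [4.98, 5.03, 5.01, 4.97, 5.02, 4.99, 5.00, 5.04]

n = len(readings)
mean = sum(readings) / n  # the mean uses N
# Equation 3-1: sum of squared deviations divided by (N - 1).
s = (sum((x - mean) ** 2 for x in readings) / (n - 1)) ** 0.5

print(f"mean = {mean:.4f} ft3/s, S = {s:.4f} ft3/s")
print(f"statistics.stdev agrees: {statistics.stdev(readings):.4f}")
```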

For a sufficiently large sample, if the mean of the individual deviations is close to zero and the maximum deviation is less than ±3S, the sample distribution can be considered normally distributed. With a normal distribution, any additional measured value is expected to fall within ±3S with a 99.7 percent chance, within ±2S with a 95.4 percent chance, and within ±S with a 68.3 percent chance.
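These percentages follow from the normal distribution itself and can be verified with the error function, since the chance of a value falling within ±kS of the mean is erf(k/√2):

```python
import math

# Coverage probability of the interval ±kS for a normal distribution.
for k in (1, 2, 3):
    coverage = 100 * math.erf(k / math.sqrt(2))
    print(f"±{k}S: {coverage:.1f} percent")
# Prints 68.3, 95.4, and 99.7 percent, matching the text.
```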

Measurement device specifications often state accuracy capability as plus or minus some percentage of discharge, meaning, without actually stating it, ±2S: two times the standard deviation of discharge comparisons from a calibration. However, the user should expect an infrequent deviation of ±3S.

Error in water measurement is commonly expressed in percent of comparison standard discharge as follows:

$$E_{\%Q_{CS}} = \frac{Q_{Ind} - Q_{CS}}{Q_{CS}} \times 100 \qquad (3\text{-}2)$$

where:

QInd = indicated discharge from the device output

QCS = comparison standard discharge concurrently measured in a much more precise way

E%QCS = error in percent of comparison standard discharge

Comparison standard discharge is sometimes called actual discharge, but it is an ideal value that can only be approached by using a much more precise and accurate method than the device being checked.
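In code, equation 3-2 is a one-line computation; the function name and sample values below are hypothetical:

```python
def error_percent_qcs(q_ind: float, q_cs: float) -> float:
    """Equation 3-2: error in percent of comparison standard discharge."""
    return 100.0 * (q_ind - q_cs) / q_cs

# The device indicates 5.12 ft3/s while the comparison standard
# concurrently measures 5.00 ft3/s.
print(f"E = {error_percent_qcs(5.12, 5.00):+.1f} percent")  # E = +2.4 percent
```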

Water providers might encounter other terms used by instrument and electronic manufacturers. Some of these terms will be described. However, no universal agreement exists for the definition of these terms. Therefore, water providers and users should not hesitate to ask manufacturers' salespeople exactly what they mean or how they define terms used in their performance and accuracy claims. Cooper (1978) is one of the many good references on electronic instrumentation.

Error in percent full scale, commonly used in electronics and instrumentation specifications, is defined as:

$$E_{\%FS} = \frac{Q_{Ind} - Q_{CS}}{Q_{FS}} \times 100 \qquad (3\text{-}3)$$

where:

E%FS = error in percent of full-scale discharge

QFS = full-scale discharge, the maximum of the flowmeter measurement range

QInd and QCS = as defined for equation 3-2

Simply stating that a meter is "3 percent accurate" is incomplete. Inspection of equations 3-2 and 3-3 shows that a percentage error statement requires an accompanying definition of the terms used in both the numerator and the denominator of the equations.

For example, a flowmeter having a full scale of 10 cubic feet per second (ft3/s) and a full-scale accuracy of 1 percent would be accurate to ±0.1 ft3/s for all discharges in the flowmeter measurement range. Some manufacturers state accuracy as 1 percent of measured value. In this case, the same example flowmeter would be accurate to within ±0.1 ft3/s at full scale; and correspondingly, a reading of 5 ft3/s would be accurate to within ±0.05 ft3/s for the same flowmeter at that measurement.
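The difference between the two accuracy statements matters most at the low end of the measurement range, as this sketch of the same hypothetical 10-ft3/s, 1-percent flowmeter shows:

```python
FULL_SCALE = 10.0  # ft3/s, the example flowmeter's full-scale discharge
PCT = 1.0          # stated accuracy, in percent

for q in (10.0, 5.0, 1.0):
    bound_fs = PCT / 100.0 * FULL_SCALE  # percent of full scale: constant
    bound_mv = PCT / 100.0 * q           # percent of measured value: scales with flow
    print(f"Q = {q:4.1f} ft3/s: ±{bound_fs:.2f} ft3/s (full scale) "
          f"vs ±{bound_mv:.2f} ft3/s (measured value)")
```

At a reading of 1 ft3/s, the ±0.1 ft3/s full-scale bound amounts to 10 percent of the measured value, while a percent-of-measured-value statement would still hold the error to 1 percent.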