Summary and Synthesis of Recommendations of the AmeriFlux Workshop on Standardization of Flux Analysis and Diagnostics, Corvallis, Oregon, August 2002

By The AmeriFlux Workshop Team
February 2003

Principal architect: W.J. Massman
With contributions from: J. Finnigan and D. Billesbach
And comments from: S. Miller, T. Black, B. Amiro, B. Law, X. Lee, D. Baldocchi, L. Mahrt, R. Dahlman, and T. Foken

Summary and Synthesis of Recommendations of the AmeriFlux Workshop on Standardization of Flux Analysis and Diagnostics, Corvallis, Oregon, August 2002

A DOE-sponsored workshop was held August 27-30, 2002, at Oregon State University, Corvallis, Oregon. It was the second of the international AmeriFlux workshops intended to outline and recommend scientifically preferred procedures for calculating and correcting eddy covariance fluxes for all AmeriFlux sites. The fundamental goals of these workshops are (1) to reduce or eliminate, as much as possible, uncertainties in cross-site comparisons of fluxes resulting from different methods of signal processing, high and low frequency spectral corrections, coordinate systems, data detrending, post-processing QA/QC, etc., and (2) to highlight and explore emerging issues, such as the influence that advection and complex terrain can have on measured fluxes, and the types, nature, and influence of nighttime or stable atmospheric motions. The workshop covered 8 topics. Each topic was introduced with a one-hour lecture, which was then followed by a one-hour discussion. In addition, sessions were also held to demonstrate processing and analysis software for eddy covariance data and to synthesize recommendations. Most of the topics covered at the workshop will be discussed in greater detail in a book entitled Handbook of Micrometeorology: A Guide for Surface Flux Measurements. The scientific topics, the lecturers, the discussants, and the supplemental sessions are outlined below. A complete list of attendees is provided after the list of contributors.
TOPIC 1 - Averaging and Detrending. Lecturer: John Moncrieff. Discussant: Tilden Meyers.
TOPIC 2 - Coordinate Rotation. Lecturer: Xuhui Lee. Discussant: Kyaw Tha Paw U.
TOPIC 3 - Low Frequency Corrections. Lecturer: Yadvinder Malhi. Discussant: Dennis Baldocchi.
TOPIC 4 - High Frequency Corrections. Lecturer: Bill Massman. Discussant: Rob Clement.
TOPIC 5 - Flux Corrections for Cross Contamination. Lecturer: Ray Leuning. Discussant: Scott Miller.
TOPIC 6 - Time Series Analysis. Lecturer: Gaby Katul. Discussant: Larry Mahrt.
TOPIC 7 - Post-field Data Quality Controls. Lecturer: Thomas Foken. Discussants: Brian Amiro and Bill Munger.
TOPIC 8 - Advection and Modeling. Lecturer: John Finnigan. Discussants: Bernard Heinesch and HaPe Schmid.

SUPPLEMENTAL SESSION A - Software Development. Presenters: Rob Clement, Thomas Foken, and John Nagy.
SUPPLEMENTAL SESSION B - Consensus Building on Recommendations. Leaders: Bill Massman, Xuhui Lee, and Ray Leuning.

ATTENDEES

Peter Anthoni, Brian Amiro, Dennis Baldocchi, Dave Billesbach, Constance Brown-Mitic, George Burba, Rob Clement, Roger Dahlman, Mathias Falk, Thomas Foken, Gaby Katul, Joon Kim, Meredith Kurpius, Chris Fiebrich, John Finnigan, Marc Fischer, Bernard Heinesch, Larry Hipps, John Hunt, Chun-Ta Lai, Bev Law, Monique Leclerc, Xuhui Lee, Ray Leuning, Hank Loescher, Yadvinder Malhi, Larry Mahrt, Bill Massman, Tilden Meyers, Scott Miller, John Moncrieff, Kai Morgenstern, Bill Munger, John Nagy, Kyaw Tha Paw U, Elizabeth Pattey, Ruth Reck, Daniel Ricciuto, HaPe Schmid, Russell Scott, Julie Styles, Andy Suyker, Susan Ustin, Shashi Verma, Dean Vickers, Marv Wesely

The following is a list of workshop recommendations and discussions. It is strongly urged that the AmeriFlux network follow these recommendations when calculating fluxes for publication.

The mass balance equations that underlie aerodynamic calculations of the surface exchange employ the total average covariance, <uc>, between the instantaneous wind speed, u, and the instantaneous scalar mass concentration, c. When these instantaneous quantities are split into means and fluctuations, u = <u> + u' and c = <c> + c', by averaging, filtering, or detrending, the total covariance must be correctly reconstituted in order to compute the scalar flux.

Discussion: At present it is not possible to measure the total instantaneous covariance, uc, directly. Consequently, there will be a continuing need to estimate this term by splitting it into means and fluctuations. With current understanding there are three considerations associated with the splitting procedure, which, in general terms, are (1) the coordinate system in which the means, variances, and covariances are defined, (2) the need to resolve the diel variation of the surface exchange, and (3) the need to discriminate between true (relatively high frequency) boundary-layer turbulence and the more deterministic or synoptically forced lower frequency temporal trends in the data time series, which is one aspect of nonstationarity. The relative importance of these three aspects of the data analysis can result in different choices for the splitting technique and for the period associated with the splitting.

Methods of splitting mean and turbulence quantities: Three methods are commonly employed to accomplish this splitting: block averaging, detrending (usually linear detrending), and filtering (often applied in real time as a recursive filter).
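For concreteness, the three splitting operators can be sketched in a few lines of code. The sketch below (Python) is illustrative only; the synthetic 30-minute record, the 10 Hz sampling rate, and the filter time constant are arbitrary choices for the example, not workshop prescriptions. Each function returns the "mean" part of the signal and the fluctuation, whose covariance supplies the eddy term of the reconstitution given in equation (1) below.

    import numpy as np

    def block_average_split(x):
        """Block averaging: the 'mean' part is a single constant over the record."""
        mean = np.full_like(x, x.mean())
        return mean, x - mean

    def linear_detrend_split(x, t):
        """Linear detrending: the 'mean' part is a least-squares line in time."""
        slope, intercept = np.polyfit(t, x, 1)
        mean = slope * t + intercept
        return mean, x - mean

    def recursive_filter_split(x, dt, tau):
        """One-pole recursive low-pass filter with time constant tau (s);
        the filtered series plays the role of a running mean."""
        alpha = dt / (tau + dt)
        mean = np.empty_like(x)
        mean[0] = x[0]
        for i in range(1, x.size):
            mean[i] = alpha * x[i] + (1.0 - alpha) * mean[i - 1]
        return mean, x - mean

    # Example: a synthetic 30-minute record sampled at 10 Hz.
    dt = 0.1
    t = np.arange(0.0, 1800.0, dt)
    rng = np.random.default_rng(0)
    w = 0.3 * rng.standard_normal(t.size)                # vertical wind (m/s)
    c = 400.0 + 0.002 * t + rng.standard_normal(t.size)  # scalar with a slow trend

    w_mean, w_p = linear_detrend_split(w, t)
    c_mean, c_p = linear_detrend_split(c, t)
    eddy_cov = np.mean(w_p * c_p)  # the eddy (fluctuation-only) covariance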
Reconstituting the total covariance after applying any of these operators yields, in general:

<uc> = <u><c> + <<u>c'> + <u'<c>> + <u'c'>    (1)

where the second and third terms on the RHS of this expression are the Leonard fluxes. However, only those operators, such as block averaging or ensemble averaging, that produce (identically) zero Leonard fluxes obey the Reynolds averaging rules. For filtered or detrended data the Leonard fluxes are not zero, but they may be small when the time series are approximately stationary and the filter time constant or detrend length is carefully chosen. The term <u><c> is the advective flux, which is usually regarded as a deterministic process, while the term <u'c'> is the eddy flux and is presumed to describe turbulent transport. [Note that the Leonard fluxes, <<u>c'> and <u'<c>>, vanish identically for block averaging only. For filtering or detrending this is not so.] It should be emphasized, however, that while ensemble averaging by definition separates the deterministic and random (turbulent) parts of the time series, the block averaging, filtering, and detrending operators simply assign variance to the mean or to the turbulence according to whether that part of the signal is varying slower or faster than the intrinsic time constant of the operator. Hence some of the random turbulent signal may be assigned to the mean flow if there is variance with a period longer than the filter time constant or detrend length.

In the past, detrending or filtering was used to remove the effects of calibration drift for some types of instruments. However, many current instruments are not as prone to instrument drift, thereby obviating the need to deal with this problem. More frequently, detrending or filtering has been used to condition data to resemble stationary time series, so that variances, covariances, spectra, and cospectra can be compared across sites and flow conditions. Although such comparisons are useful, it remains essential that the full covariance be properly restored when the mass balance is computed, so that the information in the trends is not discarded.

Coordinate systems: For the purposes of defining a coordinate system, the operator usually used to separate the mean and turbulent parts of the signal is the block average. Serious conceptual problems follow if the coordinates are defined by the low frequency part of a filtered or detrended signal: in such a case the coordinates would be unsteady and the equations of motion would acquire extra terms. If the coordinate system uses the planar fit method (discussed below), which uses an ensemble of values of the block-averaged velocity <u> to define the x, y, z directions, then an appropriate averaging period would be 30-60 minutes. Once a coordinate system is defined using an ensemble of such values, measurements from any single period can be rotated into this frame. These measurements will have, in general, a non-zero mean vertical velocity. It is this part of the signal that carries the contribution to the total covariance associated with atmospheric motions longer than the block averaging period. A major advantage of the planar fit approach is that a large ensemble of <u> values can be used to define a coordinate frame that can then be regarded as an independent reference frame. The original time series can then be rotated into this frame and filtered or detrended as desired, and the total covariance restored as in equation (1). If the more traditional 2- or 3-rotation method is used, then the coordinates are redefined every block-averaging period so as to set the mean vertical velocity, <w>, to zero in that period.
This is equivalent to discarding contributions to the covariance from atmospheric motions with periods longer than the block averaging time. Hence, if this has been done, the data have effectively been high-pass (and nonlinearly) filtered with a time constant given (approximately) by the block-averaging period. Evidence from several recent studies suggests that the failure of daytime energy balance closure at many sites, and the accompanying underestimation of the CO2 fluxes, results from such implicit high-pass filtering, so that contributions to the total surface-normal covariance, <wc>, from atmospheric motions longer than conventional averaging periods of about 30 minutes are lost. This is done either implicitly, by rotating coordinates to set <w> = 0 each period, or explicitly, by filtering or detrending data and discarding the trends or the low frequency component when forming the covariance. At several sites that have been analyzed, it proved necessary to increase rotation or averaging periods to as much as 4 hours to capture all the covariance as eddy flux, i.e., to ensure that <wc> = <w'c'>.

Resolving the daily cycle: To capture the daily cycle of surface exchange with good resolution, we would like to have flux values averaged every hour or so. If atmospheric motions longer than this carry a significant portion of the flux, then the consequences of analyzing data in planar fit coordinates and of rotating every period are somewhat different. If coordinates are rotated so that <w> = 0 each hour, then the exchange in each period will be incorrect, as the low frequency contribution will be absent. The average of all the periods will then differ from the true daily exchange by the amount of the lost low frequency contribution. In planar fit coordinates, the low frequency motions will appear as mean vertical (advective) fluxes, <w><c>, in each period, but because these contributions vary over periods longer than the averaging period, they will appear as (random) noise from period to period and the resolved diurnal trace will be noisy. In this case, however, the sum over all the periods of <w'c'> + <w><c> will add up to the true daily exchange (assuming there are no contributions from periods longer than 24 hours). In short, there is no way of precisely resolving the diurnal cycle over periods shorter than those that are carrying significant flux.

Discriminating between sources of low frequency motions: At present we do not understand the sources of all the low frequency contributions to the eddy flux, and as we extend the averaging period or effective filter cut-off time to 4 hours, unsteady turbulent contributions are confounded with deterministic trends, which occur, for example, at sunrise and sunset. Results so far suggest that the contribution of low frequency motions to the total flux depends on the site configuration (more low frequency contributions on tall towers over tall canopies) and climatology (low frequency motions are prevalent in deep convective boundary layers and in complex topography), and so the averaging period necessary to capture all the covariance will be site dependent. If the time series are non-stationary and we wish to discriminate between true boundary layer turbulence (which we might expect to match ideal patterns) and the synoptically forced trends in the data (which we don't), then block averaging should be avoided, because any low frequency trends will be assigned to the turbulent, or <w'c'>, portion of the total covariance.
In such cases filtering is the preferred approach, with detrending a second-best choice, because turbulent statistics and eddy fluxes obtained from filtered or detrended signals are more likely to correspond to expectations of ideal turbulence behavior. If data have already been high-pass filtered or detrended, non-stationary periods can be recognized as those where the time series vary in systematic, non-turbulent ways. They occur during times of rapid atmospheric boundary layer growth or decay, frontal passages, or the passage of clouds or other relatively short-term atmospheric boundary layer disturbances. Tests for (flux) stationarity usually involve estimating the variability of the flux during the flux-averaging period (e.g., sub-sampling six 5-minute fluxes during a 30-minute flux-averaging period).

Some specific recommendations: These concerns impact four related matters: (i) the length of the averaging period (or filter cutoff period) used to compute the fluxes, (ii) a specific tool, the ogive, for diagnosing when low frequency contributions may be present, (iii) the need to keep raw data to ensure the ability to reanalyze data as understanding of these issues develops, and (iv) correcting for spectral loss by scaling the eddy covariance fluxes to force energy balance closure.

(i) The flux averaging period (Tb) should be no shorter than 30 minutes and no longer than 60 minutes. The recommended length for Tb remains 30 minutes. However, longer averaging periods will be required to fully investigate low frequency contributions to the fluxes.

Discussion/Summary: The potential loss of low frequency flux components increases as the flux averaging time becomes shorter. On the other hand, too long an averaging period reduces the ability to resolve (a) the daily cycle and (b) the influence that short-period sporadic events, like cloud passage, can have on the fluxes. A 30-minute averaging time for flux calculations is a reasonable compromise. However, in order to assess the influence of the low frequency flux components on the 30-minute fluxes, there will be occasions when several 30-minute raw data time series must be concatenated and studied specifically for low frequency content. This can be done in a variety of ways. One approach is simply to plot the flux as a function of increasing averaging time (Tb) and to find the value of Tb at which the flux no longer increases. Another approach is spectral decomposition of one or more contiguous half-hourly raw data sets (with a Fourier transform or wavelet transform), followed by calculating the partial sums of the spectral estimates as a function of frequency, from the highest frequency to the lowest. The resulting curve is an ogive.

(ii) Ogives are recommended as the diagnostic tool to determine the length of time necessary to capture the low frequency flux components.

Discussion: If the slope of the ogive is flat at the lowest frequencies, then all high and low frequency flux components have been captured within that particular period of time. On the other hand, if the ogive appears to be increasing at the lowest frequencies, then the time period may be too short. The origin of low frequency contributions to the covariance is uncertain at present. Deep convective cells and roll modes within convective boundary layers are probably important at some sites, but tropospheric forcing may also play a significant role. The influence that these aspects of boundary layer dynamics have on the fluxes clearly needs to be better understood.
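To make the ogive diagnostic concrete, the sketch below computes an FFT-based w-c cospectrum and accumulates its partial sums from the highest frequency toward the lowest. It is a minimal, assumption-laden version (synthetic input, an even record length, simple one-sided normalization, and no windowing beyond mean removal), not a prescribed implementation.

    import numpy as np

    def ogive(w, c, dt):
        """Frequencies and ogive (cospectrum accumulated from the highest
        frequency down to the lowest) for two equally sampled series."""
        n = w.size
        w_p = w - w.mean()
        c_p = c - c.mean()
        W = np.fft.rfft(w_p)
        C = np.fft.rfft(c_p)
        # One-sided cospectrum, normalized so its sum equals cov(w, c).
        co = (W.conj() * C).real / n**2
        co[1:-1] *= 2.0                 # fold in negative frequencies (even n assumed)
        f = np.fft.rfftfreq(n, d=dt)
        og = np.cumsum(co[::-1])[::-1]  # partial sums, high to low frequency
        return f[1:], og[1:]            # drop the (zero) zero-frequency bin

    # Example: four concatenated 30-minute records at 10 Hz.
    dt = 0.1
    rng = np.random.default_rng(1)
    w = rng.standard_normal(4 * 18000)
    c = w + rng.standard_normal(w.size)  # scalar correlated with w
    f, og = ogive(w, c, dt)
    # The lowest-frequency ogive value recovers the total covariance; a
    # flat ogive at the lowest frequencies suggests the period is long enough.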
Ultimately, some data exploration at individual sites, specific analyses of the influence of time of day, and cross-site comparisons will be needed to better understand the issues surrounding low frequency corrections. Nevertheless, in order to ensure the ability to re-examine historical data in light of new findings, it is recommended that all raw data be kept.

(iii) Obtain, keep, and maintain all raw data records.

Discussion: This has been an AmeriFlux standard since the inception of AmeriFlux, and it is important to reiterate this requirement. However, should keeping raw data prove impossible, all associated variances, covariances, skewnesses, and kurtoses must be kept to allow for future analyses.

(iv) Scaling fluxes to close the energy balance is not recommended.

Discussion: Given that many of the comparisons between different net radiometers show a ±10-15% range of variation, correcting eddy covariance fluxes for spectral losses by scaling them to Rnet - G is not recommended. The uncertainties involved are not fully understood and may introduce biases into flux estimates that are completely unrelated to the eddy covariance systems. Furthermore, what data are available suggest that the low frequency contributions to heat, water vapor, and CO2 fluxes are poorly correlated, so that even if Rnet - G were known with enough confidence to scale (LE+H), the changes in LE+H required to close the energy balance would be a poor guide to the changes required in the CO2 flux. Nevertheless, it is strongly recommended that AmeriFlux participants determine and report the degree of energy balance closure at each AmeriFlux site. Including all minor energy balance (storage) terms in the final energy balance is also encouraged. Although these terms are not, by themselves, responsible for the frequent lack of closure, their inclusion in the site energy balance should help improve the final closure.

The planar fit coordinate system is the preferred coordinate system.

Discussion: Until recently the standard coordinate system for estimating fluxes has been the natural coordinate system, in which <v> = 0, <w> = 0, and sometimes <v'w'> = 0 for every averaging period. The angle brackets refer to block averaging, and in unsteady flows the coordinate orientation depends on the averaging period Tb. However, the rotation that sets <w> = 0 acts as a nonlinear high pass filter that (a) removes the contribution to the flux carried by motions with periods longer than Tb and (b) distorts the contribution to the flux in the remaining frequencies. Recent analysis suggests that the <w> = 0 coordinate system is a major contributor to the lack of surface energy balance closure seen at many tall forest sites. The low frequency contribution can be recovered by including the mean advective vertical flux, <w><c>, in the planar-fit coordinate system or by extending the averaging and rotation period as discussed above. Limited tests to date suggest that fluxes formed in the planar fit coordinate system that include the <w><c> term are 5-10% higher (in magnitude) than in the natural coordinate system. The planar fit coordinate system is defined over a long period (months), and historical data sets should be reprocessed to re-estimate fluxes in the new coordinate system. Every time the sonic is moved, the planar fit coordinate system must be recalculated. Investigations of the influence of atmospheric stability, strong winds, and changes in foliage morphology on the planar fit rotation angles also need to be carried out.
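A minimal sketch of one common planar fit implementation follows, in which the block-averaged vertical velocity is regressed on the two horizontal components over the ensemble of periods (in the spirit of Wilczak et al., 2001). The regression form and the construction of the rotation matrix below are standard choices rather than a workshop specification.

    import numpy as np

    def planar_fit(u_bar, v_bar, w_bar):
        """Fit the plane <w> = b0 + b1*<u> + b2*<v> over an ensemble of
        block-averaged velocities; return the offset b0 and the unit
        vector k normal to the fitted plane."""
        A = np.column_stack([np.ones_like(u_bar), u_bar, v_bar])
        (b0, b1, b2), *_ = np.linalg.lstsq(A, w_bar, rcond=None)
        k = np.array([-b1, -b2, 1.0])
        k /= np.linalg.norm(k)
        return b0, k

    def rotate_to_planar_frame(u, v, w, k):
        """Rotate velocity time series into the frame whose z axis is the
        plane normal k (one standard construction of the new basis)."""
        j = np.cross(k, [1.0, 0.0, 0.0])  # y axis: normal to k and sonic x
        j /= np.linalg.norm(j)
        i = np.cross(j, k)                # x axis completes the right-handed triad
        R = np.vstack([i, j, k])          # rows are the new basis vectors
        ur, vr, wr = R @ np.vstack([u, v, w])
        return ur, vr, wr

In this frame the rotated vertical velocity retains a nonzero mean in any single period, which supplies the mean advective term <w><c> discussed above.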
A major strength of the planar fit coordinate system is that it decouples the process of defining the coordinate frame from that of forming the covariances. A large ensemble of block-averaged mean velocities, <u>, can be chosen specifically to define the coordinate frame. For example, because airflow may follow the terrain more closely during high-wind-speed, neutrally stratified flow than during low-wind-speed conditions, which are more likely to be diabatically influenced, a subset of the ensemble of <u> values that excludes periods with low wind speeds could be used to determine the coordinate frame. Similarly, the vertical tilt angle of a planar fit coordinate system may vary as a function of wind direction because of topographically induced flow distortions. Hence, subsets of the ensemble of <u> values from different azimuthal sectors could be used to define different planar-fit frames appropriate to each sector. Once the coordinate frame has been defined through the planar-fit process and the rotation angles have been determined, then, provided the sonic has not moved, raw time series can be rotated into it either post facto or in real time. Flux calculations can then be performed using block averaging, filtering, or detrending, with due attention to the issues discussed above. With the planar fit coordinate system it is possible to recover the mean vertical wind and the crosswind momentum flux, which may provide information on thermal circulations at the site and some measure of site heterogeneity. For investigations of spectra and cospectra, turbulence time series should be rotated into the planar fit coordinate system first. Because all natural flows are inherently three-dimensional, 1-D sonics should not be used for estimating fluxes unless the errors associated with three-dimensional flow effects can be estimated. Zero-offsets and sonic inclination angles need to be checked periodically and the data recorded. For the reasons noted above, it may also be necessary to examine the rotation angles as a function of wind speed.

High frequency spectral losses are unavoidable, but they can be minimized by careful design. It is recommended (1) that all separation distances, time constants, system sensor characteristics, and deployment heights be recorded and preserved and (2) that such information be used to estimate high frequency spectral loss.

Discussion: Because the minimum spectral loss is associated with heat flux measured by sonic thermometry, it is recommended that all sites estimate this loss using either (the corrected and updated) Moore's method or Massman's analytical approximation of Moore's method. Estimating spectral loss from sonic thermometry should provide a lower bound on the spectral loss associated with any scalar flux; i.e., the effective filter time constant for heat flux should be much smaller than the equivalent time constant for any other scalar flux. This simple check should help decide whether the corrections associated with scalar fluxes are reasonable. The time constants associated with closed-path sensors should be determined empirically, because time constants measured in situ usually exceed the value calculated from tube attenuation and the intrinsic response of the scalar sensor. Frequent checks on closed-path time constants are important because dust or other influences may cause time constants to change over time. Methods of correcting for spectral losses are different in closed- and open-path systems.
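As one hedged illustration of determining a closed-path time constant empirically, the sketch below fits a first-order transfer function to the ratio of the measured w-scalar cospectrum to the (nearly lossless) w-T cospectrum from sonic thermometry. The first-order form 1/(1 + (2*pi*f*tau)^2), the low-frequency normalization band, and the assumption that the two true cospectra share the same shape are choices made for this example, not workshop requirements.

    import numpy as np
    from scipy.optimize import curve_fit

    def fit_time_constant(f, co_wc, co_wT):
        """Effective first-order time constant tau (s) from the cospectral
        ratio, assuming the true w-c and w-T cospectra have the same shape
        and the scalar path attenuates as 1/(1 + (2*pi*f*tau)**2)."""
        ratio = co_wc / co_wT
        ratio /= ratio[f < 0.05].mean()  # normalize by the low-frequency plateau
        def h(f, tau):
            return 1.0 / (1.0 + (2.0 * np.pi * f * tau) ** 2)
        (tau,), _ = curve_fit(h, f, ratio, p0=[0.3])
        return tau

Repeating such a fit routinely would reveal the drift in the time constant caused by dust or tube aging noted above.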
Low pass filtering (degrading temperature time constants) and spectral ratioing are sometimes used with closed-path systems. The numerical or analytical form of Moore's method is more appropriate to open-path systems. Both approaches have some inherent problems, but they gave similar results for a few test cases. The analytical approach is the easiest to implement, is less restricted in its application, and includes attenuation effects not included in the other techniques; however, it does require specific knowledge of the properties of the spectra or cospectra, and it makes no allowance for the inherent period-to-period variability in spectra or cospectra. It is recommended that all sites determine ensemble spectra and cospectra and that one specific mathematical formulation be used for comparing results between sites. It is also recommended, when using the analytical approach, that the spectral corrections be applied after coordinate rotation. This is likely to be the worst-case scenario (it slightly overcorrects the attenuation), but it also avoids significant (and possibly intractable) problems introduced by the need to consider the anisotropy of the turbulent flow and its influence on the transfer functions associated with the sonic's vertical and horizontal axes. Some concerns have been expressed about correction factors greater than about 1.5. However, these relatively large correction factors tend to occur more often at night, when the flux is quite low, so their impact on the annual budget is not very large.

The WPL terms are not a consequence of inadequate sensor performance, and in that sense they are not instrument-related corrections. Rather, they are a consequence of the concurrent density fluctuations of the air sampled by an instrument that measures trace gas density rather than molar mixing ratio. Nevertheless, including spectral corrections with the WPL terms requires a re-examination of the theory originally developed in 1980 by Webb, Pearman, and Leuning.

Discussion: A properly functioning CO2 instrument detects the number of absorbing CO2 molecules within the path of its infrared light beam. Assuming that the instrument's detection volume is constant, a CO2 instrument therefore indirectly measures the density (or number density) of CO2 molecules in the sample. Consequently, the WPL terms are not required to correct the measured trace gas density; rather, they are required to compensate for the concurrent density fluctuations in the air sampled with this type of instrument. These terms originate from the mathematical necessity imposed by Reynolds averaging. CO2 fluxes can be measured with either an open-path or a closed-path sensor. Both sensor types are similar in that they include an infrared gas analyzer that responds to the attenuation of infrared light. However, they are fundamentally different in their sampling strategy, because the open-path system is a passive system whereas the closed-path system is an active system. For the application of spectral corrections and the WPL terms to estimate fluxes from raw covariances, this difference is critical. The active system fundamentally alters the sample, but the passive system does not, and it is important to keep this in mind when considering how to include spectral corrections and the WPL terms when deriving flux estimates. This issue was not addressed in the original paper by WP&L.
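For reference, the flux expression of Webb, Pearman and Leuning (1980) for an open-path CO2 system can be written out directly. The sketch below is a plain transcription of that expression with variable names chosen for this example; per the open-path discussion that follows, the covariances supplied to it should already have been spectrally corrected.

    MU = 28.97 / 18.02  # ratio of molar masses, dry air to water vapor

    def wpl_co2_flux(cov_w_rhoc, cov_w_rhov, cov_w_T, rho_c, rho_v, rho_d, T):
        """CO2 mass flux from the WPL (1980) expression for an open-path
        sensor. cov_w_rhoc, cov_w_rhov, cov_w_T: covariances of vertical
        wind with CO2 density, water vapor density, and temperature
        (spectrally corrected first); rho_c, rho_v, rho_d: mean CO2, water
        vapor, and dry air densities; T: mean air temperature (K)."""
        sigma = rho_v / rho_d
        return (cov_w_rhoc
                + MU * (rho_c / rho_d) * cov_w_rhov
                + (1.0 + MU * sigma) * (rho_c / T) * cov_w_T)

    # Illustrative magnitudes only (SI units):
    F_c = wpl_co2_flux(cov_w_rhoc=-0.5e-6, cov_w_rhov=0.05e-3, cov_w_T=0.1,
                       rho_c=0.73e-3, rho_v=10.0e-3, rho_d=1.18, T=298.0)

The second and third terms are the water vapor and temperature (WPL) covariance terms whose treatment for closed-path systems is taken up below.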
Open-Path Sensor: Attenuation of CO2 density fluctuations in an open-path sensor results from the sensor's inability to resolve variations on scales smaller than the detection volume. This is an instrument design issue and is not related to fundamentally altering the sample's temperature, pressure, water vapor, or CO2 content. Strictly speaking, this last statement is not quite accurate, because the energy of the infrared signal absorbed by the CO2 molecules increases their vibrational and rotational energy (a quantum physical effect). In addition, the sensor can actually remove mass from the sample when condensation occurs on the lenses, which generally causes an easily diagnosed problem by rendering the data useless. There are also the possibilities that the sensor may distort the flow and that there are boundary-layer effects associated with flow near the flat surfaces that enclose the optical path. Further, open-path sensors are a heat source to the atmosphere, both because of their infrared signal generator and because (perhaps more importantly) the sensor body reradiates absorbed solar radiation as heat. Conceivably, these two effects could alter the temperature of the sample before or during its passage through the instrument's optical path. However, these issues can be ignored for the present discussion. Since the WPL terms apply to trace gas fluxes, as well as to their density fluctuations, all raw covariances measured with an open-path system must be spectrally corrected before the WPL terms are included. A simple thought experiment should help clarify this issue. Consider two cases. The first case is the perfect instrument or system, for which no spectral corrections apply; i.e., all instruments are co-located and perfectly measure data at a point. In this case the WPL terms must still be included as part of the CO2 mass flux estimate, because of the nature of Reynolds decomposition and the nature of atmospheric mass and heat transfer. The second case differs from the first only in that the CO2 measurement is attenuated by 25%. If one now applies the WPL terms and then corrects the resulting mass flux for spectral attenuation, the result will be in error, because it will differ significantly from the first, or perfect, case. This example can be extended to include any combination of imperfect (spectrally attenuated) measurements of water vapor, heat flux, etc., and in general one must conclude that for open-path sensors the spectral corrections must be applied to the raw covariances first and the WPL terms afterward for the final estimate of the trace gas flux. This also points out an important corollary. Spectral (or cospectral) corrections are specific to the instruments involved. They are not necessarily transferable from one covariance measurement to another. In other words, individual instruments and their specific separation distances, time constants, etc., define system-specific geometries and system-specific spectral corrections. This corollary has relevance to closed-path systems and to the estimation of fluxes by performing a point-by-point conversion of CO2 mass density measurements to CO2 dry-air mass mixing ratio.

Closed-Path Sensor: Attenuation of the temperature fluctuations in a closed-path system results from a combination of molecular and turbulent diffusion within the intake tube and the associated heat exchange with that tube.
In essence the tube acts as a heat exchanger and brings the sample to a uniform temperature before it is drawn into the detection chamber of the infrared gas analyzer. Consequently, the intake tube fundamentally alters the sample's density by changing the sample's temperature, in such a way that fluxes measured with this instrument do not require the temperature term of the WPL. [However, it was suggested at the workshop that T' fluctuations associated with low frequency atmospheric motions may not be completely eliminated by the intake tube.] The tube should also attenuate the pressure fluctuations; however, these are not completely eliminated by the time the sample arrives at the detection chamber. [In fact, turbulent flow inside a tube actually generates p' eddies through tube-flow boundary layer effects, but these are not expected to be significant to the WPL-related issues discussed here.] Attenuation of fluctuations in (trace gas) mass density results from a combination of diffusional smoothing of density variations inside the flow path (defined by the tube and sampling chamber), possible interaction with the flow path walls, design (line or volume averaging) aspects of the infrared gas analyzer, and any signal processing or electronic filtering inherent to the instrument's electronic circuitry. Of these, only the tube and chamber flow effects qualify as the active part of the system; all the others are passive. Consequently, measuring trace gas concentration with a closed-path system combines active and passive sampling. Usually these effects are lumped together into a single time constant, which is then used to describe the closed-path system, and indeed that is what was recommended at the workshop, as discussed in the section on high frequency spectral corrections. However, including the WPL terms in a manner appropriate to a closed-path system requires careful consideration of the nature of the sampling and its associated spectral correction. In general, the spectral corrections made to the WPL covariance terms should not include any active (diffusional or flow path) attenuation effects. Rather, they should include only the passive attenuation effects associated with the other parts of the system. Nevertheless, diffusional and tube attenuation effects should be included in the raw covariance between the sonic and the trace gas being sampled. In other words, it is recommended (1) that only the spectral corrections associated with the passive sampling aspects of any closed-path system be applied to the WPL covariance terms and (2) that both active and passive spectral corrections be applied to the covariance between the trace gas instrument and the sonic. This recommendation applies to water vapor as well as carbon dioxide, although some confusion may arise because the flux estimate includes a water vapor term as part of the WPL terms as well as the measured covariance term. Nevertheless, the spectral corrections to the WPL terms associated with actively altering the sample should not be applied, even for water vapor.

Summarizing the important distinctions between open- and closed-path systems: Open- and closed-path (CO2 and H2O) systems are similar in their use of infrared gas analyzers to measure trace gas fluctuations. But they are different in their handling of the air being sampled. These differences are crucial when applying spectral corrections and the WPL terms for flux estimation. Open-path systems are purely passive, whereas closed-path systems combine passive and active sampling.
Passive spectral corrections describe instrument or data processing compromises; they apply to all covariances (including the WPL terms) and to either an open- or a closed-path system. However, these corrections are specific to a particular instrument and data processing system, and they are not necessarily the same for the covariances of vertical velocity with temperature, pressure, water vapor, or CO2. Active spectral corrections describe sample-handling compromises. They apply only to closed-path systems and only to the raw covariance term, not to the (closed-path-associated) WPL terms. This is a consequence of the fact that the WPL terms apply only to the environment in which the trace gas measurements are made. In the case of the open-path system, the WPL covariance terms can be interpreted as fluxes (after spectral correction). In the case of the closed-path system, the WPL covariance terms lose their interpretation as fluxes, because the fluctuations in temperature, pressure, and water vapor of the air being sampled have been altered.

Point-by-point conversion of the measured density to dry-air mixing ratio: In order to avoid issues involving the WPL terms, some researchers have used a point-by-point conversion of the measured density to dry-air mixing ratio. However, this is not recommended, as it does not avoid the issues associated with spectral corrections, nor does it fundamentally alter any instrument constraints, an issue that WP&L (1980) did not discuss. This is most easily seen for an open-path system. For this case, measurements of ambient temperature, water vapor density, and pressure are used to compute the density of dry air, which in turn is used to calculate the instantaneous dry-air mixing ratio for the trace gas, x(t). The time series x(t) is then decomposed, using Reynolds decomposition, into a mean, <x>, and a fluctuating component, x'. The fluctuating component will suffer from spectral attenuation, but only in the proportion that its constituent elements are attenuated. In other words, if T is attenuated by 2%, H2O by 8%, and CO2 by 10%, then x' will be attenuated by some combination of these percentages, depending upon the relative contributions of T, H2O, and CO2 to x(t). The same argument carries over to the covariances, <w'x'>. The situation is similar, but more favorable, for a closed-path system. In this case the temperature fluctuations have (presumably) been eliminated, the pressure fluctuations have (presumably) been reduced to the point that they are no longer significant, and the active sampling (intake tube attenuation) has tended to bring the sample into a state where the water vapor concentration is relatively more uniform than in the ambient atmosphere (which, as argued above, is accomplished by the physical mixing of the sample inside the intake tube). For this scheme the fluctuations in x(t) have also been attenuated, but again in proportion to the amounts that CO2, H2O, and perhaps p are attenuated. It cannot be assumed a priori that these quantities have been equally attenuated (an assumption that is inconsistent with the observation that the time constant associated with water vapor attenuation for most closed-path sensors is at least 2 or 3 times the time constant for CO2 attenuation). As with the open-path example, the same argument applies when forming fluxes from the measured covariances.
Therefore, applying the appropriate spectral corrections requires that <w'x'> be expressed in terms of its constituent parts and that the corrections be applied according to the instruments and the system design used to calculate the various component covariances.

General comments on open- and closed-path systems: Open-path sensors are more prone than closed-path sensors to data loss resulting from rain and snow interfering with the optical path. To date, flux comparisons between open- and closed-path systems agree to within expected measurement errors. But it is also possible that CO2 fluxes measured with closed-path systems are slightly biased, because both passive and active spectral corrections may have been applied to the WPL covariance estimates, whereas only the passive corrections apply. This potential bias may be hard to detect by comparing open- and closed-path systems, because it is expected to be of comparable magnitude to the expected measurement uncertainties. Careful calibration of either open- or closed-path sensors is essential for obtaining reliable flux estimates. Gap-filling open-path flux data lost as a result of rain is probably best done using a PAR-NEE relationship developed during rainless periods.

All post-field QA/QC should be documented and reported when publishing flux estimates.

Discussion: No minimum set of QA/QC controls or tests was established. It was generally understood that more diagnostic tests are better than fewer. This is particularly true in complex terrain, where comparison with flat-terrain diagnostics is important. Three papers that discuss some post-field QA/QC are Foken and Wichura (1996: Agricultural and Forest Meteorology, 78, 83-105), Vickers and Mahrt (1997: Journal of Atmospheric and Oceanic Technology, 14, 512-526), and Finkelstein and Sims (2001: Journal of Geophysical Research, 106, 3503-3509). The first two of these papers discuss, among other things, tests for stationarity, and the third discusses flux sampling error. It appears now that nonstationarity results in random error rather than bias error. There are also spike detection (and interpolation) subroutines that can be used to test time series for spikes: Højstrup (1993: Measurement Science and Technology, 4, 153-157) and Brock (1986: Journal of Atmospheric and Oceanic Technology, 3, 51-58). It is also recommended that the standard deviation of all standard micrometeorological variables be recorded. The standard deviation (or variance) of net radiation or incoming solar radiation should be useful for diagnosing periods of nonstationarity associated with the passage of clouds. Nighttime u* thresholds associated with turbulence insufficient for eddy covariance fluxes should be determined on a site-by-site basis. These thresholds should be established during periods of stationary turbulence. Employing the new planar fit coordinate system may require re-establishing the u* threshold, because u* depends upon the coordinate system used. Gap filling should be done by the researchers themselves, not by the users. The gap filling strategy depends on the goal. Synthetic data gaps are recommended to determine whether there are any systematic biases associated with different gap filling methods. Some cross-site comparisons should be done with different gap filling methods. Falge et al. (2001: Agricultural and Forest Meteorology, 107, 43-69 and 71-77) discuss some gap filling strategies.
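The stationarity test mentioned above (comparing sub-record covariances with the full-record covariance, after Foken and Wichura, 1996) can be sketched as follows. The six sub-records echo the six 5-minute segments cited earlier; the 30% acceptance level noted in the comment is a commonly used value rather than a workshop prescription.

    import numpy as np

    def stationarity_ratio(w, c, n_sub=6):
        """Relative difference between the full-record covariance and the
        mean of n_sub sub-record covariances (Foken and Wichura, 1996).
        Values near zero indicate stationary conditions; a ratio below
        about 0.3 is a commonly used acceptance criterion."""
        cov_full = np.mean((w - w.mean()) * (c - c.mean()))
        sub = [np.mean((ws - ws.mean()) * (cs - cs.mean()))
               for ws, cs in zip(np.array_split(w, n_sub),
                                 np.array_split(c, n_sub))]
        return abs(np.mean(sub) - cov_full) / abs(cov_full)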
Recent modeling studies and field experiments show that horizontal and vertical advection terms tend to be of opposite sign and comparable magnitude. In complex terrain, however, their sum is not necessarily zero, and it can make a significant contribution to the calculation of surface exchange.

Discussion: Currently there are few published models of turbulent transport of scalars in canopies on hills, although results from some developing models were presented or described at the workshop. These showed that heterogeneity in the flow field can generate significant advective terms even when the scalar sources and sinks are horizontally homogeneous. In real topography, of course, both effects are likely to be important. The model studies showed that horizontal and vertical advection terms over topography are of similar magnitude and that their sum can be comparable to, or even exceed, the eddy flux at certain locations on low 2D ridges. Analogy with studies of momentum transport in complex terrain, together with direct measurements at the Wind River Crane AmeriFlux site, suggests that scalar advection in 3D topography will be smaller than over 2D ridges of the same steepness but can still be comparable to the eddy flux terms and must be estimated. In particular, these studies show that including vertical advection in surface exchange calculations while ignoring horizontal advection can introduce significant errors. The most common sources of flow heterogeneity and accompanying advection are topography and contrasts in surface energy balance (inland sea breezes). These must be considered, along with variations in the surface distribution of scalar sources and sinks, when estimating advective influences on measurements. The effect of topography is exacerbated by stable stratification. A series of studies described at the workshop pointed to nighttime drainage flows as a major source of error in nighttime respiration measurements on nights of low wind speed and strong stability. However, it is too early to make general recommendations for operational corrections for advection. At some well-equipped sites, continuous measurement of the dominant advective terms is being carried out, but at most sites it is likely that model-based corrections will be more cost effective. Nevertheless, it is clear that to address advection, particularly at night, a concerted measuring and modeling effort with site intercomparisons will be needed.

It is important to be cognizant of emerging issues.

Discussion: Some emerging issues were identified at the workshop. They tend to be associated with the influence of advection or complex terrain on measured fluxes. The following is a list of these issues:

- Drainage flows, which can be very thin, are expected to deplete near-soil CO2 from beneath the measurement height. This process needs to be better understood and quantified.

- Directional wind shear inside canopies makes comparison between above- and below-canopy flux measurements difficult. The above- and below-canopy flux footprints will not necessarily coincide or overlap. Nocturnal meandering motions of unknown origin can also lead to directional shear, further complicating the analysis and interpretation of eddy covariance data.

- Most (tower-deployed) eddy covariance systems are too high to measure fluxes when nocturnal drainage flows are confined to thin layers near the ground, such as over grassy surfaces or within the subcanopy.
However, using an eddy covariance system near the ground, where the transporting eddies are smaller, may lead to significant high-frequency spectral losses. Very little work has been done on within-canopy or near-surface cospectra, making it hard to estimate these spectral losses.

- Fundamental physical reasoning suggests that the u* threshold for data screening should be replaced with metrics based on the Froude number or the bulk Richardson number.

- Regions of flow separation behind even gentle hills result from the presence of canopies on the hill. This can confound the interpretation of the measured fluxes and can result in significant biases.

- Stable or nighttime conditions can support different types of motions, which can impact fluxes in different ways. Ramps tend to dominate during slightly stable conditions, and they promote the vertical mixing of trace gases. In very stable, highly stratified flows, gravity waves, usually confined to regions just above the canopy top, will dominate. Under these conditions turbulence is nonexistent and the gravity waves usually do not support much vertical mixing. However, their presence may bias flux measurements. Nighttime conditions can shift between these two bounding states and can display features suggesting a mixture of the two. Inspection of turbulent time series is required to begin diagnosing these issues at different sites.

- Intermittency and the loss of stationarity are common at night.

- There is little evidence of a true spectral gap between turbulence and synoptic flow variations. Recent work shows that atmospheric motions of long period (~4 hours) can play an important role in surface exchange. We need a better understanding of the sources of low frequency turbulent transport, its interface with synoptic trends, and how to deal with this continuum of atmospheric transport processes at flux towers.

Available eddy covariance software.

Discussion: The exchange and testing of any (nonproprietary) eddy covariance software is encouraged. Rob Clement and John Moncrieff have made their eddy covariance software available to anyone who may be interested. It contains many of the routines that are used or recommended for the gathering, processing, and analysis of eddy covariance data. The software can be downloaded from http://www.ierm.ed.ac/research/edisol/htm.