PRODUCT USAGE PATTERNS AT THE AWIPS BUILD 4.2 OT&E SITES

Patrice C. Kucera, Scott P. Longmore2, and William F. Roberts

National Oceanic and Atmospheric Administration
Forecast Systems Laboratory
Boulder, Colorado

2Joint collaboration with the Cooperative Institute for Research in the Atmosphere
Colorado State University/ NOAA Forecast Systems Laboratory
Boulder, Colorado

1. INTRODUCTION

Over the past decade, the Advanced Weather Interactive Processing System (AWIPS) has been under development for field operations at National Weather Service (NWS) Weather Forecast Offices (WFOs) across the nation. The AWIPS communications network, software, and hardware provide the NWS with a multitude of advanced datasets and capabilities that are integrated into a comprehensive system. A formal Operational Test and Evaluation (OT&E) of AWIPS software and its related components occurred from 24 May to 30 June 1999. In addition to numerous hardware and software tests, product usage logs were collected to record what meteorological data and applications the forecasters were using to perform their routine forecasting and warning duties. To obtain a sample of product usage logs that is consistent with previous studies, we extended the data collection through 31 August 1999.

This paper summarizes product usage logs that were collected from the workstations at each of the OT&E sites, namely Bismarck (BIS), North Dakota; Corpus Christi (CRP), Texas; Pleasant Hill (EAX), Missouri; Topeka (TOP), Kansas; and Tulsa (TSA), Oklahoma. Analysis of these logs determines which products (such as model, radar, surface, and satellite) and workstation capabilities were most commonly used by the forecasters to perform routine shift duties and duties prescribed for severe weather operations. When appropriate, the results are compared to previous studies at the Denver and Norman offices, where the staff used the AWIPS-like workstations known as DARE and Pre-AWIPS. Preliminary results are also presented on how workstation products were used when Hurricane Bret hit the CRP WFO area, with more detailed analysis to be presented at the conference.

2. DATA COLLECTION AND SAMPLE

Nearly every action taken by the forecasters on the workstations' graphics devices, including product retrieval, is recorded. The workstation-generated logs record the date and time products were displayed, along with product scale, map backgrounds, color tables, and other preferences like font magnification and density. Additional information, such as product overlays, animation, and zooming, is also recorded.

OT&E workstation usage log data were collected in two stages. The first stage involved the extraction of usage entries from all OT&E Display 2-Dimensional (D2D) workstation display logs. The extraction software, installed on each workstation, ran once per day (~0600 UTC) and staged the extracted usage data to a retrieval directory. The second stage utilized collection software that pulled the data to Boulder through the Wide Area Network (WAN). The collection software connected to each OT&E workstation and downloaded the usage data from the retrieval directory to a local staging directory. This software also ran once per day (~0630 UTC).
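
A minimal Perl sketch of this nightly collection loop is given below. The host names, directory paths, and the use of rcp as the transport are assumptions made for illustration only; they do not describe the actual collection software or its configuration.

    #!/usr/bin/perl
    # Hypothetical sketch of the second (collection) stage: pull the staged
    # usage files from each OT&E workstation's retrieval directory to a local
    # staging directory in Boulder.  Host names, directory paths, and the use
    # of rcp as the transport are illustrative assumptions only.
    use strict;

    my @workstations = ('bis-ws1', 'crp-ws1', 'eax-ws1', 'top-ws1', 'tsa-ws1');
    my $remote_dir   = '/data/logs/usage/retrieval';  # assumed retrieval directory
    my $staging_root = '/data/otne/staging';          # assumed local staging area

    foreach my $host (@workstations) {
        my $dest = "$staging_root/$host";
        mkdir $dest, 0755 unless -d $dest;
        # Copy the previous day's extracted usage entries across the WAN.
        my $status = system("rcp $host:$remote_dir/*.log $dest/");
        warn "collection from $host failed\n" if $status != 0;
    }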

Usage log files were collected in this manner from 35 workstations over about 100 days. Due to network outages and other unforeseen problems, there were several missing days at various sites. Of note, no usage logs were collected for Bismarck from 19 to 31 August; we estimated that this loss represented about 13% of the data from BIS. Also, the logs from two Topeka workstations were not collected after 30 June, which represented about 20% of the TOP data. Since this paper only summarizes product usage, and does not draw firm conclusions, we proceeded to analyze the data, keeping these disparities in mind. Perl scripts and the S+ statistical software package were used to summarize and format the raw usage log data.
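
As an illustration of the summary step, the following Perl sketch tallies usage entries by product category and computes the frequency, percentage, and daily mean that appear in Table 1. The pipe-delimited, one-entry-per-line input format is assumed for illustration only; in the actual analysis, Perl scripts and S+ together performed the summarization and formatting.

    #!/usr/bin/perl
    # Minimal sketch of the summary step: tally usage entries by product
    # category and derive the frequency, percentage, and daily mean reported
    # in Table 1.  The one-entry-per-line, pipe-delimited input
    # (date|category|product) is an assumption for illustration; the
    # operational logs also record scale, map background, color table, and
    # other preferences.
    use strict;

    my %freq;        # number of product requests per category
    my %days_seen;   # distinct days present in the sample

    while (my $line = <>) {
        chomp $line;
        my ($date, $category, $product) = split /\|/, $line;
        next unless defined $category;
        $freq{$category}++;
        $days_seen{$date} = 1;
    }

    my $ndays = scalar(keys %days_seen) || 1;
    my $total = 0;
    $total += $_ for values %freq;

    printf "%-12s %6s %9s %7s\n", 'Category', 'mean', 'freq', '%';
    foreach my $cat (sort { $freq{$b} <=> $freq{$a} } keys %freq) {
        printf "%-12s %6.0f %9d %7.1f\n",
            $cat, $freq{$cat} / $ndays, $freq{$cat}, 100 * $freq{$cat} / $total;
    }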

3. DIFFERENCES AMONG SITES

The differences among the OT&E sites that are likely to have an effect on product usage include software delivery dates and training, number of workstations, and the weather.

The actual time each site has used the AWIPS software and when the office staff received training could affect product usage. Previous studies (Lusk et al., 1999 and 1995) showed that the longer users were exposed to and worked with the software, the more proficient they became, and the more workstation products they used. As the AWIPS software matured to include more functionality, forecasters were using legacy systems less (Kucera et al., 1997 and 1995). The number of staff using the system can affect product usage as well. For example, during severe weather, when additional forecasters are typically called in for duty, usage patterns differ; an examination of this will be included during our conference presentation.

EAX received Build 1 software in July 1996; TOP and TSA followed in August 1996. As an early software release, Build 1 primarily demonstrated the network communications software and was limited in product availability. In October 1997, Build 3 was installed at EAX and TOP. TSA followed in December 1997 and BIS in January 1998. Build 3 software integrated nearly all observational (radar, satellite, observations, soundings, profiler) and gridded model datasets using the new D2D interface. In September 1998, all the OT&E sites except CRP received Build 4; CRP's installation followed in December 1998. The more mature Build 4 software integrated applications from several development laboratories and included more communications and networking software. All of the OT&E sites were upgraded to Build 4.1 in January 1999, and in July 1999 the AWIPS "Commissioning Build," 4.2, was installed. The statistics for this paper were taken from a combination of Builds 4.1 and 4.2, and represent mostly FSL-developed software.

About one month prior to the initial software delivery at each site, Centralized User Training (CUT) sessions were conducted at the NWS Training Center in Kansas City. Subsequent software upgrades included some on-site training, as well as updated user guide and system manager guide materials and release notes. Although difficult to quantify without more in-depth user surveys, the individual backgrounds and education of the staff likely have an effect on product usage.

4. WEATHER

The weather itself obviously dictates the frequency of product usage. We obtained summaries of weather information for each site from the National Climatic Data Center's Local Climate Data and Storm Data Web sites (1999). In general, BIS experienced a warmer and wetter warm season, while CRP, TOP, and TSA had an average to slightly cooler and drier warm season. EAX data were unavailable at press time. During this OT&E study, several sites experienced significant severe weather. All the sites experienced an active June, but only BIS had a notable number of severe days in July. In mid-August, Hurricane Bret greatly affected product usage at CRP. Early results are presented in this paper, and more in-depth statistics will be presented at the conference.

5. PRODUCT USE BY CATEGORY

Similar types of workstation products are grouped into categories: Surface, Satellite, Radar, Vertical, Upper Air, Models, and Extensions. Table 1 shows the distribution of all requested products by product category for the 1999 warm season at the OT&E sites. For each category, the table gives the actual frequency, the percentage of all requests, and the daily mean, calculated as the frequency divided by the number of days in each site's sample (for example, the BIS Radar mean of 77 follows from 6664 requests over 87 days). The first five categories include all observational datasets, followed by Models (gridded data and model families) and Extensions, which include WarnGen, the Interactive Skew-T, and other application software.

Table 1. Distribution of products by product category
for the 1999 warm season at the OT&E WFO sites.

              BIS WFO (87 days)        CRP WFO (100 days)
Category     mean    freq     %       mean     freq      %
Surface        40    3501   10.2        76     7624    6.1
Satellite      17    1462    4.3        40     4042    3.2
Radar          77    6664   19.4        85     8472    6.8
Upper Air      ~1      81    0.2         5      528    0.4
Vertical        3     244    0.7        12     1235    1.0
Models        239  20,835   60.7      1017  101,777   81.6
Extensions     17    1518    4.4        11     1107    0.9
Total         394  34,305  100.0      1248  124,785  100.0

              EAX WFO (100 days)       TSA WFO (100 days)
Category     mean    freq     %       mean     freq      %
Surface        45    4545    8.6        63     6330   10.6
Satellite      23    2335    4.4        32     3185    5.3
Radar          41    4136    7.9       110   11,052   18.5
Upper Air      27    2727    5.2        17     1661    2.8
Vertical        5     473    0.9        10     1016    1.6
Models        377  37,718   71.6       359   35,907   60.0
Extensions      7     745    1.4         7      740    1.2
Total         527  52,679  100.0       599   59,891  100.0

              TOP WFO (79 days)        Legend:
Category     mean    freq     %        BIS  Bismarck, ND
Surface        80    6305   11.1       CRP  Corpus Christi, TX
Satellite      27    2100    3.7       EAX  Pleasant Hill, MO
Radar          59    4628    8.2       TSA  Tulsa, OK
Upper Air      25    1947    3.4       TOP  Topeka, KS
Vertical        4     348    0.6
Models        518  40,890   72.2
Extensions      5     376    0.7
Total         716  56,594  100.0

In general, the Model category substantially dominated product usage at all of the sites, which agrees with results from previous studies. All model data are displayed "on the fly" from gridded data, and can be displayed through the Volume Browser Menu or from predefined model families. CRP averaged over 1000 model loads per day. The Radar category showed notable usage at all the sites; those sites with a more active severe season showed higher usage of the radar products. Previous usage log studies (Kucera et al., 1997, 1996, 1995; Roberts et al., 1992, 1993; Steiner et al., 1992; Walker, 1990) have shown that during severe weather episodes the overall product usage rises, but the variety of the types of products used declines. In our conference presentation, we plan to include results of how product usage varies on severe versus nonsevere days.

Surface data include all observational plots (such as METARs, lightning, local data, and precipitation) as well as surface analyses. This category showed moderate usage at all the sites. Satellite data usage was similar across sites, with the exception of CRP, where it was relatively higher. Vertical data (skew-Ts and profiler time-height cross sections) and Upper Air data (plan-view plots of rawinsonde or profiler data on constant altitude or pressure surfaces) showed relatively low usage at all sites. The Extensions category averaged low usage, which is typical because these applications provide very specialized information (Kucera et al., 1995, 1996; Roberts et al., 1992). Detailed information on the most used products at each site will be presented at the conference.

6. HURRICANE BRET AND PRODUCT USAGE AT CORPUS CHRISTI

As mentioned earlier, usage patterns can change before and during severe weather because forecasters are looking at more products and because there are typically more forecasters on shift. The Hurricane Bret case is used to illustrate what products forecasters used the most when trying to make urgent predictions.

Hurricane Bret made landfall in the United States near CRP during our data collection period. As reported by a forecaster (Nadler 1999) at CRP:
 

"On the morning of Wednesday, August 18, a tropical disturbance wobbled off the Yucatan Peninsula in the Bay of Campeche. The next day, August 19, this disturbance became Tropical Storm Bret, the second named storm of the 1999 Atlantic Hurricane Season..... Bret was classified a hurricane Friday evening, August 20. It was located 215 miles east of Tampico, Mexico with sustained winds of 80 mph.....By 4AM Saturday, August 21, the National Hurricane Center extended a Hurricane Warning up to Baffin Bay, 40 miles south of Corpus Christi.....By Saturday evening (August 21), Bret quickly intensified into a major hurricane, reaching Category 4 by 7PM....Late Sunday morning (August 22), Bret finally slowed and churned slowly west-northwest, focusing its eventual landfall between Brownsville and Corpus Christi. Exact landfall was made...about 60 miles south of Corpus Christi around 5:45 PM Sunday evening....Storm total precipitation amounts from Bret were impressive. In two days, reports in excess of 15 inches fell over the region. Flash flooding became a concern as extremely heavy rains within Bret's squalls persisted over the Coastal Bend area."

In general, product usage increased during the week of the event, as shown in Table 2, which summarizes the weeks before, during, and after landfall. Radar usage increased dramatically during and after the event, while Model usage declined as the storm approached and finally dissipated. Satellite data usage peaked during the week of the event, as did Extensions usage. WarnGen is included in the Extensions category, and CRP issued over 20 warnings during the event.

Table 2. Distribution of products by product category
at the Corpus Christi, TX WFO for the weeks before,
during, and after landfall of Hurricane Bret in August 1999.

              Aug 9-15         Aug 16-22        Aug 23-29
Category       %  daily mean    %  daily mean    %  daily mean
Surface       5.4      71      7.1      98      6.4      65
Satellite     3.4      44      3.9      53      3.3      33
Radar         0.5       7      6.8      93      9.8      99
Upper Air     0.6       8      0.4       6      0.0       1
Vertical      1.2      16      0.9      12      0.9       9
Model        88.3    1158     79.1    1085     78.6     794
Extensions    0.6       8      1.8      24      1.0      10
Total       100.0    1311    100.0    1371    100.0    1011

A more focused look at the days before, during, and after landfall is provided in Table 3. Note that as Radar and Surface data usage increased, Model data usage declined. Because flooding and tornadoes were occurring, the radar data and surface plots were utilized extensively.

Table 3. Distribution of products by product category
at the Corpus Christi, TX WFO for the days before,
during, and after landfall of Hurricane Bret in August 1999.

             August 21       August 22       August 23
Category      freq     %      freq     %      freq     %
Surface        100   9.2       173  10.3       134  13.6
Satellite       73   6.7        79   4.7        38   3.9
Radar           52   4.8       370  22.1       405  41.0
Upper Air        6   0.5         4   0.2         0   0.0
Vertical        10   0.9        13   0.8         8   0.8
Model          825  75.6       926  55.3       376  38.1
Extensions      25   2.4       108   6.5        26   2.6
Total         1091 100.0      1673 100.0       987 100.0

With the issuance of numerous warnings on 22 August, the Extensions category showed a dramatic jump due to the heavy usage of WarnGen. The Vertical and Upper Air categories declined in usage during the event. The conference presentation will provide more details on the products that forecasters used to track Hurricane Bret and will examine how usage of the various products changes during severe weather.

7. SUMMARY

This is the first opportunity we have had to compare product usage from more than two WFOs. Initial comparisons with previous studies indicate comparable product usage overall and during severe and nonsevere weather days. For the conference presentation, we plan to present material on the most frequently used products at each site, how product usage varied for severe versus nonsevere days with special emphasis on Hurricane Bret, and more in-depth information on how software installation and training may have influenced product usage.

8. ACKNOWLEDGMENTS

The authors wish to thank Dave Clark and Bob Glancy for retrieving important yet obscure information on short notice. We also want to acknowledge the staff at the Network Control Facility who helped us facilitate the data collection over the WAN. Our thanks also go to the OT&E NWS office staffs for their cooperation in this effort. We are grateful for the insightful reviews of Cindy Lusk, Vada Shea, and Nita Fullerton.

9. REFERENCES

Kucera, P.C., C.M. Lusk, W.F. Roberts, and L.E. Johnson, 1997: Warm season operational use of the WFO-Advanced workstation at the Denver WSFO. 14th International Conf. on Interactive Information and Processing Systems, Phoenix, AZ, Amer. Meteor. Soc., 325-329.

Kucera, P.C., and W.F. Roberts, 1995: Warm season product usage patterns from the DARE workstations at the Denver and Norman WSFOs. 14th Conf. on Weather Analysis and Forecasting, Dallas, TX, Amer. Meteor. Soc., 101-107.

Kucera, P.C., and C.M. Lusk, 1996: Cool season product usage patterns from the DARE workstations at the Denver and Norman WSFOs. NOAA Tech. Memo. ERL FSL-18, NOAA Forecast Systems Laboratory, Boulder, CO, 59 pp.

Lusk, C.M., P.C. Kucera, W.F. Roberts, and L.E. Johnson, 1999: The process and methods used to evaluate prototype operational hydrometeorological workstations. Bull. Amer. Meteor. Soc., 80, 57-65.

Lusk, C., W. Roberts, L. Johnson, D. Clark, and P. Kucera, 1995: Evaluation of Pre-AWIPS training, final report. NWS Norman Risk Reduction Project (unpublished document).

Nadler, D., 1999: South Texas avoids catastrophe. South Texas Weather Journal, National Weather Service Southern Region, Vol. 4, No. 3, 1-4.

National Climatic Data Center, 1999: Local Climate Data Web site, http://www5.ncdc.noaa.gov/pdfs/lcd/1999/

National Climatic Data Center, 1999: Storm Data Web site, http://www.ncdc.noaa.gov/pdfs/sd/pre9908.pdf

Roberts, W.F., E.J. Steiner, and C.M. Lusk, 1992: Product usage patterns from the DARE workstations at the Denver WSFO. Eighth International Conf. on Interactive Information and Processing Systems for Meteorology, Oceanography, and Hydrology, 5-10 Jan 1992, Atlanta, GA, Amer. Meteor. Soc., 247-252.

Roberts, W.F., and P.C. Kucera, 1993: Cool season product usage patterns from the DARE workstations at the Denver and Norman WSFOs. 13th Conf. on Weather Analysis and Forecasting, 2-6 Aug 1993, Vienna, VA, Amer. Meteor. Soc., 522-525.

Steiner, E.J., W.F. Roberts, and C.M. Lusk, 1992: Use of DARE-II workstation products and capabilities in the summer of 1990. NOAA Tech. Rep. ERL FSL-2, NOAA Forecast Systems Laboratory, Boulder, CO, 46 pp.

Walker, D.C., 1990: DARE-I evaluation: forecasters' assessment and use of the DARE-I system during the 1988 warm season. NOAA Tech. Rep. ERL 441-FSL 4, NOAA Forecast Systems Laboratory, Boulder, CO, 55 pp.