The Army's Bandwidth Bottleneck
August 2003



APPENDIX C
Compressing Data to Reduce Bandwidth Demand

Data compression has been suggested as a means to reduce bandwidth demand. This Congressional Budget Office (CBO) analysis concludes, however, that the most likely effect of pending advances in data compression will not be to empty saturated trunk communications lines. Instead, those lines will probably remain saturated with increasingly compressed data.

Data compression techniques differ depending on whether some loss of data can be tolerated. If it can, so-called lossy techniques may be used; if no losses may occur, lossless techniques must be employed. The transmission of picture images (so-called imagery) or streams of pictures (known as streaming video) can tolerate some data errors, on the order of a few percent, because the human eye and brain unconsciously correct such anomalies. But compression techniques that are used for transmitting military orders, network management and other control information, situation assessments, and much of the rest of military data must be lossless.

A number of fast techniques exist for lossless compression, but the prospects are small for major improvements by 2010 in the amount of compression such techniques can achieve. The best one can obtain is about a 2:1 compression, on average, and military computer systems today routinely employ such compression techniques for transferring large data files. Computer users routinely perform lossless compression (and decompression) when they zip, gzip, and unzip files. Improvements on those techniques are being pursued, but they usually involve the sequential application of known methods and may yield gains of only about 15 percent relative to current performance.
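
The following sketch, using Python's standard zlib module (the DEFLATE family of algorithms behind zip and gzip), illustrates the lossless round trip described above. The sample message is invented, and the printed ratio depends entirely on the data: highly repetitive text compresses far better than the roughly 2:1 typical of mixed file contents.

```python
# A minimal sketch of lossless compression with Python's standard zlib module
# (DEFLATE, the algorithm family behind zip and gzip). The sample message is
# invented for illustration; real-world ratios depend entirely on the data.
import zlib

# Repetitive, text-like data compresses far better than the ~2:1 average
# cited for typical mixed files; random data barely compresses at all.
original = b"UNIT ALPHA REPORTS GRID 1234 5678; STATUS GREEN; HOLDING. " * 200

compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

assert restored == original  # lossless: every byte is recovered exactly
print(f"original:   {len(original):6d} bytes")
print(f"compressed: {len(compressed):6d} bytes")
print(f"ratio:      {len(original) / len(compressed):.1f}:1")
```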

In contrast, substantial improvements in lossy compression techniques will occur by 2010, reducing by an additional order of magnitude the data throughput needed to transmit a given video stream. A key to that transition will be the change from MPEG-2 to MPEG-4 standards for data compression. (MPEG stands for Moving Picture Experts Group.) Associated with each MPEG standard is a different numerical algorithm. MPEG-2 uses numerical algorithms based on the fast Fourier transform, which was optimized for computers about 40 years ago to recast the data associated with a stream of pixels (more precisely, the data evaluated at the pixel indices). The data are recast as a properly weighted sum of trigonometric sines and cosines and their higher harmonics, which oscillate at integer multiples of the base frequency. The trigonometric functions associated with the recasting are called the basis functions. The fast Fourier transform exactly evaluates a multiplicative weight for each basis function so that the weighted sum of the basis functions exactly equals the original data when evaluated at the pixel indices. In that sense, the weights replace the original data on a one-for-one basis.
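
As a concrete illustration of that one-for-one replacement, the sketch below uses NumPy's FFT on an invented 64-pixel row of brightness values: the transform produces one complex weight per basis function, and inverting it recovers the original values to machine precision.

```python
# A minimal sketch of the "weights replace the data" idea, using NumPy's FFT.
# The 64-pixel row of brightness values is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, size=64).astype(float)  # one row of image data

weights = np.fft.fft(pixels)            # one complex weight per basis function
reconstructed = np.fft.ifft(weights).real

# Inverting the transform reproduces the original pixel values exactly
# (to machine precision), so the weights carry all of the information.
assert np.allclose(reconstructed, pixels)
print("largest reconstruction error:", np.abs(reconstructed - pixels).max())
```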

However, the mathematics of Fourier transforms shows that the smaller weights can often be ignored (set to zero) while the remaining weights still allow the original data stream to be approximated to within acceptable error tolerances. The ratio of the original number of weights to the number retained is the compression ratio. For typical two-dimensional imagery and video, acceptable error levels can be maintained by setting about nine-tenths of the weights to zero. Therefore, under the MPEG-2 standard, the retained weights for typical two-dimensional images imply a data compression ratio of about 10:1.
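
The sketch below illustrates that truncation step under the same assumptions as the previous example: a smooth, invented scan line is transformed, the smallest nine-tenths of the weights are set to zero, and the signal rebuilt from the surviving tenth differs from the original by only a few percent of the brightness range.

```python
# A minimal sketch of weight truncation: keep only the largest tenth of the
# Fourier weights. The smooth, image-like scan line is invented; real imagery
# behaves similarly because most of its energy sits in low-frequency weights.
import numpy as np

n = 512
x = np.arange(n)
pixels = (128 + 60 * np.sin(2 * np.pi * x / n)
          + 20 * np.cos(2 * np.pi * 5 * x / n)
          + np.random.default_rng(2).normal(0, 2, n))  # gentle gradients plus noise

weights = np.fft.fft(pixels)

keep = n // 10                                   # retain 1 weight in 10 (10:1)
threshold = np.sort(np.abs(weights))[-keep]
truncated = np.where(np.abs(weights) >= threshold, weights, 0)

approx = np.fft.ifft(truncated).real
worst = np.abs(approx - pixels).max() / (pixels.max() - pixels.min())
print(f"weights kept: {keep} of {n}")
print(f"worst-case error: {worst:.1%} of the brightness range")
```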

The new algorithm underlying MPEG-4 is called the wavelet transform. In addition to working with basis functions like the sines and cosines noted above, it employs additional, specially tailored basis functions that allow minimal errors with even fewer retained weights. (In more metaphorical language, each basis function is called a wavelet because graphs of some of them look like waves.) Fast algorithms for computing the wavelet transform have been developed over the past 20 years. With the MPEG-4 standard, typical data compression ratios for two-dimensional images can often be expected to improve to about 100:1.
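
To illustrate why a tailored basis can get by with fewer retained weights, the sketch below applies one level of the Haar transform, the simplest wavelet, to an invented scan line with long runs of constant brightness. (MPEG-4 specifies more elaborate wavelets; the Haar transform is used here only because it fits in a few lines.) Nearly all of the detail weights come out exactly zero, so they can be dropped with no error at all.

```python
# A minimal sketch of a one-level Haar wavelet transform, the simplest wavelet.
# MPEG-4 specifies more elaborate, specially tailored wavelets; this example
# only illustrates why a well-chosen basis leaves most weights near zero.
import numpy as np

def haar_step(signal):
    """Split a signal into coarse weights (pair averages) and detail weights."""
    pairs = signal.reshape(-1, 2)
    averages = pairs.mean(axis=1)
    details = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return averages, details

def haar_inverse(averages, details):
    """Exactly rebuild the signal from its coarse and detail weights."""
    out = np.empty(2 * len(averages))
    out[0::2] = averages + details
    out[1::2] = averages - details
    return out

# An invented scan line with long runs of constant brightness.
signal = np.repeat([40.0, 200.0, 90.0, 150.0], [63, 65, 64, 64])

averages, details = haar_step(signal)
print(f"nonzero detail weights: {np.count_nonzero(details)} of {details.size}")

# Rebuilding from all the weights is exact; dropping the zero detail weights
# would shed half the data with no error at all for this signal.
assert np.allclose(haar_inverse(averages, details), signal)
```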

In the military context, the Air Force expects to begin using MPEG-4 data compression on a limited experimental basis in 2003, a move that would achieve an order-of-magnitude reduction in the bandwidth required for transmitting video images collected by unmanned aerial vehicles (UAVs). But rather than using the techniques to reduce the total demand for bandwidth, the Air Force plans to increase the resolution of the images transmitted or the number of sensors (or both), which would result in no net decrease in total bandwidth demand.(1) Given the effectiveness of UAVs in the recent Iraq and Afghanistan campaigns, their novelty, and the rapidly changing doctrine for their employment by the Army, there is no reason to expect that the Army will use these data compression techniques differently.

Therefore, although improvements in data compression will occur, CBO believes that they will be used to keep the communications pipes full of more, and increasingly compressed, data rather than to empty those pipes. Thus, improved data compression is unlikely to affect the results of CBO's analysis regarding the 2010 mismatch between bandwidth supply and demand on the battlefield.


1.  Personal communication to the Congressional Budget Office from Col. Rhys MacBeth, Commander, Digital Imagery Video Compression and Object Tracking Battlelab, Eglin Air Force Base, Florida, August 10, 2002.
