Over the past three decades, a quiet revolution has fundamentally changed
the way that much research in climate science is done. Previously, the
dominant scientific paradigm was the interplay between theory and observation
concerning the structure and behavior of natural phenomena. Today, much
climate research is driven by the interactions among theory, observation,
and modeling. By modeling, we mean computer-based simulations of various
phenomena based on numerical solutions of the theory-based equations governing
the phenomena under investigation. These combined approaches are now widespread
in the physical sciences. It is significant that mathematical modeling
of weather and climate literally pioneered this new approach to scientific
research.
Mathematical models of climate can range from simple descriptions of simple
processes to full-blown simulations of the astoundingly complex climate
system. Models of the coupled atmosphere-ocean-ice-land system lie near
the most complex end of this range. The very complexity of climate
models can provoke highly divergent reactions, varying from
dismissive ("garbage in, garbage out") to almost worshipful. The truth lies
far from either of these unscientific extremes.
Newcomers to the greenhouse warming problem tend to be unaware of the long
and rich history of mathematical modeling of the atmosphere and the ocean.
In the late 1940s and early 1950s, simple mathematical models were created
to attack the weather forecasting problem. More advanced models were built
in the late 1950s and early 1960s (6, 7) because of a strong research
interest in understanding the circulation of the atmosphere. Shortly thereafter,
the first model bearing a strong resemblance to today's atmospheric models
was created (8). That early model,
as well as all of today's models, solves the equations of classical physics
relevant for the atmosphere, ice, ocean, and land surface. These equations
are conservation of momentum (Newton's second law of motion), conservation
of heat (first law of thermodynamics), and conservation of matter (air,
water, chemicals, etc., can be blown around by wind or currents, changed
in phase, transferred across boundaries, or converted chemically, but the
number of atoms of each kind remains unchanged).
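In a standard textbook form, these conservation laws can be written as follows. This is a sketch only: actual model formulations include many additional terms for friction, phase changes, and sources and sinks. Here v is velocity, ρ density, p pressure, Ω the Earth's rotation vector, g gravity, F friction, c_p the specific heat, T temperature, and Q diabatic heating.

```latex
% Conservation of momentum (Newton's second law, per unit mass,
% in the rotating frame):
\frac{D\mathbf{v}}{Dt} = -\frac{1}{\rho}\nabla p
  - 2\,\boldsymbol{\Omega}\times\mathbf{v} + \mathbf{g} + \mathbf{F}

% Conservation of heat (first law of thermodynamics):
c_p\frac{DT}{Dt} - \frac{1}{\rho}\frac{Dp}{Dt} = Q

% Conservation of matter (continuity equation for air; analogous
% equations govern water vapor and chemical tracers):
\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0
```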
The modeling approach thus provides high potential for fundamentally testing
the application of these theoretical first principles. Such modeling appears
deceptively simple: These equations are taught in high school physics.
There are some daunting challenges, however. When coupled and applied to
moving (and deforming) fluids such as air and water, these equations form
continuum systems that are intrinsically nonlinear and can exhibit surprisingly
counterintuitive behaviors. Moreover, their solution in a climate model
requires a reasonably fine-scale grid of computational points all over
the atmosphere-ice-ocean-land surface system. In addition, important small-scale
processes such as moist convection (e.g., thunderstorms) and turbulent
dissipation remain formidably difficult to incorporate on a first-principles
basis. Worse, there are no meaningful steady-state solutions that yield the
average climate directly. In effect, the average climate in such a model must be
described as the statistical equilibrium state of an unstable system that
exhibits important natural variability on timescales of hours (thunderstorms),
days (weather systems), weeks to months (planetary-scale waves/jet-stream
meanders), years (El Niño), and decades to centuries (ocean circulation
variations and glacial ice changes). Clearly, models of such a large and
complex system are intrinsically computer intensive. Fortunately, today's
supercomputers are over a thousand times faster than those of 30 years
ago. Because of today's widespread availability of relatively inexpensive
computer power, the number of fully coupled atmosphere-ocean climate models
in the world has increased from a few in the early 1980s to roughly 10
independently conceived models today. Roughly 20 more are essentially based
on these 10 models.
Over the last half century, use of these kinds of physically based mathematical
models has resulted in major improvements in the science of weather forecasting.
Sharp skill improvements have been achieved in finding the useful short-term
predictability in a fundamentally chaotic system (by which I mean that
the details of weather variations become essentially unpredictable after
a sufficient lapse of time, say a couple of weeks) (9).
For example, it has become almost routine to forecast the intensity and
path of a major winter storm system well before the surface low-pressure
area (so ubiquitously displayed in television weathercasts) has even formed.
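The chaotic limit on detailed predictability can be illustrated with a toy system. The sketch below (not a weather model; the Lorenz-63 equations are a classic three-variable chaotic system) shows two "forecasts" that differ by one part in a million in their initial conditions and yet end up far apart:

```python
# Toy illustration of chaotic error growth: the Lorenz-63 system.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations one step with 4th-order Runge-Kutta."""
    def deriv(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def nudge(s, k, w):
        return tuple(si + w * ki for si, ki in zip(s, k))

    k1 = deriv(state)
    k2 = deriv(nudge(state, k1, dt / 2))
    k3 = deriv(nudge(state, k2, dt / 2))
    k4 = deriv(nudge(state, k3, dt))
    return tuple(
        s + dt / 6 * (p + 2 * q + 2 * r + w)
        for s, p, q, r, w in zip(state, k1, k2, k3, k4)
    )

def separation(u, v):
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v)) ** 0.5

# Two forecasts differing by one part in a million in x.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)
initial_gap = separation(a, b)
for _ in range(2500):  # integrate to t = 25 "model days"
    a, b = lorenz_step(a), lorenz_step(b)
final_gap = separation(a, b)
print(initial_gap, final_gap)  # the gap grows by many orders of magnitude
```

The tiny initial error amplifies exponentially until the two trajectories are as different as two randomly chosen states of the system, which is exactly why detailed weather forecasts lose skill after a couple of weeks.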
Recently, it has become clear that slower variations of the coupled
ocean-ice-atmosphere-land surface system provide potential for finding useful predictability
on timescales longer than the couple of weeks characteristic of individual
weather systems. The most visible example is the realization that El Niño
events, which produce warming in the tropical eastern Pacific Ocean, may
be predictable a year or so in advance under certain circumstances (10).
The existence of such a "predictable spot" of warm ocean suggests
a "second-hand" improvement of prediction of seasonal weather
anomalies (e.g., a wetter-than-normal California winter).
The existence of such extended-range predictive potential in the climate
system leads to obvious questions about such models' validity for predicting
systematic changes in the statistical equilibrium climate (say, a 20-year
running average) resulting from the inexorable increase in infrared-active
gases currently underway. First, we must recognize that these
are conceptually quite different things: Weather forecasting attempts to
trace and predict specific disturbances in an unstable environment; climate
projections attempt to calculate the changed statistical equilibrium climate
that results from applying a new heating mechanism (e.g., CO2
infrared absorption) to the system. Perhaps surprisingly, predicting the
latter is in many respects simpler than predicting the former.
As an example of the fundamental difference between weather forecasting
and climate change, consider the following simple and do-able "lab"
thought experiment that utilizes the common pinball machine.8
As the ejected ball in the pinball machine careens through
its obstacle-laden path toward its inevitable demise in the gutter, its
detailed path, after a couple of collisions with the bumpers, becomes deterministically
unpredictable. Think of this behavior as the "weather" of the
pinball machine. Of course, the odds against success can be changed dramatically
in favor of the player by raising the level of the machine at the gutter
end, in effect changing the "climate" of the pinball machine.
By reducing the slope of the playing field, the effective acceleration
of gravity has been reduced, increasing the number of point-scoring collisions
before the still inevitable final victory of gravity. Interestingly, in
this altered pinball machine "climate," the individual trajectories
of the balls are ultimately as unpredictable as they were in the unaltered
version. The diagnostic signal of an altered pinball "climate"
is a highly significant increase in the number of free games awarded. A
secondary diagnostic signal, of course, is a noticeable decrease in the
received revenues from the machine. It thus is conceptually easy to change
the pinball machine's "climate." Detecting changes in the pinball
machine's "climate" and attributing their causes, however, can be
difficult: the signal is easily obscured by the largely random statistics
of a fundamentally chaotic system, not unlike the situation in the actual climate.
What do these pinball machine experiments have to do with understanding
models of the real climate? Projections for greenhouse warming scenarios
depend on a number of physical processes (see above) that are subtle, complex,
and not important to weather prediction. People outside the climate
field, however, frequently assert that climate models are ill posed and
irrelevant because they attempt to forecast climate behavior well
beyond the limits of deterministic predictability; if one cannot
predict weather more than a week in advance, the argument goes, the climate change problem
is impossible. Such statements are scientifically incorrect. The "weather
prediction" problem is essentially an initial value problem in which
the predictability of interesting details (i.e., weather) is fundamentally
limited by uncertainty in initial conditions, model errors, and instabilities
in the atmosphere itself. In contrast, climate change projections are actually
boundary value problems (e.g., altering the pinball machine's effective acceleration
of gravity), where the objective is to determine the changes in average
conditions (including the average features of the evolution toward the
new equilibrium) as the planet is heated or cooled by newly added processes
(e.g., increased CO2).
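The boundary-value view can be made concrete with the same kind of toy chaotic system used above. In the sketch below (a hypothetical analog, not a climate model), the Lorenz-63 parameter rho plays the role of an external "forcing": individual trajectories remain unpredictable, but the long-run statistics, here the time-mean of the z variable, shift systematically when the forcing is changed, just as the pinball machine's free-game statistics shift when its tilt is changed.

```python
# Toy analog of a "climate change" experiment in the Lorenz-63 system:
# change the boundary condition (rho) and compare long-run statistics.
def mean_z(rho, steps=200_000, dt=0.005, sigma=10.0, beta=8.0 / 3.0):
    """Time-mean of z on the Lorenz attractor (simple Euler integration)."""
    x, y, z = 1.0, 1.0, 1.0
    # Spin-up: discard the initial transient so we sample the attractor.
    for _ in range(20_000):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
    total = 0.0
    for _ in range(steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        total += z
    return total / steps

control = mean_z(rho=28.0)  # the "present-day" forcing
forced = mean_z(rho=40.0)   # a stronger forcing
print(control, forced)      # the time-mean state shifts with the forcing
```

No individual stretch of either run is predictable in detail, yet the difference in long-run means is a robust, reproducible response to the changed boundary condition.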
The differences between weather and climate models are further instructive
when one considers how their strengths and weaknesses are evaluated. Thanks
to massive amounts of weather and climate data, both kinds of models can
be evaluated by careful comparison with data from the real world. In practice,
however, the approaches to improving these superficially similar models
are very different. The weather models are evaluated by comparing model-based
forecasts, started up from real data on a given day, with what happened
hours to weeks later. Interestingly, one of the key problems with such
weather models is that they can easily reject their initial conditions
by drifting toward a model climate that is quite different from that of
the real data that was used to start up the detailed forecast calculation.
In effect, such a weather forecast model is deficient in the climate that
it would produce if released from the constraints of its starting data.
In sharp contrast, a climate model has the responsibility of simulating
the time-averaged climate for, say, today's conditions (or for conditions
around the year 1800). In this case, the focus of the scientific inquiry
is quite different. Here, attention is directed toward proper simulation
of the statistics of climate, such as the daily and annual temperature
cycles forced by the sun, the number and intensity of extratropical cyclones,
locations of deserts and rainy areas, strength and location of jet streams
and planetary waves, fidelity of El Niño simulation, location and
characteristics of clouds and water vapor, strength and location of ocean
currents, magnitude and location of snow accumulation and snow melt, and,
finally, amplitudes and patterns of natural variability of all of these
on a wide range of timescales (days to centuries).
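The flavor of such statistical evaluation can be sketched as follows. The series here is synthetic (a made-up seasonal cycle plus weather noise standing in for daily model output); the point is that climate evaluation compares statistics such as the mean annual cycle and the variability about it, not individual days:

```python
# Hypothetical sketch of climate-style evaluation on a synthetic
# daily temperature series (stand-in for model output).
import math
import random

random.seed(0)

DAYS_PER_YEAR = 365
YEARS = 30

# Synthetic "model output": seasonal cycle plus weather noise.
series = [
    10.0 + 12.0 * math.cos(2 * math.pi * d / DAYS_PER_YEAR)
    + random.gauss(0, 3)
    for d in range(DAYS_PER_YEAR * YEARS)
]

# Climatological annual cycle: average each calendar day across all years.
annual_cycle = [
    sum(series[y * DAYS_PER_YEAR + d] for y in range(YEARS)) / YEARS
    for d in range(DAYS_PER_YEAR)
]

# Variability statistic: standard deviation of daily anomalies about
# the annual cycle.
anomalies = [t - annual_cycle[i % DAYS_PER_YEAR] for i, t in enumerate(series)]
anomaly_std = (sum(a * a for a in anomalies) / len(anomalies)) ** 0.5

amplitude = max(annual_cycle) - min(annual_cycle)
print(amplitude, anomaly_std)
```

A model would be judged by how well such statistics (and their counterparts for storms, jets, El Niño, and so on) match the corresponding statistics of observations.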
Achieving all of this in a climate model is a daunting task because the
enormous wealth of phenomena in the climate system virtually requires the
use of judicious tuning and/or adjustment of various poorly defined processes
(such as clouds, or the fluxes of heat between atmosphere and ocean) to
improve the model's agreement with observed climate statistics. Such tunings
and adjustments are widespread, especially for the global-mean radiative
balance, where they ensure that the model reproduces the global-mean
features of the observed climate. If this is not done, a coupled model started up
with today's climate will tend to drift toward a less realistic climate.
These practices have been criticized as evidence that climate models have
no credibility for addressing the greenhouse warming problem. Interestingly,
such tunings and adjustments (or lack thereof) may have little to do with
the ability of a model to reduce its fundamental uncertainty in predicting
anthropogenic climate change. Recall that the key uncertainties highlighted
above (water vapor, cloud, and ice albedo feedbacks) revolve around how
such properties might change under added greenhouse gases. This is a set
of modeling problems that cannot be evaded by judicious model tuning or
adjustments. Likely to prove much more fruitful in the long run would be
improved fundamental modeling of the key processes that govern the most
important climate feedback processes as CO2
increases (e.g., clouds, water vapor, ice, ocean circulation).
Thus, the models are imperfect tools with which to make such climate-change
predictions. Does this mean we should shift our focus to other tools? Definitely
not. Statistically based models that use historical data are possible alternatives,
but they are of marginal validity, mainly because the Earth in recent history has
never experienced the rate of warming expected to result from the current
runup of infrared-active greenhouse gases. In this sense, the large, but
very slow, global-mean climate excursions of the past geological epochs
are instructive, but they are far from definitive as guidelines or analogs
for the next century.
The above considerations make it clear that there is no viable alternative
to coupled climate models for projecting future climate states and how
they might unfold. The physically based climate models have the huge advantage
of being fundamentally grounded in known theory as evaluated against all
available observations. There are indeed reasons to be skeptical of the
ability of such models to make quantitatively accurate projections of the
future climate states that will result from various added greenhouse gas
scenarios. Fortunately, the weak points of such climate models can be analyzed,
evaluated, and improved with properly focused, process-oriented measurements,
complemented by well-posed numerical experiments with various formulations
of the climate models.9
In short, the use of such climate models allows a systematic approach to
close the gap between theory and observations of the climate system. No
alternative approach comes close.
8. The pinball machine is a device designed for recreation and amusement that
allows the player to shoot steel balls (roughly 1 inch in diameter) into
an obstacle-strewn field of electronic bumpers that, when struck by the
ball, act to increase the net speed of the ball (superelastic rebound).
The playing field is slanted so that the ball enters at the highest point.
When all five balls have been trapped in the gutter, the game is over.
The object of the game is to keep the balls in play as long as possible
(through adroit use of flippers near the gutter that propel the ball back
uphill and away from the dreaded gutter). The longer the ball is in play,
the more it is in contact with bumper collisions that add to the number
of points earned. A sufficiently high score wins free replays. Thus, the
object of the game is for the player's skill to overcome gravity for as
long as possible, somewhat analogous to the efforts of ski jumpers and
pole vaulters.
9. Out of many
such examples, one of the more interesting is provided by the Department
of Energy's Atmospheric Radiation Measurements Program. At a heavily instrumented
site in Oklahoma (and at some lesser sites), intensive measurements are
made of horizontal wind, vertical velocity, temperature, water vapor, clouds,
latent heating, precipitation, short- and long-wave radiative fluxes, and
surface fluxes of heat, momentum, and water vapor. This comprehensive set
of measurements is being used to evaluate our current modeling capabilities
and deficiencies on cloud processes, "cloudy" radiative transfer,
convection (thunderstorm scale), and turbulence. These areas represent
some of the weakest aspects of the atmospheric parts of climate models.