Climate Change/Science/Climate Modeling

Climate models come in many forms, from very simple energy-balance models to fully coupled, three-dimensional atmosphere-ocean-land numerical models. As computers have become faster, climate science has advanced commensurately. The equations that govern how fluids move in time and space (the Navier-Stokes equations) are difficult to solve, and when all the scales of motion and physical processes (radiative transfer, precipitation, etc.) are incorporated, the resulting problem is impossible to solve analytically. Instead, climate scientists turn these systems of equations into a series of computer programs. The resulting set of programs is, in some cases, a "climate model."

When the model is used to approximate the equations of motion on a sphere, it can be called a general circulation model (GCM). These are generic models, which can be specialized to simulate the ocean, atmosphere, or other fluid problems. Mostly because of limitations on computing power, these models do not resolve all scales of motion; instead, the equations are solved on a grid of discrete points. Most modern atmospheric GCMs are run with horizontal grid spacing (the distance between adjacent grid points) of around 100 km and with around 30 vertical levels. The exact resolution depends on details of the model and the application. Because of this coarse grid spacing, small-scale (or "sub-grid-scale") phenomena (like individual clouds or even hurricanes) are not explicitly resolved. For detailed calculations of smaller scales, more specialized numerical models are often employed, though there are some very high resolution GCMs (e.g. Japan's NICAM[1]). To incorporate the effects of sub-grid-scale phenomena, conventional GCMs rely on statistical rules, called parameterizations, that describe how the processes work on average given the conditions within the grid cell. Parameterizations can be very simple or very complicated, depending on the complexity of the process and the level of understanding of its statistical behavior. Much of the improvement in GCMs today comes directly from improving parameterizations, either by incorporating more elaborate rules to better match measured quantities or by using more sophisticated theoretical arguments for how the physics should work.
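
To get a feel for these numbers, here is a minimal back-of-the-envelope sketch in Python of how many grid points a global model carries at roughly 100 km spacing with 30 levels; the values are the illustrative ones from the paragraph above, not any particular model's configuration.

```python
import math

# Back-of-the-envelope grid sizing for a global atmospheric model.
# Assumed values (from the text, illustrative only): ~100 km horizontal
# spacing and ~30 vertical levels.
EARTH_RADIUS_KM = 6371.0
spacing_km = 100.0
n_levels = 30

# Points along a meridian (pole to pole) and around the equator.
n_lat = int(math.pi * EARTH_RADIUS_KM / spacing_km)        # ~200
n_lon = int(2.0 * math.pi * EARTH_RADIUS_KM / spacing_km)  # ~400

total = n_lat * n_lon * n_levels
print(f"{n_lat} x {n_lon} x {n_levels} = {total:,} grid points")
# ~2.4 million points, each updated at every time step of the simulation.
```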

Kinds of Models

There are many classes of models, and within each class there are many implementations and variations. It is impossible to enumerate and describe every climate model that has ever been developed; even doing so for the published literature would be prohibitively difficult. In fact, there are entire volumes devoted to the history of numerical modeling of just the atmosphere; the American Institute of Physics has a brief description of AGCMs available online [2]. Here we discuss several classes of models, with an emphasis on atmospheric models. The discussion closely follows that of Henderson-Sellers and McGuffie (1987), which is an excellent resource on the subject (and has an updated edition).

First of all, we restrict ourselves to numerical models, specifically those designed to be solved with computers. More generally, any equation or set of equations that represents the climate system is a climate model. Some of these can be solved analytically, but those are highly simplified models, which are sometimes incorporated in numerical models.

The ultimate goal of climate models is to represent all physical processes that are important for the evolution of the climate system. This is a lofty goal, and will never truly be realized. The climate system contains important contributions and interactions among the lithosphere (the solid Earth), the biosphere (e.g., marine phytoplankton, tropical rainforests), atmospheric and oceanic chemistry (e.g., stratospheric ozone), and even molecular dynamics (e.g. radiative transfer). In fluid dynamics, some systems are now modeled using "direct numerical simulation" (DNS), in which (nearly) all the active scales are explicitly resolved. This will never be feasible for the climate system: we cannot possibly represent every atom in the climate system, as doing so would require a computer of essentially the same scale as the system itself. Instead, climate modeling is limited to truly modeling the system: simplifying assumptions and empirical laws are used, the resolved motions are chosen to match the problem and/or the computing resources, and other processes are parameterized. However, these comprehensive climate models are not the only way to model the climate system.

Simpler models have been developed over the years for many reasons. Historically, one common reason was the computational cost of running large computers; simpler models have fewer processes to represent, and often have fewer points in space and time (lower resolution). Two extremely simple classes of climate models are one-dimensional energy balance models and one-dimensional radiative-convective models. The single resolved dimension is latitude (the north-south direction) in the former and altitude (a vertical column) in the latter.

A typical energy balance model (EBM) solves a small set of equations for the average temperature, T, as a function of latitude, φ. These models were introduced independently by Budyko and by Sellers in 1969. They are solved for the equilibrium temperature at each latitude based on the incoming and outgoing radiative fluxes and the horizontal transport of energy. The radiative fluxes are (usually) simple schemes for the radiation reaching the surface, and often include a temperature-dependent albedo (reflectivity) to represent the ice-albedo feedback. The horizontal transport is typically given by an eddy diffusion term, which is just a coefficient multiplied by the meridional (north-south) temperature gradient. One of the most interesting aspects of these simple models is that they already produce multiple equilibria, having solutions for ice-free and ice-covered Earths as well as a more temperate solution (like the current climate). This result spurred much research into the sensitivity of the climate system.
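
As a concrete illustration, here is a minimal sketch of a Budyko-type EBM in Python. It is not a reproduction of Budyko's or Sellers's published formulations: the parameter values (Q, A, B, the transport coefficient, the albedos, and the ice threshold) are illustrative choices in the spirit of the literature, and area weighting is omitted for brevity. Starting the iteration warm versus cold lands on two of the multiple equilibria described above.

```python
import numpy as np

nlat = 90
lats = np.linspace(-89, 89, nlat)        # latitude band centers (degrees)
x = np.sin(np.deg2rad(lats))

Q = 340.0                                # global-mean insolation (W/m^2)
s = 1.0 - 0.48 * 0.5 * (3 * x**2 - 1)    # Legendre-polynomial fit to annual insolation
A, B = 203.3, 2.09                       # outgoing radiation = A + B*T (T in deg C)
C = 3.8                                  # horizontal transport coefficient (W/m^2/K)
T_ice = -10.0                            # below this, the surface is ice-covered
alb_ice, alb_free = 0.62, 0.30

def albedo(T):
    return np.where(T < T_ice, alb_ice, alb_free)

def equilibrate(T):
    """Fixed-point iteration toward the equilibrium temperature at each latitude."""
    for _ in range(1000):
        Tmean = np.mean(T)               # crude global mean (area weighting omitted)
        absorbed = Q * s * (1.0 - albedo(T))
        # Balance: absorbed = A + B*T + C*(T - Tmean)  =>  solve for T
        T = (absorbed - A + C * Tmean) / (B + C)
    return T

T_warm = equilibrate(np.full(nlat, 15.0))    # warm start -> temperate solution
T_cold = equilibrate(np.full(nlat, -30.0))   # cold start -> ice-covered solution
print(f"warm start: global mean {T_warm.mean():.1f} C")
print(f"cold start: global mean {T_cold.mean():.1f} C")
```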

Radiative-convective models (RCMs) are essentially models of an atmospheric column. They can be used to represent the global average atmosphere, a particular latitude (zone), or a particular location. The resolved dimension is vertical, so all the horizontal fluxes (like winds and advected scalars such as temperature and moisture) must be passed to the column somehow. The early RCMs (due largely to S. Manabe and colleagues) have a background temperature structure (lapse rate) and a treatment of radiative fluxes through the column. When the radiative heating of the column pushes the lapse rate beyond a critical (threshold) value, a "convective adjustment" is applied to reduce the instability. Given constant boundary conditions, the model will equilibrate such that the energy budget is balanced, giving a model of the vertical structure (especially the temperature structure) of the atmosphere. The early RCMs were used to explore the effects of increasing carbon dioxide in the atmosphere.
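
A minimal sketch of the convective adjustment step, under the assumption of equal-mass layers and an illustrative 6.5 K/km critical lapse rate (a common textbook choice, not a claim about any specific published RCM):

```python
import numpy as np

z = np.arange(0, 12000.0, 500.0)   # height levels (m)
T = 288.0 - 0.0095 * z             # a too-steep profile: 9.5 K/km lapse rate
GAMMA_CRIT = 0.0065                # critical lapse rate (K/m), ~6.5 K/km

def convective_adjustment(T, z, gamma=GAMMA_CRIT, iterations=50):
    """Relax any layer pair steeper than gamma back to the critical lapse rate."""
    T = T.copy()
    for _ in range(iterations):    # sweep repeatedly: one pass can destabilize neighbors
        for k in range(len(z) - 1):
            dz = z[k + 1] - z[k]
            lapse = (T[k] - T[k + 1]) / dz
            if lapse > gamma:
                # Shift energy between the two (equal-mass) layers so the pair
                # ends exactly at the critical lapse rate, conserving their mean.
                excess = (lapse - gamma) * dz / 2.0
                T[k] -= excess
                T[k + 1] += excess
    return T

T_adj = convective_adjustment(T, z)
print("surface T before/after adjustment:", T[0], round(T_adj[0], 2))
```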

There are also combinations of EBMs with RCMs that give a simple two-dimensional representation of radiative-convective equilibrium.

Another class of two-dimensional model is the axially symmetric model used, for example, by Held & Hou (1980) and Lindzen & Hou (1988). This is a dynamical model only, and has been used to study the meridional circulation in the absence of baroclinic eddies (midlatitude storm systems). While not truly climate models, these simple dynamical models have provided important theoretical understanding of the atmospheric circulation.

In the ocean, there are simple box models that are somewhat analogous to the axially symmetric models of the meridional circulation of the atmosphere. These box models trace back at least to Stommel (1961), who used one to show that the thermohaline circulation in the Atlantic Ocean has multiple equilibria.
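
Here is a minimal sketch of a Stommel-type two-box model in its standard nondimensional form. The variable names and the forcing value mu = 0.2 are illustrative assumptions; the point is that two different starting salinity contrasts settle into two different stable circulations.

```python
# x is the box-to-box salinity contrast scaled by the (fixed) temperature
# contrast, q = 1 - x is the overturning strength, and mu is the freshwater
# forcing. mu = 0.2 sits in the multiple-equilibria regime of this model.

def integrate(x0, mu=0.2, dt=0.01, nsteps=20000):
    x = x0
    for _ in range(nsteps):
        x += dt * (mu - abs(1.0 - x) * x)   # dx/dt = forcing - |q| * x
    return x

x_thermal = integrate(x0=0.1)   # weak salinity contrast: thermally driven state
x_haline  = integrate(x0=1.5)   # strong salinity contrast: salinity driven state
for name, x in [("thermally driven", x_thermal), ("salinity driven", x_haline)]:
    print(f"{name}: x = {x:.3f}, overturning q = {1.0 - x:+.3f}")
```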

Other two-dimensional models also exist. For example, there are simple equivalent barotropic models of the atmosphere. However, these have mostly been used in numerical weather prediction and theoretical atmospheric dynamics.

Occupying a higher region of the modeling hierarchy are three-dimensional numerical models. In terms of dynamics, these usually simulate fully turbulent fluids, and can be applied to spherical geometry or to some simplified geometry like the beta-plane. This class of models should probably be divided into several subclasses. Some are coupled models (atmosphere + ocean, for example), while others contain only a single component of the climate system. Some are described as climate models of intermediate complexity, a label that covers a large range of models.

At and around the top of the climate model hierarchy are general circulation models (GCMs), sometimes called global climate models. These are fully three-dimensional representations of the atmosphere and/or ocean solved in spherical geometry. They are designed to conserve energy (the first law of thermodynamics), momentum (Newton's second law of motion), mass (the continuity equation), and (usually) moisture. We will discuss GCMs in much greater detail later, including their primary assumptions and the uncertainty associated with their results. GCMs are the best available tools for studying climate change.

What Models Tell Us

What Uncertainty in Simulations Means

Why can't climate models predict climate change perfectly? There are many answers to this question, and most of them are at least partly true! Here we briefly describe what is meant by "uncertainty" in climate modeling.

Before starting to describe the uncertainty associated with climate models, it is important to emphasize that climate models are the best tools currently available for studying the climate of Earth and other planets. Although they are far from perfect, sophisticated climate models embody the physical processes thought to be important in the real climate. That there is some uncertainty most decidedly does not mean that we cannot trust climate model results, nor does it mean there is built-in "wiggle room" in the models.

Different climate models (and here we mean sophisticated numerical models, usually of the whole globe) get different results for the same experiment. These differences are due largely to different ways of representing physical processes that happen on scales smaller than the distance between model grid points. These processes are usually called sub-grid-scale processes, and their representations in numerical models are known as parameterizations. The main idea of a parameterization is that it uses information from the large scale and infers (based on some rules) what is likely happening on smaller scales. A good example of this is wind near mountains. GCMs might have grid points only every 100 km, but mountain ranges can have very drastic elevation changes over much shorter distances. Rather than try to resolve the scale of the mountains, which would be very hard with current computers, GCMs have a sub-grid-scale topography parameterization. Depending on the details, it may affect the "roughness" of the surface or the gravity waves induced by terrain, but the idea is the same: given that mountains change height on small scales, the GCM tries to model that behavior, at least well enough to capture how mountains affect the large-scale circulations the GCM does resolve. Since parameterizations differ, and given the large number of parameterized processes, it is no wonder GCMs get different results. The fact that the results are not more different is a testament to our current level of understanding of climate processes.
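
To make the idea concrete, here is a deliberately toy sketch, not any real GCM's scheme: coarse grid cells store only statistics of a high-resolution terrain field, and an invented drag rule converts the unresolved roughness into a tendency on the resolved wind. Every name and constant here (fine_topo, c_drag, the drag formula) is made up for illustration; real orographic schemes are far more elaborate.

```python
import numpy as np

# Fake ~1 km terrain heights (m) standing in for real high-resolution topography.
rng = np.random.default_rng(0)
fine_topo = rng.gamma(2.0, 400.0, size=(400, 400))

# Coarse-grain to a "GCM grid": each 100 km cell covers 100 x 100 fine points.
cells = fine_topo.reshape(4, 100, 4, 100)
mean_height = cells.mean(axis=(1, 3))   # the only terrain the grid "sees"
subgrid_std = cells.std(axis=(1, 3))    # unresolved roughness per cell

def orographic_drag(wind_speed, sigma, c_drag=1.0e-7):
    """Invented rule: drag per unit mass grows with wind speed and sub-grid relief."""
    return -c_drag * sigma * wind_speed**2

u = 10.0  # resolved wind in one grid cell (m/s)
print(f"sub-grid relief: {subgrid_std[0, 0]:.0f} m, "
      f"drag tendency: {orographic_drag(u, subgrid_std[0, 0]):.2e} m/s^2")
```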

Imagine taking a large number of GCMs and running them all the same way. For example, the IPCC had modeling centers around the world run the same experiments (basically, increasing the CO2 concentration) to compare each model's climate response. Because the models are built differently (in some cases there are very fundamental differences between models), the results vary from model to model. If we take the climate response from each model (for example, the change in surface air temperature for a given change in radiative forcing) and find the average and standard deviation, that gives us an estimate of the "uncertainty" in the climate response. This is done in lieu of a real experiment because we only have one Earth, and definitely not enough time to run so many experiments!
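
In code, the statistic is as simple as it sounds; the sketch below uses made-up numbers standing in for each model's simulated warming (in kelvin) for the same forcing change.

```python
import statistics

# Hypothetical model responses (K) to an identical forcing experiment.
model_responses = {
    "Model A": 2.1, "Model B": 3.4, "Model C": 2.8,
    "Model D": 4.1, "Model E": 3.0, "Model F": 2.5,
}

values = list(model_responses.values())
mean = statistics.mean(values)
spread = statistics.stdev(values)   # sample standard deviation across models

print(f"multi-model mean response: {mean:.2f} K")
print(f"inter-model spread (1 sigma): {spread:.2f} K")
```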

The above method gives a measure of expected climate response based on very different models. Another method is to use the same GCM, but slightly change parameters or even parameterizations to determine the strength of different processes. As a simple example, imagine that some GCM uses a single value for the albedo (reflectivity) of stratocumulus clouds. If the GCM is run ten times, each with a different value for that parameter, the results of a climate change experiment will change. How much the results differ will determine that GCM's sensitivity to stratocumulus albedo. This gives another measure of "uncertainty," since that model assumes there is only one value of the albedo, which may not be true in the real atmosphere. The distributed computing project ClimatePrediction.net uses such a methodology to study processes important to climate sensitivity.
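
A minimal sketch of the perturbed-parameter idea, using a toy zero-dimensional energy balance in place of a GCM (all numbers are illustrative; 3.7 W/m^2 is the commonly quoted forcing for doubled CO2, and the "cloud albedo" here simply stands in for the planetary albedo):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W/m^2/K^4)
S0 = 1361.0       # solar constant (W/m^2)

def equilibrium_T(albedo, forcing=0.0):
    """Planetary equilibrium temperature for a given albedo and extra forcing."""
    absorbed = S0 / 4.0 * (1.0 - albedo) + forcing
    return (absorbed / SIGMA) ** 0.25

# Re-run the same toy model, perturbing only the uncertain albedo parameter;
# the spread of the responses measures the sensitivity to that parameter.
for cloud_albedo in (0.28, 0.30, 0.32, 0.34):
    dT = equilibrium_T(cloud_albedo, forcing=3.7) - equilibrium_T(cloud_albedo)
    print(f"albedo {cloud_albedo:.2f}: warming for 2xCO2-like forcing = {dT:.2f} K")
```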

Another answer to our original question is that the system is not perfectly predictable. The climate system is chaotic, or at least "sensitive to initial data." We know the equations that govern fluid motion, and we have a pretty good idea of the physical processes that need to be included, but the system of equations has many solutions: even if the system is perfectly deterministic (no random fluctuations), unless we also know the initial conditions perfectly, we may not get the right answer. In fact, in chaotic systems it has been shown that arbitrarily small errors in the initial conditions can give wildly different results after some amount of time. While Earth's climate is unlikely to be quite that sensitive, this does mean that we shouldn't expect a perfect long-term (greater than two weeks) weather forecast on the local television station any time soon (or ever). Note that the science of climate prediction differs fundamentally from that of weather prediction. In weather prediction, the sensitivity to initial conditions is a basic limitation, since perfect knowledge of the initial conditions is impossible. Climate projections are far less sensitive to initial conditions; the problem changes from an initial-value problem (weather) to a boundary-value problem (climate).
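
The classic demonstration of this sensitivity is the Lorenz (1963) system, a three-variable toy model of convection. In the sketch below, two runs whose initial conditions differ by one part in 10^8 end up in completely different states:

```python
# Forward-Euler integration of the Lorenz (1963) system with the standard
# parameters; crude but sufficient to show divergence of nearby trajectories.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # perturbed initial condition
for step in range(3001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}  separation in x: {abs(a[0] - b[0]):.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```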

References

1. M. Satoh, T. Matsuno, H. Tomita, H. Miura, T. Nasuno, and S. Iga (2008). "Nonhydrostatic icosahedral atmospheric model (NICAM) for global cloud resolving simulations." Journal of Computational Physics, 227(7), 3486-3514.