WHAT IS AMR?

In this overview, we describe AMR as used at Berkeley Lab. AMR stands for Adaptive Mesh Refinement, a technique for automatically refining (or de-refining) regions of the physical domain in a finite difference calculation. For time-dependent calculations, the time step, as well as the grid spacing, may be a function of the level of refinement. Below we define the grid structure of AMR, describe how the grids are created and evolved to follow the flow, and give an example of how an algorithm might look in the AMR framework. The algorithms we use are quite involved and are fully documented in the publications available on this disk, but the simple examples here illustrate some of the more important aspects of adaptive algorithm design.

History of AMR

AMR has been under development for more than 15 years. The hierarchical structured grid approach now known as AMR was first developed by Berger and Oliger (1984) for hyperbolic partial differential equations. The approach to adaptive gridding used here was developed for conservation laws and demonstrated to be highly successful for gas dynamics by Berger and Colella (1989) in two dimensions. Bell, Berger, Saltzman and Welcome (1991) extended the methodology to three dimensions. More recently, AMR has been extended to a variety of problems and algorithm choices, including, but not limited to, the variable-coefficient Poisson and Helmholtz equations, systems of hyperbolic conservation laws governing inviscid gas dynamics, the compressible and incompressible Navier-Stokes equations, and the equations that govern reacting flows, such as those that occur in premixed and nonpremixed combustion. For more information about current and past applications, see the publications and the AMR research highlights.

Definition of Grid Hierarchy

Our approach to AMR is based on a nested hierarchy of logically-rectangular grids. Increasingly finer grids are embedded recursively inside the coarse grids until the solution is sufficiently resolved at the finest level. An error estimation procedure based on user-specified criteria evaluates where additional refinement is needed and grid generation procedures dynamically create or remove rectangular fine grid patches as resolution requirements change.

The grid hierarchy is composed of different levels of refinement ranging from coarsest (l = 0) to finest (l = l_max). Each level is represented as the union of rectangular grid patches of a given resolution. In our implementation, the refinement ratio is always 2 or 4, and the refinement ratio may differ in the different coordinate directions. For clarity of exposition, however, we will assume a constant refinement ratio, r, i.e. Delta x^(l+1) = Delta x^l / r. The refinement ratio may also differ between levels; e.g., one may use a factor of two refinement to define the first refined level, then a factor of four to define the next level. The grids are properly nested, in the sense that the union of grids at level l+1 is contained in the union of grids at level l for 0 <= l < l_max. Furthermore, the containment is strict in the sense that, except at physical boundaries, the level l grids are large enough to guarantee that there is a border at least one level l cell wide surrounding each level l+1 grid. However, a fine grid can cross a coarser grid boundary and still be properly nested; in this case, the fine grid has more than one parent grid. This is illustrated below.

[Figure: two AMR levels of refined grids]

Grids are properly nested, but may have more than one parent grid. The thick lines represent grids at the coarse level; the thin lines, grids at the fine level. (This set of grids was created for a problem with initial conditions specifying a circular discontinuity.) Grids at all levels are allowed to extend to the physical boundaries, so the proper nesting is not strict there.
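The nesting rules above can be sketched in code. The following is a minimal illustration, not the actual Berkeley Lab data structures: grids are index boxes, and a hierarchy is properly nested if each fine grid, coarsened by the refinement ratio and grown by a one-cell border, lies inside the union of coarse grids (the physical-boundary exception is ignored here).

```python
# Minimal sketch of a two-level AMR hierarchy with a proper-nesting check.
# Grid boxes and names are illustrative, not the production data structures.

def cells(box):
    """Enumerate the (i, j) cells of a box ((ilo, jlo), (ihi, jhi)), inclusive."""
    (ilo, jlo), (ihi, jhi) = box
    return {(i, j) for i in range(ilo, ihi + 1) for j in range(jlo, jhi + 1)}

def coarsen(box, r):
    """Index box that a fine box occupies on the next coarser level (ratio r)."""
    (ilo, jlo), (ihi, jhi) = box
    return ((ilo // r, jlo // r), (ihi // r, jhi // r))

def properly_nested(fine_boxes, coarse_boxes, r, buffer=1):
    """True if every fine box, coarsened by r and grown by `buffer` cells,
    lies inside the union of coarse boxes (physical boundaries ignored)."""
    coarse_union = set().union(*(cells(b) for b in coarse_boxes))
    for fb in fine_boxes:
        (ilo, jlo), (ihi, jhi) = coarsen(fb, r)
        grown = ((ilo - buffer, jlo - buffer), (ihi + buffer, jhi + buffer))
        if not cells(grown) <= coarse_union:
            return False
    return True

# A fine grid may cross a coarse-grid boundary (two parents) and still nest:
coarse = [((0, 0), (7, 3)), ((0, 4), (7, 7))]   # two abutting level-l grids
fine = [((4, 6), (11, 9))]                      # one level-(l+1) grid, r = 2
print(properly_nested(fine, coarse, r=2))       # True: straddles both parents
```

The example fine grid overlaps both coarse boxes, illustrating the multiple-parent case shown in the figure.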

Construction of Grid Hierarchy

An initial grid hierarchy is created at the start of any calculation, based on the initial data. As the simulation progresses, the grids may dynamically change to reflect the evolving solution. In both cases, the same procedures are used to create new grids. Construction of the grid hierarchy is based on error estimation criteria specified by the user to indicate where additional resolution is required. Cells requiring additional refinement at a given level are identified and tagged using these criteria. Error estimation may use Richardson extrapolation, as described in Berger and Colella (1989), or some other user-supplied criteria; how best to construct refinement criteria is still an open research question. The tagged cells are then grouped into rectangular patches using the clustering algorithm given in Berger and Rigoutsos (1991). The generated patches will, in general, contain cells that were not tagged for refinement. The grid efficiency of a new grid is the fraction of its cells that were tagged by the error estimation process; a grid efficiency criterion (typically 70%) specifies the minimum acceptable value. These rectangular patches are refined to form the grids at the next level. The process is repeated until either the error criteria are satisfied or a user-specified maximum level of refinement is reached. The proper nesting requirement is imposed at this stage.
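The tagging-and-clustering step can be illustrated with a much simplified variant of the Berger-Rigoutsos idea. The real algorithm chooses cut points using signatures and edge detection; this sketch simply bisects the longer direction of the bounding box until each patch meets the grid efficiency threshold. All names here are illustrative.

```python
# Simplified sketch of tagged-cell clustering in the spirit of Berger-Rigoutsos:
# take the bounding box of the tagged cells; if the grid efficiency (fraction
# of cells that are tagged) is below the threshold, bisect the longer side
# and recurse on the two halves.

def bounding_box(tags):
    ilo = min(i for i, _ in tags); ihi = max(i for i, _ in tags)
    jlo = min(j for _, j in tags); jhi = max(j for _, j in tags)
    return (ilo, jlo), (ihi, jhi)

def cluster(tags, min_efficiency=0.7):
    """Return rectangular patches covering `tags` (a set of (i, j) cells)."""
    if not tags:
        return []
    (ilo, jlo), (ihi, jhi) = bounding_box(tags)
    area = (ihi - ilo + 1) * (jhi - jlo + 1)
    if len(tags) / area >= min_efficiency or area == 1:
        return [((ilo, jlo), (ihi, jhi))]
    if ihi - ilo >= jhi - jlo:                  # bisect the longer direction
        mid = (ilo + ihi) // 2
        left = {t for t in tags if t[0] <= mid}
    else:
        mid = (jlo + jhi) // 2
        left = {t for t in tags if t[1] <= mid}
    right = tags - left
    return cluster(left, min_efficiency) + cluster(right, min_efficiency)

# Tag cells along a diagonal front; a single bounding box would be only
# ~23% efficient, so the clusterer splits it into tighter patches.
tags = {(i, i) for i in range(8)} | {(i, i + 1) for i in range(7)}
patches = cluster(tags)
```

Every tagged cell ends up inside some patch, and each returned patch either meets the 70% efficiency criterion or is a single cell.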

For time-dependent problems, at t=0 the initial data is used to create grids at levels 0 through l_max. (Grids have a user-specified maximum size; therefore more than one grid may be needed to cover the physical domain.) As the solution advances in time, the regridding algorithm is called every k (also user-specified) level-l steps to redefine grids at levels l+1 through l_max. Level 0 grids remain unchanged throughout the calculation. Grids at level l+1 are modified only at the end of level-l time steps, but because we subcycle in time, i.e. Delta t^(l+1) = Delta t^l / r, level l+2 grids can be created and/or modified in the middle of a level-l time step if l_max >= l+2. For elliptic problems, an initial grid hierarchy is defined (often a single level), the solution is computed, and a regridding operation can then adjust the grid hierarchy, after which the solution is recomputed. This can be repeated until the grid hierarchy is suitably refined.

When new grids are created at level l+1, the data on these new grids are copied from the previous grids at level l+1 where possible; otherwise, the data are interpolated in space from the underlying level l grids.

We note here that while there is a user-specified limit to the number of levels allowed, at any given time in the calculation there may be fewer levels in the hierarchy; i.e., l_max can change dynamically as the calculation proceeds, as long as it does not exceed the user-specified limit.

Time Integration with AMR

For the algorithms and calculations described on this CD, subcycling in time is used. Typically, then, the ratio of the time step to the grid spacing is the same at each level. The time integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids, and the data at different levels are then synchronized. A pseudocode version of an AMR algorithm is shown below.

   [Figure: pseudocode for the recursive AMR time-stepping algorithm]
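In runnable form, the recursive structure might look as follows. The integrator, boundary fill, and synchronization are stand-in stubs that only record the order of operations, so the sketch shows the subcycled call sequence rather than any actual numerics; all names are illustrative.

```python
# Schematic of the recursive time-stepping structure with subcycling:
# one level-l step drives r substeps at level l+1, followed by a
# synchronization once coarse and fine have reached the same time.

class Hierarchy:
    def __init__(self, finest_level, ratio):
        self.finest_level = finest_level
        self.ratio = ratio
        self.log = []                       # records (operation, level, time)

def fill_boundary(h, level, t):
    h.log.append(("fill", level, t))        # ghost cells: copy or coarse interp

def integrate(h, level, t, dt):
    h.log.append(("step", level, t))        # advance all grids at this level

def synchronize(h, level):
    h.log.append(("sync", level, None))     # average down + flux correction

def advance(h, level, t, dt):
    fill_boundary(h, level, t)
    integrate(h, level, t, dt)
    if level + 1 <= h.finest_level:
        for k in range(h.ratio):            # subcycle: dt_fine = dt / r
            advance(h, level + 1, t + k * dt / h.ratio, dt / h.ratio)
        synchronize(h, level)               # coarse and fine now at the same time

h = Hierarchy(finest_level=1, ratio=2)
advance(h, level=0, t=0.0, dt=1.0)
# one coarse step at t=0, then two fine substeps at t=0 and t=0.5, then a sync
```

With two levels and r = 2, the log shows one level-0 step followed by two level-1 substeps and a single synchronization, which is exactly the recursion described above.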

When solving the governing partial differential equations on a fine grid (i.e. one finer than the base grid), Dirichlet data obtained from coarser levels are used as boundary conditions at coarse/fine boundaries not coinciding with boundaries of the physical domain. This results in differences between the fluxes computed at the coarse level and the time average of the fluxes computed at the finer level. This mismatch is corrected in the synchronization step.

In the case of hyperbolic conservation laws, the synchronization step is fairly simple. Essentially all of the inter-level data communication occurs in two phases of the algorithm: the coarser grids supply boundary data in order to integrate the finer grids, and the coarse and fine grids are synchronized at the end of fine-grid time steps, when the coarse- and fine-grid solutions have reached the same time. For the case considered here, boundary data is provided by filling ghost cells in a band around the fine-grid data, whose width is determined by the stencil of the finite difference scheme. If this data is available from grids at the same level of refinement, it is provided by a simple copy; otherwise, it is obtained by interpolating the coarse-grid data in time and space.
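As a concrete, much simplified illustration of the interpolation case, the following 1D sketch fills one fine ghost cell from coarse data: linear interpolation in time between the two coarse states bracketing the fine time, then linear interpolation in space to the fine cell center. The production schemes use conservative, limited interpolation; the function name and layout here are hypothetical.

```python
# Sketch of filling one fine-grid ghost cell from coarse data in 1D.
# The interior of the coarse grid is assumed (no domain-boundary handling).

def ghost_from_coarse(coarse_old, coarse_new, t_frac, i_fine, r):
    """Value for fine cell i_fine, where t_frac in [0, 1] is the position of
    the fine time within the coarse step and r is the refinement ratio."""
    # linear interpolation in time at each coarse cell
    coarse_t = [(1 - t_frac) * a + t_frac * b
                for a, b in zip(coarse_old, coarse_new)]
    # fine cell center expressed in coarse cell-center coordinates
    x = (i_fine + 0.5) / r - 0.5
    i = int(x)                 # bracketing coarse cell (interior assumed)
    s = x - i
    # linear interpolation in space between the two bracketing coarse cells
    return (1 - s) * coarse_t[i] + s * coarse_t[i + 1]
```

For coarse data that is linear in space (or constant in space and linear in time), this reproduces the exact value at the fine cell center, which is the minimal accuracy one would want from such a fill.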

When the coarse and fine grids reach the same time and we synchronize, there are two corrections that we need to make. First, for all coarse cells covered by fine grid cells, we replace the coarse data by the volume weighted average of the fine grid data. Second, because coarse cells adjacent to the fine cells were advanced using different fluxes than were used for the fine cells on the other side of the interface, we must correct the coarse cell values by adding the difference between the coarse- and fine-grid fluxes. We note that for the case of explicit algorithms for conservation laws, the synchronization only involves corrections to the coarse grid.
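The two corrections can be sketched in 1D with refinement ratio r; the function names and interfaces are illustrative, not the actual library API. The refluxing sign follows the conservative update u_i -= (dt/dx)(F_right - F_left): the coarse cell to the left of the fine region has its right-edge coarse flux replaced by the time average of the fine fluxes.

```python
# Sketch of the two synchronization corrections in 1D, refinement ratio r:
# (1) replace coarse data covered by fine cells with the average of the
#     fine data; (2) reflux: correct the coarse cell just outside the fine
#     region with the difference between the coarse flux and the time
#     average of the fine fluxes at the shared interface.

def average_down(u_coarse, u_fine, i0, r):
    """Overwrite coarse cells [i0, i0 + len(u_fine)//r) with fine averages."""
    for j in range(len(u_fine) // r):
        u_coarse[i0 + j] = sum(u_fine[j * r:(j + 1) * r]) / r

def reflux_left(u_coarse, i0, flux_coarse, fine_fluxes, r, dt, dx):
    """Correct coarse cell i0 - 1 (left of the fine region): its right-edge
    update used flux_coarse, while the fine grid saw the time-averaged
    fine fluxes (one per substep) at the same interface."""
    flux_fine_avg = sum(fine_fluxes) / r
    # undo the coarse outflow, apply the time-averaged fine outflow
    u_coarse[i0 - 1] += (dt / dx) * (flux_coarse - flux_fine_avg)

# usage: coarse cells 1..2 are covered by four fine cells (r = 2)
u = [1.0, 5.0, 5.0]
average_down(u, [2.0, 4.0, 6.0, 8.0], i0=1, r=2)
reflux_left(u, i0=1, flux_coarse=1.0, fine_fluxes=[0.5, 0.5], r=2, dt=0.1, dx=1.0)
```

Note that, as stated above for explicit conservation-law algorithms, both corrections modify only the coarse data.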

Extensions of the adaptive methodology to more complex systems have shown that the basic methodology underlying the synchronization procedure for coarse and fine data provides a robust and powerful construct for designing adaptive algorithms. Although the algorithm becomes considerably more complex, this approach works not only for elliptic and parabolic partial differential equations but also for hybrid discretizations in which different types of models are used at different levels of refinement. See ``A Conservative Adaptive Projection Method for the Variable Density Incompressible Navier-Stokes Equations'' for the details on adaptive projection methods for incompressible flow, and ``Adaptive Mesh and Algorithm Refinement'' for details on the hybrid algorithm.

Parallelization of AMR Algorithms

Our approach to parallelization is a coarse-grained, grid-based parallelism; i.e., it is based on an efficient distribution of grids to processors. The key elements of the approach are a dynamic programming load-balancing technique, based on the classical knapsack algorithm, to distribute work to processors, and a software methodology for managing data distribution and communications. The methodology is based on a message-passing model that exploits the coarse-grained parallelism inherent in the algorithms.
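The actual load balancer uses a knapsack-based dynamic programming method; the following greedy sketch captures the goal, if not the algorithm itself: assign each grid's work, here taken as proportional to its cell count, to the currently least-loaded processor, largest grids first.

```python
# Greedy sketch of grid-based load balancing (the production scheme uses a
# knapsack-based dynamic programming method; this is a simpler stand-in).
import heapq

def distribute(grid_workloads, nprocs):
    """Assign grids (list of cell counts) to processors, largest first,
    always onto the currently least-loaded processor."""
    heap = [(0, p) for p in range(nprocs)]       # (current load, processor id)
    heapq.heapify(heap)
    assignment = {}
    order = sorted(range(len(grid_workloads)),
                   key=lambda g: grid_workloads[g], reverse=True)
    for g in order:
        load, p = heapq.heappop(heap)            # least-loaded processor
        assignment[g] = p
        heapq.heappush(heap, (load + grid_workloads[g], p))
    return assignment

work = [64, 48, 32, 32, 16, 8]                   # cells per grid
assign = distribute(work, nprocs=2)
```

Because the parallelism is coarse-grained, balancing whole grids like this is what makes the message-passing model efficient: each processor advances its own grids and communicates only at ghost-cell fills and synchronization.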

The dynamic nature of the adaptivity in time-dependent simulations leads to a dynamic and heterogeneous work load, making it considerably more difficult to implement this type of methodology on modern parallel computers, particularly distributed-memory architectures. The software framework that facilitates the development of adaptive algorithms for multiple-instruction, multiple-data (MIMD) architectures is described in detail in ``Parallelization of structured, hierarchical adaptive mesh refinement algorithms''.

A major departure of our current approach from previous versions is the development of software infrastructure that provides a general framework for parallel, block-structured AMR algorithms. In the present approach, data distribution and communication are hidden in C++ class libraries that isolate the applications developer from the details of the parallel implementation.

References

M. J. Berger and J. Oliger, Adaptive Mesh Refinement for Hyperbolic Partial Differential Equations, J. Comput. Phys., 53, pp. 484-512, 1984.

M. J. Berger and P. Colella, Local Adaptive Mesh Refinement for Shock Hydrodynamics, J. Comput. Phys., 82, pp. 64-84, 1989.

M. J. Berger and I. Rigoutsos, An Algorithm for Point Clustering and Grid Generation, New York University CIMS Report NYU-501, 1991.

J. B. Bell, M. J. Berger, J. S. Saltzman and M. Welcome, Three Dimensional Adaptive Mesh Refinement for Hyperbolic Conservation Laws, LLNL Report UCRL-JC-108794, Dec. 1991.

