High-Performance Computing in Groundwater Modeling
By Laura Toran, Jack Gwo, G. (Kumar) Mahinthakumar, Eduardo D’Azevedo, and Chuck Romine


From left: Chuck Romine, G. (Kumar) Mahinthakumar, Laura Toran, Eduardo D’Azevedo, and Jack Gwo discuss a poster on groundwater remediation. They have developed several models of groundwater contaminant flow using high-performance computing. Photograph by Tom Cerniglio.

High-performance computing is opening up new opportunities for modeling and understanding complex groundwater systems. Computationally intense models that describe concentrations and movements of specific contaminants in groundwater may be more effective than simple groundwater models in evaluating remediation options.


Fig. 1. Burial grounds for low-level radioactive waste on the Oak Ridge Reservation. Leaching of this waste is partly responsible for the area’s contaminated groundwater.

Consider the plight of a hazardous waste site manager supervising cleanup and containment operations for an area having contaminated groundwater (see Fig. 1). She faces a variety of scientific, engineering, and political challenges. Typically, the waste source is unknown, the complex geology of the site creates uncertainty about the transport of groundwater contaminants, and the funding to find answers is severely limited. In the past, available modeling tools were narrowly focused predictive models, fraught with uncertainty. Today, however, new computationally intense groundwater models that account for uncertainty may provide new insight.

High-performance computing can improve groundwater modeling by providing the ability to (1) use larger and more detailed computational grids, (2) run more simulations under different conditions to evaluate remediation options and determine uncertainty, and (3) develop more complete computational models of physical processes. High-performance computing uses high-speed, large-memory-capacity computers such as the parallel computers at ORNL’s Center for Computational Sciences (CCS). These parallel computers link processors or individual workstations together to increase computational power.

Not all modeling problems would benefit from high-performance computing, but a key payoff of research in this area is that the lessons learned carry over to other applications. We give several examples of practical groundwater contaminant transport problems that have been solved with high-performance computing at ORNL. These examples illustrate both the advantages and the difficulties of using high-performance computing for environmental problems.

Subsurface Heterogeneity and Groundwater Remediation


Fig. 2. Various remediation strategies being analyzed by ORNL researchers using high-performance computing.

Geologic heterogeneity is one of the most important factors affecting groundwater remediation. Two common remediation techniques are pump-and-treat remediation and bioremediation (see Fig. 2). In pump-and-treat remediation—which involves pumping the contaminated groundwater out, cleaning it up, and (sometimes) reinjecting it into the ground—extraction of dissolved contaminants from regions of low permeability is difficult. Bioremediation—the use of bacteria and other microbes to break down pollutants—is hampered because regions of low permeability can impede nutrient delivery or cause microbial clogging. On the other hand, regions of high permeability can shorten residence times for bacteria, preventing them from effectively degrading the contaminants.

Although it is very difficult to obtain fine-scale heterogeneity information directly from field measurements, geostatistical techniques can generate this information synthetically (with a degree of uncertainty) from sparse field data. Once such data are generated, resolving fine-scale heterogeneity effects in large three-dimensional field-scale systems can lead to very large problems requiring hundreds of millions of grid points. A combination of powerful algorithms and current-generation high-performance computers provides a means of solving these systems very efficiently.
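
To make the idea concrete, the Python sketch below draws one equally probable realization of a correlated log-permeability field from an assumed exponential covariance model. It is a minimal stand-in for the geostatistical simulators used in this work (which also condition their realizations on the available field measurements), and every parameter value in it is illustrative.

```python
import numpy as np

def gaussian_random_field(nx, ny, dx, corr_len, mean_logk, sigma_logk, seed=0):
    """Draw one correlated log-permeability field on an nx-by-ny grid using
    Cholesky factorization of an exponential covariance model. Feasible only
    for small grids; real geostatistical simulators use sequential or
    spectral methods to reach millions of nodes."""
    rng = np.random.default_rng(seed)
    # Grid-point coordinates
    xs, ys = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    # Pairwise distances and exponential covariance C(h) = s^2 exp(-h / lambda)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    cov = sigma_logk**2 * np.exp(-d / corr_len)
    # Correlated sample: mean + L z, where cov = L L^T (small jitter for stability)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(pts)))
    logk = mean_logk + L @ rng.standard_normal(len(pts))
    return logk.reshape(nx, ny)

# One equally probable realization; hydraulic conductivity K = exp(logk).
# All parameter values here are hypothetical.
logk = gaussian_random_field(nx=32, ny=32, dx=1.0, corr_len=5.0,
                             mean_logk=-9.0, sigma_logk=1.5)
K = np.exp(logk)
```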


Fig. 3. Simulations on the Intel Paragon analyze the impact of heterogeneity on pump-and-treat remediation.

ORNL researchers have developed parallel solution algorithms based on multigrid and Krylov subspace methods that can solve groundwater flow and transport problems requiring tens of millions of finite elements in less than a minute on the 1024-processor, massively parallel Intel Paragon supercomputer. These groundwater codes are now being used to study the impact of soil heterogeneity on various remediation strategies. Figure 3 shows the results of a hypothetical simulation performed to determine the viability of pump-and-treat remediation under heterogeneous conditions. The simulation used about 3.2 million finite elements and ran on 128 processors of the Intel Paragon XP/S 150 at ORNL. The heterogeneous permeability field was generated with a geostatistical simulator using sparse field data from a sandy aquifer in New England. In the scenario, contamination accumulates over approximately 24 years of landfill leaching, and remediation takes approximately 5 years to remove 95% of the contaminants. The extraction well was placed at the vertically averaged, concentration-weighted centroid of the plume.
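
The production solvers are parallel multigrid and Krylov codes far beyond the scope of a short example, but the serial Python sketch below shows the computational kernel they accelerate: assembling a heterogeneous steady-flow operator and solving it with a conjugate-gradient (Krylov) iteration. It uses a finite-difference stand-in for the finite-element operators described here, and the grid size, boundary treatment, and random conductivity field are all illustrative choices.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def flow_matrix(K):
    """Assemble a 5-point finite-difference operator for steady Darcy flow,
    -div(K grad h) = f, on a 2-D grid with unit spacing. Face transmissibilities
    are harmonic means of the adjacent cell conductivities; the boundary is
    held at h = 0 (Dirichlet)."""
    nx, ny = K.shape
    n = nx * ny
    idx = lambda i, j: i * ny + j
    A = sp.lil_matrix((n, n))
    for i in range(nx):
        for j in range(ny):
            diag = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nx and 0 <= jj < ny:
                    t = 2.0 * K[i, j] * K[ii, jj] / (K[i, j] + K[ii, jj])
                    A[idx(i, j), idx(ii, jj)] = -t
                    diag += t
                else:
                    diag += K[i, j]  # face to a zero-head boundary ghost cell
            A[idx(i, j), idx(i, j)] = diag
    return A.tocsr()

# Hypothetical lognormal conductivity field and a single injection-well source
K = np.exp(np.random.default_rng(1).normal(-9.0, 1.5, size=(40, 40)))
A = flow_matrix(K)
b = np.zeros(A.shape[0])
b[A.shape[0] // 2] = 1.0e-6
h, info = cg(A, b)  # Krylov (conjugate-gradient) solve; info == 0 on success
```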


Cost-Benefit Analysis

One of the most effective ways of reducing remediation costs is to improve decision making in the presence of uncertainty. The complexity of underground environments and the paucity of geologic data, the varying efficiencies of remediation techniques under differing conditions, the ever-changing federal and state regulations, even the fluctuation of interest rates—all contribute to uncertainty. Designing a decision rule for even a moderately complex site such as ORNL has proven overwhelming. Moreover, the situation may get worse as federal funds for site characterization and remediation dwindle. Today's waste management professionals are expected to do a better job than their predecessors with fewer resources at their disposal. One addition to their arsenal is the emergence of highly efficient computational algorithms on high-performance computers that form the basis for new decision-making tools.

Researchers at ORNL have developed an economic decision framework for improving aquifer remediation designs. Our approach incorporates field and laboratory data into groundwater contaminant transport models running on ORNL's high-performance parallel computers. The framework accounts for variations in aquifer properties, remediation designs, and compliance limits by running hundreds of computer simulations on the Intel Paragon and Kendall Square (KSR1) computers at CCS.


Fig. 4. High-performance computing can analyze the costs and benefits of remediation alternatives in light of changing regulations. An accurate cost-benefit analysis can reduce remediation costs.

Results from these simulations are used to determine the effect of regulatory change on the choice of remediation designs (Fig. 4), the size of the exploration budget, and the most sensitive aquifer parameters, which may warrant further exploration. A change in the compliance limit (e.g., the maximum allowable radioactivity of groundwater discharge) can alter a previous decision and force a modification of the remediation strategy. Our study suggests that at or near the strictest compliance limit, the best remediation alternative is to contain the waste plume. At less strict compliance limits, however, the best alternative is to monitor only, accepting the risk of failing to meet the regulatory requirement. Results such as those in Fig. 4 can be translated into maximum exploration budgets based on the opportunity cost of choosing one remediation alternative over another and can therefore be used to guide site characterization efforts.
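
A toy version of the decision logic is easy to sketch. In the Python fragment below, the transport results are replaced by made-up lognormal samples of peak discharge concentration (in the real framework these come from hundreds of Paragon simulations), and the expected cost of each alternative is its capital cost plus the compliance-failure penalty averaged over realizations; the gap between the best and next-best alternatives bounds what further site exploration is worth. All names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical peak concentrations at the compliance point for 500 equally
# probable aquifer realizations, one array per remediation alternative.
peak = {
    "monitor only":   rng.lognormal(mean=1.0, sigma=0.8, size=500),
    "pump and treat": rng.lognormal(mean=0.2, sigma=0.6, size=500),
    "contain":        rng.lognormal(mean=-1.0, sigma=0.4, size=500),
}
capital = {"monitor only": 0.5, "pump and treat": 4.0, "contain": 8.0}  # $M
penalty = 20.0  # $M cost of failing to meet the compliance limit

def expected_cost(limit):
    """Expected total cost of each alternative at a given compliance limit."""
    return {alt: capital[alt] + penalty * np.mean(c > limit)
            for alt, c in peak.items()}

for limit in (0.5, 2.0, 8.0):  # strict -> lenient compliance limit
    costs = expected_cost(limit)
    best = min(costs, key=costs.get)
    runner_up = sorted(costs.values())[1]
    # The opportunity cost of the next-best choice bounds the exploration budget
    print(f"limit={limit}: choose '{best}' "
          f"(max exploration budget ~ ${runner_up - costs[best]:.2f}M)")
```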

This economic framework is only an initial effort to guide site remediation decision-making, yet the computational needs have already become immense. For future large-scale, long-term decision-making analyses, studies of this nature may well exceed the capacity of today’s largest computational facility. Future high-performance computational algorithms and supercomputers will become increasingly important tools for our complicated environmental restoration problems.

Uranium Concentration Study

One promising way to advance our understanding of contaminant transport is to use high-performance computing to conduct more realistic simulations of transport processes. Geochemical transport modeling offers a good example of this potential. With limited computing power, transport models typically must lump the complex geochemistry into a single term (the retardation factor), a rather crude approximation. Accurate geochemical transport modeling requires that chemical interactions that can change a contaminant's mobility (for example, shifts in acidity, or pH) be treated explicitly at each grid node, a computationally intensive task. High-performance machines such as the Intel Paragon at ORNL's CCS enable the development of such models.
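
For reference, the lumped approximation folds all of the chemistry into one constant, the retardation factor R, which simply slows the contaminant relative to the water in the standard one-dimensional advection-dispersion equation (a textbook formulation, not specific to any one code):

```latex
R \,\frac{\partial C}{\partial t}
  = D \,\frac{\partial^2 C}{\partial x^2} - v \,\frac{\partial C}{\partial x},
\qquad
R = 1 + \frac{\rho_b}{\theta}\,K_d
```

Here C is the dissolved concentration, v the pore-water velocity, D the dispersion coefficient, rho_b the bulk density, theta the porosity, and K_d a constant linear sorption coefficient. Explicit geochemical transport replaces the single K_d with a full set of reactions solved at every node and time step, which is where the computational cost grows dramatically.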

We used a parallel code developed by researchers at the University of Texas and Pennsylvania State University—the Parallel Aquifer and Reservoir Simulator linked to the Kinetic Equilibrium Model or PARSim-KEMOD—to evaluate whether fissile uranium in a low-level waste (LLW) facility can be concentrated by hydrogeochemical processes to levels that might lead to nuclear criticality—the point at which a nuclear reaction is self-sustaining and potentially hazardous. Increases in concentration could not be predicted by simple transport models alone. Our analysis using the code was completed in less than an hour on four processors of an Intel Paragon.

This investigation represents the first attempt to jointly study the potential for nuclear criticality at LLW facilities using both quantitative hydrogeochemical modeling and nuclear criticality safety calculations. We postulated that uranium concentration results from sequential processes: mobilization of uranium by formation of soluble complexes, followed by immobilization of the soluble species through adsorption or precipitation. The goal of our preliminary study was to conduct a sensitivity analysis of the factors that could influence the mobilization and immobilization of uranium. Our results indicate that very few model runs produce both mobilization and immobilization conditions. Adsorption typically does not concentrate uranium enough to create levels of concern for criticality. Precipitation of uranium under reducing conditions is possible, but the stability of reducing zones in LLW facilities has not yet been evaluated. A further limiting factor is that the mass density of uranium required to raise criticality safety concerns is, in many cases, greater than the source term. This modeling can be used to suggest ways to limit mobilization and immobilization of uranium and to guide the development of design regulations that improve the safety of LLW facilities.
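
The structure of such a sensitivity study is simple even though each model run is expensive. The Python sketch below sweeps a small factor grid and counts the combinations that permit both steps of the concentration pathway. The run_model function, its factors, and its thresholds are hypothetical placeholders standing in for the coupled PARSim-KEMOD simulations; they are not real uranium geochemistry.

```python
import itertools

def run_model(pH, Eh, ligand):
    """Hypothetical stand-in for one PARSim-KEMOD run: returns
    (mobilized, immobilized) flags for a combination of factors.
    A real study launches the coupled transport/chemistry code here."""
    mobilized = ligand > 1e-4 and pH > 6.0        # a soluble complex forms
    immobilized = Eh < -0.1 or (4.0 < pH < 6.5)   # precipitation/adsorption zone
    return mobilized, immobilized

factors = {
    "pH":     [4.0, 5.5, 7.0, 8.5],
    "Eh":     [-0.2, 0.0, 0.2],     # volts; negative = reducing conditions
    "ligand": [1e-5, 1e-4, 1e-3],   # mol/L of a complexing ligand
}

hits = []
for pH, Eh, ligand in itertools.product(*factors.values()):
    mob, immob = run_model(pH, Eh, ligand)
    if mob and immob:  # both steps of the concentration pathway occur
        hits.append((pH, Eh, ligand))

print(f"{len(hits)} of {4 * 3 * 3} runs allow both mobilization and immobilization")
```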

Grand Challenges: Groundwater Transport and Remediation

Computational scientists from ORNL have been part of a multidisciplinary team that has developed a suite of codes for a sophisticated parallel supercomputer that can simulate groundwater flow and contaminant transport. The codes incorporate many important, complex geochemical and biological processes and solve these problems efficiently on a variety of parallel architectures. The suite is designed to solve Grand Challenge problems in groundwater contaminant transport and remediation—highly difficult problems that cannot be solved any other way.

A user-friendly graphical interface will allow the user to “steer” the simulation by changing input parameters and, at the same time, view the simulation output. Such interactive modeling is particularly difficult to support in a parallel environment. The innovative interface and communication libraries that have emerged from this project are adaptable to other projects as well. For example, the DOLIB library (for shared-memory emulation on the Intel distributed-memory Paragon supercomputer) and the EDONIO library (for highly efficient disk input/output, or I/O) that ORNL researchers developed for this project are being used to benefit several other projects.
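
Stripped of the parallel machinery, computational steering reduces to a simulation loop that checks for user input between steps and publishes output as it goes. The Python toy below uses a JSON file as a stand-in for the interface's communication channel; the file name, the toy dynamics, and the protocol are all assumptions for illustration, not the DOLIB/EDONIO mechanisms used on the Paragon.

```python
import json
import pathlib
import time

PARAMS = pathlib.Path("steer_params.json")  # written by the GUI (assumed protocol)

def load_updates(params):
    """Pick up any parameter changes the user made through the interface."""
    if PARAMS.exists():
        params.update(json.loads(PARAMS.read_text()))
    return params

params = {"pumping_rate": 1.0, "dt": 0.1}
state = 0.0  # stand-in for the full simulation state

for step in range(100):
    params = load_updates(params)  # steer: apply new inputs mid-run
    state += params["dt"] * (-params["pumping_rate"] * state + 1.0)  # toy dynamics
    if step % 10 == 0:
        # Publish a snapshot that a visualization front end could display
        print(f"step {step}: head={state:.3f}, rate={params['pumping_rate']}")
    time.sleep(0.01)  # stand-in for real compute work
```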

The Future

Although enormous computational power is now available, improvements in algorithms and solvers are still needed to tackle the finest-resolution (submeter-scale) simulations. State-of-the-art numerical techniques such as adaptive meshing, multigrid, and particle methods are needed in the next generation of high-performance groundwater codes. Advances in modeling complex remediation strategies such as in situ bacterial bioremediation, chemical treatment, or soil venting require a better fundamental understanding of biological and geochemical processes. We envision closer multidisciplinary collaboration between environmental scientists and mathematicians in the future. Model formulation and field experiments will go hand in hand, and high-performance computing will be a valuable tool for understanding reaction pathways and estimating variables that are difficult to measure experimentally.

A new research emphasis is on building an integrated problem-solving environment (PSE) that makes high-performance computing readily accessible to scientists. The PSE may offer services that simplify the submission and monitoring of multiple simulations and provide fault tolerance, task migration, real-time visualization, and computational steering to explore remediation questions such as “What happens if we place a well here?” The PSE may combine extended versions of an existing graphical preprocessor, such as the Groundwater Modeling System (GMS), with a visualization tool such as G3D.

Perhaps the most challenging problem to overcome is the lack of application software. Writing efficient parallel programs that take full advantage of a high-performance multiprocessor like the Intel Paragon requires a new way of thinking (a new programming methodology) and effective software tools to deal with the inherent complexities. Although automatic parallelization tools such as Forge and High Performance Fortran (HPF) are now commercially available, these tools are in their infancy. They still require substantial programmer intervention, either to add compiler directives or to restructure code; without this intervention, the resulting compiled code usually performs poorly.
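
The "new way of thinking" is mostly about data distribution and explicit communication. The sketch below, written with the modern mpi4py package rather than the message-passing libraries of the Paragon era, shows the halo-exchange pattern at the heart of most parallel grid codes: each processor owns a slab of the domain and must trade ghost-cell values with its neighbors before every update.

```python
# A taste of explicit message passing (minimal sketch using mpi4py;
# run with, e.g.: mpiexec -n 4 python halo.py)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each processor owns a slab of a 1-D domain plus one ghost cell per side.
n_local = 100
u = np.zeros(n_local + 2)
u[1:-1] = rank  # stand-in initial data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(50):
    # Halo exchange: fill ghost cells from neighboring processors.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Local explicit diffusion update on interior points.
    u[1:-1] += 0.25 * (u[:-2] - 2.0 * u[1:-1] + u[2:])
```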

Massively parallel multiprocessors such as the Intel Paragon XP/S 150 machine at CCS currently represent the pinnacle of high-performance computing. However, shared-memory symmetric multiprocessors with only a handful of processors (e.g., the SGI Power Challenge) or network clusters of fast workstations (e.g., the IBM SP2) bring high-performance parallel computing to the masses. At the low end, a two-processor Intel 266-megahertz Pentium PC costing a few thousand dollars offers the equivalent performance of several Paragon nodes.

High-performance computing is an emerging technology that opens up new opportunities for modeling and understanding complex groundwater systems.

BIOGRAPHICAL SKETCHES

LAURA TORAN is a hydrogeologist formerly in ORNL’s Environmental Sciences Division. She came to the Laboratory in 1986 as a Wigner Fellow after receiving a Ph.D. degree in geology from the University of Wisconsin. Her research interests include coupled geochemistry and transport modeling, application of supercomputers to groundwater problems, fracture flow and transport, and groundwater microbiology. She recently joined the Geology Department of Temple University in Philadelphia.

JIN-PING GWO is a computational hydrogeologist and staff researcher at ORNL’s Center for Computational Sciences (CCS). He has a Ph.D. degree in civil engineering from Pennsylvania State University. He specializes in multiple-domain fracture flow and solute transport, development of supercomputer models, and the application of these models to risk-based cost-benefit analysis. His research interests also include hydrogeochemistry, interactions of surface and subsurface waters, reservoir simulations and multiphase mass transfer, optimization of hydrological systems, and global climate modeling.

G. MAHINTHAKUMAR is a research staff member in ORNL’s CCS. In 1994 he received a Ph.D. degree in civil engineering from the University of Illinois at Urbana–Champaign. His research specialty is high-performance computing applications in groundwater remediation.

EDUARDO D’AZEVEDO is a research staff member in ORNL’s Computer Science and Mathematics Division. He has a Ph.D. degree in computer science from the University of Waterloo in Canada. In 1990 he came to the division’s Mathematical Sciences Section under an Oak Ridge Associated Universities postdoctoral fellowship. Since that time he has been involved in research in numerical linear algebra, triangular mesh generation, and high-performance computing with applications in modeling groundwater flow and contaminant transport.

CHUCK ROMINE joined the Mathematical Sciences Section in 1986 after receiving his Ph.D. degree in applied mathematics from the University of Virginia. His main areas of research interest include parallel numerical linear algebra and software tools for parallel numerical computation. Most recently, as a member of the Partnership in Computational Science (PICS) team, he has been developing software tools to support parallel models for groundwater flow and contaminant transport on high-performance supercomputers such as the Intel Paragon at ORNL.

