DOE Pulse
Number 301 | December 7, 2009

Taking charge of scientific supercomputing

Taking on a charged particle physics calculation previously labeled “impossible,” a team of researchers from DOE's Lawrence Livermore National Laboratory and IBM has developed an unorthodox strategy to fully exploit the power of massively parallel supercomputers and break new ground in scientific simulation. The strategy will have an impact on the future of high-performance scientific computing.

The multidisciplinary team presented its methodology at the Supercomputing Conference (SC09), the premier high-performance computing conference, held in November in Portland, Ore. The team was a finalist for the prestigious Gordon Bell Prize.

Using this strategy, scientists are able to run larger simulations at higher resolution over longer time scales. Such detailed simulations are critical to a broad set of applications vital to Laboratory missions, from stockpile stewardship to fusion energy. The method used to achieve those calculations also represents an important step for next generation supercomputers, which are expected to expand from thousands to millions of CPU cores.

This new capability was developed by scientists in LLNL’s Institute for Scientific Computing Research (ISCR) on two IBM BlueGene/P systems: the 500-teraFLOPS (trillions of floating point operations per second) Dawn at LLNL and the 1.03-petaFLOPS (quadrillion floating point operations per second) JUGENE at Germany’s Jülich Supercomputing Centre.

As supercomputing moves into the petascale (quadrillions of operations per second) era, scientists face the growing challenge of how to effectively use the increasing number of CPU cores to run more detailed simulations of scientific phenomena over longer time scales. A central processing unit, or CPU, is the part of the system that carries out the instructions of a computer program or application. The process of adapting a computer algorithm or code to a more powerful computer to increase the capability or detail of simulations is called “scaling.”

“Our Institute’s mission is to advance the state-of-the-art for applications of national interest,” said Fred Streitz, director of the ISCR. “In this case we focused on scaling a simulation that involved the calculation of long-range forces.”

The ISCR team took on a problem that has long challenged scientists: a full understanding of the interaction of highly correlated charged particles. Until the team’s recent simulations, molecular dynamics simulations involving electrostatic interactions were of insufficient length and time scale to fill the gaps in theoretical and experimental research. Simulating these charged particle interactions is important to a range of scientific disciplines including biology, chemistry and physics, notably to fusion energy experiments planned for LLNL’s National Ignition Facility.
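What makes long-range electrostatics so demanding to scale is that, computed directly, every charged particle interacts with every other one. The sketch below is not the team’s code; it is a minimal, self-contained C illustration of that naive all-pairs Coulomb sum, with an arbitrary particle count and reduced units, to show where the cost comes from.

```c
/* Illustrative only: a naive O(N^2) pairwise Coulomb sum, showing why
 * long-range electrostatic interactions are expensive to scale. Production
 * molecular dynamics codes split this work into short-range and long-range
 * parts; the particle count, charges and units here are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 2000  /* number of charged particles (arbitrary) */

int main(void) {
    static double x[N], y[N], z[N], q[N];
    srand(42);
    for (int i = 0; i < N; i++) {
        x[i] = (double)rand() / RAND_MAX;
        y[i] = (double)rand() / RAND_MAX;
        z[i] = (double)rand() / RAND_MAX;
        q[i] = (i % 2) ? 1.0 : -1.0;   /* alternating unit charges */
    }

    /* Every particle interacts with every other: N*(N-1)/2 pair terms.
     * This all-pairs cost is what limits the size and duration of
     * simulations dominated by long-range forces. */
    double energy = 0.0;
    for (int i = 0; i < N; i++) {
        for (int j = i + 1; j < N; j++) {
            double dx = x[i] - x[j], dy = y[i] - y[j], dz = z[i] - z[j];
            double r = sqrt(dx * dx + dy * dy + dz * dz);
            energy += q[i] * q[j] / r;   /* Coulomb term in reduced units */
        }
    }
    printf("Total electrostatic energy (reduced units): %f\n", energy);
    return 0;
}
```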

In a reversal of the conventional practice of dividing up a problem and distributing it equally across the machine, Lab scientists carved up the problem according to the varied computational requirements of the simulation's individual component algorithms. New BlueGene/P node technology allowed them to use this new approach, called heterogeneous decomposition, to more fully exploit the system’s capabilities.
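As a rough sketch of that general idea, and not the team’s actual implementation, the MPI example below splits a machine’s processes into two groups of different sizes, one for a long-range solver and one for short-range forces. The 1:3 split and the group names are assumptions chosen only for illustration; in practice the ratio would be tuned to the relative cost of each component algorithm.

```c
/* Minimal sketch of heterogeneous decomposition: rather than giving every
 * rank an identical slice of the problem, ranks are divided into groups
 * sized to the differing costs of the component algorithms (e.g. long-range
 * electrostatics versus short-range pairwise forces).  The 1-in-4 split is
 * an arbitrary assumption for illustration. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Assign roughly one quarter of the ranks to the long-range solver and
     * the rest to short-range forces. */
    int color = (world_rank < world_size / 4) ? 0 : 1;  /* 0 = long-range, 1 = short-range */

    MPI_Comm task_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &task_comm);

    int task_rank, task_size;
    MPI_Comm_rank(task_comm, &task_rank);
    MPI_Comm_size(task_comm, &task_size);

    printf("world rank %d -> %s group, local rank %d of %d\n",
           world_rank, color == 0 ? "long-range" : "short-range",
           task_rank, task_size);

    /* Each group would now run its own algorithm on task_comm, exchanging
     * particle data with the other group as needed. */
    MPI_Comm_free(&task_comm);
    MPI_Finalize();
    return 0;
}
```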

Jim Glosli, project leader, said this development has far-reaching implications for scientific computing and will likely affect the way future codes are developed. “What is innovative is the different way we broke up the machine to run the calculation. This allows more complicated models, and this approach can be applied to other applications,” Glosli said.

Submitted by DOE's Lawrence Livermore National Laboratory