
MYTH: Enormous supercomputers are making research impractical

REALITY: New techniques make it possible to handle staggering amounts of data

In April 2000, when UT-Battelle assumed the management of Oak Ridge National Laboratory, ORNL's supercomputer was measured at one teraflop, a then-unimaginable one trillion floating point operations per second.

A few years later, the same machine no longer ranked among the world's top 500 supercomputers. In an international competition that includes Japan, Spain and a host of other nations investing in massive computational power for scientific research, high-performance computers have grown so big so fast that a myth has taken hold among some: we have raced past the point at which researchers can practically manage the mind-boggling volumes of data generated by trillions of calculations each second.

[Photo: inside a supercomputer]

ORNL's Jaguar system, for instance, is capable of more than 260 trillion calculations per second; in the constantly shifting rankings of June 2008, that made it the fifth most powerful computer in the world and the third most powerful available for open scientific research. Funded as part of the Department of Energy's Leadership Computing Facility, Jaguar is expected to surpass one thousand trillion calculations per second, or one petaflop, by year's end. Taking advantage of what would again be the world's most powerful open computer involves challenges as daunting as designing the machine itself.

Just as the typical motorist cannot handle a racecar and the weekend pilot cannot fly an F-15 fighter jet, a researcher using a modern supercomputer is thrust into a world far beyond the desktop machine with which most of us are familiar. Producing the quality of cutting-edge science for which the machines were designed requires the ability not only to design the calculations, but also to get information in and out without compromising the system's blistering speed. Ultimately, what matters most in a simulation is not the supercomputer's speed, but rather the often unwieldy volume of calculation results it produces.

"For most of the codes I work with, the data that comes out of the simulation tells us about the science," explains Scott Klasky, a computational physicist with DOE's National Center for Computational Sciences at ORNL. "We run a simulation, analyze the results, and from that analysis we publish the findings. In effect, we have a computational laboratory that conducts a large computational experiment, along with the associated diagnostics, analysis and visualization that lead to the major scientific insights."

Klasky is working with colleagues from the Georgia Institute of Technology, Rutgers University and the Scientific Data Management Center (sponsored by DOE's Scientific Discovery through Advanced Computing, or SciDAC, program) to make the basic process of getting information in and out of a supercomputer easier and more effective. Their approach is known as the Adaptable I/O [input/output] System, or ADIOS, a system designed to give researchers fast, portable I/O that is easy to use.

ADIOS is an I/O system broken down into components, with simple application programming interfaces and an external XML description of the data. Its distinct advantage is that researchers can change the I/O implementation by editing the XML file rather than the source code of their applications. This flexibility lets them move easily from one implementation to another when they switch between supercomputers or, even more important, when their I/O is not behaving properly.
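In practice, the ADIOS calls take the place of an application's existing write statements, while the XML file names the variables and selects the transport method. The sketch below is a rough illustration only, patterned on the early ADIOS C interface; exact signatures varied across releases, and the file names, variable names and config.xml contents here are invented for the example.

    /* Hypothetical config.xml, kept outside the source code:
     *
     *   <adios-config>
     *     <adios-group name="restart">
     *       <var name="NX"          type="integer"/>
     *       <var name="temperature" type="double" dimensions="NX"/>
     *     </adios-group>
     *     <method group="restart" method="MPI"/>
     *   </adios-config>
     */
    #include <stdint.h>
    #include <mpi.h>
    #include "adios.h"

    int main(int argc, char **argv)
    {
        int rank, i, nx = 1024;
        double temperature[1024];
        int64_t handle;
        uint64_t group_size, total_size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        adios_init("config.xml");              /* parse the external XML description */

        for (i = 0; i < nx; i++)               /* stand-in for real simulation output */
            temperature[i] = 0.0;

        /* Open the "restart" group declared in the XML and write its variables.
         * The names refer to the XML description, not to any particular transport. */
        adios_open(&handle, "restart", "restart.bp", "w", MPI_COMM_WORLD);
        group_size = sizeof(int) + nx * sizeof(double);
        adios_group_size(handle, group_size, &total_size);
        adios_write(handle, "NX", &nx);
        adios_write(handle, "temperature", temperature);
        adios_close(handle);

        adios_finalize(rank);
        MPI_Finalize();
        return 0;
    }

Because the transport is chosen by the method line in the XML, moving from MPI-based output to, say, POSIX files or an asynchronous staging method is a one-line configuration change; the compiled application is untouched.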

With ADIOS, Klasky and his colleagues hope scientists will no longer be forced to choose between the performance of a simulation and the quality of its data output. It is a quandary Klasky has faced for years as a fusion researcher working with a team from DOE's Princeton Plasma Physics Laboratory. The team's Gyrokinetic Toroidal Code, which simulates the dynamics of turbulence in a fusion reactor, is consistently among the most productive applications running on Jaguar. In recent runs, the code ran on 29,000 of Jaguar's 31,000 processing cores and wrote out 90 terabytes of data in two days, the equivalent of about 520 megabytes every second.
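That rate is simple arithmetic from the run's own numbers: two days is $2 \times 86{,}400 = 172{,}800$ seconds, so

\[
\frac{90\ \text{terabytes}}{172{,}800\ \text{s}} \approx 520\ \text{megabytes per second}.
\]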

"ADIOS grew out of our pain in working with I/O and trying to produce good data from our codes," Klasky explains. "We write a tremendous amount of data. The restart data are large. We vary run to run like everyone else, but we have other data which are also used for analysis."


Data coming out of a supercomputer simulation typically fall into two general types: restart files and analysis data. A restart file is the system's version of a "save" command, writing out the state of the simulation at a given time. Supercomputers, just like home computers, are subject to unexpected burps and hiccups. As with a home computer, anything that has not been saved will be lost, with one very significant difference: the loss of an hour on 30,000 processors is the equivalent of more than three years lost on a home machine.
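The comparison is easy to check: an hour lost across 30,000 processors is 30,000 processor-hours of work, and

\[
\frac{30{,}000\ \text{hours}}{8{,}760\ \text{hours/year}} \approx 3.4\ \text{years}
\]

on a single processor.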

Data for analysis, on the other hand, contain the critical information from which a scientist may make a breakthrough. Researchers regularly find themselves having to choose between the performance of their applications and the amount and quality of the data they write. Klasky and his colleagues faced this challenge in their early years with the project.

"We found we were spending more than 20% of our computational time writing the analysis files. For scientists competing for valuable computing time, this was viewed as an unacceptable waste."

Another challenge confronting researchers is the need to include in their results sufficient metadata, or "data about the data." Metadata include items such as labels and explanatory notes that tell researchers what they are looking at when they examine data a day, a week or even a year after the simulation.

By keeping this information separate from the application's source code, ADIOS makes it easier for researchers to add to the metadata. As Klasky explains, metadata also help restart files do double duty, serving a useful role in the analysis of a simulation.

"We want our data to be metadata-rich," Klasky explains, "with lots of annotations that can be helpful much later. For some researchers the restart file contains the state of your code, which is useful data. A lot of people write restarts and then do analysis from the restarts, so they blur the line."

In addition to ORNL, support for ADIOS comes from several SciDAC centers—including the Center for Plasma Edge Simulation, the Scientific Data Management Center and the Gyrokinetic Particle Simulation Center—and from the National Science Foundation's High End Computing University Research Activity program.

Klasky and his colleagues have tested ADIOS with a variety of the leading applications that use Jaguar, including several fusion codes, a leading combustion code, and an astrophysics code. On Chimera, an astrophysics code used to simulate core-collapse supernovas, the team was able to improve the application's performance a hundredfold in a test run using 2,048 processors.

"Chimera is one instance," Klasky notes. "In other instances the system writes out data at about the same speed as we wrote before, but adds extra annotations instead of raw binary. We now have really fast Input/Output that is going to be portable and scalable."

Klasky's team is working to extend ADIOS to as many systems and applications as possible. To date, they have validated ADIOS on the Cray supercomputers at ORNL and on Linux clusters. By September 2008 they expect to be applying ADIOS's unique assets to IBM Blue Gene supercomputers such as ORNL's Eugene system.

Eventually, Klasky says, they want to release the software as open source. While that would mean more work for the team (documentation, tutorials, bug tracking, etc.), the effort would also accelerate ADIOS development.

"We first are making sure our initial codes run on different architectures," Klasky says. "As we open up the system to more codes—and I've had lots of requests—that's when we'll get lots of error reports, and that's when people will use ADIOS differently."

Given the pace of high-performance computing, the work cannot come fast enough. —Leo Williams

 
