DOE Pulse
Number 392 | July 8, 2013

New ultra-efficient HPC data center debuts

Steve Hammond, director of NREL's Computational Science Center, stands in front of air-cooled racks in the high performance computing (HPC) data center in the Energy Systems Integration Facility (ESIF). The rest of the system will be built out this summer using warm-water liquid cooling to reach an annualized average power usage effectiveness (PUE) rating of 1.06 or better. Credit: Dennis Schroeder


Scientists and researchers at DOE's National Renewable Energy Laboratory (NREL) are constantly innovating, integrating novel technologies, and "walking the talk."

When it came time for the lab to build its own high performance computing (HPC) data center, the NREL team knew it would have to deliver a series of firsts: the first HPC data center dedicated solely to advancing energy systems integration, renewable energy research, and energy efficiency technologies, and the first petascale HPC system to use warm-water liquid cooling and reach an annualized average power usage effectiveness (PUE) rating of 1.06 or better.
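For context, PUE is the ratio of total facility energy to the energy consumed by the IT equipment alone, so a rating of 1.06 implies roughly six cents of overhead (cooling, power distribution, lighting) for every dollar of computing energy. The sketch below illustrates the calculation; the wattage figures are illustrative only, not NREL's actual loads.

    def pue(total_facility_kw, it_equipment_kw):
        """Power usage effectiveness: total facility power / IT equipment power."""
        return total_facility_kw / it_equipment_kw

    # Illustrative numbers only: a 1,000 kW IT load with 60 kW of cooling and
    # power-distribution overhead yields the 1.06 target cited above.
    print(pue(total_facility_kw=1060.0, it_equipment_kw=1000.0))  # 1.06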

To accomplish this, NREL worked closely with industry leaders to track rapid technology advances and to develop a holistic approach to data center sustainability in the lab's new Energy Systems Integration Facility (ESIF).

"We took an integrated approach to the HPC system, the data center, and the building as part of the ESIF project," NREL's Computational Science Center Director Steve Hammond said. "First, we wanted an energy-efficient HPC system appropriate for our workload. This is being supplied by HP and Intel. A new component-level liquid cooling system, developed by HP, will be used to keep computer components within safe operating range, reducing the number of fans in the backs of the racks.

"We wanted to capture and use the heat generated by the HPC system. Most data centers simply throw away the heat generated by the computers. An important part of the ESIF is that we will capture as much of the heat as possible that is generated by the HPC system in the data center and reuse that as the primary heat source for the ESIF office space and laboratories. These three things manifest themselves in an integrated 'chips-to-bricks' approach."

Like NREL's Research Support Facility, the ESIF HPC data center carried no construction premium; in fact, it cost less to build than comparable data centers and will be much cheaper to operate. NREL's approach was to minimize the energy needed, supply it as efficiently as possible, and then capture and reuse the heat generated.

"Compared to a typical data center, we may save $800,000 of operating expenses per year," Hammond said. "Because we are capturing and using waste heat, we may save another $200,000 that would otherwise be used to heat the building. So, we are looking at saving almost $1 million per year in operation costs for a data center that cost less to build than a typical data center."

While some of the NREL HPC components may be off the shelf, the team is taking a different approach in cooling this supercomputer.

"NREL's ultimate HPC system is currently under development and will be a new, warm-water cooled high-performance system," said Ed Turkel, group manager of HPC marketing at HP. "It will be a next-generation HPC solution that's specifically designed for high power efficiency and extreme density, as well as high performance — things that NREL requires."

[Heather Lammers, 303.275.4084, heather.lammers@nrel.gov]