LLNL Dedicates New Supercomputer Facility

Today, officials from the Department of Energy’s National Nuclear Security Administration (NNSA) and government representatives dedicated a new supercomputing facility at Lawrence Livermore National Laboratory (LLNL).

“High performance computing is absolutely essential to the science and engineering that underpins our work in stockpile stewardship and national security. The unclassified computing capabilities at this facility will allow us to engage the young talent in academia on which NNSA’s future mission work will depend,” said NNSA Administrator Lt. Gen. Frank G. Klotz, USAF (Ret.).

The $9.8 million modular and sustainable facility provides the Laboratory flexibility to accommodate future advances in computer technology and meet a rapidly growing demand for unclassified high-performance computing (HPC). The facility houses supercomputing systems in support of NNSA’s Advanced Simulation and Computing (ASC) program. ASC is an essential and integral part of NNSA’s Stockpile Stewardship Program to ensure the safety, security and effectiveness of the nation’s nuclear deterrent without additional underground testing. Also in attendance at the dedication was Livermore Mayor John Marchand. Charles Verdon, LLNL principal associate director for Weapons and Complex Integration, presided over the ceremony.

“The opening of this new facility underscores the vitality of Livermore’s world-class efforts to advance the state of the art in high performance computing,” said Bill Goldstein, LLNL director. “This facility provides the Laboratory the flexibility to accommodate future computing architectures and optimize their efficient use for applications to urgent national and global challenges.”

The new dual-level building consists of a 6,000-square-foot machine floor flanked by support space. File photo/LLNL

Located on Lawrence Livermore’s east side, outside the high-security perimeter, the new facility adjoins the Livermore Valley Open Campus. The open campus is home to LLNL’s High Performance Computing Innovation Center and facilitates collaboration with industry and academia to foster the innovation of new technologies.

The new dual-level building consists of a 6,000-square-foot machine floor flanked by support space. The main computer structure is flexible in design to allow for expansion and the testing of future computer technology advances.

The facility is now home to some of the systems acquired as part of the Commodity Technology Systems-1 (CTS-1) procurement announced in October; delivery of those systems began in April. In FY18, the Laboratory also intends to house a smaller but powerful unclassified companion to the IBM “Sierra” system, which will support academic alliances as well as other efforts of national importance, including the DOE-wide exascale computing project. The Sierra supercomputer will be delivered to Livermore starting in late 2017 under the tri-lab Collaboration of Oak Ridge, Argonne and Livermore (CORAL) multi-system procurement announced in November 2014. The Sierra system is expected to be capable of about 150 petaflops (quadrillion floating-point operations per second).

Kim Cupps, Computing department head at Lawrence Livermore National Laboratory, gives a tour of the new computing facility.

In-house expertise in modeling and simulation of energy-efficient building design was used to draw up the facility’s specifications; its heating, ventilation and air conditioning systems meet federal sustainable design requirements to promote energy conservation. The flexible design will accommodate future liquid cooling solutions for HPC systems. The building is able to scale to 7.5 megawatts of electric power to support future platforms and was designed so that power and mechanical resources can be added as HPC technologies evolve.
