Today the Texas Advanced Computing Center (TACC) announced that the Lonestar 5 supercomputer is in full production and is ready to contribute to advancing science across the state of Texas. Managed by TACC, the center’s second petaflop system is primed to be a leading computing resource for the engineering and science research community.
The supercomputer is sponsored by UT System in partnership with UT Austin, Texas Tech University, Texas A&M University, and the Institute for Computational Engineering and Sciences (ICES) and the Center for Space Research at The University of Texas at Austin. The technology partners are Cray Inc., Intel Corp. and DataDirect Networks.
Lonestar 5 is designed for academic researchers, serving as the primary high performance computing resource in the UT Research Cyberinfrastructure (UTRC) initiative. Sponsored by The University of Texas System (UT System), UTRC provides a combination of advanced computational systems, large-scale data storage, and high-bandwidth data access. UTRC enables researchers at all 14 UT System institutions to collaborate with one another and compete at the forefront of science and discovery. The new Lonestar 5 Cray XC40 supercomputer contains more than 30,000 Intel Xeon processing cores from the E5-2600 v3 product family and delivers a peak performance of 1.25 petaflops. With 24 processing cores per compute node, Lonestar 5 follows the industry trend of packing more cores per node in each generation of microprocessors.
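The quoted 1.25-petaflop figure follows from the core count with a back-of-the-envelope calculation. The sketch below assumes a 2.6 GHz base clock and 16 double-precision FLOPs per core per cycle (two 256-bit AVX2 FMA units on Haswell-generation Xeons); neither number is stated in the article, so treat them as plausible assumptions rather than the system's confirmed configuration.

```python
# Back-of-the-envelope check of Lonestar 5's quoted 1.25 PF peak.
cores = 30_048           # 1,252 nodes x 24 cores, per the article
clock_hz = 2.6e9         # assumed base clock for an E5-2600 v3 part
flops_per_cycle = 16     # assumed: 2 x 256-bit FMA units, 4 doubles each

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Peak: {peak_flops / 1e15:.2f} petaflops")  # prints "Peak: 1.25 petaflops"
```

With these assumptions the arithmetic lands almost exactly on the published peak, which suggests the headline number is the usual theoretical maximum (every core issuing FMAs every cycle) rather than a sustained benchmark result.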
The system is the fifth in a long line of systems available for Texas researchers, dating back over 15 years to the original Lonestar 1 system (also a Cray). The system will continue to serve its mainstay user communities with an emphasis on addressing a wide variety of research areas in engineering, medicine, and the sciences.
A number of researchers have been using Lonestar 5 in an “early user” mode over the last few months.
Christopher Simmons with ICES used Lonestar 5 to generate computational results that will guide the development of new experimental methods to be applied to a range of cutting-edge problems in chemistry and physics.
“Lonestar 5 has quickly become one of my favorite tools in my computational toolbox,” said Simmons. “The teams from TACC and Cray have put together a world-class resource that allowed me to be productive immediately. Lonestar 5 has the perfect combination of very fast Intel Haswell processors, larger-than-average memory per core, and the top-notch Aries interconnect, which work seamlessly together to create an amazing resource. I look forward to working on this machine for many years to come.”
Researchers in Texas, California, Oregon and New Zealand are modeling the collapse of the ice sheet covering West Antarctica. In the ice flow simulations they are running on Lonestar 5, the researchers consider how uncertainties in inferred bedrock topography affect the stability of the ice sheet. Their ice flow model uses adaptive mesh refinement to resolve the sharp gradients in flow velocity and forces that control where the ice sheet comes afloat. Even with adaptive mesh refinement, a single ice flow simulation requires several weeks of continuous model integration using several hundred processors.
“An analysis of strong-scaling on Lonestar 5 shows gains over other comparable resources,” said Scott Waibel, a graduate student in the Department of Geological Sciences at Portland State University. “Lonestar 5 provides the perfect high performance computing resource for our efforts.”
In addition, the researchers said they have been impressed with the level of support they have received from the TACC staff with regard to Lonestar 5. “It’s a new machine and a new configuration, so bumps in the road are to be expected, but I’ve watched as the staff has been very dedicated and responsive in fixing issues that have arisen,” said Daniel Martin, a senior research scientist at Lawrence Berkeley National Laboratory.
Improving the circulation model for high-resolution ocean modeling is another example of early science done on Lonestar 5. Under Professor Clint Dawson’s direction at UT Austin, researchers are focusing on predicting oil-slick movements during an oil-spill accident in the Gulf of Mexico, similar to the Deepwater Horizon event. Their computer model uses unstructured finite elements that vary from 50 meters at the coast to 3-4 kilometers in the middle of the Gulf. Computing at this level of resolution is possible only with access to state-of-the-art supercomputers.
Says Arash Fathi, a post-doctoral researcher in Dawson’s group: “Being an early user on Lonestar 5 was a privilege and a wonderful experience. We were able to run very large models easily without waiting in the normal queue, which was very helpful for model testing and parameter tuning. I would like to thank TACC for giving us this opportunity.”
As a last example, Todd Oliver, a research scientist with the Center for Predictive Engineering and Computational Sciences at UT Austin, has been using Lonestar 5 to perform numerical simulations on turbine blades and engines. The results of these numerical experiments will be used to help understand the driving features of these flows, leading to improved engineering models and more efficient film cooling designs.
“TACC and Cray have put together a world-class machine with the user-friendly environment that TACC users have come to expect,” Oliver said. “This enabled me to quickly get my code running and performing well without any difficulty. Lonestar 5 will be a phenomenal resource for the Texas computational science community.”
Lonestar 5 system specifications:
- 1,252 Cray XC40 compute nodes, each with two 12-core Intel Xeon processors, for a total of 30,048 compute cores
- 2 large memory compute nodes, each with 1TB memory
- 8 large memory compute nodes, each with 512GB memory
- 16 nodes with NVIDIA K40 GPUs
- 5-petabyte DataDirect Networks storage system
- Cray-developed Aries interconnect
Researchers from UT System institutions and contributing partners wishing to request access to Lonestar 5 should do so via the TACC User Portal.