“Solve builds on over a decade of Council leadership to ensure the United States acts strategically to leverage HPC for competitiveness,” said Deborah L. Wince-Smith, President & CEO of the Council on Competitiveness.
The Solve report is here! The Council on Competitiveness, with support from the U.S. Department of Energy, engaged Intersect360 Research to interview more than 100 companies whose use of HPC increases their competitiveness in industries such as manufacturing, finance, pharmaceuticals, and chemical engineering. The findings were published in Solve, a report exploring how U.S. investment in HPC benefits America’s industrial and economic competitiveness.
Over at the Xcelerit Blog, Jörg Lotze benchmarks Intel’s new Haswell (Xeon E5 v3 series) against the company’s flagship Xeon Phi coprocessor using a popular computational finance code. As the test application, he uses a Monte Carlo simulation that prices a portfolio of LIBOR swaptions. “The Xeon Phi accelerator wins the race clearly for double precision, reaching around 1.8x speedup vs. the Haswell CPU. However, this drops to 1.2x in single precision. The main reason is that the single precision version requires only half the memory and hence makes better use of the cache.”
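To make the precision effect concrete, here is a minimal NumPy sketch, not Xcelerit’s benchmark code (which prices LIBOR swaptions under a full LIBOR market model): it prices a toy lognormal call by Monte Carlo in both precisions, so the float32 path array occupies half the memory of the float64 one. All parameters and function names are illustrative assumptions.

```python
import time

import numpy as np


def mc_price(n_paths, dtype):
    """Toy Monte Carlo pricer: discounted expected payoff of a call
    on a lognormally distributed rate. Illustrative only -- not the
    LIBOR market model used in the Xcelerit benchmark."""
    rng = np.random.default_rng(seed=42)
    # Draw the normals at the requested precision: float32 paths take
    # half the memory of float64 and fit the cache better, which is
    # the effect the benchmark highlights.
    z = rng.standard_normal(n_paths, dtype=dtype)
    s0, k, r, sigma, t = (dtype(x) for x in (100.0, 100.0, 0.05, 0.2, 1.0))
    st = s0 * np.exp((r - dtype(0.5) * sigma * sigma) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, dtype(0.0))
    return float(np.exp(-r * t) * payoff.mean())


for dtype in (np.float64, np.float32):
    start = time.perf_counter()
    price = mc_price(10_000_000, dtype)
    elapsed = time.perf_counter() - start
    print(f"{dtype.__name__}: price={price:.4f}  time={elapsed:.2f}s")
```

Running the two precisions back to back shows the same trade-off in miniature: the float32 run moves half as many bytes through the memory hierarchy, so its advantage comes from bandwidth and cache rather than arithmetic alone.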
“This report is part of our HPC User Site Census series and provides an examination of the primary application software found at a sample of HPC user sites. We surveyed a broad range of users about their current computer system installations, storage systems, networks, middleware, and applications software supporting these computer installations. Our goal in this analysis of applications is to examine the suppliers, products, and primary usage of the application software packages in use at all HPC sites.”
The software-defined data center is the underlying data center architecture that allows most IT infrastructure to be defined in software and to function as enterprise-wide resources. This approach enables IT-as-a-service (ITaaS) to be delivered in a virtualized environment with greater agility, speed, and quality of service.
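As a loose illustration of “infrastructure defined in software,” here is a minimal, purely hypothetical Python sketch of a declarative resource specification; real SDDC stacks expose their own APIs and formats, and every class and field name below is invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    memory_gb: int


@dataclass
class StorageVolume:
    name: str
    size_gb: int
    tier: str  # e.g. "ssd" or "archive"


@dataclass
class DataCenterSpec:
    """Declarative description of infrastructure: the 'software'
    in software-defined."""
    vms: list = field(default_factory=list)
    volumes: list = field(default_factory=list)


def provision(spec: DataCenterSpec) -> None:
    # A real SDDC controller would translate this spec into
    # hypervisor and storage API calls; here we just print the plan.
    for vm in spec.vms:
        print(f"create VM {vm.name}: {vm.vcpus} vCPU, {vm.memory_gb} GB RAM")
    for vol in spec.volumes:
        print(f"create volume {vol.name}: {vol.size_gb} GB ({vol.tier})")


spec = DataCenterSpec(
    vms=[VirtualMachine("web-01", vcpus=4, memory_gb=16)],
    volumes=[StorageVolume("data-01", size_gb=500, tier="ssd")],
)
provision(spec)
```

The point of the sketch is that the entire data center plan lives in a version-controllable artifact, which is what lets the resources be treated as enterprise-wide and delivered as a service.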
“The HPC community has had a long-standing interest in creating scale-out environments for running throughput-oriented and parallel distributed workloads. Both large-scale environments (for example, cloud computing facilities) and scale-out workloads (such as Big Data) are becoming more important in the enterprise. In fact, with the rise of Big Data, the advent of affordable, powerful clusters, and strategies that take advantage of commodity systems for scale-out applications, these days the enterprise computing environment is looking a lot like HPC.”
A new report on the problems and opportunities that will drive the need for next-generation HPC has been released by the Task Force on High Performance Computing of the Secretary of Energy Advisory Board. Commissioned by Secretary of Energy Dr. Ernest J. Moniz, the report includes recommendations as to where the DOE and the NNSA should invest to deliver the next class of leading-edge machines by the middle of the next decade.
In an unprecedented collaboration, eight national laboratories will apply supercomputing resources to a new climate study with the National Center for Atmospheric Research. The project, called Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application of fully coupled, state-of-the-science Earth system models for scientific and energy applications.