Aquila Launches Liquid Cooled OCP Server Platform

“The drive towards Exascale computing requires cooling the next generation of extremely hot CPUs, while staying within a manageable power envelope,” said Bob Bolz, HPC and Data Center Business Development at Aquila. “Liquid cooling holds the key. Aquarius is designed from the ground up to meet the reliability and feature-specific demands of high-performance, high-density computing. Our design goal was to reduce the cost of cooling server resources to well under 5% of overall data center usage.”
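
For context, a sub-5% cooling target corresponds to a very low Power Usage Effectiveness (PUE). A back-of-envelope sketch in Python, with all figures assumed for illustration rather than taken from Aquila:

# Back-of-envelope cooling overhead estimate; all inputs are illustrative
# assumptions, not Aquila's measured numbers. PUE = total power / IT power.
it_power_kw = 1000.0      # assumed IT (server) load of a hypothetical facility
cooling_fraction = 0.05   # cooling held to 5% of overall facility usage
# If cooling is the only overhead, total = IT / (1 - cooling fraction).
total_power_kw = it_power_kw / (1.0 - cooling_fraction)
cooling_power_kw = total_power_kw * cooling_fraction
pue = total_power_kw / it_power_kw
print(f"cooling: {cooling_power_kw:.1f} kW, PUE: {pue:.3f}")  # ~52.6 kW, ~1.053

Holding cooling below 5% thus implies a PUE near 1.05, well below the figures commonly reported for conventional air-cooled facilities.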

Video: Intel Scalable System Framework

Gary Paek from Intel presented this talk at the HPC User Forum in Austin. “Traditional high performance computing is hitting a performance wall. With data volumes exploding and workloads becoming increasingly complex, the need for a breakthrough in HPC performance is clear. Intel Scalable System Framework provides that breakthrough. Designed to work from small clusters to the world’s largest supercomputers, Intel SSF provides scalability and balance for both compute- and data-intensive applications, as well as machine learning and visualization. The design moves everything closer to the processor to improve bandwidth, reduce latency and allow you to spend more time processing and less time waiting.”

PRACE Awards Time on Marconi Supercomputer

PRACE has announced the winners of its 13th Call for Proposals for PRACE Project Access. Selected proposals will receive allocations to the following PRACE HPC resources: Marconi and MareNostrum.

HPC4mfg Seeks New Proposals to Advance Energy Technologies

Today the Energy Department’s Advanced Manufacturing Office announced up to $3 million in available funding for manufacturers to use high-performance computing resources at the Department’s national laboratories to tackle major manufacturing challenges. The High Performance Computing for Manufacturing (HPC4Mfg) program enables innovation in U.S. manufacturing through the adoption of high performance computing (HPC) to advance applied science and technology in manufacturing, with an aim of increasing energy efficiency, advancing clean energy technology, and reducing energy’s impact on the environment.

The Future of HPC Application Management in a Post Cloud World

The prevalence of cloud computing has changed the HPC landscape, necessitating HPC management tools that can manage and simplify complex environments in order to optimize flexibility and speed. Altair’s new solution, PBS Cloud Manager, makes it easy to build and manage HPC application stacks.

DOE Funds Asynchronous Supercomputing Research at Georgia Tech

“More than just building bigger and faster computers, high-performance computing is about how to build the algorithms and applications that run on these computers,” said School of Computational Science and Engineering (CSE) Associate Professor Edmond Chow. “We’ve brought together the top people in the U.S. with expertise in asynchronous techniques as well as experience needed to develop, test, and deploy this research in scientific and engineering applications.”
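
To illustrate what “asynchronous techniques” means here, the sketch below implements a toy asynchronous Jacobi iteration in Python: each thread repeatedly updates its own component of the solution using whatever values the other threads have most recently published, with no barrier between sweeps. This is a hypothetical example for illustration, not code from the Georgia Tech project.

# Toy asynchronous Jacobi solver for A x = b (illustrative only).
# Threads update their components without synchronization barriers.
import threading
import numpy as np

n = 8
rng = np.random.default_rng(0)
A = rng.random((n, n)) + n * np.eye(n)  # diagonally dominant, so Jacobi converges
b = rng.random(n)
x = np.zeros(n)  # shared iterate; read and written concurrently

def worker(i, sweeps):
    for _ in range(sweeps):
        # Use whatever values of x the other threads have published so far.
        off_diag = A[i] @ x - A[i, i] * x[i]
        x[i] = (b[i] - off_diag) / A[i, i]

threads = [threading.Thread(target=worker, args=(i, 200)) for i in range(n)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("residual norm:", np.linalg.norm(A @ x - b))  # small if the sweeps converged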

Measuring HPC: Performance, Cost, & Value

Andrew Jones from NAG presented this talk at the HPC User Forum in Austin. “This talk will discuss why it is important to measure High Performance Computing, and how to do so. The talk covers measuring performance, both technical (e.g., benchmarks) and non-technical (e.g., utilization); measuring the cost of HPC, from the simple beginnings to the complexity of Total Cost of Ownership (TCO) and beyond; and finally, the daunting world of measuring value, including the dreaded Return on Investment (ROI) and other metrics. The talk is based on NAG HPC consulting experiences with a range of industry HPC users and others. This is not a sales talk, nor a highly technical talk. It should be readily understood by anyone involved in using or managing HPC technology.”
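
To make the cost discussion concrete, here is a minimal cost-per-core-hour sketch in Python; every figure is an assumption chosen for illustration, not data from NAG’s consulting work:

# Illustrative TCO-to-cost-per-core-hour calculation; all inputs are assumed.
capex = 2_000_000.0        # hypothetical system purchase price, USD
opex_per_year = 300_000.0  # hypothetical power, cooling, staff, maintenance
lifetime_years = 5         # amortization period
cores = 10_000
utilization = 0.80         # fraction of available core-hours actually delivered
tco = capex + opex_per_year * lifetime_years
delivered_core_hours = cores * 24 * 365 * lifetime_years * utilization
print(f"TCO: ${tco:,.0f}")
print(f"cost per delivered core-hour: ${tco / delivered_core_hours:.4f}")

Even this simple version shows why utilization belongs in the metric: halving utilization doubles the effective cost per delivered core-hour.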

Funding Boosts Exascale Research at LANL

“Our collaborative role in these exascale applications projects stems from our laboratory’s long-term strategy in co-design and our appreciation of the vital role of high-performance computing to address national security challenges,” said John Sarrao, associate director for Theory, Simulation and Computation at Los Alamos National Laboratory. “The opportunity to take on these scientific explorations will be especially rewarding because of the strategic partnerships with our sister laboratories.”

Radio Free HPC Looks at Exascale Application Challenges in the Wake of XSEDE 2.0 Funding

In this podcast, the Radio Free HPC team discusses the recent news that Intel has sold its controlling stake in McAfee and that NSF has funded the next generation of XSEDE.

EU HPC Strategy and the European Cloud Initiative

Leonardo Flores from the European Commission presented this talk at the HPC User Forum. “The Cloud Initiative will make it easier for researchers, businesses and public services to fully exploit the benefits of Big Data by making it possible to move, share and re-use data seamlessly across global markets and borders, and among institutions and research disciplines. Making research data openly available can help boost Europe’s competitiveness, especially for start-ups, SMEs and companies who can use data as a basis for R&D and innovation, and can even spur new industries.”