SuperMUC Upgraded to 6.8 Petaflops

The SuperMUC high-performance computer at the Leibniz Supercomputing Centre (LRZ).
Photo: Andreas Heddergott

On Monday, the Leibniz Supercomputing Centre (LRZ) celebrated the expansion of its SuperMUC cluster. Now in production mode, the 6.8 Petaflop “Phase 2” supercomputer is powered by over 241,000 Intel processor cores.
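As a quick sanity check on those numbers, dividing the aggregate peak by the core count gives a per-core figure. A back-of-the-envelope sketch (it assumes the 6.8 Petaflops is the combined peak across all 241,000 cores, which the announcement does not break down):

# Back-of-the-envelope: peak performance per core, assuming the
# 6.8 Petaflop figure is the aggregate peak over 241,000 cores.
peak_flops = 6.8e15
cores = 241_000
print(f"~{peak_flops / cores / 1e9:.1f} GFlop/s per core")  # ~28.2 GFlop/s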

South Africa’s CHPC Builds a Better Infrastructure with Altair PBS Works

At the Centre for High-Performance Computing (CHPC) in South Africa, the mission is to enable cutting-edge research by supporting the highest levels of HPC available. That means ensuring that researchers – who often are not experienced with computers, let alone HPC systems – can get their work done without the HPC infrastructure getting in the way.
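For readers who have not used PBS Works, the workload manager in that family takes batch job scripts and handles the scheduling behind the scenes. A minimal Python sketch of generating and submitting one (the job name, resource request, and application binary are hypothetical, not CHPC's actual setup):

import subprocess
import tempfile

# A minimal PBS batch script: job name, 2 nodes x 24 cores, 1 hour limit.
# All values below are illustrative, not CHPC's configuration.
job_script = """\
#!/bin/bash
#PBS -N demo_job
#PBS -l select=2:ncpus=24
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
./my_simulation    # hypothetical application binary
"""

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    script_path = f.name

# qsub prints the new job's ID on success; the scheduler does the rest.
print(subprocess.run(["qsub", script_path], capture_output=True, text=True).stdout)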

New HP Apollo 4000 Systems Fulfill Booming Big Data Analytics & Object Storage Requirements

High Performance Computing and Big Data analytics touch us every day. We each rely on daily weather forecasts, banking and financial information, scientific and health analyses, and thousands of other activities that involve HPC and Big Data analysis.

HPC in Medical Applications

Medical applications like CT (computed tomography) scanning and MRI (magnetic resonance imaging) require quick, accurate results from processing complex algorithms. So reducing the compute time required is a primary challenge for manufacturers of CT and MRI equipment.
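To make that compute challenge concrete: the filtering step at the heart of CT's filtered back projection can be done as an O(n^2) spatial convolution or, much faster, in the frequency domain. A minimal NumPy sketch of the frequency-domain version (illustrative only, not any vendor's implementation):

import numpy as np

def ramp_filter(projection):
    """Apply the ramp filter used in CT filtered back projection.
    FFT-based filtering costs O(n log n) per detector row, versus
    O(n^2) for the equivalent spatial convolution."""
    n = projection.shape[-1]
    freqs = np.fft.fftfreq(n)  # cycles per sample
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

# Example: filter one simulated detector row of 2048 samples.
row = np.random.rand(2048)
print(ramp_filter(row).shape)  # (2048,)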

Hero Performance is about the Applications

David Lecomber, CEO, Allinea

During last month’s PRACE Days in Dublin – where I enjoyed talks on improvements in codes and methods in areas as diverse as CFD, RTM in geophysics, and genomics – I saw once again that “hero” performance improvements happen, and happen regularly.

Solving Eight Constraints of Today’s Data Center

With the growth of big data, cloud and high performance computing, demands on data centers around the world are expanding every year. Unfortunately, these demands are coming up against significant opposition in the form of operating constraints, capital constraints, and sustainability goals. In this article, we look at eight of these constraints and how direct-to-chip liquid cooling is solving them.
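A useful yardstick for the energy constraint is Power Usage Effectiveness (PUE), the ratio of total facility power to IT power. A toy comparison (the PUE values are assumed for illustration, not measurements from any particular facility):

# Toy PUE comparison -- all values are assumed, illustrative numbers.
# PUE = total facility power / IT equipment power.
it_load_kw = 1000.0     # hypothetical 1 MW IT load
pue_air = 1.8           # assumed: traditional air cooling
pue_liquid = 1.2        # assumed: direct-to-chip liquid cooling

overhead_air = it_load_kw * (pue_air - 1.0)        # 800 kW overhead
overhead_liquid = it_load_kw * (pue_liquid - 1.0)  # 200 kW overhead
print(f"Overhead power saved: {overhead_air - overhead_liquid:.0f} kW")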

HPC Appliance Computing Goes Virtual

Altair’s HyperWorks Unlimited Virtual Appliance goes fully into the cloud with an Amazon-hosted option that lets users get started with HPC in just minutes.
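Mechanically, an Amazon-hosted appliance comes down to launching a preconfigured machine image. A generic boto3 sketch of that step (the AMI ID and instance type are placeholders; this is not Altair's actual provisioning flow):

import boto3

# Generic EC2 launch -- the AMI ID and instance type are placeholders,
# not Altair's actual appliance image or sizing.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical appliance AMI
    InstanceType="c4.8xlarge",        # compute-optimized instance
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])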

How HPC is increasing speed and accuracy

Mark Gunn, Sr. VP, One Stop Systems

The overwhelming task of high performance computing today is the processing of huge amounts of data quickly and accurately. Simply adding more powerful, sophisticated servers only partially solves the problem.
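Amdahl's law puts a number on that “only partially”: if any fraction of the work is serial, the speedup from piling on servers flattens quickly. A small illustration:

# Amdahl's law: speedup from n servers when a fraction p of the
# work is parallelizable; the serial remainder (1 - p) caps the gain.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallel work, 1024 servers yield under 20x speedup.
for n in (8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))  # 5.9, 15.4, 19.6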

BlueTides on Blue Waters: The First Galaxies

Figure 1 from the BlueTides Simulation paper, reproduced with permission. z = 8 refers to a redshift of 8, when the universe was a little over 1/2 billion years old.

“The largest high-redshift cosmological simulation of galaxy formation ever has been recently completed by a group of astrophysicists from the U.S. and the U.K. This tour-de-force simulation was performed on the Blue Waters Cray XE/XK system at NCSA and employed 648,000 cores.”
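For context on the z = 8 in the caption: redshift maps directly to the scale factor of the universe, a = 1/(1+z), and a standard cosmology gives the corresponding age. A quick sketch using astropy (the Planck 2015 parameters here are a reasonable default, not necessarily what the BlueTides team used):

from astropy.cosmology import Planck15

z = 8
# At z = 8 the universe was 1/(1+z) of its present linear size.
print(f"scale factor a = {1 / (1 + z):.3f}")  # ~0.111
# The age at z = 8 comes out to roughly 0.6 Gyr -- "a little over
# 1/2 billion years old," as the caption says.
print(Planck15.age(z))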

Benefits of RackCDU D2C for High Performance Computing

From bio-engineering and climate studies to big data and high-frequency trading, HPC is playing an ever greater role in today’s society. Without the power of HPC, the complex analyses behind today’s data-driven decisions would be impossible. But because these supercomputers and HPC clusters are so powerful, they are expensive to cool, consume massive amounts of energy, and can require a great deal of space.
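The physics behind the liquid-cooling pitch is plain heat capacity, Q = mass flow x specific heat x temperature rise. With assumed flow and temperature numbers (illustrative only), water carries away vastly more heat than the same volume of air:

# Heat removal: Q = mass_flow * specific_heat * delta_T.
# Flow rate and temperature rise are assumed, illustrative values.
CP_WATER = 4186.0   # J/(kg*K)
CP_AIR = 1005.0     # J/(kg*K)
RHO_WATER = 997.0   # kg/m^3
RHO_AIR = 1.2       # kg/m^3

flow_m3_s = 0.001   # one liter per second of coolant
delta_t = 10.0      # assumed 10 K coolant temperature rise

q_water = flow_m3_s * RHO_WATER * CP_WATER * delta_t
q_air = flow_m3_s * RHO_AIR * CP_AIR * delta_t
print(f"water: {q_water / 1000:.1f} kW vs air: {q_air:.1f} W per L/s")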