
Stepping up to the performance challenge

David Lecomber, CEO, Allinea

The performance-savvy HPC developer is in high demand today. Coming leaps in intra-node parallelism and in memory performance and capacity will collide head-on with applications that are already struggling to exploit existing systems.

Video: On the Role of Flash in Large-Scale Storage Systems


Nathan Rutman from Seagate presented this talk at the LAD’15 Conference. “So why is a spinning disk company talking about Flash? Last year, Seagate acquired Avago LSI’s flash division. We now have an array of flash-based storage. So I have nothing against Flash. This presentation is really on: Where does Flash make sense? I also have a personal agenda because I hate the term “Burst Buffer.” Everyone says “Burst Buffer” instead of saying “Flash.” It drives me crazy. So I’m going to explain what a Burst Buffer is and what it is not.”

HPC in Seismic Processing and Interpretation and Reservoir Modeling

Katie Garrison, Marketing Communications, One Stop Systems

Oil and gas are becoming increasingly hard to find. This article looks at how oil and gas companies are using cutting-edge technology, including HPC servers, compute accelerators, and flash storage arrays, for applications such as seismic processing, seismic interpretation, and reservoir modeling.

Altair, Intel and Amazon Offer HPC Challenge


For companies looking to test the viability of engineering in the cloud, Altair has teamed with Intel and Amazon Web Services (AWS) to offer an “HPC Challenge” for product design. In a nutshell, the program provides free cycles on AWS for up to 60 days, where users can run compute-intensive jobs for computer-aided engineering (CAE).

Reducing Your Data Center “Water Guilt”


Concerns over data center water usage have lately become topical both in the industry and in the general press. This is not a bad thing, as data center water usage is a legitimate concern. The problem, however, is rooted in today's established approaches to data center cooling.

Video: Argonne’s Pete Beckman Describes the Challenges of Exascale


“Argonne National Laboratory is one of the laboratories helping to lead the exascale push for the nation with the DOE. We lead in a number of areas with software and storage systems and applied math. And we’re really focusing, our expertise is focusing on those new ideas, those novel new things that will allow us to sort of leapfrog the standard slow evolution of technology and get something further out ahead, three years, five years out ahead. And that’s where our research is focused.”

New Intel® Omni-Path White Paper Details Technology Improvements

Rob Farber

The Intel® Omni-Path Architecture (Intel® OPA) white paper goes through the multitude of improvements that Intel OPA technology provides to the HPC community. In particular, HPC readers will appreciate how collective operations can be optimized based on message size, collective communicator size, and topology using the point-to-point send and receive primitives.
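To make the idea concrete, here is a minimal sketch (not Intel OPA code; the ranks and mailboxes are simulated) of how one collective, a broadcast, can be composed from nothing but point-to-point send and receive primitives. Real fabrics pick among several such algorithms depending on message size and communicator size; the binomial tree shown here is the classic choice for small messages.

```python
# Sketch: a binomial-tree broadcast built from point-to-point send/recv.
# Ranks are simulated with one FIFO mailbox per rank standing in for the
# fabric; in a real MPI-style library these would be network operations.

from collections import deque

def make_network(num_ranks):
    """One FIFO mailbox per rank stands in for the interconnect."""
    return [deque() for _ in range(num_ranks)]

def send(net, dest, payload):
    net[dest].append(payload)

def recv(net, rank):
    return net[rank].popleft()

def binomial_bcast(net, rank, num_ranks, data):
    """Broadcast from rank 0 using only send/recv.

    Completes in ceil(log2(P)) rounds, which is why tree algorithms
    win for small messages; large messages typically switch to
    pipelined or scatter/allgather schemes instead.
    """
    if rank != 0:
        # Each non-root rank receives exactly one copy of the data,
        # from rank (rank - m) where m is the largest power of 2 <= rank.
        data = recv(net, rank)
    mask = 1
    while mask < num_ranks:
        if rank < mask:          # ranks that already hold the data
            peer = rank + mask   # forward it one level down the tree
            if peer < num_ranks:
                send(net, peer, data)
        mask <<= 1
    return data

# Demo: processing ranks in ascending order is valid here because the
# sender in this scheme always has a lower rank than the receiver.
P = 8
net = make_network(P)
results = [binomial_bcast(net, r, P, "payload" if r == 0 else None)
           for r in range(P)]
# every rank ends up holding the root's data
```

The same composition trick underlies reductions and gathers: the optimization the white paper describes is choosing which point-to-point schedule to run, given the message size and topology.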

Pushing the Boundaries of Combustion Simulation with Mira


“Researchers at the U.S. Department of Energy’s Argonne National Laboratory will be testing the limits of computing horsepower this year with a new simulation project from the Virtual Engine Research Institute and Fuels Initiative (VERIFI) that will harness 60 million computer core hours to dispel those uncertainties and pave the way to more effective engine simulations.”

Research Demands More Compute Power and Faster Storage for Complex Computational Applications


Many universities, private research labs, and government research agencies have begun using High Performance Computing (HPC) servers, compute accelerators, and flash storage arrays to accelerate a wide array of research across math, science, and engineering. These labs use GPUs for parallel processing and flash memory to store large datasets. Many universities host HPC labs where students and researchers share resources to analyze and store vast amounts of data more quickly.

Lustre* at the Core of HPC and Big Data Convergence


Companies already using high-performance computing (HPC) with a Lustre file system for simulations, such as those in the financial, oil and gas, and manufacturing sectors, want to convert some of their HPC cycles to Big Data analytics. This puts Lustre at the core of the convergence of Big Data and HPC.