Video: Analysis of SSDs on SGI UV 300

Nikos Trikoupis from the City University of New York gave this talk at the HPC User Forum in Austin. “We focus on measuring the aggregate throughput delivered by 12 Intel SSD DC P3700 for NVMe cards installed on the SGI UV 300 scale-up system in the CUNY High Performance Computing Center. We establish a performance baseline for a single SSD. The 12 SSDs are assembled into a single RAID-0 volume using Linux Software RAID and the XVM Volume Manager. The aggregate read and write throughput is measured against different configurations that include the XFS and the GPFS file systems.”
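The talk itself does not include code, but as a rough illustration of how a single-device throughput baseline might be taken, the sketch below streams large sequential blocks through a file and times it. The target path, block size, and file size are illustrative placeholders; a real benchmark like the ones in the talk would typically use direct I/O to bypass the page cache, which this simple version does not.

```python
# Minimal sketch (not from the talk): estimate sequential write/read throughput
# for a single device or RAID volume by streaming large blocks through a file.
# The target path, block size, and total size are assumptions for illustration.
import os
import time

TARGET = "/mnt/nvme0/throughput.tmp"   # hypothetical mount point on one SSD
BLOCK_SIZE = 4 * 1024 * 1024           # 4 MiB blocks
TOTAL_BYTES = 8 * 1024 * 1024 * 1024   # 8 GiB test file

def write_test():
    buf = os.urandom(BLOCK_SIZE)
    start = time.time()
    with open(TARGET, "wb") as f:
        written = 0
        while written < TOTAL_BYTES:
            f.write(buf)
            written += BLOCK_SIZE
        f.flush()
        os.fsync(f.fileno())           # ensure the data reaches the device
    return TOTAL_BYTES / (time.time() - start)

def read_test():
    # Note: reads served from the page cache will inflate this number;
    # a production benchmark would use direct I/O (e.g. fio with --direct=1).
    start = time.time()
    with open(TARGET, "rb") as f:
        while f.read(BLOCK_SIZE):
            pass
    return TOTAL_BYTES / (time.time() - start)

if __name__ == "__main__":
    w = write_test()
    r = read_test()
    print(f"write: {w / 1e9:.2f} GB/s, read: {r / 1e9:.2f} GB/s")
```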

CloudLightning Report Looks at Barriers to HPC in the Cloud

The CloudLightning Project in Europe has published preliminary results from a survey on Barriers to Using HPC in the Cloud. “Trust in cloud computing would appear to be a significant barrier to adopting cloud computing for HPC workloads. Data management concerns dominate the responses.”

Cycle Computing Publishes Cloud-Agnostic Glossary

Today Cycle Computing announced the Cloud-Agnostic Glossary, a solution brief written by Cycle Computing executives to help customers understand the terms that different providers use and how they relate. “Technology keeps evolving, terms keep changing, and because of this, we were inspired to stop and take a moment to develop a glossary to keep track of meanings in real time, and according to vendor,” said Jason Stowe, CEO, Cycle Computing. “We ended up with this great solution brief, worthy of reading and sharing. It’s a useful document that we plan to update regularly.”

Putting HPC into the Hands of Every Engineer and Scientist

In this special guest feature from Scientific Computing World, Wolfgang Gentzsch explains the role of HPC container technology in providing ubiquitous access to HPC. “The advent of lightweight pervasive, packageable, portable, scalable, interactive, easy to access and use HPC application containers based on Docker technology running seamlessly on workstations, servers, and clouds, is bringing us ever closer to the democratization of HPC.”

Co-Design Offloading

The move to network offloading is the first step toward co-designed systems. Servicing the enormous number of packets generated at modern data rates imposes significant processing overhead on the host, and that overhead can noticeably reduce network performance. Offloading network processing to the network interface card has helped solve this bottleneck, along with several others.
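As a back-of-the-envelope illustration (not from the article) of why this overhead matters, the figures below show how many full-size packets arrive per second at common line rates and how little time the host has for each one if nothing is offloaded. The packet size and line rates are assumptions for illustration.

```python
# Back-of-the-envelope packet-rate math (illustrative, not from the article):
# how many packets per second must be serviced at a given line rate, and how
# little time that leaves the host per packet without offload.
LINE_RATES_GBPS = [10, 40, 100]        # common link speeds
PACKET_SIZE_BYTES = 1500               # a full-size Ethernet payload (MTU)

for gbps in LINE_RATES_GBPS:
    bits_per_sec = gbps * 1e9
    packets_per_sec = bits_per_sec / (PACKET_SIZE_BYTES * 8)
    ns_per_packet = 1e9 / packets_per_sec
    print(f"{gbps:>3} Gb/s: {packets_per_sec / 1e6:5.2f} M packets/s, "
          f"~{ns_per_packet:6.0f} ns per packet")
```

At 100 Gb/s this works out to roughly 8 million 1500-byte packets per second, about 120 ns each, which is why moving protocol processing onto the NIC frees host cores for application work.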

Requesting Your Input on the HPC Customer Experience Survey

We’d like to invite our readers to participate in our new HPC Customer Experience Survey. It’s an effort to better understand our readers and what is really happening out there in the world of High Performance Computing. “This survey should take less than 10 minutes to complete. All information you provide will be treated as private and kept confidential.”

Report: Using HPC for Public Policy Analysis & Water Resource Management

Researchers from the RAND Corporation and LLNL have joined forces to combine HPC with innovative public policy analysis to improve planning for particularly complex issues such as water resource management. By using supercomputer simulations, the participants were able to customize and speed up the analysis guiding the deliberations of decision makers. “In the latest workshop we performed and evaluated about 60,000 simulations over lunch. What would have taken about 14 days of continuous computations in 2012 was completed in 45 mins — about 500 times faster,” said Ed Balkovich, senior information scientist at the RAND Corporation, a nonprofit research organization.
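As a quick sanity check on the quoted figure, 14 days of continuous computation compared with 45 minutes is roughly a 450-fold improvement, consistent with the “about 500 times faster” claim:

```python
# Quick check of the quoted speedup: 14 days of 2012-era computation
# versus 45 minutes in the latest workshop.
baseline_minutes = 14 * 24 * 60   # 14 days expressed in minutes
current_minutes = 45
print(baseline_minutes / current_minutes)   # ~448, i.e. roughly 500x
```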

New Report Looks at European Exascale Projects

“Between 2011 and 2016, eight projects, with a total budget of more than €50 Million, were selected for this first push in the direction of the next-generation supercomputer: CRESTA, DEEP and DEEP-ER, EPiGRAM, EXA2CT, Mont-Blanc (I + II) and Numexas. The challenges they addressed in their projects were manifold: innovative approaches to algorithm and application development, system software, energy efficiency, tools and hardware design took centre stage.”

Components For Deep Learning

The recent introduction of new high-end processors from Intel, combined with accelerator technologies such as NVIDIA Tesla GPUs and Intel Xeon Phi, provides the raw ‘industry standard’ materials to cobble together a test platform suitable for small research projects and development. Combined with open source toolkits, such a platform can deliver some meaningful results, but wide-scale enterprise deployment in production environments raises the infrastructure, software and support requirements to a completely different level.

The Core Technologies for Deep Learning

Given the compute- and data-intensive nature of deep learning, which overlaps significantly with the needs of the high performance computing market, the TOP500 list provides a good proxy for current market dynamics and trends. From the central computation perspective, today’s multicore processor architectures dominate the TOP500, with 91% of systems based on Intel processors. Looking forward, however, we can expect further developments that may include core CPU architectures such as OpenPOWER and ARM.