insideHPC Special Report Accelerate WRF Performance – Expedite Predictions with In-Depth Workload Characterization Knowledge

A popular application for simulating weather and climate is the Weather Research and Forecasting (WRF) model. This white paper discusses how QCT works with leading research and commercial organizations to lower Total Cost of Ownership by supplying highly tuned applications optimized for leading-edge infrastructure.

Radio Free HPC: MLPerf Wars and AMD’s Gaudy Earnings

Summer inventory clearance days for Radio Free HPC! In this episode, we talk about the spate of MLPerf benchmarks and how AMD hit it out of the park with its most recent quarterly earnings.

Inspur Storage System achieves record performance on SPC Benchmark 1

An Inspur storage system achieved world-record performance, ranking in the top two in a recent SPC Benchmark 1 performance test. SPC Benchmark 1 results are a valuable reference when selecting storage systems for business-critical applications such as OLTP systems, database systems, and server applications. “To achieve these results, the AS5600G2 adopted iTurbo acceleration engine technology from Inspur and used the four core algorithms of intelligent data path acceleration — intelligent multi-core scheduling, intelligent hot and cold data stream separation, and iMASP random to sequential transformation technology.”

Gilad Shainer on the GPCNeT Benchmark

In this special guest feature, Gilad Shainer from Mellanox Technologies writes that the new GPCNeT benchmark is actually a measure of relative performance under load rather than a measure of absolute performance. “When it comes to evaluating high-performance computing systems or interconnects, there are much better benchmarks available for use. Moreover, the ability to benchmark real workloads is obviously a better approach for determining system or interconnect performance and capabilities. The drawbacks of GPCNeT benchmarks can be much more than its benefits.”

Podcast: The Dos and Don’ts of RFP Benchmarks

In this podcast, the hosts review a presentation on Benchmarks in HPC Procurement Tenders by Tricia Balle of Cray, a talk she gave at the recent Perth HPC Conference. “We discuss how benchmarks should and shouldn’t be used in RFPs, and the relevant best practices; important stuff whether you are on the customer side or the vendor side.”

Benchmarking Optimized 3D Electromagnetic Simulation Tools

New benchmarks from Computer Simulation Technology on their recently optimized 3D electromagnetic field simulation tools compare the performance of the new Intel Xeon Scalable processors with previous-generation Intel Xeon processors. “Our team works with the customers in terms of testing of models and configuration settings to make good recommendations for customers so they get a well performing system and the best performance when running the models.”

Video: Benchmarking AMD EPYC on Memory-Bound HPC Applications

In this video, Joshua Mora demonstrates how the new AMD EPYC processor delivers excellent performance for memory-bound HPC workloads including ANSYS and FLUENT. “EPYC strikes the perfect balance of cores/threads, memory, I/O bandwidth and security to deliver excellent performance for many High Performance Computing (HPC) workloads. AMD’s state-of-the-art GPUs combined with EPYC provide excellent solutions for your most demanding HPC applications.”

Transaction Processing Performance Council Launches TPCx-HS Version 2 Big Data Benchmark

Today the Transaction Processing Performance Council (TPC) announced the immediate availability of TPCx-HS Version 2, extending the original benchmark’s scope to include the Spark execution framework and cloud services. “Enterprise investment in Big Data analytics tools is growing exponentially, to keep pace with the rapid expansion of datasets,” said Tariq Magdon-Ismail, chairman of the TPCx-HS committee and staff engineer at VMware. “This is leading to an explosion in new hardware and software solutions for collecting and analyzing data. So there is enormous demand for robust, industry standard benchmarks to enable direct comparison of disparate Big Data systems across both hardware and software stacks, either on-premise or in the cloud. TPCx-HS Version 2 significantly enhances the original benchmark’s scope, and based on industry feedback, we expect immediate widespread interest.”

Measuring HPC: Performance, Cost, & Value

Andrew Jones from NAG presented this talk at the HPC User Forum in Austin. “This talk will discuss why it is important to measure High Performance Computing, and how to do so. The talk covers measuring performance, both technical (e.g., benchmarks) and non-technical (e.g., utilization); measuring the cost of HPC, from the simple beginnings to the complexity of Total Cost of Ownership (TCO) and beyond; and finally, the daunting world of measuring value, including the dreaded Return on Investment (ROI) and other metrics. The talk is based on NAG HPC consulting experiences with a range of industry HPC users and others. This is not a sales talk, nor a highly technical talk. It should be readily understood by anyone involved in using or managing HPC technology.”

Nvidia Disputes Intel’s Machine Learning Performance Claims

“Few fields are moving faster right now than deep learning,” writes Nvidia’s Ian Buck. “Today’s neural networks are 6x deeper and more powerful than just a few years ago. There are new techniques in multi-GPU scaling that offer even faster training performance. In addition, our architecture and software have improved neural network training time by over 10x in a year by moving from Kepler to Maxwell to today’s latest Pascal-based systems, like the DGX-1 with eight Tesla P100 GPUs. So it’s understandable that newcomers to the field may not be aware of all the developments that have been taking place in both hardware and software.”