MIT Lincoln Laboratory Takes the Mystery Out of Supercomputing

“Many supercomputer users, like the big DOE labs, are implementing these next generation systems. They are now engaged in significant code modernization efforts to adapt their key present and future applications to the new processing paradigm, and to bring their internal and external users up to speed. For some in the HPC community, this creates unanticipated challenges along with great opportunities.”

Video: A Look at the Mogon II HPC Cluster at Johannes Gutenberg University

In this video, Prof. Dr.-Ing. André Brinkmann from the JGU datacenter describes the Mogon II cluster, a 580 Teraflop system currently ranked #265 on the TOP500. “Built by MEGWARE in Germany, the Mogon II system consists of 814 individual nodes, each equipped with 2 Intel 2630v4 CPUs and connected via Omni-Path 50Gbits (fat-tree). Each CPU has 10 cores, giving a total of 16280 cores.”
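The quoted figures are easy to sanity-check with a little arithmetic. Below is a minimal back-of-the-envelope sketch in Python; the 2.2 GHz clock and 16 double-precision FLOPs per cycle per core (two AVX2 FMA units, 4 doubles wide) are our assumptions about the Xeon E5-2630 v4 parts, not figures from the article.

# Back-of-the-envelope check of the quoted Mogon II figures.
# Assumptions (not from the article): E5-2630 v4 parts at a 2.2 GHz
# base clock, each core retiring 16 double-precision FLOPs per cycle
# (two AVX2 FMA units x 4 doubles x 2 FLOPs per FMA).

nodes = 814            # node count quoted in the article
cpus_per_node = 2      # dual-socket nodes
cores_per_cpu = 10     # quoted core count per CPU

clock_ghz = 2.2        # assumed base clock
flops_per_cycle = 16   # assumed AVX2 FMA throughput per core

total_cores = nodes * cpus_per_node * cores_per_cpu
# cores x GHz x FLOPs/cycle gives GFLOPS; divide by 1000 for TFLOPS
peak_tflops = total_cores * clock_ghz * flops_per_cycle / 1000.0

print(f"total cores: {total_cores}")         # 16280, matching the quote
print(f"peak DP:     {peak_tflops:.0f} TF")  # ~573 TF, close to the cited 580 TF

The core count comes out to exactly 16,280, and the peak lands within about one percent of the 580 Teraflop figure, so the quoted numbers hang together.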

Penguin Computing Releases Scyld ClusterWare 7

“The release of Scyld ClusterWare 7 continues the growth of Penguin’s HPC provisioning software and enables support of large scale clusters ranging to thousands of nodes,” said Victor Gregorio, Senior Vice President of Cloud Services at Penguin Computing. “We are pleased to provide this upgraded version of Scyld ClusterWare to the community for Red Hat Enterprise Linux 7, CentOS 7 and Scientific Linux 7.”

Fast Networking for Next Generation Systems

“The Intel Omni-Path Architecture is an example of a networking system that has been designed for the Exascale era. There are many features that will enable this massive scaling of compute resources. Features and functionality are designed in at both the host and the fabric levels, which enables very large scaling when all of the components are designed together. Increased reliability is a result of integrating the CPU and fabric, which will be critical as the number of nodes expands well beyond any system in operation today. In addition, tools and software have been designed to be installed and managed across the very large number of compute nodes that will be necessary to achieve this next level of performance.”

Why Intel Omni-Path is Growing Fast on the TOP500

In this video from SC16, Joe Yaworsky describes how Intel Omni-Path is gaining traction on the TOP500. As the interconnect for the Intel Scalable System Framework, Omni-Path is focused on delivering the best possible application performance. “In the nine months since Intel Omni-Path Architecture (Intel OPA) began shipping, it has become the standard fabric for 100 gigabit systems. Intel OPA is featured in 28 of the top 500 most powerful supercomputers in the world announced at Supercomputing 2016 and now has 66 percent of the 100Gb market. Top500 designs include Oakforest-PACS, MIT Lincoln Lab and CINECA.”

Radio Free HPC Reviews the New TOP500

The new TOP500 list is out, and Radio Free HPC is here podcasting the scoop in their own special way. With two new systems in the TOP10, there are many different perspectives to share. “The Cori supercomputer, a Cray XC40 system installed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), slipped into the number 5 slot with a Linpack rating of 14.0 petaflops. Right behind it at number 6 is the new Oakforest-PACS supercomputer, a Fujitsu PRIMERGY CX1640 M1 cluster, which recorded a Linpack mark of 13.6 petaflops.”

Intel Omni-Path Architecture Fabric, the Choice of Leading HPC Institutions

Intel Omni-Path Architecture (Intel OPA) volume shipments started a mere nine months ago in February of this year, but Intel’s high-speed, low-latency fabric for HPC has covered significant ground around the globe, including integration in HPC deployments making the Top500 list for June 2016. Intel’s fabric makes up 48 percent of installations running 100 Gbps fabrics on the June Top500 list, and Intel expects a significant increase in Top500 deployments, including one that could end up in the stratosphere among the top ten machines on the list.

Cray and Intel Double Down on Next-gen HPC Systems

Since 2008, Intel and Cray have steadily deepened their collaboration, to the benefit of the supercomputing market and customers. “Most recently, Cray has announced win after win for its Cray XC series systems that feature the Intel Xeon Phi processors, code-named Knights Landing and Knights Hill, which offer peak performance of over half-a-petaflop per cabinet—a 2X performance boost over previous generations. Cray is leading the charge toward many-core-CPU systems that boost application performance without the aid of GPUs.”

It’s Here: The Print ‘n Fly Guide to SC16 in Salt Lake City

At insideHPC, we are very pleased to publish the Print ‘n Fly Guide to SC16 in Salt Lake City. We designed this Guide to be an in-flight magazine custom tailored for your journey to SC16 — the world’s largest gathering of high performance computing professionals. “Inside this guide you will find technical features on supercomputing, HPC interconnects, and the latest developments on the road to exascale. It also has great recommendations on food, entertainment, and transportation in SLC.”

A New Generation of Performance with Intel Xeon Phi

“With up to 72 out-of-order cores, the new Intel Xeon Phi processor delivers over 3 teraFLOPS (floating-point operations per second) of double-precision peak while providing 3.5 times higher performance per watt than the previous generation. As a bootable CPU with integrated architecture, the Intel Xeon Phi processor eliminates PCIe* bottlenecks, includes on-package high-bandwidth memory, and offers an available integrated Intel Omni-Path fabric to deliver fast, low-latency performance.”
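The “over 3 teraFLOPS” headline number is straightforward to reproduce. The short Python sketch below assumes the 72-core SKU runs at 1.5 GHz with two AVX-512 vector units per core, each completing an 8-wide double-precision FMA every cycle; those clock and throughput figures are our assumptions, not details from the quote.

# Rough check of the "over 3 teraFLOPS" double-precision peak claim.
# Assumptions (not from the article): a 72-core part at 1.5 GHz,
# two AVX-512 vector units per core, each doing an 8-wide
# double-precision FMA (2 FLOPs) per cycle.

cores = 72
clock_ghz = 1.5                  # assumed clock for the 72-core SKU
flops_per_cycle = 2 * 8 * 2      # 2 VPUs x 8 doubles x FMA (mul+add)

# cores x GHz x FLOPs/cycle gives GFLOPS; divide by 1000 for TFLOPS
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0
print(f"peak DP: {peak_tflops:.2f} TF")  # ~3.46 TF, consistent with "over 3 TF"

Under those assumptions the peak works out to roughly 3.46 TF, which matches the “over 3 teraFLOPS” claim in the excerpt.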