NCI Powers Research in the Cloud with Mellanox, RDMA, and OpenStack

Today Mellanox announced that the National Computational Infrastructure (NCI) at the Australian National University has selected the company’s interconnect technologies to support the nation’s researchers.

It’s Time for MPI Over Everything (MPIoE)

Over at Google Plus, Jeff Squyres has published a somewhat belated April 1 post announcing an exciting new technology called the MPI over Everything Project (MPIoE). “One key innovative technology developed as part of the MPIoE effort is the Far Reaching Internet Datagram Efficiency (FRIDGE) framing protocol.”

HPCAC Swiss Conference Cluster Competition

In this video from the HPC Advisory Council Swiss Conference, HPC system administrators compete in a Cluster Competition. “Administering a cluster with InfiniBand can be tricky. In this competition, contestants have just 10 minutes to answer a series of questions with only the command line at their disposal. It’s a battle: who will win the iPad Air?”
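For readers who want to dig into the same information programmatically rather than from the shell, here is a minimal sketch using libibverbs that reports roughly what command-line tools such as ibstat print: device names, port state, and LIDs. The file name and build line are illustrative assumptions, not anything taken from the competition.

/* Minimal sketch: query InfiniBand device and port state with libibverbs,
 * roughly the information that ibstat reports on the command line.
 * Build (assumption): gcc ib_portstate.c -o ib_portstate -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* Ports are numbered from 1 in the verbs API. */
            for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr) == 0)
                    printf("%s port %d: state=%d lid=%d\n",
                           ibv_get_device_name(devs[i]), port,
                           port_attr.state, port_attr.lid);
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}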

Integrating Array Management into Lustre

In this video from LUG 2014, Roger Ronald from System Fabric Works presents: Integrating Array Management into Lustre. “Intel Enterprise Edition for Lustre plug-ins address a significant adoption barrier by improving ease of use. Now, System Fabric Works has implemented a NetApp plug-in for Intel EE Lustre, and additional plug-ins for storage, networks, and servers are being encouraged.”

Video: The Future of Interconnect

“The emerging large-scale data centers for high-performance computing, clouds, and Web 2.0 infrastructures span tens to hundreds of thousands of nodes, all connected via high-speed connectivity solutions. With the growing size of systems and the number of CPU cores per server node, not only do the traditional demands on the interconnect increase dramatically, but new demands are introduced as well. Traditional interconnect solutions do not scale out to deliver efficient and balanced throughput and scalable latency at reasonable power and cost. This session introduces Mellanox interconnect solutions based on InfiniBand and Ethernet networks, presents multiple deployment examples, and outlines a vision for the future of interconnects and a roadmap of the company’s products.”

Applications Performance Optimizations – Best Practices

Pak Lui

“Achieving good scalability for HPC scientific applications typically involves a good understanding of the workload, gained through profile analysis and by comparing behavior on different hardware to pinpoint bottlenecks in different areas of the HPC cluster. In this session, a selection of HPC applications will be used to demonstrate various methods of profiling and analysis for determining the bottlenecks, and the effectiveness of tuning to improve application performance.”
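One widely used way to do this kind of profiling is the MPI standard’s PMPI interface, which lets a wrapper library intercept MPI calls without touching the application source. The sketch below is illustrative only (it is not the tooling used in the session, and it assumes an MPI-3 style const-qualified MPI_Send prototype); it counts MPI_Send calls and the time spent in them, and prints per-rank totals when the application calls MPI_Finalize. Compiling this file into the link line ahead of the MPI library is enough for the interposition to take effect.

/* Sketch of PMPI-based profiling: intercept MPI_Send, time it, and report
 * per-rank totals at shutdown. The real MPI routines are reached through
 * their PMPI_ entry points.
 */
#include <stdio.h>
#include <mpi.h>

static long   send_calls   = 0;
static double send_seconds = 0.0;

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    double t0 = MPI_Wtime();
    int rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
    send_seconds += MPI_Wtime() - t0;
    send_calls++;
    return rc;
}

int MPI_Finalize(void)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d: %ld MPI_Send calls, %.3f s total\n",
           rank, send_calls, send_seconds);
    return PMPI_Finalize();
}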

Hot Interconnects Issues Call for Papers

The Hot Interconnects 2014 conference has announced its Call for Papers.

insideHPC Performance Guru Looks at Nvidia’s New NVLink

Bill D'Amico

“For NVLink to have its highest value, it must function properly with unified memory. That means that the Memory Management Units in the CPUs have to be aware of NVLink DMA operations and update the appropriate VM structures. The operating system needs to know when memory pages have been altered via NVLink DMA – and this can’t be solely the responsibility of the drivers. Tool developers also need to know the details so that MPI or other communications protocols can make use of the new interconnect.”
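For context on what “unified memory” looks like at the programming level, here is a minimal sketch in plain C against the CUDA runtime API: a single cudaMallocManaged allocation is used by both the host and a device-side operation through the same pointer, with the driver migrating pages underneath. This shows only the existing CUDA 6 programming model that NVLink is expected to accelerate, not anything specific to NVLink itself; the build setup is an assumption (compile and link against the CUDA runtime, e.g. with nvcc or gcc plus -lcudart).

/* Sketch: one managed allocation, visible to host and device through the
 * same address, with no explicit cudaMemcpy.
 */
#include <stdio.h>
#include <cuda_runtime_api.h>

int main(void)
{
    const size_t n = 1 << 20;
    unsigned char *buf = NULL;

    if (cudaMallocManaged((void **)&buf, n, cudaMemAttachGlobal) != cudaSuccess) {
        fprintf(stderr, "cudaMallocManaged failed\n");
        return 1;
    }

    buf[0] = 1;                 /* host touches the page directly              */
    cudaMemset(buf, 42, n);     /* device-side operation on the same pointer   */
    cudaDeviceSynchronize();    /* let the device finish before the host reads */

    printf("buf[0] = %d\n", buf[0]);   /* prints 42 without a cudaMemcpy */

    cudaFree(buf);
    return 0;
}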

High Performance Computing at CSCS

Schulthess

“With around 3.2 billion computer operations (3.2 gigaflops) per watt, the combination of GPUs and CPUs makes “Piz Daint” one of the world’s most energy-efficient supercomputers in the petaflop performance class.”

Negative Latency and Einstein Express

“I’ve been thinking for a while that our obsession with reduction of latency in computing and storage could be ameliorated by exploiting a negative latency design. A negative latency design would be one where a hypothetical message would arrive at a receiver before the sender completed sending it.”