Nvidia Expands GTC to Eight Global Events

Nvidia is expanding its popular GPU Technology Conference to eight cities worldwide. “We’re broadening the reach of GTC with a series of conferences in eight cities across four continents, bringing the latest industry trends to major technology centers around the globe. Beijing, Taipei, Amsterdam, Melbourne, Tokyo, Seoul, Washington, and Mumbai will all host GTCs. Each will showcase technology from NVIDIA and our partners across the fields of deep learning, autonomous driving and virtual reality. Several events in the series will also feature keynote presentations by NVIDIA CEO and co-founder Jen-Hsun Huang.”

Agenda Posted for HPC User Forum in Oxford

IDC has published the preliminary agenda for the next international HPC User Forum. The event will take place Sept. 29-30 in Oxford, UK.

Video: Stampede II Supercomputer to Advance Computational Science at TACC

In this video, Dan Stanzione from TACC describes how the Stampede II supercomputer will drive computational science. “Announced in June, a $30 million NSF award to the Texas Advanced Computing Center will be used to acquire and deploy a new large-scale supercomputing system, Stampede II, as a strategic national resource to provide high-performance computing capabilities for thousands of researchers across the U.S. This award builds on technology and expertise from the Stampede system first funded by NSF in 2011 and will deliver a peak performance of up to 18 Petaflops, over twice the overall system performance of the current Stampede system.”

Mangstor MX6300 NVMe SSDs Power One Stop Systems FSAe-4 Flash Storage Array

Today One Stop Systems announced the 4U Flash Storage Array with Mangstor MX6300 NVMe SSDs. OSS’ FSAe-4 can accommodate 32 of the MX6300 SSDs, providing up to 172TB of shared flash storage. The FSAe-4 is a fully redundant, hot-serviceable configuration with four independent 1U servers attached to the PCIe expansion chassis. The expansion system can support Ethernet (RoCE) or InfiniBand fabrics and network speeds up to 100Gb/s.

Intel to Bolster Machine Learning with Nervana Acquisition

Today Intel announced plans to acquire startup Nervana Systems as part of an effort to bolster the company’s artificial intelligence capabilities. “Nervana has a fully-optimized software and hardware stack for deep learning,” said Intel’s Diane Bryant in a blog post. “Their IP and expertise in accelerating deep learning algorithms will expand Intel’s capabilities in the field of AI. We will apply Nervana’s software expertise to further optimize the Intel Math Kernel Library and its integration into industry standard frameworks.”

Components For Deep Learning

The recent introduction of new high-end processors from Intel, combined with accelerator technologies such as NVIDIA Tesla GPUs and Intel Xeon Phi, provides the raw ‘industry standard’ materials to cobble together a test platform suitable for small research projects and development. When combined with open source toolkits, some meaningful results can be achieved, but wide-scale enterprise deployment in production environments raises the infrastructure, software, and support requirements to a completely different level.

Nimbus Data Rolls Out ExaFlash Storage Platform

“The ExaFlash Platform is an historic achievement that will reshape the storage and data center industries,” said Thomas Isakovich, CEO and Founder of Nimbus Data. “It offers unprecedented scale (from terabytes to exabytes), record-smashing efficiency (95% lower power and 50x greater density than existing all-flash arrays), and a breakthrough price point (a fraction of the cost of existing all-flash arrays). ExaFlash brings the all-flash data center dream to reality and will help empower humankind’s innovation for decades to come.”

Fujitsu Develops High-Speed Software for Deep Learning

“Fujitsu Laboratories has newly developed parallelization technology to efficiently share data between machines, and applied it to Caffe, an open source deep learning framework widely used around the world. Fujitsu Laboratories evaluated the technology on AlexNet, where it was confirmed to have achieved learning speeds with 16 and 64 GPUs that are 14.7 and 27 times faster, respectively, than a single GPU. These are the world’s fastest processing speeds, representing an improvement in learning speeds of 46% for 16 GPUs and 71% for 64 GPUs.”
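As a rough illustration of what the quoted speedups imply, a minimal Python sketch below computes parallel scaling efficiency (measured speedup divided by GPU count, where 1.0 is ideal linear scaling). Only the 14.7x and 27x figures come from Fujitsu’s statement; the efficiency calculation is a standard back-of-the-envelope check, not part of their announcement.

```python
# Parallel efficiency = measured speedup / number of GPUs (1.0 = ideal linear scaling).
# The speedup figures are the ones quoted by Fujitsu; the rest is illustrative.
quoted = {16: 14.7, 64: 27.0}  # GPUs -> speedup vs. a single GPU

for gpus, speedup in quoted.items():
    efficiency = speedup / gpus
    print(f"{gpus} GPUs: {speedup:.1f}x speedup, {efficiency:.0%} parallel efficiency")
```

Run as written, this prints roughly 92% efficiency at 16 GPUs and 42% at 64 GPUs, which is the usual way to read multi-GPU scaling claims like these.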

Nimbix Speeds Cloud-based Machine Learning

Today HPC cloud provider Nimbix announced a significant expansion of its presence in the machine learning market, as more customers use its JARVICE platform to address the need for an easier, more cost-efficient way of working with machine learning. “The Nimbix Cloud was a great choice for our research tasks in conversational AI. They are one of the first cloud services to provide NVIDIA Tesla K80 GPUs that were essential for computing neural networks that are implemented as part of Luka’s AI,” said Phil Dudchuck, Co-Founder at Luka.ai.

Netlist HybriDIMM Memory Unifies DRAM-NAND

Today Netlist announced the first public demonstration of its HybriDIMM Storage Class Memory (SCM) product at the upcoming Flash Memory Summit. Using an industry standard DDR4 LRDIMM interface, HybriDIMM is the first SCM product to operate in current Intel x86 servers without BIOS and hardware changes, and the first unified DRAM-NAND solution that scales memory to terabyte storage capacities and accelerates storage to nanosecond memory speeds.