D-Wave Demonstrates Large-Scale Programmable Quantum Simulation

Today D-Wave Systems announced the publication of a significant scientific result in the peer-reviewed journal Science. The article, titled “Phase transitions in a programmable spin glass simulator,” details how a D-Wave 2000Q quantum computer was used to predict phase transitions within a particular quantum mechanical system known as the transverse field Ising model. “This work represents an important milestone for quantum computing, because it is the first time physics of this kind has been simulated in a scalable architecture at such a large scale,” said Vern Brownell, CEO of D-Wave.
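The transverse field Ising model mentioned above is conventionally written in the following standard textbook form (this is the generic Hamiltonian, not an equation taken from the Science paper itself):

$$H = -\sum_{\langle i,j \rangle} J_{ij}\,\sigma^z_i \sigma^z_j \;-\; \Gamma \sum_i \sigma^x_i$$

Here the $J_{ij}$ are the pairwise spin couplings (random in sign and magnitude for a spin glass), the $\sigma^{z}$ and $\sigma^{x}$ are Pauli operators on each spin, and $\Gamma$ is the transverse field that drives quantum fluctuations. Tuning $\Gamma$ relative to the couplings is what moves the system through the phase transitions the D-Wave machine was used to probe.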

ISC 2018: NVIDIA DGX-2 — The World’s Most Powerful AI System on Display

In this video, Satinder Nijjar from NVIDIA describes the new DGX-2 GPU supercomputer. “Experience new levels of AI speed and scale with NVIDIA DGX-2, the first 2 petaFLOPS system that combines 16 fully interconnected GPUs for 10X the deep learning performance. It’s powered by NVIDIA DGX software and a scalable architecture built on NVIDIA NVSwitch, so you can take on the world’s most complex AI challenges.”

Supermicro Workstation Optimized for Intel’s New Xeon E-2100 Processors

Today Supermicro introduced a first-to-market workstation optimized for the new Intel Xeon E-2100 processors. “From professionals to content creators, our customers will benefit from the performance and reliability that these new workstation solutions offer. Furthermore, our new compact IoT system will surely be deployed in a wide range of applications from medical and surveillance appliances to robotic and industrial environments.”

DDN Steps Up to HPC & AI Workloads at ISC 2018

In this video from ISC 2018, James Coomer from DDN describes the company’s latest high performance storage technologies for AI and HPC workloads. “Attendees at ISC 2018 learned how organizations around the world are leveraging DDN’s people, technology, performance and innovation to achieve their greatest visions and make revolutionary insights and discoveries! Designed, optimized and right-sized for Commercial HPC, Higher Education and Exascale Computing, our full range of DDN products and solutions are changing the landscape of HPC and delivering the most value with the greatest operational efficiency.”

NVIDIA Offers Framework to Solve AI System Challenges

At the recent NVIDIA GPU Technology Conference (GTC) 2018, NVIDIA President and CEO Jensen Huang focused his presentation on a new framework designed to contextualize the key challenges of using AI systems and delivering deep learning-based solutions. A new white paper sponsored by NVIDIA outlines these requirements, coined PLASTER.

Video: Kathy Yelick from LBNL Testifies at House Hearing on Big Data Challenges and Advanced Computing

In this video, Kathy Yelick from LBNL describes why the US needs to accelerate its efforts to stay ahead in AI and Big Data Analytics. “Data-driven scientific discovery is poised to deliver breakthroughs across many disciplines, and the U.S. Department of Energy, through its national laboratories, is well positioned to play a leadership role in this revolution. Driven by DOE innovations in instrumentation and computing, however, the scientific data sets being created are becoming increasingly challenging to sift through and manage.”

Video: Lustre / ZFS at Indiana University

Steve Simms from Indiana University gave this talk at the DDN User Group meeting in Frankfurt. “ZFS-backed OSTs can be migrated to new hardware or to existing reconfigured hardware by leveraging ZFS snapshots and ZFS send/receive operations. The ZFS snapshot/send/receive migration method leverages incremental data transfers, allowing an initial data copy to be ‘caught up’ with subsequent incremental changes.”
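The snapshot/send/receive workflow described above can be sketched with standard ZFS commands. The pool and dataset names (`oldpool/ost0`, `newpool/ost0`) and snapshot names are hypothetical placeholders, not taken from the talk:

```shell
# Initial bulk copy: snapshot the source OST dataset and stream the
# full snapshot into a dataset on the new pool
zfs snapshot oldpool/ost0@migrate1
zfs send oldpool/ost0@migrate1 | zfs receive newpool/ost0

# Later, take a second snapshot and send only the blocks that changed
# since the first one (the -i flag requests an incremental stream)
zfs snapshot oldpool/ost0@migrate2
zfs send -i oldpool/ost0@migrate1 oldpool/ost0@migrate2 | zfs receive newpool/ost0
```

The incremental step can be repeated until the remaining delta is small enough that a final send can be done during a brief service window, at which point the OST is cut over to the new hardware.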

Podcast: Evolving MPI for Exascale Applications

In this episode of Let’s Talk Exascale, Pavan Balaji and Ken Raffenetti describe their efforts to help MPI, the de facto programming model for parallel computing, run as efficiently as possible on exascale systems. “We need to look at a lot of key technical challenges, like performance and scalability, when we go up to this scale of machines. Performance is one of the biggest things that people look at. Aspects with respect to heterogeneity become important.”

How to Become a Dynamic Technology Leader

The difference between a digitally dynamic business and one that can’t keep up is the inclusion of a capable technology leader. As the role has evolved, so too have the demands on Chief Information Officers. “Awareness of digital systems is now a key contribution to developing business strategy in several areas, including operations, expansion and marketing. Uncover the skills for success today.”

HPE Data Management Framework – Tiering, Organizing and Protecting your Data

In this video from ISC 2018, Mark Seamans from HPE describes how the HPE Data Management Framework optimizes data accessibility and storage resource utilization by enabling a hierarchical, tiered storage management architecture. “With HPE DMF, data is allocated to tiers based on service level requirements defined by the administrator. For example, frequently accessed data can be placed on a high-performance flash tier, less frequently accessed data on hard drives in a capacity tier, and archive data can be sent off to tape storage.”