D-Wave Completes Prototype of Next-Gen Quantum Processor

Today D-Wave Systems announced that the company has completed fabrication and testing of a working prototype next-generation processor, and the installation of a D-Wave 2000Q system for a customer. The prototype processor uses an advanced new architecture that will be the basis for D-Wave’s next-generation quantum processor. The D-Wave 2000Q system, the fourth generation of commercial products delivered by D-Wave, was installed at the Quantum Artificial Intelligence Lab run by Google, NASA, and Universities Space Research Association.

Highest Performance and Scalability for HPC and AI

Scot Schultz from Mellanox gave this talk at the Stanford HPC Conference. “Today, many agree that the next wave of disruptive technology blurring the lines between the digital, physical and even the biological, will be the fourth industrial revolution of AI. The fusion of state-of-the-art computational capabilities, extensive automation and extreme connectivity is already affecting nearly every aspect of society, driving global economics and extending into every aspect of our daily life.”

Lenovo ThinkSystem Servers Power 1.3 Petaflop Supercomputer at University of Southampton

OCF in the UK has deployed a new supercomputer at the University of Southampton. Named Iridis 5, the 1.3 Petaflop system will support research demanding traditional HPC as well as projects requiring large scale deep storage, big data analytics, web platforms for bioinformatics, and AI services. “We’ve had early access to Iridis 5 and it’s substantially bigger and faster than its previous iteration – it’s well ahead of any other in use at any University across the UK for the types of calculations we’re doing.”

WekaIO: Making Machine Learning Compute-Bound Again

“We are going to present WekaIO, the lowest-latency, highest-throughput file system solution that scales to hundreds of petabytes in a single namespace, supporting the most challenging deep learning projects running today. We will present real-life benchmarks comparing WekaIO performance to a local SSD file system, showing that we are the only coherent shared storage that is even faster than current caching solutions, while allowing customers to scale performance linearly by adding more GPU servers.”

Call for Participation: MSST Mass Storage Conference 2018

The 34th International Conference on Massive Storage Systems and Technologies (MSST 2018) has issued its Call for Participation. The event takes place May 14-16 in Santa Clara, California. “The conference invites you to share your research, ideas and solutions, as we continue to face challenges in the rapidly expanding need for massive, distributed storage solutions. Join us and learn about disruptive storage technologies and the challenges facing data centers, as the demand for massive amounts of data continues to increase. Join the discussion on webscale IT, and the demand on storage systems from IoT, healthcare, scientific research, and the continuing stream of smart applications (apps) for mobile devices.”

Agenda Posted: OpenPOWER 2018 Summit in Las Vegas

The OpenPOWER Summit has posted its speaker agenda. Held in conjunction with IBM Think 2018, the event takes place March 19 in Las Vegas. “The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry.”

Video: Computing Challenges at the Large Hadron Collider

CERN’s Maria Girone gave this talk at the HiPEAC 2018 conference in Manchester. “The Large Hadron Collider (LHC) is one of the largest and most complicated scientific apparatuses ever constructed. In this keynote, I will discuss the challenges of capturing, storing and processing the large volumes of data generated at CERN. I will also discuss how these challenges will evolve towards the High-Luminosity Large Hadron Collider (HL-LHC), the upgrade programme scheduled to begin taking data in 2026 and to run into the 2030s, generating some 30 times more data than the LHC has produced to date.”

Video: Deep Reinforcement Learning and Systems Infrastructure at DeepMind

In this video from HiPEAC 2018 in Manchester, Dan Belov from DeepMind describes the company’s machine learning technology and some of the challenges ahead. “DeepMind is well known for state-of-the-art Deep Reinforcement Learning (DRL) algorithms such as DQN on Atari, A3C on DMLab and AlphaGo Zero. I would like to take you on a tour of the challenges we encounter when training DRL agents on large workloads with hundreds of terabytes of data. I’ll talk about why DRL poses unique challenges when designing distributed systems and hardware, as opposed to simple supervised learning. Finally, I’d like to discuss opportunities for DRL to help systems design and operation.”

Adaptive Computing rolls out Moab HPC Suite 9.1.2

Today Adaptive Computing announced the release of Moab 9.1.2, an update which has undergone thousands of quality tests and includes scores of customer-requested enhancements. “Moab is a world leader in dynamically optimizing large-scale computing environments. It intelligently places and schedules workloads and adapts resources to optimize application performance, increase system utilization, and achieve organizational objectives. Moab’s unique intelligent and predictive capabilities evaluate the impact of future orchestration decisions across diverse workload domains (HPC, HTC, Big Data, Grid Computing, SOA, Data Centers, Cloud Brokerage, Workload Management, Enterprise Automation, Workflow Management, Server Consolidation, and Cloud Bursting); thereby optimizing cost reduction and speeding product delivery.”

Intel Rolls out new 3D NAND SSDs

Today, Intel announced the Intel SSD DC P4510 Series for data center applications. A high-performance storage device, the P4510 Series uses 64-layer TLC Intel 3D NAND to enable end users to do more per server, support broader workloads, and deliver space-efficient capacity. “The P4510 Series enables up to four times more terabytes per server and delivers up to 10 times better random read latency at 99.99 percent quality of service than previous generations. The drive can also deliver up to double the input/output operations per second (IOPS) per terabyte.”