
New Report Looks at European Exascale Projects

“Between 2011 and 2016, eight projects, with a total budget of more than €50 Million, were selected for this first push in the direction of the next-generation supercomputer: CRESTA, DEEP and DEEP-ER, EPiGRAM, EXA2CT, Mont-Blanc (I + II) and Numexas. The challenges they addressed in their projects were manifold: innovative approaches to algorithm and application development, system software, energy efficiency, tools and hardware design took centre stage.”

Components For Deep Learning

The recent introduction of new high-end processors from Intel, combined with accelerator technologies such as NVIDIA Tesla GPUs and Intel Xeon Phi, provides the raw ‘industry standard’ materials to assemble a test platform suitable for small research projects and development. Combined with open source toolkits, such a platform can deliver meaningful results, but wide-scale enterprise deployment in production environments raises the infrastructure, software, and support requirements to a completely different level.

The Core Technologies for Deep Learning

Given the compute- and data-intensive nature of deep learning, which overlaps significantly with the needs of the high performance computing market, the TOP500 list provides a good proxy for current market dynamics and trends. From the central computation perspective, today’s multicore processor architectures dominate the TOP500, with 91% of systems based on Intel processors. Looking forward, however, we can expect further developments that may include core CPU architectures such as OpenPOWER and ARM.

White House Releases Strategic Plan for NSCI Initiative

This week the White House Office of Science and Technology Policy released the Strategic Plan for the National Strategic Computing Initiative (NSCI). “The NSCI strives to establish and support a collaborative ecosystem in strategic computing that will support scientific discovery and economic drivers for the 21st century, and that will not naturally evolve from current commercial activity,” write Altaf Carim, William Polk, and Erin Szulman of the OSTP in a blog post.

The Industrialization of Deep Learning – Intro

Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they move beyond fixed, explicitly programmed behavior into the realm of extensible, trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in the pace at which they process vast amounts of information.
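
To make that description concrete, here is a minimal sketch of the structure described above: a multi-layer neural network plus an iterative training loop over a data set. It is purely illustrative (NumPy only, synthetic data, hypothetical parameter choices), not code from any toolkit or vendor mentioned in this post.

```python
# A minimal sketch of "multi-layer network + training loop + data":
# a two-layer network trained by gradient descent on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set (assumption for illustration): inputs X and binary labels y.
X = rng.normal(size=(256, 4))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# Two layers of weights: the "multi-layer" part of the network.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate (arbitrary choice for this toy problem)
for step in range(2000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1)   # hidden activations
    p = sigmoid(h @ W2)   # predicted probabilities
    # Backward pass: gradients of the binary cross-entropy loss.
    d2 = (p - y) / len(X)
    d1 = (d2 @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates: the "training" part.
    W2 -= lr * (h.T @ d2)
    W1 -= lr * (X.T @ d1)

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
```

Production-scale deep learning swaps this toy loop for GPU-accelerated frameworks and far larger networks and data sets, which is exactly where the infrastructure demands discussed in this series come from.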

3D Printing Survey Provides Insight on First Adopters

While the National Labs are known for their supercomputers, some are also tasked with helping US industry advance digital manufacturing. The 3D printed car and Jeep projects were done to demonstrate Oak Ridge’s Big Area Additive Manufacturing technology, which the lab says could bring a whole new meaning to the phrase “rapid prototyping.” A new report by a 3D printing service called Sculpteo offers some insight into who is using 3D printing. The company surveyed 1,000 respondents from 19 different industries online from late January to late March 2016.

OrionX Reports Position InfiniBand as the Leading HPI Technology and Mellanox the Leading Vendor

“For now, InfiniBand and its vendor community, notably Mellanox, appear to have the upper hand from a performance and market presence perspective, but with Intel entering the HPI market, and new server architectures based on ARM and Power making a new claim on high performance servers, it is clear that a new industry phase is beginning. A healthy war chest combined with a well-executed strategy can certainly influence a successful outcome.”

Mellanox Technology Accelerates the World’s Fastest Supercomputer

Today Mellanox announced that the company’s interconnect technology accelerates the world’s fastest supercomputer, installed at the supercomputing center in Wuxi, China. The new number one system delivers 93 Petaflops (roughly three times the performance of the previous top system), connecting nearly 41,000 nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is key to providing world-leading performance, scalability, and efficiency, connecting the highest number of nodes and CPU cores within a single supercomputer.

How to Control Your Supercomputing Programs

Management pressure for cost containment can be answered by improving software maintenance procedures and automating many of the repetitive activities that have been handled manually. This lowers Total Cost of Ownership (TCO), boosts IT productivity, and increases return on investment (ROI).

New Report Charts Future Directions for NSF Advanced Computing Infrastructure

A newly released report commissioned by the National Science Foundation (NSF) and conducted by the National Academies of Sciences, Engineering, and Medicine examines priorities and associated trade-offs for advanced computing investments and strategy. “We are very pleased with the National Academy’s report and are enthusiastic about its helpful observations and recommendations,” said Irene Qualters, NSF Advanced Cyberinfrastructure Division Director. “The report has had a wide range of thoughtful community input and review from leaders in our field. Its timing and content give substance and urgency to NSF’s role and plans in the National Strategic Computing Initiative.”