The Core Technologies for Deep Learning

Given the compute- and data-intensive nature of deep learning, which overlaps significantly with the needs of the high performance computing market, the TOP500 list provides a good proxy for current market dynamics and trends. From the central computation perspective, today’s multicore processor architectures dominate the TOP500, with 91% of systems based on Intel processors. Looking forward, however, we can expect further developments that may include core CPU architectures such as OpenPOWER and ARM.

White House Releases Strategic Plan for NSCI Initiative

This week the White House Office of Science and Technology Policy released the Strategic Plan for the NSCI Initiative. “The NSCI strives to establish and support a collaborative ecosystem in strategic computing that will support scientific discovery and economic drivers for the 21st century, and that will not naturally evolve from current commercial activity,” write Altaf Carim, William Polk, and Erin Szulman of the OSTP in a blog post.

The Industrialization of Deep Learning – Intro

Deep learning is a method of creating artificial intelligence systems that combine multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they move beyond finite, explicitly programmed constraints into the realm of extensible, trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in processing vast amounts of information.
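To make the idea of a trainable multi-layer network concrete, here is a minimal Python sketch of a two-layer network learning the XOR function with NumPy. The layer sizes, learning rate, and iteration count are illustrative assumptions, not details from the article.

    # Minimal sketch of a trainable multi-layer network (illustrative sizes).
    import numpy as np

    rng = np.random.default_rng(0)

    # XOR is not linearly separable, so it genuinely needs a hidden layer.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialized weights for a 2 -> 4 -> 1 network.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        # Forward pass through both layers.
        h = sigmoid(X @ W1 + b1)    # hidden activations
        out = sigmoid(h @ W2 + b2)  # network output

        # Backward pass: gradients of squared error via the chain rule.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent update; "training" is just repeating this loop.
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(3))  # converges toward [0, 1, 1, 0]

The training loop is the same forward/backward pattern used at production scale; with millions of parameters and examples, it becomes exactly the compute- and data-intensive workload described above.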

3D Printing Survey Provides Insight on First Adopters

While the National Labs are known for their supercomputers, some are also tasked with helping US industry advance digital manufacturing. The 3D printed car and Jeep projects were done to demonstrate Oak Ridge’s Big Area Additive Manufacturing technology, which the lab says could bring a whole new meaning to the phrase “rapid prototyping.” A new report by a 3D printing service called Sculpteo offers some insight into who is using 3D printing. The company surveyed 1,000 respondents from 19 different industries online from late January to late March 2016.

OrionX Reports Position InfiniBand as the Leading HPI Technology and Mellanox the Leading Vendor

“For now, InfiniBand and its vendor community, notably Mellanox, appear to have the upper hand from a performance and market presence perspective, but with Intel entering the HPI market, and new server architectures based on ARM and Power making a new claim on high performance servers, it is clear that a new industry phase is beginning. A healthy war chest combined with a well-executed strategy can certainly influence a successful outcome.”

Mellanox Technology Accelerates the World’s Fastest Supercomputer

Today Mellanox announced that the company’s interconnect technology accelerates the world’s fastest supercomputer at the supercomputing center in Wuxi, China. The new number one supercomputer delivers 93 Petaflops (three times the performance of the previous top system), connecting nearly 41,000 nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is key to providing world-leading performance, scalability, and efficiency, connecting the highest number of nodes and CPU cores within a single supercomputer.

How to Control Your Supercomputing Programs

Pressure from management for cost containment is answered by improving software maintenance procedures and automating many of the repetitive activities that have been handled manually. This lowers Total Cost of Ownership (TCO), boosts IT productivity, and increases return on investment (ROI).

New Report Charts Future Directions for NSF Advanced Computing Infrastructure

A newly released report commissioned by the National Science Foundation (NSF) and conducted by the National Academies of Sciences, Engineering, and Medicine examines priorities and associated trade-offs for advanced computing investments and strategy. “We are very pleased with the National Academies’ report and are enthusiastic about its helpful observations and recommendations,” said Irene Qualters, NSF Advanced Cyberinfrastructure Division Director. “The report has had a wide range of thoughtful community input and review from leaders in our field. Its timing and content give substance and urgency to NSF’s role and plans in the National Strategic Computing Initiative.”

Report: US At Risk of Falling Behind in Supercomputing

Today the Information Technology and Innovation Foundation (ITIF) published a new report that urges U.S. policymakers to take decisive steps to ensure the United States continues to be a world leader in high-performance computing. “While America is still the world leader, other nations are gaining on us, so the U.S. cannot afford to rest on its laurels. It is important for policymakers to build on efforts the Obama administration has undertaken to ensure the U.S. does not get outpaced.”

Understanding Your HPC Application Needs

Many HPC applications began as single-processor (single-core) programs. If these applications take too long on a single core or need more memory than is available, they must be modified so they can run on scalable systems. Fortunately, many of the most important (and most used) HPC applications are already available for scalable systems. Some applications do not require large numbers of cores for effective performance, while others are highly scalable. Here is how to better understand your HPC application needs.
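As a concrete illustration of moving a single-core program to a scalable form, here is a minimal Python sketch that splits an embarrassingly parallel workload across all local cores using the standard multiprocessing module. The kernel, chunk size, and problem size are illustrative assumptions; real HPC codes typically use MPI to scale across nodes.

    # Minimal sketch: the same workload run serially and across all cores.
    from multiprocessing import Pool
    import math
    import time

    def kernel(bounds):
        """Partial sum over [lo, hi); a stand-in for a real compute kernel."""
        lo, hi = bounds
        return sum(math.sin(i) * math.cos(i) for i in range(lo, hi))

    if __name__ == "__main__":
        n, step = 4_000_000, 500_000
        chunks = [(i, min(i + step, n)) for i in range(0, n, step)]

        t0 = time.perf_counter()
        serial = sum(kernel(c) for c in chunks)       # single-core baseline
        t1 = time.perf_counter()

        with Pool() as pool:                          # one worker per core
            parallel = sum(pool.map(kernel, chunks))  # scatter work, gather sums
        t2 = time.perf_counter()

        print(f"serial:   {t1 - t0:.2f}s  result={serial:.6f}")
        print(f"parallel: {t2 - t1:.2f}s  result={parallel:.6f}")

Whether such a change pays off depends on the application: by Amdahl’s law, any serial fraction caps the achievable speedup, which is why some codes perform well on a handful of cores while others scale to thousands.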