The Industrialization of Deep Learning – Intro

Deep learning is a method of creating artificial intelligence systems that combine multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they move beyond fixed, explicitly programmed behavior to systems that can be extended and trained. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in processing vast amounts of information.
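As a concrete, if toy, illustration of "trainable rather than programmed," here is a minimal Python sketch, not drawn from the article: a small two-layer network learns the XOR function, something no single-layer linear model can represent. The network size, learning rate, and iteration count are all illustrative assumptions.

```python
# Minimal sketch: a multi-layer network trained on data instead of
# being explicitly programmed. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, unlearnable by any single-layer (linear) model.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for step in range(10_000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error.
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update: this is the "training" step.
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(pred.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The behavior is learned from the data set rather than written into the program, which is the distinction the paragraph above draws between programmable constraints and trainable systems.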

3D Printing Survey Provides Insight on First Adopters

While the National Labs are known for their supercomputers, some are also tasked with helping US industry advance digital manufacturing. The 3D-printed car and Jeep projects were done to demonstrate Oak Ridge's Big Area Additive Manufacturing technology, which the lab says could bring a whole new meaning to the phrase "rapid prototyping." A new report by Sculpteo, a 3D printing service, offers some insight into who is using 3D printing. The company surveyed 1,000 respondents from 19 different industries online from late January to late March 2016.

OrionX Reports Position InfiniBand as the Leading HPI Technology and Mellanox as the Leading Vendor

“For now, InfiniBand and its vendor community, notably Mellanox, appear to have the upper hand from a performance and market presence perspective, but with Intel entering the HPI market, and new server architectures based on ARM and Power making a new claim on high performance servers, it is clear that a new industry phase is beginning. A healthy war chest combined with a well-executed strategy can certainly influence a successful outcome.”

Mellanox Technology Accelerates the World’s Fastest Supercomputer

Today Mellanox announced that the company’s interconnect technology accelerates the world’s fastest supercomputer at the supercomputing center in Wuxi, China. The new number-one supercomputer delivers 93 petaflops, three times the performance of the previous top system, connecting nearly 41,000 nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is key to its world-leading performance, scalability, and efficiency, connecting the largest number of nodes and CPU cores within a single supercomputer.

How to Control Your Supercomputing Programs

Management pressure for cost containment can be answered by improving software maintenance procedures and automating many of the repetitive activities that have been handled manually. This lowers total cost of ownership (TCO), boosts IT productivity, and increases return on investment (ROI).

New Report Charts Future Directions for NSF Advanced Computing Infrastructure

A newly released report commissioned by the National Science Foundation (NSF) and conducted by the National Academies of Sciences, Engineering, and Medicine examines priorities and associated trade-offs for advanced computing investments and strategy. “We are very pleased with the National Academies’ report and are enthusiastic about its helpful observations and recommendations,” said Irene Qualters, NSF Advanced Cyberinfrastructure Division Director. “The report has had a wide range of thoughtful community input and review from leaders in our field. Its timing and content give substance and urgency to NSF’s role and plans in the National Strategic Computing Initiative.”

Report: US At Risk of Falling Behind in Supercomputing

Today the Information Technology and Innovation Foundation (ITIF) published a new report that urges U.S. policymakers to take decisive steps to ensure the United States continues to be a world leader in high-performance computing. “While America is still the world leader, other nations are gaining on us, so the U.S. cannot afford to rest on its laurels. It is important for policymakers to build on efforts the Obama administration has undertaken to ensure the U.S. does not get outpaced.”

Understanding Your HPC Application Needs

Many HPC applications began as single-processor (single-core) programs. If these applications take too long on a single core or need more memory than is available, they must be modified to run on scalable systems. Fortunately, many of the most important (and most used) HPC applications are already available for scalable systems. Some applications do not require large numbers of cores for effective performance, while others are highly scalable. Here is how to better understand your HPC application needs.
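To make the single-core-to-scalable transition concrete, here is a minimal Python sketch, not drawn from the article. The `simulate` kernel is a hypothetical stand-in for any CPU-bound unit of work in an HPC application whose tasks can run independently.

```python
# Minimal sketch: converting a single-core workload to run across cores.
# The simulate() kernel is a hypothetical stand-in for real HPC work.
import math
from multiprocessing import Pool

def simulate(seed: int) -> float:
    """CPU-bound stand-in kernel: some deterministic work per input."""
    total = 0.0
    for i in range(1, 200_000):
        total += math.sin(seed * i) / i
    return total

if __name__ == "__main__":
    inputs = range(32)

    # Single-core version: one task after another.
    serial = [simulate(s) for s in inputs]

    # Scalable version: the same tasks spread across available cores.
    with Pool() as pool:
        parallel = pool.map(simulate, inputs)

    assert serial == parallel  # same results, less wall-clock time
```

This only helps when tasks are independent, which is the point of the paragraph above: some applications scale this easily, while others need deeper restructuring (for example, with MPI) before they can use many cores effectively.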

HPC Benchmarking Results for Intel Broadwell Processors

Over at the Dell HPC Community Blog, Ashish Kumar Singh, Mayura Deshmukh, and Neha Kashyap discuss the performance characterization of Intel Broadwell processors with the High Performance LINPACK (HPL) and STREAM benchmarks. “The performance of all Broadwell processors used for this study is higher for both the HPL and STREAM benchmarks. There is a ~12% increase in measured memory bandwidth for Broadwell processors compared to Haswell processors, and Broadwell processors measure better power efficiency than Haswell processors. In conclusion, Broadwell processors may fulfill the demand for more compute power in HPC applications.”
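For readers unfamiliar with what STREAM measures, here is a rough Python sketch of the idea, not the Dell team's code: time a simple array kernel and report sustained memory bandwidth. This uses STREAM's copy kernel only; the real benchmark is written in C and also runs the scale, add, and triad kernels.

```python
# Rough sketch of the STREAM idea: time an array kernel whose working
# set exceeds the caches, then report sustained memory bandwidth.
import time
import numpy as np

N = 50_000_000               # 8-byte doubles; large enough to defeat caches
b = np.random.rand(N)
a = np.empty(N)

best = float("inf")
for _ in range(5):           # report the best of several trials, as STREAM does
    t0 = time.perf_counter()
    np.copyto(a, b)          # copy kernel: read b, write a
    best = min(best, time.perf_counter() - t0)

gbytes = 2 * N * 8 / 1e9     # copy moves two arrays of 8-byte elements
print(f"STREAM-style copy bandwidth: {gbytes / best:.1f} GB/s")
```

Results like the ~12% Broadwell-over-Haswell figure quoted above come from running this kind of kernel on both platforms and comparing the sustained GB/s each can deliver.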

Intersect360 Publishes New Report on the Hyperscale Market

Today Intersect360 Research published a new research report on the Hyperscale market. “This report provides definitions, segmentations, and dynamics of the hyperscale market and describes its scope, the end-user applications it touches, and the market drivers and dampers for future growth. It is the foundational report for the Intersect360 Research hyperscale market advisory service.”