Video: Intel Scalable System Framework

Gary Paek from Intel presented this talk at the HPC User Forum in Austin. “Traditional high performance computing is hitting a performance wall. With data volumes exploding and workloads becoming increasingly complex, the need for a breakthrough in HPC performance is clear. Intel Scalable System Framework provides that breakthrough. Designed to work from small clusters to the world’s largest supercomputers, Intel SSF provides scalability and balance for both compute- and data-intensive applications, as well as machine learning and visualization. The design moves everything closer to the processor to improve bandwidth, reduce latency, and allow you to spend more time processing and less time waiting.”

PRACE Awards Time on Marconi Supercomputer

PRACE has announced the winners of its 13th Call for Proposals for PRACE Project Access. Selected proposals will receive allocations on the following PRACE HPC resources: Marconi and MareNostrum.

Bright Computing Announces Reseller Agreement with SGI

“SGI and Bright Computing have been working together for the last year to provide our joint customers with enterprise level clustered infrastructure management software for production supercomputing,” said Gabriel Broner, vice president and general manager of HPC, SGI. “By partnering with Bright Computing, our customers are able to select the cluster management tool that best suits their needs.”

HPC4mfg Seeks New Proposals to Advance Energy Technologies

Today the Energy Department’s Advanced Manufacturing Office announced up to $3 million in available funding for manufacturers to use high-performance computing resources at the Department’s national laboratories to tackle major manufacturing challenges. The High Performance Computing for Manufacturing (HPC4Mfg) program enables innovation in U.S. manufacturing through the adoption of high performance computing (HPC) to advance applied science and technology in manufacturing, with an aim of increasing energy efficiency, advancing clean energy technology, and reducing energy’s impact on the environment.

DDN Appliance Speeds WOS Object Storage

“The growing number of use cases that object storage can satisfy represents a huge opportunity for DDN – especially as cases like collaboration and active archive for large and ‘forever’ data sets are concentrated in DDN customer sites and well-established DDN markets,” said Molly Rector, CMO, executive vice president product management and worldwide marketing at DDN. “WOS’ differentiated benefits give it a strong competitive advantage for current and emerging use cases, and with multiple appliance and software-only options customers have complete architectural flexibility and choice.”

Nvidia Unveils World’s First GPU Design for Inferencing

Nvidia’s GPU platforms have been widely used on the training side of the Deep Learning equation for some time now. Today the company announced a new Pascal-based GPU tailor-made for the inferencing side of Deep Learning workloads. “With the Tesla P100 and now Tesla P4 and P40, NVIDIA offers the only end-to-end deep learning platform for the data center, unlocking the enormous power of AI for a broad range of industries,” said Ian Buck, general manager of accelerated computing at NVIDIA.

DOE Funds Asynchronous Supercomputing Research at Georgia Tech

“More than just building bigger and faster computers, high-performance computing is about how to build the algorithms and applications that run on these computers,” said School of Computational Science and Engineering (CSE) Associate Professor Edmond Chow. “We’ve brought together the top people in the U.S. with expertise in asynchronous techniques as well as experience needed to develop, test, and deploy this research in scientific and engineering applications.”

Measuring HPC: Performance, Cost, & Value

Andrew Jones from NAG presented this talk at the HPC User Forum in Austin. “This talk will discuss why it is important to measure High Performance Computing, and how to do so. The talk covers measuring performance, both technical (e.g., benchmarks) and non-technical (e.g., utilization); measuring the cost of HPC, from the simple beginnings to the complexity of Total Cost of Ownership (TCO) and beyond; and finally, the daunting world of measuring value, including the dreaded Return on Investment (ROI) and other metrics. The talk is based on NAG HPC consulting experiences with a range of industry HPC users and others. This is not a sales talk, nor a highly technical talk. It should be readily understood by anyone involved in using or managing HPC technology.”
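
As a rough illustration of the kinds of metrics the talk covers, the sketch below computes a simple annual TCO, cost per delivered core-hour, and a naive ROI figure for a hypothetical cluster. Every number (hardware cost, power, staffing, utilization, value generated) is an invented assumption for illustration, not a figure from NAG or the talk.

```python
# Hypothetical back-of-the-envelope TCO and ROI for a small HPC cluster.
# Every number below is an assumption for illustration only.

cores = 5_000                    # total cores in the cluster
hardware_cost = 2_000_000        # purchase price, amortized over lifetime ($)
lifetime_years = 5
annual_power_cooling = 150_000   # $/year
annual_staff = 300_000           # $/year (admins, user support)
annual_facility = 50_000         # $/year (space, maintenance contracts)

utilization = 0.80               # fraction of available core-hours actually used

# Annual total cost of ownership (simple amortization, no discounting).
annual_tco = (hardware_cost / lifetime_years
              + annual_power_cooling + annual_staff + annual_facility)

# Delivered (used) core-hours per year.
core_hours_available = cores * 24 * 365
core_hours_used = core_hours_available * utilization
cost_per_core_hour = annual_tco / core_hours_used

# A naive ROI: assumed business value generated per year vs. annual TCO.
annual_value_generated = 1_500_000   # assumed value of the work done ($/year)
roi = (annual_value_generated - annual_tco) / annual_tco

print(f"Annual TCO:          ${annual_tco:,.0f}")
print(f"Cost per core-hour:  ${cost_per_core_hour:.4f}")
print(f"Simple ROI:          {roi:.1%}")
```

Real TCO and ROI analyses add many more terms (depreciation schedules, software licensing, opportunity cost, scientific value), which is why the talk calls measuring value "daunting"; the point of the sketch is only the shape of the arithmetic.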

Examples of Deep Learning Industrialization

Humans are very good at visual pattern recognition, especially when it comes to facial features and graphic symbols: identifying a specific person, or associating a specific symbol with its meaning. It is in these kinds of scenarios that deep learning systems excel. Identifying each new person or symbol is more efficiently achieved through training than by reprogramming a conventional computer or explicitly updating database entries, as the sketch below illustrates.
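
A minimal sketch of that "train, don't reprogram" point: the PyTorch snippet below trains a tiny image classifier, and recognizing an additional symbol means adding labeled examples for a new class and retraining rather than rewriting logic. The model architecture, the randomly generated stand-in images, and the class count are hypothetical placeholders, not part of any system described in this article.

```python
import torch
import torch.nn as nn

# Tiny convolutional classifier over 28x28 grayscale "symbol" images.
NUM_CLASSES = 3  # e.g. three known symbols (placeholder classes)

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, NUM_CLASSES),
)

# Stand-in training data: random tensors with labels. In practice these
# would be real labeled examples of each person or symbol to recognize.
images = torch.randn(64, 1, 28, 28)
labels = torch.randint(0, NUM_CLASSES, (64,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# Inference: classify a new example with a forward pass, no reprogramming.
prediction = model(torch.randn(1, 1, 28, 28)).argmax(dim=1)
print("predicted class:", prediction.item())
```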

Funding Boosts Exascale Research at LANL

“Our collaborative role in these exascale applications projects stems from our laboratory’s long-term strategy in co-design and our appreciation of the vital role of high-performance computing to address national security challenges,” said John Sarrao, associate director for Theory, Simulation and Computation at Los Alamos National Laboratory. “The opportunity to take on these scientific explorations will be especially rewarding because of the strategic partnerships with our sister laboratories.”