Video: Azure High Performance Computing

“Run your Windows and Linux HPC applications using high performance A8 and A9 compute instances on Azure, and take advantage of a backend network with MPI latency under 3 microseconds and non-blocking 32 Gbps throughput. This backend network includes remote direct memory access (RDMA) technology on Windows and Linux that enables parallel applications to scale to thousands of cores. Azure provides you with high memory and HPC-class CPUs to help you get results fast. Scale up and down based upon what you need and pay only for what you use to reduce costs.”
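To get a concrete sense of the kind of parallel application such a backend network serves, here is a minimal MPI sketch using mpi4py (an illustrative open-source choice; the Azure description does not name a specific MPI stack):

```python
# Minimal MPI "ring" exchange using mpi4py (illustrative only).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID
size = comm.Get_size()   # total number of processes

# Each rank sends its ID to the next rank and receives from the
# previous one -- the kind of point-to-point traffic whose latency
# an RDMA-enabled fabric is designed to minimize.
dest = (rank + 1) % size
source = (rank - 1) % size
received = comm.sendrecv(rank, dest=dest, source=source)
print(f"rank {rank}/{size} received {received} from rank {source}")
```

Run with, for example, `mpiexec -n 4 python ring.py`.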

Parallel Multiway Methods for Compression of Massive Data and Other Applications

Tamara Kolda from Sandia gave this Invited Talk at SC16. “Scientists are drowning in data. The scientific data produced by high-fidelity simulations and high-precision experiments are far too massive to store. For instance, a modest simulation on a 3D grid with 500 grid points per dimension, tracking 100 variables for 100 time steps, yields 5 TB of data. Working with this massive data is unwieldy and it may not be retained for future analysis or comparison. Data compression is a necessity, but there are surprisingly few options available for scientific data.”
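For a sense of where that 5 TB figure comes from, here is a quick back-of-the-envelope check, assuming 4-byte single-precision values (the abstract does not state the precision):

```python
# Back-of-the-envelope check of the data volume quoted above,
# assuming single-precision (4-byte) floating-point values.
grid_points = 500 ** 3     # 500 points per dimension on a 3D grid
variables = 100            # variables tracked per grid point
time_steps = 100           # number of stored time steps
bytes_per_value = 4        # single-precision float (assumption)

total_bytes = grid_points * variables * time_steps * bytes_per_value
print(f"{total_bytes / 1e12:.1f} TB")  # -> 5.0 TB
```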

Asia’s First Supercomputing Magazine Launches from Singapore

Singapore-based publisher Asian Scientist has launched Supercomputing Asia, a new print title dedicated to tracking the latest developments in high performance computing across the region and making supercomputing accessible to the layman. “Aside from well-established supercomputing powerhouses like Japan and emerging new players like China, Asian countries like Singapore and South Korea have recognized the transformational power of supercomputers and invested accordingly. We hope that this new publication will provide a unique insight into the exciting developments in this region,” said Dr. Rebecca Tan, Managing Editor of Supercomputing Asia.

Penguin Computing Releases Scyld ClusterWare 7

“The release of Scyld ClusterWare 7 continues the growth of Penguin’s HPC provisioning software and enables support of large scale clusters ranging to thousands of nodes,” said Victor Gregorio, Senior Vice President of Cloud Services at Penguin Computing. “We are pleased to provide this upgraded version of Scyld ClusterWare to the community for Red Hat Enterprise Linux 7, CentOS 7 and Scientific Linux 7.”

Dell & Intel Collaborate on CryoEM on Intel Xeon Phi

In this video from SC16, Janet Morss from Dell EMC and Hugo Saleh from Intel discuss how the two companies collaborated on accelerating Cryo-EM. “Cryo-EM allows molecular samples to be studied in near-native states and down to nearly atomic resolutions. Studying the 3D structure of these biological specimens can lead to new insights into their functioning and interactions, especially with proteins and nucleic acids, and allows structural biologists to examine how alterations in their structures affect their functions. This information can be used in systems biology research to understand the cell signaling network, which is part of a complex communication system.”

In-Memory Computing for HPC

To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: scale-up or scale-out. For many workloads, a scale-up in-memory system offers a better total cost of ownership because the entire working set stays in a single large memory space rather than being partitioned across a cluster. “If the application program has concurrent sections, then it can be executed in a ‘parallel’ fashion, much like using multiple bricklayers to build a brick wall. It is important to remember that the amount and efficiency of the concurrent portions of a program determine how much faster it can run on multiple processors. Not all applications are good candidates for parallel execution.”
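The excerpt does not name it, but this relationship is formalized by Amdahl's law: if a fraction p of a program can run in parallel, the speedup on N processors is bounded by 1 / ((1 − p) + p/N). A minimal illustration:

```python
# Amdahl's law: upper bound on speedup when a fraction p of the
# work is parallelizable and the remaining (1 - p) is serial.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n processors for parallel fraction p (0 <= p <= 1)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 1024 processors yield
# less than a 20x speedup -- the serial 5% dominates.
for p in (0.50, 0.90, 0.95):
    print(f"p={p:.2f}: {amdahl_speedup(p, 1024):.1f}x on 1024 cores")
```

This is why, as the excerpt notes, not all applications are good candidates for parallel execution: the serial fraction caps the achievable speedup no matter how many processors are added.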

Co-Design 3.0 – Configurable Extreme Computing, Leveraging Moore’s Law for Real Applications

Sadasivan Shankar gave this Invited Talk at SC16. “This talk will explore six different trends, all of which are associated with some form of scaling, and how they could enable an exciting world in which we co-design a platform dependent on the applications. I will make the case that this form of ‘personalization of computation’ is achievable and is necessary for applications of today and tomorrow.”

Call for Papers: International Workshop on High-Performance Big Data Computing (HPBDC)

The 3rd annual International Workshop on High-Performance Big Data Computing (HPBDC) has issued its Call for Papers. Featuring a keynote by Prof. Satoshi Matsuoka from Tokyo Institute of Technology, the event takes place on May 29, 2017, in Orlando, FL.

Video: How AI can bring on a second Industrial Revolution

“The AI is going to flow across the grid — the cloud — in the same way electricity did. So everything that we had electrified, we’re now going to cognify. And I owe it to Jeff, then, that the formula for the next 10,000 start-ups is very, very simple, which is to take x and add AI. That is the formula, that’s what we’re going to be doing. And that is the way in which we’re going to make this second Industrial Revolution. And by the way — right now, this minute, you can log on to Google and you can purchase AI for six cents, 100 hits. That’s available right now.”

Podcast: Do It Yourself Deep Learning

In this AI Podcast, Bob Bond from Nvidia and Mike Senese from Make magazine discuss the Do It Yourself movement for Artificial Intelligence. “Deep learning isn’t just for research scientists anymore. Hobbyists can use consumer-grade GPUs and open-source DNN software to tackle common household tasks, from ant control to chasing away stray cats.”
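As a rough sketch of the hobbyist workflow described above, here is a pretrained network classifying a saved camera frame on a consumer GPU. PyTorch and torchvision are one open-source option; the podcast does not prescribe a framework, and the filename `frame.jpg` is a placeholder:

```python
# A minimal DIY-style sketch: classify a camera frame with an
# off-the-shelf pretrained network (PyTorch/torchvision assumed).
import torch
from torchvision import models
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).to(device).eval()

# The weights ship with their matching preprocessing pipeline.
preprocess = weights.transforms()
image = Image.open("frame.jpg").convert("RGB")  # placeholder input
batch = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs.argmax(dim=1).item()
print(f"predicted class: {weights.meta['categories'][top]}")
```

From here, a hobbyist project would typically loop over live camera frames and trigger an action (a sprinkler, a speaker) when the target class is detected.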