Dell EMC Powers HPC at University of Connecticut

The University of Connecticut has partnered with Dell EMC and Intel to create a high performance computing cluster that students and faculty can use in their research. With this HPC cluster, UConn researchers can solve problems that are computationally intensive or involve massive amounts of data in days or hours instead of weeks. The cluster, operated on the Storrs campus, features 6,000 CPU cores, a high-speed fabric interconnect, and a parallel file system. Since 2011, it has been used by more than 500 researchers from each of the university's schools and colleges, for over 40 million hours of scientific computation.

Programming for High Performance Processors

“Managing the work on each node can be referred to as domain parallelism. During the run of the application, the work assigned to each node can generally be isolated from the other nodes: each node can work on its own and needs little communication with the others to complete its share. The primary developer tool at this level is MPI, though applications can also take advantage of frameworks such as Hadoop and Spark (for big data analytics). Managing the work for each core or thread requires control one level down. This type of work typically involves a large number of independent tasks that must then share data between them.”
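To make the domain-parallelism idea concrete, here is a minimal MPI sketch (our illustration, not code from the article) in which each rank owns a slice of a 1-D domain, does the bulk of its work locally, and exchanges only one halo cell with each neighbor. The array size, stencil, and file name are illustrative assumptions.

```c
/* Minimal MPI domain-parallelism sketch: each rank owns a slice of a
 * 1-D domain, computes locally, and exchanges only halo cells with
 * its neighbors.
 * Compile and run (hypothetical file name):
 *   mpicc halo.c -o halo && mpirun -np 4 ./halo */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 1000   /* cells owned by each rank (illustrative) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* u[0] and u[LOCAL_N+1] are halo cells for neighbor data. */
    double u[LOCAL_N + 2], unew[LOCAL_N + 2];
    for (int i = 0; i <= LOCAL_N + 1; i++)
        u[i] = (double)rank;   /* each node initializes its own slice */

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* The only inter-node communication: swap one boundary cell
     * with each neighbor. */
    MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                 &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[LOCAL_N], 1, MPI_DOUBLE, right, 1,
                 &u[0], 1, MPI_DOUBLE, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* The bulk of the work is purely local: a 3-point stencil update. */
    for (int i = 1; i <= LOCAL_N; i++)
        unew[i] = (u[i - 1] + u[i] + u[i + 1]) / 3.0;

    printf("rank %d of %d finished its local update\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

Note that the communication volume (two cells per rank) is independent of the local problem size, which is why each node "needs little communication with the others" as the work per node grows.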

New ReRAM Memory Can Process Data Where it Lives

An international team of scientists has found a way to make memory chips perform computing tasks, a job traditionally handled by processors such as those made by Intel and Qualcomm. This means data could be processed in the same place where it is stored, leading to much faster and thinner mobile devices and computers. The ReRAM chip is one of the fastest memory modules yet developed and is expected to become commercially available soon.

Video: Azure High Performance Computing

“Run your Windows and Linux HPC applications using high performance A8 and A9 compute instances on Azure, and take advantage of a backend network with MPI latency under 3 microseconds and non-blocking 32 Gbps throughput. This backend network includes remote direct memory access (RDMA) technology on Windows and Linux that enables parallel applications to scale to thousands of cores. Azure provides you with high memory and HPC-class CPUs to help you get results fast. Scale up and down based upon what you need and pay only for what you use to reduce costs.”
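To see what the quoted sub-3-microsecond MPI latency means in practice, here is a minimal ping-pong sketch (our assumption of a standard latency test, not Microsoft code) that bounces a one-byte message between two ranks and reports the average one-way time; the iteration count and file name are illustrative.

```c
/* Minimal MPI ping-pong latency sketch: ranks 0 and 1 bounce a
 * one-byte message and report the average one-way latency.
 * Compile and run (hypothetical file name):
 *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;   /* illustrative iteration count */
    char byte = 0;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("average one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```

On an RDMA-capable fabric like the one described above, a test along these lines is the usual way to verify that small-message latency is actually in the single-digit-microsecond range.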

Asia’s First Supercomputing Magazine Launches from Singapore

Singapore-based publisher Asian Scientist has launched Supercomputing Asia, a new print title dedicated to tracking the latest developments in high performance computing across the region and making supercomputing accessible to the layman. “Aside from well-established supercomputing powerhouses like Japan and emerging new players like China, Asian countries like Singapore and South Korea have recognized the transformational power of supercomputers and invested accordingly. We hope that this new publication will provide a unique insight into the exciting developments in this region,” said Dr. Rebecca Tan, Managing Editor of Supercomputing Asia.

Penguin Computing Releases Scyld ClusterWare 7

“The release of Scyld ClusterWare 7 continues the growth of Penguin’s HPC provisioning software and enables support of large scale clusters ranging to thousands of nodes,” said Victor Gregorio, Senior Vice President of Cloud Services at Penguin Computing. “We are pleased to provide this upgraded version of Scyld ClusterWare to the community for Red Hat Enterprise Linux 7, CentOS 7 and Scientific Linux 7.”

Dell & Intel Collaborate on CryoEM on Intel Xeon Phi

In this video from SC16, Janet Morss from Dell EMC and Hugo Saleh from Intel discuss how the two companies collaborated on accelerating CryoEM. “Cryo-EM allows molecular samples to be studied in near-native states and down to nearly atomic resolutions. Studying the 3D structure of these biological specimens can lead to new insights into their functioning and interactions, especially with proteins and nucleic acids, and allows structural biologists to examine how alterations in their structures affect their functions. This information can be used in systems biology research to understand the cell signaling network, which is part of a complex communication system.”

Co-Design 3.0 – Configurable Extreme Computing, Leveraging Moore’s Law for Real Applications

Sadasivan Shankar gave this Invited Talk at SC16. “This talk will explore six different trends, all of which are associated with some form of scaling, and how they could enable an exciting world in which we co-design a platform dependent on the applications. I will make the case that this form of ‘personalization of computation’ is achievable and is necessary for applications of today and tomorrow.”

Call for Papers: International Workshop on High-Performance Big Data Computing (HPBDC)

The 3rd annual International Workshop on High-Performance Big Data Computing (HPBDC) has issued its Call for Papers. Featuring a keynote by Prof. Satoshi Matsuoka from Tokyo Institute of Technology, the event takes place May 29, 2017 in Orlando, FL.

Video: How AI can bring on a second Industrial Revolution

“The AI is going to flow across the grid — the cloud — in the same way electricity did. So everything that we had electrified, we’re now going to cognify. And I owe it to Jeff, then, that the formula for the next 10,000 start-ups is very, very simple, which is to take x and add AI. That is the formula, that’s what we’re going to be doing. And that is the way in which we’re going to make this second Industrial Revolution. And by the way — right now, this minute, you can log on to Google and you can purchase AI for six cents, 100 hits. That’s available right now.”