
BSC Collaborates with OpenFog Consortium

“To reinforce and continue our pioneering work on fog computing that started in 2008, we pursue synergies between leading technology companies and the academic and scientific communities,” said Mario Nemirovsky, Network Processors Manager at BSC. “By collaborating with the OpenFog Consortium, we will be able to contribute to the consolidation of an IoT platform for interoperability among consumers, business, industry and research. We are looking forward to a constructive and fruitful collaboration with all OpenFog members.”

Microsoft Cognitive Toolkit Updates for Deep Learning Advances

Today Microsoft released an updated version of Microsoft Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition and search relevance on CPUs and Nvidia GPUs. “We’ve taken it from a research tool to something that works in a production setting,” said Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.

Speakers Announced for Dell HPC Community Meeting at SC16

The Dell HPC Community at SC16 has posted its meeting agenda. “Blair Bethwaite from Monash University will present OpenStack for HPC at Monash. After that, Josh Simons from VMware will describe the latest technologies in HPC virtualization.” The event takes place Saturday, Nov. 12 at the Radisson Hotel in Salt Lake City.

HPC Bear Cloud to Power Research at University of Birmingham

Designed specifically with researchers in mind, the Birmingham Environment for Academic Research (BEAR) Cloud will augment an already rich set of IT services at the University of Birmingham and will be used by academics across all disciplines, from Medicine to Archaeology, and Physics to Theology. “We are very proud of the new system, but building a research cloud isn’t easy,” said Simon Thompson, Research Computing Infrastructure Architect in IT Services at the University of Birmingham. “We challenged a range of carefully-selected partners to provide the underlying technology.”

Slidecast: Running HPC Simulation Workflows in Microsoft Azure

In this video from the Microsoft Ignite Conference, Tejas Karmarkar describes how to run your HPC Simulations on Microsoft Azure – with UberCloud container technology. “High performance computing applications are some of the most challenging to run in the cloud due to requirements that can include fast processors, low-latency networking, parallel file systems, GPUs, and Linux. We show you how to run these engineering, research and scientific workloads in Microsoft Azure with performance equivalent to on-premises. We use customer case studies to illustrate the basic architecture and alternatives to help you get started with HPC in Azure.”

Streamlining HPC Workloads with Containers

“While we often talk about the density advantages of containers, it’s the opposite approach that we use in the High Performance Computing world! Here, we use exactly 1 system container per node, giving it unlimited access to all of the host’s CPU, Memory, Disk, IO, and Network. And yet we can still leverage the management characteristics of containers — security, snapshots, live migration, and instant deployment to recycle each node in between jobs. In this talk, we’ll examine a reference architecture and some best practices around containers in HPC environments.”
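The one-system-container-per-node pattern described in the talk could be expressed, for example, as an LXD profile that sets no resource limits, so the container sees the full host. This is a hypothetical sketch, not from the talk itself; the profile name, storage pool, and bridge names are assumptions.

```yaml
# Hypothetical LXD profile for an HPC node: one unrestricted system container.
# No limits.cpu / limits.memory keys means the container gets full host access.
name: hpc-node
config: {}
devices:
  root:
    path: /
    pool: default     # assumed storage pool name
    type: disk
  eth0:
    type: nic
    nictype: bridged
    parent: br0       # assumed host bridge
```

Recycling a node between jobs then reduces to the snapshot features the talk mentions, e.g. `lxc snapshot node01 clean` once after provisioning and `lxc restore node01 clean` after each job completes.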

Supercomputing Cancer Diagnostics with CyVerse

Adam Buntzman and his colleagues at the University of Arizona recently developed a tool that uses CyVerse supercomputing resources to create the first nearly comprehensive map of the human immunome, all the possible immune receptors our bodies can make. “When people go to a clinic, it’s usually because they’re already sick,” Buntzman said. “If doctors could detect cancerous cells before they grow drastically out of proportion to healthy cells, patients would have much higher odds of successful cancer treatment and survival.”

AMD GPUs to Speed Alibaba Cloud

Today AMD announced that the Alibaba Cloud will use AMD Radeon Pro GPU technology to help expand its cloud computing offerings and accelerate adoption of its cloud-based services. “The partnership between AMD and Alibaba Cloud will bring both of our customers more diversified, cloud-based graphic processing solutions. It is our vision to work together with leading technology firms like AMD to empower businesses in every industry with cutting-edge technologies and computing capabilities,” said Simon Hu, president of Alibaba Cloud.

Pure Storage Introduces Petabyte Flash Arrays

Today Pure Storage announced the availability of petabyte-scale storage for mission-critical cloud IT, anchored by the release of the next generation of FlashArray//m, the company’s flagship all-flash storage array, which now delivers best-in-class performance with the simplicity and agility of the public cloud.

XSEDE Awards Supercomputer Time to 155 Research Teams

Last week, XSEDE announced it has awarded more than $16M worth of compute resources to 155 research projects. This is the first cohort of allocation awardees since the National Science Foundation announced a five-year renewal of XSEDE to expand access to the nation’s cyberinfrastructure ecosystem.