Rescale Announces ScaleX Labs with Intel Xeon Phi and Omni-Path

Today the Rescale HPC Cloud introduced the ScaleX Labs with Intel Xeon Phi processors and Intel Omni-Path Fabric managed by R Systems. The collaboration brings lightning-fast, next-generation computation to Rescale’s cloud platform for big compute, ScaleX Pro. “We are proud to provide a remote access platform for Intel’s latest processors and interconnect, and appreciate the committed cooperation of our partners at R Systems,” said Rescale CEO Joris Poort. “Our customers care about both performance and convenience, and the ScaleX Labs with Intel Xeon Phi processors brings them both in a single cloud HPC solution at a price point that works for everyone.”

Rambus Collaborates with Microsoft on Cryogenic Memory

“With the increasing challenges in conventional approaches to improving memory capacity and power efficiency, our early research indicates that a significant change in the operating temperature of DRAM using cryogenic techniques may become essential in future memory systems,” said Dr. Gary Bronner, vice president of Rambus Labs. “Our strategic partnership with Microsoft has enabled us to identify new architectural models as we strive to develop systems utilizing cryogenic memory. The expansion of this collaboration will lead to new applications in high-performance supercomputers and quantum computers.”

Spack: A Package Manager for Supercomputers, Linux, and macOS

“HPC software is becoming increasingly complex. The space of possible build configurations is combinatorial, and existing package management tools do not handle these complexities well. Because of this, most HPC software is built by hand. This talk introduces ‘Spack’, an open-source tool for scientific package management which helps developers and cluster administrators avoid having to waste countless hours porting and rebuilding software.” A tutorial video on using Spack is also included.
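For readers new to the tool, the sketch below shows roughly what a Spack package recipe (package.py) looks like; the package name, URL, and checksum are hypothetical placeholders, and the directives shown (version, variant, depends_on) follow Spack's Python-based packaging DSL.

```python
# Hypothetical Spack package recipe (package.py) for a made-up library
# "examplelib" -- a sketch of Spack's packaging DSL, not a real package.
from spack.package import *


class Examplelib(CMakePackage):
    """Illustrative CMake-based scientific library (placeholder)."""

    homepage = "https://example.com/examplelib"          # placeholder
    url = "https://example.com/examplelib-1.0.0.tar.gz"  # placeholder

    version("1.0.0", sha256="0" * 64)  # placeholder checksum

    # Build-time options become "variants" that users toggle per install,
    # which is how Spack tames the combinatorial configuration space.
    variant("mpi", default=True, description="Enable MPI support")

    # Dependencies can be conditional on a variant.
    depends_on("mpi", when="+mpi")

    def cmake_args(self):
        # Map the Spack variant onto the project's CMake option.
        return [self.define_from_variant("ENABLE_MPI", "mpi")]
```

A user would then build a specific configuration with a command such as `spack install examplelib +mpi`, letting Spack's concretizer select compatible versions of every dependency.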

Anaconda Open Data Science Platform comes to IBM Cognitive Systems

Today IBM announced that it will offer the Anaconda Open Data Science platform on IBM Cognitive Systems. Anaconda will also integrate with the PowerAI software distribution for machine learning and deep learning that makes it simple and fast to take advantage of Power performance and GPU optimization for data intensive cognitive workloads. “Anaconda is an important capability for developers building cognitive solutions, and now it’s available on IBM’s high performance deep learning platform,” said Bob Picciano, senior vice president of Cognitive Systems. “Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale.”

OpenPOWER Developer Congress Event to Focus on Machine Learning

Today IBM announced that the first annual OpenPOWER Foundation Developer Congress will take place May 22-25 in San Francisco. With a focus on Machine Learning, the conference will continue to foster collaboration within the foundation to satisfy the performance demands of today’s computing market.

Baidu Deep Learning Service adds Latest NVIDIA Pascal GPUs

“Baidu and NVIDIA are long-time partners in advancing the state of the art in AI,” said Ian Buck, general manager of Accelerated Computing at NVIDIA. “Baidu understands that enterprises need GPU computing to process the massive volumes of data needed for deep learning. Through Baidu Cloud, companies can quickly convert data into insights that lead to breakthrough products and services.”

Podcast: How AI Can Improve the Diagnosis and Treatment of Diseases

In this AI Podcast, Mark Michalski from the Massachusetts General Hospital Center for Clinical Data Science discusses how AI is being used to advance medicine. “Medicine — particularly radiology and pathology — has become more data-driven. The Massachusetts General Hospital Center for Clinical Data Science — led by Mark Michalski — promises to accelerate that, using AI technologies to spot patterns that can improve the detection, diagnosis and treatment of diseases.”

Deep Learning on the SaturnV Cluster

“The basic idea of deep learning is to automatically learn to represent data in multiple layers of increasing abstraction, thus helping to discover intricate structure in large datasets. NVIDIA has invested in SaturnV, a large GPU-accelerated cluster (#28 on the November 2016 Top500 list), to support internal machine learning projects. After an introduction to deep learning on GPUs, we will address a selection of open questions programmers and users may face when using deep learning for their work on these clusters.”
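To make the “layers of increasing abstraction” idea concrete, here is a small, self-contained NumPy sketch that stacks two fully connected layers and trains them on the toy XOR problem; it is purely illustrative and not drawn from NVIDIA's SaturnV software stack.

```python
# Minimal two-layer neural network in NumPy, trained on XOR.
# Didactic sketch only; unrelated to any specific SaturnV component.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR is not linearly separable, so a hidden layer is needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Layer 1 (2 -> 8) learns intermediate features; layer 2 (8 -> 1) combines them.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: each layer re-represents the data at a higher level.
    h = sigmoid(X @ W1 + b1)        # hidden representation
    p = sigmoid(h @ W2 + b2)        # prediction

    # Backward pass (binary cross-entropy with a sigmoid output).
    grad_p = p - y
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0, keepdims=True)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    # Plain gradient-descent update.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print(np.round(p, 3))  # should approach [0, 1, 1, 0]
```

Production deep learning frameworks automate exactly these forward and backward passes and scale them across the GPUs of a cluster like SaturnV.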

Agenda Posted for June Teratec Forum in France

The TERATEC Forum has posted the agenda for its upcoming June meeting. With technical workshops, plenary sessions and a vendor exhibit, the event takes place June 27-28 at the Ecole Polytechnique campus in Palaiseau, France. “Our objective is to bring together all decision makers and experts in the field of digital simulation and Big Data, from the industrial and technological world and the world of research.”

Engility Pursues NASA Advanced Computing Services Contract

Today Engility announced that the company will bring its world-class high performance computing capabilities to bear as it competes to win NASA’s Advanced Computing Services contract. “HPC is a strategic, enabling capability for NASA,” said Lynn Dugle, CEO of Engility. “Engility’s cadre of renowned computational scientists and HPC experts, coupled with our proven high performance data analytics solutions, will help increase NASA’s science and engineering capabilities.”