Call for Sessions: OpenFabrics Alliance Workshop 2017 in Austin

The OpenFabrics Alliance Workshop 2017 has issued its Call for Sessions. The event will take place March 27-31, 2017 in Austin, Texas. “An ongoing collaboration between OpenFabrics Software (OFS) producers and users is necessary to address difficult network challenges. The 13th Annual OpenFabrics Alliance (OFA) Workshop is a key industry event encouraging a dialogue that is geared toward strengthening high-performance networks end-to-end and represents a joint effort among open source networking community members.”

How Researchers Will Benefit from Canada’s National Data Cyberinfrastructure

“Individual institutions or organizations will have opportunities to deploy storage locally and can federate their local repository into the national system,” says Dr. Greg Newby, Compute Canada’s Chief Technology Officer. “This provides enhanced privacy and sharing capabilities on a robust, country-wide solution with improved data security and back-up. This is a great solution to address the data explosion we are currently experiencing in Canada and globally.”

Cobham Opera Simulation Software Moves Tokamak Closer to Fusion Energy

“The Cobham Technical Services Opera software is helping Tokamak Energy to reduce the very high costs associated with prototyping a new fusion power plant concept,” said Paul Noonan, R&D Projects Director for ST40. “After we have built our new prototype, we hope to have assembled some profoundly exciting experimental and theoretical evidence of the viability of producing fusion power from compact, high field, spherical tokamaks.”

HPE Apollo 6500 for Deep Learning

“With up to eight high performance NVIDIA GPUs designed for maximum transfer bandwidth, the HPE Apollo 6500 is purpose-built for HPC and deep learning applications. Its high ratio of GPUs to CPUs, dense 4U form factor and efficient design enable organizations to run deep learning recommendation algorithms faster and more efficiently, significantly reducing model training time and accelerating the delivery of real-time results, all while controlling costs.”
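The payoff of a high GPU-to-CPU ratio comes from spreading a single training job across every accelerator in the chassis. Below is a minimal sketch of that idea, assuming PyTorch and its DataParallel wrapper purely for illustration; the Apollo 6500 itself is framework-agnostic, and the model and data here are toy placeholders.

```python
# Minimal multi-GPU data-parallel training sketch (assumes PyTorch is
# installed and one or more CUDA GPUs are visible; model/data are toys).
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))
if torch.cuda.device_count() > 1:
    # Replicates the model across all visible GPUs and splits each batch.
    model = nn.DataParallel(model)
model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(256, 1024, device=device)        # toy input batch
    y = torch.randint(0, 10, (256,), device=device)  # toy labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```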

Increasing the Efficiency of Storage Systems

Have you ever wondered why your HPC installation is not performing as you had envisioned? You ran small simulations. You spec’d out the CPU speed, the network speed and the disk drive speed. You optimized your application and are taking advantage of new architectures. But now, as you scale the installation, you realize that the storage system is not performing as expected. Why? You bought the latest disk drives and expected even better than linear performance from the last time you purchased a storage system. Read how you can increase the efficiency of your storage system.
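As a rough illustration of why delivered storage bandwidth rarely scales linearly, the sketch below models aggregate throughput with a simple per-client contention penalty. The drive count, per-drive bandwidth, and contention factor are illustrative assumptions, not measurements from any particular system.

```python
# Back-of-envelope model of aggregate storage throughput (illustrative only).
# Assumes 200 MB/s per drive and an efficiency factor that erodes as more
# clients contend for the same file system and metadata servers.

def aggregate_throughput(num_drives, per_drive_mbps=200.0, num_clients=1,
                         contention_per_client=0.005):
    """Estimate delivered MB/s under a simple contention assumption."""
    raw = num_drives * per_drive_mbps
    efficiency = max(0.2, 1.0 - contention_per_client * num_clients)
    return raw * efficiency

# Raw capacity stays constant, but delivered bandwidth falls as the job scales.
for clients in (1, 64, 256, 1024):
    mbps = aggregate_throughput(480, num_clients=clients)
    print(f"{clients:5d} clients -> {mbps:,.0f} MB/s delivered")
```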

Supercomputing Drug Discovery to Combat Heart Disease

Using a unique computational approach to rapidly sample proteins in their natural state of gyrating, bobbing, and weaving, a research team from UC San Diego and Monash University in Australia has identified promising drug leads that may selectively combat heart disease, from arrhythmias to cardiac failure.

Radio Free HPC Reviews the SC16 Student Cluster Competition Configurations & Results

In this podcast, the Radio Free HPC team reviews the results from the SC16 Student Cluster Competition. “This year, the advent of clusters with the new Nvidia Tesla P100 GPUs made a huge impact, nearly tripling the Linpack record for the competition. For the first time ever, the team that won top honors also won the award for achieving highest performance for the Linpack benchmark application. The team “SwanGeese” is from the University of Science and Technology of China. In traditional Chinese culture, the rare Swan Goose stands for teamwork, perseverance and bravery.”

NVIDIA Launches Deep Learning Teaching Kit for University Professors

“With demand for graduates with AI skills booming, we’ve released the NVIDIA Deep Learning Teaching Kit to help educators give their students hands-on experience with GPU-accelerated computing. The kit — co-developed with deep-learning pioneer Yann LeCun, and largely based on his deep learning course at New York University — was announced Monday at the NIPS machine learning conference in Barcelona. Thanks to the rapid development of NVIDIA GPUs, training deep neural networks is more efficient than ever in terms of both time and resource cost. The result is an AI boom that has given machines the ability to perceive — and understand — the world around us in ways that mimic, and even surpass, our own.”

NIH Powers Biowulf Cluster with Mellanox EDR 100Gb/s InfiniBand

Today Mellanox announced that NIH, the U.S. National Institutes of Health’s Center for Information Technology, has selected Mellanox 100G EDR InfiniBand solutions to accelerate Biowulf, the largest data center at NIH. The project is a result of a collaborative effort between Mellanox, CSRA, Inc., DDN, and Hewlett Packard Enterprise. “The Biowulf cluster is NIH’s core HPC facility, with more than 55,000 cores. More than 600 users from 24 NIH institutes and centers will leverage the new supercomputer to enhance their computationally intensive research.”

Kx Streaming Analytics Crunches 1.2 Billion NYC Taxi Data Points using Intel Xeon Phi

“The complexity and high costs of architecting and maintaining streaming analytics solutions often make it difficult to get new projects off the ground. That’s part of the reason Kx, a leading provider of high-volume, high-performance databases and real-time analytics solutions, is always interested in exploring how new technologies may help it push streaming analytics performance and efficiency boundaries. The Intel Xeon Phi processor is a case in point. At SC16 in Salt Lake City, Kx used a 1.2 billion record database of New York City taxi cab ride data to demonstrate what the Intel Xeon Phi processor could mean for distributed big data processing. And the potential cost/performance implications were quite promising.”
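To make the shape of such a workload concrete: a streaming-analytics demo of this kind boils down to ingesting trip records and maintaining rolling aggregates as they arrive. The sketch below shows that pattern in Python over a tiny synthetic stream; it only illustrates the idea and makes no claim about Kx’s kdb+/q implementation or the Xeon Phi-specific tuning used at SC16.

```python
# Toy streaming aggregation over synthetic taxi-trip records (illustrative;
# not Kx's implementation). Maintains running fare totals per pickup zone.
import random
from collections import defaultdict

def trip_stream(n):
    """Yield synthetic (pickup_zone, fare) records one at a time."""
    for _ in range(n):
        yield random.randint(1, 263), round(random.uniform(3.0, 60.0), 2)

totals = defaultdict(float)
counts = defaultdict(int)

for zone, fare in trip_stream(1_000_000):   # stand-in for 1.2B real records
    totals[zone] += fare
    counts[zone] += 1

busiest = max(counts, key=counts.get)
print(f"zone {busiest}: {counts[busiest]} trips, "
      f"avg fare ${totals[busiest] / counts[busiest]:.2f}")
```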