Deep Learning at Scale

“We present a state-of-the-art image recognition system, Deep Image, developed using end-to-end deep learning. The key components are a custom-built supercomputer dedicated to deep learning, a highly optimized parallel algorithm using new strategies for data partitioning and communication, larger deep neural network models, novel data augmentation approaches, and usage of multi-scale high-resolution images.”
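
The abstract itself contains no code, but the multi-scale augmentation it mentions can be sketched roughly as follows. This is a minimal illustration using Pillow; the scale values, crop size, and flip probability are assumptions for the sketch, not the parameters used in Deep Image.

```python
# Minimal sketch of multi-scale data augmentation: resize an image to several
# scales, then take a random fixed-size crop and a random horizontal flip at
# each scale. The scales and crop size below are illustrative only.
import random
from PIL import Image, ImageOps

SCALES = [256, 384, 512]   # assumed multi-scale resize targets
CROP = 224                 # assumed network input size

def augment(path):
    img = Image.open(path).convert("RGB")
    crops = []
    for s in SCALES:
        # Resize so the shorter side equals the target scale.
        w, h = img.size
        ratio = s / min(w, h)
        scaled = img.resize((int(w * ratio), int(h * ratio)))
        # Random crop of CROP x CROP pixels.
        x = random.randint(0, scaled.width - CROP)
        y = random.randint(0, scaled.height - CROP)
        patch = scaled.crop((x, y, x + CROP, y + CROP))
        # Random horizontal flip.
        if random.random() < 0.5:
            patch = ImageOps.mirror(patch)
        crops.append(patch)
    return crops
```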

Video: A Bioinformatics Pipeline for Analyzing Patient Tumours

In this video from WestGrid in Canada, Dr. Yussanne Ma from the Michael Smith Genome Sciences Centre describes how high performance computing supports her research group’s work, highlighting a recent project in which a bioinformatics pipeline was built for the Personalized OncoGenomics (POG) project at the BC Cancer Agency.

How HPC is Increasing Speed and Accuracy

Mark Gunn, Sr. VP, One Stop Systems

The overriding task of high performance computing today is processing huge amounts of data quickly and accurately. Simply adding more powerful, sophisticated servers only partially solves the problem.

RCE Podcast Looks at Apache Spark

In this RCE podcast, Brock Palen and Jeff Squyres speak with Matei Zaharia about Apache Spark, a fast engine for large-scale data processing.
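
For readers who have not used Spark, a minimal PySpark word count conveys the flavor of its data-processing API. This sketch is not from the podcast, and the input path is a placeholder.

```python
# Minimal PySpark sketch: count word occurrences across a set of text files
# in parallel. The input path is a placeholder.
from pyspark import SparkContext

sc = SparkContext(appName="wordcount-sketch")

lines = sc.textFile("/data/corpus/*.txt")   # placeholder path
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

print(counts.take(10))   # a sample of (word, count) pairs
sc.stop()
```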

New Intel Xeons Target Realtime Analytics

Today Intel announced the new Xeon processor E7-8800/4800 v3 product families, delivering accelerated business insight through real-time analytics.

Talking Machines Podcast Looks at Machine Learning

In this Talking Machines podcast, Katherine Gorman hosts Ryan Adams from Harvard to preview their new podcast series on machine learning. “Machine learning is changing the questions we can ask of the world around us, here we explore how to ask the best questions and what to do with the answers.”

Video: Understanding Hadoop Performance on Lustre

“In this talk, Seagate presents details on its efforts and achievements around improving Hadoop performance on Lustre, including: a summary of why and how HDFS and Lustre differ, and how those differences affect Hadoop performance on Lustre compared to HDFS; Hadoop ecosystem benchmarks and best practices on HDFS and Lustre; Seagate’s open-source efforts to enhance the performance of Lustre on “diskless” compute nodes, involving core Hadoop source code modification (and the unexpected results); and general takeaways on running Hadoop on Lustre faster.”

Deploying Hadoop on Lustre Storage: Lessons Learned and Best Practices

In this video from LUG 2015 in Denver, J. Mario Gallegos from Dell presents: Deploying Hadoop on Lustre Storage: Lessons Learned and Best Practices. “Merging the strengths of both technologies to solve big data problems permits harvesting the power of HPC clusters on very fast storage.”
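
Neither talk’s code is reproduced here, but the basic pattern both describe, pointing Hadoop at a shared Lustre mount through ordinary file:// paths instead of HDFS, can be sketched as below. The mount point, jar location, and toy mapper/reducer are illustrative assumptions; real deployments tune many more settings.

```python
# Hedged sketch: submit a Hadoop Streaming job whose input and output live on a
# shared Lustre mount (file:// URIs) rather than in HDFS. Paths and the jar
# location are placeholders and will differ per cluster.
import subprocess

LUSTRE = "/lustre/project"                      # placeholder Lustre mount point
STREAMING_JAR = "/opt/hadoop/share/hadoop/tools/lib/hadoop-streaming.jar"  # placeholder

cmd = [
    "hadoop", "jar", STREAMING_JAR,
    "-D", "fs.defaultFS=file:///",              # use the POSIX filesystem, not HDFS
    "-input",  f"file://{LUSTRE}/input",
    "-output", f"file://{LUSTRE}/output",
    "-mapper", "cat",                           # trivial identity mapper
    "-reducer", "wc -l",                        # count lines per reduce partition
]
subprocess.run(cmd, check=True)
```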

TACC’s “Wrangler” Uses DSSD Technology for Data-Intensive Computing

Today the Texas Advanced Computing Center announced that the Wrangler data analysis and management supercomputing system is now in early operations for the open science community. Supported by a grant from the NSF, Wrangler uses innovative DSSD technology for data-intensive computing.

The High Performance Data Analytics Market

As data analytics becomes more mission-critical, hardware and software need to evolve to handle both historical (batch) data and real-time streaming data. This combined ability to manage different types of data is essential for a wide range of organizations.
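
As a toy illustration of that batch-versus-streaming split (a Python sketch, not tied to any particular HPDA product): a batch job recomputes a statistic over all historical records at once, while a streaming job updates it incrementally as each record arrives.

```python
# Toy illustration of batch vs. streaming analytics on the same data:
# the batch path scans all historical records at once, while the streaming
# path keeps a running aggregate updated per incoming record.
from typing import Iterable, Iterator

def batch_mean(history: Iterable[float]) -> float:
    """Batch: compute the mean over the full historical dataset."""
    values = list(history)
    return sum(values) / len(values)

def streaming_mean(stream: Iterator[float]) -> Iterator[float]:
    """Streaming: emit the running mean after every new record."""
    total, count = 0.0, 0
    for value in stream:
        total += value
        count += 1
        yield total / count

if __name__ == "__main__":
    data = [3.0, 5.0, 4.0, 8.0]
    print("batch mean:", batch_mean(data))      # one answer over all history
    for m in streaming_mean(iter(data)):        # updated answer per record
        print("running mean:", m)
```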