RCE Podcast Looks at Apache Spark

In this RCE podcast, Brock Palen and Jeff Squyres speak with Matei Zaharia about Apache Spark, a fast engine for large-scale data processing.
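
For readers who haven't tried Spark, the minimal PySpark sketch below shows the kind of large-scale processing Zaharia describes; the input path is hypothetical and not something discussed in the podcast.

# Minimal PySpark word count -- a sketch of the API, not code from the podcast.
from pyspark import SparkContext

sc = SparkContext(appName="wordcount-sketch")

# Read a plain-text file from shared storage (the path is a placeholder).
lines = sc.textFile("/data/corpus.txt")

# Classic distributed word count: split, pair, reduce across the cluster.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

for word, count in counts.take(10):
    print(word, count)

sc.stop()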

New Intel Xeons Target Realtime Analytics

Today Intel announced the new Xeon processor E7-8800/4800 v3 product families, delivering accelerated business insight through real-time analytics.

Video: Monitoring a Heterogeneous Lustre Environment with Splunk

“Monitoring a large Lustre site running multiple generations of Lustre filesystems can be a challenge. Some equipment offers vendor-specific monitoring interfaces, while other systems, built on open source Lustre, have minimal monitoring capabilities. This talk will report on our operational experience using a homegrown Python module to collect data from each filesystem. We will discuss in detail how the data is visualized centrally in Splunk and cross-referenced with users' workloads to analyze and troubleshoot our environment.”
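
The homegrown module from the talk isn't reproduced here; as a rough illustration of the pattern it describes, the Python sketch below samples `lfs df` and forwards one event per Lustre target to a Splunk HTTP Event Collector. The HEC URL, token, and field names are assumptions, not the presenters' configuration.

# Hypothetical sketch of the pattern described in the talk: poll Lustre with
# `lfs df` and forward the results to Splunk's HTTP Event Collector (HEC).
# The HEC URL, token, and field names below are placeholders.
import json
import subprocess
import urllib.request

SPLUNK_HEC = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def sample_lustre():
    """Return one record per Lustre target (MDT/OST) reported by `lfs df`."""
    out = subprocess.run(["lfs", "df"], capture_output=True, text=True, check=True)
    records = []
    for line in out.stdout.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[0].endswith("_UUID"):
            records.append({
                "target": fields[0],
                "kbytes_total": int(fields[1]),
                "kbytes_used": int(fields[2]),
                "kbytes_avail": int(fields[3]),
            })
    return records

def send_to_splunk(records):
    """POST each record to the HEC endpoint as a JSON event."""
    for record in records:
        payload = json.dumps({"sourcetype": "lustre:df", "event": record}).encode()
        req = urllib.request.Request(
            SPLUNK_HEC, data=payload,
            headers={"Authorization": "Splunk " + HEC_TOKEN,
                     "Content-Type": "application/json"})
        urllib.request.urlopen(req)

if __name__ == "__main__":
    send_to_splunk(sample_lustre())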

Deploying Hadoop on Lustre Storage: Lessons Learned and Best Practices

In this video from LUG 2015 in Denver, J. Mario Gallegos from Dell presents: Deploying Hadoop on Lustre Storage: Lessons Learned and Best Practices. “Merging the strengths of both technologies to solve big data problems permits harvesting the power of HPC clusters on very fast storage.”
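
The talk's best practices aren't reproduced here; as one common way to pair the two, a Hadoop job can read and write a Lustre mount directly through file:// URIs instead of HDFS. The sketch below submits a Hadoop streaming word count that way; the mount point, jar path, and mapper/reducer commands are assumptions.

# Hypothetical sketch: run a Hadoop streaming word count directly against a
# POSIX Lustre mount via file:// URIs, one common way to use Hadoop on Lustre.
# The mount point, jar path, and mapper/reducer below are placeholders, not
# the configuration presented in the talk.
import subprocess

STREAMING_JAR = "/opt/hadoop/share/hadoop/tools/lib/hadoop-streaming.jar"

subprocess.run([
    "hadoop", "jar", STREAMING_JAR,
    "-D", "fs.defaultFS=file:///",              # bypass HDFS; use the shared POSIX filesystem
    "-input", "file:///lustre/project/input",
    "-output", "file:///lustre/project/wordcount_out",
    "-mapper", "/bin/cat",
    "-reducer", "/usr/bin/wc -w",
], check=True)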

TACC’s “Wrangler” Uses DSSD Technology for Data-Intensive Computing

Today the Texas Advanced Computing Center announced that the Wrangler data analysis and management supercomputing system is now in early operations for the open science community. Supported by a grant from the NSF, Wrangler uses innovative DSSD technology for data-intensive computing.

Video: Application-optimized Lustre Solutions for Big-Data Workflows

In this video from LUG 2015 in Denver, Robert Triendl from DDN presents: Application-optimized Lustre Solutions for Big-Data Workflows.

NCDS Takes Action on Big Data

Stan Ahalt, chair of the steering committee for the National Consortium for Data Science

In this special guest feature from Scientific Computing World, Stan Ahalt from the National Consortium for Data Science discusses how and why the organization came into being.

Open Computing Benefits Many Industry Segments

The Open Compute Project gives organizations a way to increase computing power while lowering the costs associated with hyper-scale computing. This article is the fourth in a series from insideHPC that showcases the benefits of open computing to specific industries.

Video: Addressing Challenges of Data-Intensive Research

“Building on 20+ years of experience in High Performance Computing, as well as more than a decade of involvement in open science developments (open data and e-infrastructures), ICM views Data Sciences as a strategic direction for the ICM Centre’s future.”