
Video: Addressing Challenges of Data-Intensive Research


“Building on 20+ years of experience in High Performance Computing, as well as more than a decade of involvement in open science developments (open data and e-infrastructures), ICM views Data Sciences as strategic direction for the ICM Centre’s future.”

Video: Accelerating Big Data Processing with Hadoop, Spark and Memcached

DK Panda, Ohio State University

“Using the publicly available software packages in the High-Performance Big Data (HiBD) project, we will provide case studies of the new designs for several Hadoop/Spark/Memcached components and their associated benefits. Through these case studies, we will also examine the interplay between high performance interconnects, storage systems (HDD and SSD), and multi-core platforms to achieve the best solutions for these components.”
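
As a point of reference for the kind of workload these designs target, here is a minimal PySpark sketch (an illustration, not code from the HiBD project) of a shuffle-heavy Spark job. The stages up to reduceByKey run locally on each executor; the shuffle that follows is exactly where high-performance interconnects and fast storage pay off.

```python
# Minimal PySpark word count; the HDFS paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

# Read a text file from HDFS.
lines = spark.sparkContext.textFile("hdfs:///data/sample.txt")

# flatMap/map run locally on each executor; reduceByKey forces a
# cluster-wide network shuffle.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

counts.saveAsTextFile("hdfs:///data/wordcount-out")
spark.stop()
```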

Genomics and Big Data


Advances in computational biology as applied to next-generation sequencing (NGS) workflows have led to an explosion of sequencing data, all of which has to be transformed, analyzed, and stored. The machines capable of performing these computations once cost millions of dollars, but today the price tag has dropped into the hundreds of thousands.
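
To make the scale of this concrete, here is a hedged sketch of one small step in such a pipeline: streaming a FASTQ file and computing per-read GC content. The file name is a placeholder, and real workflows rely on dedicated tools (aligners, variant callers) rather than hand-rolled parsers.

```python
import gzip

def fastq_reads(path):
    """Yield sequence strings from a (possibly gzipped) FASTQ file, 4 lines per record."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        while True:
            header = fh.readline()
            if not header:
                break
            seq = fh.readline().strip()
            fh.readline()  # '+' separator line
            fh.readline()  # quality-score line
            yield seq

def gc_content(seq):
    """Fraction of G and C bases in a read."""
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

if __name__ == "__main__":
    for i, read in enumerate(fastq_reads("sample.fastq.gz")):  # placeholder file
        print(i, round(gc_content(read), 3))
```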

ClusterStor Solution for HPC and Big Data


The ClusterStor SDA is built on Seagate’s successful ClusterStor family of high-performance storage solutions for HPC and Big Data, providing unmatched file system performance, optimized productivity, and the HPC industry’s highest levels of efficiency, reliability, availability, and serviceability. Taking full advantage of the Lustre file system, Seagate ClusterStor is designed for massive scale-out performance and capacity, supporting hundreds to tens of thousands of client compute nodes and delivering data-intensive workload throughput from several GB/s to over 1 TB/s.
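
The scale-out bandwidth comes from striping files across many Lustre object storage targets (OSTs). A rough sketch of how a user would request this, assuming a Lustre mount and the standard lfs utility; the path and stripe parameters are illustrative, not Seagate tuning guidance:

```python
import subprocess

# Placeholder path on a Lustre mount; lfs setstripe creates the file
# with the requested layout if it does not already exist.
target = "/lustre/scratch/output.dat"

# Stripe the file across 16 OSTs with a 4 MiB stripe size so that many
# storage servers serve reads and writes in parallel.
subprocess.run(["lfs", "setstripe", "-c", "16", "-S", "4M", target], check=True)

# Inspect the resulting layout.
subprocess.run(["lfs", "getstripe", target], check=True)
```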

Slidecast: Deep Learning – Unreasonably Effective


“Deep Learning is a new area of Machine Learning research, which has been introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence. At the 2015 GPU Technology Conference, you can join the experts who are making groundbreaking improvements in a variety of deep learning applications, including image classification, video analytics, speech recognition, and natural language processing.”

Call for Papers: ISC Cloud & Big Data


The inaugural ISC Cloud & Big Data conference has announced its Call for Research Papers. The event takes place Sept. 28-30 in Frankfurt, Germany. The organizers are looking forward to welcoming international attendees – IT professionals, consultants and managers from organizations seeking information about the latest cloud and big data developments. Researchers in these two […]

Video: Introduction to Bridges Supercomputer at PSC


Bridges is a uniquely capable supercomputer designed to help researchers facing Big Data challenges work more intuitively. The new system will consist of tiered, large-shared-memory resources with nodes having 12 TB, 3 TB, and 128 GB of memory each; dedicated nodes for database, web, and data transfer; high-performance shared and distributed data storage; Hadoop acceleration; powerful new CPUs and GPUs; and a new, uniquely powerful interconnection network.
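
The point of the tiered memory design is that a job runs on the smallest node class that holds its working set, avoiding unnecessary partitioning across a distributed-memory cluster. A hypothetical sketch of that placement idea (not PSC’s actual scheduler logic):

```python
# Bridges' advertised node classes, in GB of shared memory.
NODE_TIERS_GB = [128, 3 * 1024, 12 * 1024]

def pick_node_tier(working_set_gb):
    """Return the smallest node tier (in GB) that fits the job in memory."""
    for tier in NODE_TIERS_GB:
        if working_set_gb <= tier:
            return tier
    raise ValueError("working set exceeds the largest shared-memory node")

# Example: a 900 GB in-memory assembly graph fits on a 3 TB node,
# so the job can stay on a single shared-memory machine.
print(pick_node_tier(900))  # -> 3072
```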

Video: SGI UV Finds the Needle in the Big Data Haystack


According to IDC, SGI has shipped approximately 8 percent of all the Hadoop servers in production today. In fact, did you know that SGI introduced the term “Big Data” to supercomputing in 1996? Jorge Titinger, SGI President and CEO, shares SGI’s history in helping to design, develop, and deploy Hadoop clusters. (NOTE: Straw was substituted for actual hay to avoid any potential allergic reactions.)

CloudyCluster Moves HPC out of the Data Center and Into the Cloud


CloudyCluster allows you to quickly set up and configure a cluster on Amazon Web Services (AWS) to handle the most demanding HPC and Big Data tasks. You don’t need access to a data center and you don’t have to be an expert in the ins and outs of running computationally intensive workloads in a cloud environment.
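
For a sense of what such tooling automates under the hood, here is a hedged boto3 sketch that launches a small batch of EC2 compute instances. The AMI ID, instance type, and key name are placeholders, and this is a generic AWS example, not CloudyCluster’s internals.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder HPC-ready machine image
    InstanceType="c4.8xlarge",        # compute-optimized instance class
    MinCount=4,
    MaxCount=4,                       # four nodes for a toy cluster
    KeyName="my-hpc-key",             # placeholder SSH key pair
)

# Print the IDs of the newly launched nodes.
for inst in response["Instances"]:
    print(inst["InstanceId"])
```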

Data Intensive Computing: The Gorilla behind the Computation


In this video from the Dell booth at SC14, Rich Brueckner from insideHPC moderates a panel discussion on Data Intensive Computing with panelists Ken Buetow (Arizona State University), Erik Deumens (University of Florida), Niall Gaffney (TACC), and William Law (Stanford University).