Interview: Why Software Defined Infrastructure Makes Sense for HPC

Jay Muelhoefer, IBM

“I came to IBM via the acquisition of Platform Computing. There have also been other IBM assets around HPC, namely GPFS. What’s been the evolution of those items, how do they really come together under this concept of software-defined infrastructure, and how are we now taking these capabilities and expanding them into other initiatives that have bled into the HPC space?”

Interview: SGI’s Jorge Titinger on HPC Teamwork and other Lessons from Soccer


In this Purematter video, SGI CEO Jorge Titinger discusses the role that his experiences as a professional soccer player have had in both his professional development and his company’s success. He also provides insights into how SGI is leveraging High Performance Computing to scale innovation faster than ever before.

insideHPC to Livestream Keynotes from GPU Technology Conference


We are pleased to announce that insideHPC will be streaming live keynotes next week from the GPU Technology Conference in San Jose.

Radio Free HPC Looks at Big Data Analytics in the Big Leagues


In this episode, the Radio Free HPC team takes a look at Big Data Analytics in sports. According to recent reports, at least one team in Major League Baseball is using a Cray Urika system in a bid to gain competitive advantage.

Slidecast: Deep Learning – Unreasonably Effective


“Deep Learning is a new area of Machine Learning research, which has been introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence. At the 2015 GPU Technology Conference, you can join the experts who are making groundbreaking improvements in a variety of deep learning applications, including image classification, video analytics, speech recognition, and natural language processing.”

Call for Papers: ISC Cloud & Big Data


The inaugural ISC Cloud & Big Data conference has announced its Call for Research Papers. The event takes place Sept. 28-30 in Frankfurt, Germany. The organizers are looking forward to welcoming international attendees – IT professionals, consultants and managers from organizations seeking information about the latest cloud and big data developments. Researchers in these two […]

Penguin Computing Launches Scyld ClusterWare for Hadoop


Today Penguin Computing announced Scyld ClusterWare for Hadoop, adding greater capability to the company’s existing Scyld ClusterWare high performance computing cluster management solution.

Video: Introduction to Bridges Supercomputer at PSC


Bridges is a uniquely capable supercomputer designed to help researchers facing Big Data challenges work more intuitively. The new system will consist of tiered, large shared-memory resources with nodes having 12 TB, 3 TB, and 128 GB of memory each; dedicated nodes for databases, web services, and data transfer; high-performance shared and distributed data storage; Hadoop acceleration; powerful new CPUs and GPUs; and a new, uniquely powerful interconnection network.

Video: SGI UV Finds the Needle in the Big Data Haystack


According to IDC, SGI has shipped approximately 8 percent of all the Hadoop servers in production today. In fact, did you know that SGI introduced the term “Big Data” to supercomputing in 1996? Jorge Titinger, SGI President and CEO, shares SGI’s history in helping to design, develop, and deploy Hadoop clusters. (NOTE: Straw was substituted for actual hay to avoid any potential allergic reactions.)

CloudyCluster Moves HPC out of the Data Center and Into the Cloud


CloudyCluster allows you to quickly set up and configure a cluster on Amazon Web Services (AWS) to handle the most demanding HPC and Big Data tasks. You don’t need access to a data center and you don’t have to be an expert in the ins and outs of running computationally intensive workloads in a cloud environment.