Video: User Managed Virtual Clusters in Comet

Rick Wagner from SDSC presented this talk at the 4th Annual MVAPICH User Group. “At SDSC, we have created a novel framework and infrastructure by providing virtual HPC clusters to projects using the NSF sponsored Comet supercomputer. Managing virtual clusters on Comet is similar to managing a bare-metal cluster in terms of processes and tools that are employed. This is beneficial because such processes and tools are familiar to cluster administrators.”

Video: Mellanox Powers Open Science Grid on Comet Supercomputer

“We are pioneering the area of virtualized clusters, specifically with SR-IOV,” said Philip Papadopoulos, SDSC’s chief technical officer. “This will allow virtual sub-clusters to run applications over InfiniBand at near-native speeds – and that marks a huge step forward in HPC virtualization. In fact, a key part of this is virtualization for customized software stacks, which will lower the entry barrier for a wide range of researchers by letting them project an environment they already know onto Comet.”

Comet Supercomputer at SDSC Helps Confirm Gravitational Wave Discovery

The NSF-funded Comet supercomputer at SDSC was one of several high-performance computers used by researchers to help confirm the discovery of gravitational waves before a formal announcement was made.

Lustre: This is Not Your Grandmother’s (or Grandfather’s) Parallel File System

“Over the last several years, an enormous amount of development effort has gone into Lustre to address users’ enterprise-related requests. Their work is not only keeping Lustre extremely fast (the Spider II storage system at the Oak Ridge Leadership Computing Facility (OLCF) that supports OLCF’s Titan supercomputer delivers 1 TB/s; and Data Oasis, supporting the Comet supercomputer at the San Diego Supercomputer Center (SDSC), supports thousands of users with 300 GB/s throughput) but also making it an enterprise-class parallel file system that has since been deployed for many mission-critical applications, such as seismic processing and analysis, regional climate and weather modeling, and banking.”

Seagate SSDs Boost Analytics on Comet Supercomputer

The San Diego Supercomputer Center is adding 800GB Seagate SAS SSDs to significantly boost the data analytics capability of its Comet supercomputer. To expand node-local storage capacity for data-intensive workloads, pairs of these drives will be added to all 72 compute nodes in one rack of Comet, alongside the existing SSDs. This will bring the flash storage in a single node to almost 2TB, with total rack capacity at more than 138TB.
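The capacity figures above can be checked with a little arithmetic. This sketch assumes each node already has roughly 320GB of local SSD (a figure inferred from the "almost 2TB per node" claim, not stated in the article):

```python
# Sanity-check the reported Comet flash-storage figures.
new_ssd_gb = 800          # each added Seagate SAS SSD, per the article
pair_gb = 2 * new_ssd_gb  # two new SSDs per node -> 1600 GB added
nodes = 72                # compute nodes in one rack of Comet

existing_gb = 320         # assumed existing node-local SSD capacity
per_node_gb = pair_gb + existing_gb
rack_tb = nodes * per_node_gb / 1000

print(per_node_gb)        # 1920 GB -> "almost 2TB" per node
print(rack_tb)            # 138.24 TB -> "more than 138TB" per rack
```

Under that assumption the numbers line up: 1,920GB per node and about 138.2TB across the 72-node rack.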

Video: Dell Powers Comet Supercomputer at SDSC

“Comet is SDSC’s newest HPC cluster, designed as a high-throughput system with unique HPC virtualization capabilities to accommodate a large number of researchers looking for rapid turnaround. It is built on Dell PowerEdge C6320 servers with Intel Xeon E5-2680 v3 (Haswell) processors.”

Petascale Comet Supercomputer Enters Early Operations

“Comet is really all about providing high-performance computing to a much larger research community – what we call ‘HPC for the 99 percent’ – and serving as a gateway to discovery,” said SDSC Director Michael Norman, the project’s principal investigator. “Comet has been specifically configured to meet the needs of researchers in domains that have not traditionally relied on supercomputers to solve their problems.”