
How Ceph Helps Power Penguin Computing On-Demand

In this video, Travis Rhoden from Penguin Computing describes how the Ceph distributed object storage system powers the company's Penguin on Demand (POD) HPC cloud offering.

Why is Ceph such a good fit for POD? We wanted a storage solution that was:
 
Economical – the code is Open Source, and we can run it on our own hardware. We have deployed Ceph on Penguin Computing's Icebreaker 2712 storage server, which accommodates 12 drives in a 2U form factor and is based on Intel's Xeon 5600 processors. Going forward, we plan to use the Icebreaker 2812 storage server or our Relion 2800 server, both based on the Xeon E5-2600. These chassis offer a good drive/core ratio along with a failure domain we are comfortable with.
 
Easily expandable with scale-out behavior – We can simply add more storage servers to the cluster to gain more space. Ceph automatically rebalances data, immediately putting all available resources to use, and each additional server adds performance as well as capacity.
 
Self-healing – We are using SATA drives, and everyone knows they are going to fail, often at inconvenient times. With Ceph, failed drives don't have to be attended to right away. Data is rebalanced early and quickly, so we don't have to worry nearly as much about the "critical window" in which a second or third failure could cause data loss, as we do when rebuilding RAID arrays.
 
Unified – We wanted one storage system that we could focus on. Ceph provides us with both object storage and block storage. It is also tightly integrated with OpenStack, which we use in our latest POD clusters.
 
No Single Point of Failure – This is critical for our POD customers, and really should be a requirement for any storage system. The architecture of Ceph is highly resilient to failures, and it was built from the ground up as a fully distributed system.
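The scale-out point above rests on how Ceph places data: its CRUSH algorithm maps each object to drives deterministically, so adding a server moves only the objects that should now land on it rather than reshuffling everything. The sketch below is not CRUSH itself but rendezvous (highest-random-weight) hashing, a simpler placement scheme with the same incremental-rebalance property; the server names and object counts are made up for illustration.

```python
import hashlib

def placement(obj, servers):
    # Rendezvous hashing: each object goes to the server with the highest
    # per-(object, server) hash score. Deterministic, no lookup table needed.
    return max(servers, key=lambda s: hashlib.md5(f"{obj}:{s}".encode()).hexdigest())

objects = [f"obj-{i}" for i in range(1000)]
before = {o: placement(o, ["s1", "s2", "s3", "s4"]) for o in objects}
# Grow the cluster by one server and recompute every placement.
after = {o: placement(o, ["s1", "s2", "s3", "s4", "s5"]) for o in objects}

moved = [o for o in objects if before[o] != after[o]]
# Only objects whose highest score is now on s5 move -- roughly 1/5 of them --
# and every one of them moves *to* the new server, never between old ones.
print(f"{len(moved)} of {len(objects)} objects relocated")
```

This is why each added Icebreaker chassis contributes capacity and performance immediately: only a proportional slice of the data migrates onto it.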
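The self-healing behavior can be pictured with a toy simulation (this is not Ceph code; drive names, object counts, and the replication factor are illustrative). When a drive dies, surviving replicas are copied to other drives until every object is back at full redundancy, so no single rebuild target becomes a bottleneck the way one spare disk does in a RAID rebuild.

```python
import random

REPLICAS = 3
drives = {f"osd.{i}": set() for i in range(6)}

# Place each object on REPLICAS distinct drives (real Ceph uses CRUSH for this).
random.seed(42)
for obj in range(100):
    for d in random.sample(sorted(drives), REPLICAS):
        drives[d].add(obj)

def heal(drives, failed):
    # Drop the failed drive, then re-replicate each object it held
    # from a surviving copy onto some drive that lacks it.
    lost = drives.pop(failed)
    for obj in lost:
        holders = [d for d, objs in drives.items() if obj in objs]
        assert holders  # surviving replicas exist to copy from
        spares = [d for d in drives if obj not in drives[d]]
        drives[random.choice(spares)].add(obj)

heal(drives, "osd.0")
# Every object is back to full replication on the remaining drives.
assert all(sum(obj in objs for objs in drives.values()) == REPLICAS
           for obj in range(100))
```

Because recovery traffic is spread across many source and destination drives, the "critical window" during which a further failure is dangerous closes far faster than a disk-to-disk RAID rebuild.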
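On the no-single-point-of-failure claim: Ceph's monitor daemons track cluster state using Paxos and require a strict majority to operate, which is what lets the cluster keep serving while any minority of monitors is down. The quorum rule itself is one line; the function name here is our own shorthand, not a Ceph API.

```python
def has_quorum(monitors_up, total):
    # A strict majority is required: this tolerates failures of a minority
    # of monitors while making a split-brain (two active halves) impossible.
    return monitors_up > total // 2

print(has_quorum(2, 3))  # one of three monitors can fail: True
print(has_quorum(1, 3))  # two failures out of three: False, cluster pauses
print(has_quorum(2, 4))  # an exact half is NOT a quorum: False
```

This is also why monitor counts are typically odd: a fourth monitor adds no extra failure tolerance over three, since three of four are still needed for a majority.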

Read the Full Story or check out more presentations from Ceph Day 2013.
