I wrote a feature last week for HPCwire on solid state drives. I had two goals: first, to provide a little background for those wondering exactly what an SSD is and just why everyone is talking about them now. My second goal was to talk a little about the potential — and problems — of SSDs in HPC.
There has been a lot of interest among the enterprise datacenter crowd lately in a relatively old technology: solid state drives (SSDs). Today’s flash drives are faster and cheaper than their predecessors, and are almost certain to assume a place in the standard enterprise IT architect’s toolkit. But it seems that they have quite a bit of potential in HPC too, though not (just) in the way you might think.
Yes, you can slot them into systems to replace hard disks in a way that makes things faster but doesn’t change how you think about the memory hierarchy. But I was particularly interested in David Flynn’s perspective.
Flynn was formerly the chief architect at Linux Networx [but is now the CTO of Fusion-io, which is why I interviewed him], and he says that his experience in HPC has led him to conclude that “balanced systems lead to cost effective throughput.” Fusion-io’s device connects to the PCI Express bus, and Flynn conceptualizes the flash memory as sitting between memory and disk, relieving the performance pressure on both and creating a new first-class participant in the data flow hierarchy.
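To make that “between memory and disk” idea concrete, here is a minimal sketch in Python comparing rough, order-of-magnitude access latencies for the three tiers. The specific numbers are my own ballpark figures for illustration, not anything Flynn or Fusion-io quoted; the point is only that a PCIe-attached flash tier sits hundreds of times slower than DRAM but hundreds of times faster than a spinning disk.

```python
# Rough, illustrative latency and capacity figures (ballpark assumptions,
# not vendor specs) showing where a PCIe-attached flash tier sits
# between DRAM and spinning disk.

TIERS = [
    # (tier name,         approx. access latency (s), typical per-node capacity)
    ("DRAM",              100e-9,                     "tens of GB"),
    ("PCIe flash (SSD)",  50e-6,                      "hundreds of GB to TB"),
    ("Hard disk",         10e-3,                      "TB"),
]

def describe(tiers):
    """Print each tier and how much slower it is than the tier above it."""
    prev_latency = None
    for name, latency, capacity in tiers:
        gap = ""
        if prev_latency is not None:
            gap = f" (~{latency / prev_latency:.0f}x slower than the tier above)"
        print(f"{name:18s} ~{latency * 1e6:>8.1f} us  {capacity}{gap}")
        prev_latency = latency

describe(TIERS)
```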
“You can put 15 Fusion-io cards in a commodity server and get 10 GB/s of throughput from a 10 TB flash pool with over one million IOPS of performance,” says Flynn. Why does this matter? He gave NASTRAN as a customer example: jobs that had taken three days to run completed in six hours on the same system, with no change to the application, after the flash device was installed.
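A quick back-of-envelope breakdown of those figures, sketched in Python below. The per-card numbers are simply my division of Flynn’s aggregate totals across 15 cards, not vendor specifications, and the speedup line just restates the NASTRAN example as a ratio.

```python
# Back-of-envelope arithmetic on the quoted aggregates: 15 cards per server,
# 10 GB/s, 10 TB, >1M IOPS. Per-card numbers are derived from those totals,
# not from any datasheet.

cards = 15
total_bandwidth_gbs = 10       # GB/s across the flash pool
total_capacity_tb = 10         # TB across the flash pool
total_iops = 1_000_000         # aggregate IOPS

per_card_bandwidth = total_bandwidth_gbs / cards       # ~0.67 GB/s per card
per_card_capacity = total_capacity_tb * 1000 / cards   # ~667 GB per card
per_card_iops = total_iops / cards                     # ~67,000 IOPS per card

print(f"Per card: {per_card_bandwidth:.2f} GB/s, "
      f"{per_card_capacity:.0f} GB, {per_card_iops:,.0f} IOPS")

# The NASTRAN example: a three-day job finishing in six hours is a 12x speedup.
speedup = (3 * 24) / 6
print(f"NASTRAN speedup: {speedup:.0f}x")
```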
I’ve already heard from one reader who thinks today’s flash-based SSDs are a long way from ready for serious computing. What do you think?