Herman Mehling at EnterpriseStorageForum.com writes this week about one of the big challenges SSD faces in IO-intensive (especially write-intensive) environments: performance degradation over time.
Solid state drives (SSDs) are known for performance that is many times that of hard disk drives (HDDs). What’s not so well known is that SSD performance tends to degrade over time — benchmarks show that a new SSD can perform much better than a heavily used one.
I’ve talked about this in some of the SSD reporting I’ve done over the past several months, but always with respect to a particular company’s solution — Mehling’s article looks at the problem in general and talks about the approach that several vendors are taking to solve it.
SSDs suffer from a difficulty that doesn’t exist in HDDs — the flash must be erased before new data can be written into it, said Jim Handy, an analyst at Objective Analysis, a market research firm specializing in SSDs and semiconductors.
“This erase, which can take up to a half second, would bring the SSD to its knees were it not for some clever work-arounds that SSD makers build into their controllers,” said Handy. “One of these is to over-provision, to build more flash into the SSD than appears to the outside world.”
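To make the over-provisioning idea concrete, here's a back-of-the-envelope sketch. The capacities below are illustrative assumptions, not any vendor's spec — the point is simply that the controller keeps raw flash beyond the advertised capacity, so it always has pre-erased blocks available while slow erases happen in the background.

```python
# Illustrative over-provisioning arithmetic (figures are assumptions,
# not a specific vendor's spec).
raw_gib = 128          # flash physically on the SSD
advertised_gib = 100   # capacity visible to the host

spare_gib = raw_gib - advertised_gib       # reserved for the controller
op_ratio = spare_gib / advertised_gib      # common way to quote OP

print(f"Over-provisioning: {spare_gib} GiB spare ({op_ratio:.0%})")
# → Over-provisioning: 28 GiB spare (28%)
```

The spare area gives the controller somewhere to land incoming writes immediately, deferring the half-second-class erase of stale blocks until the drive is otherwise idle.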
SSD technologies typically suffer significant performance degradation over time — by as much as 50 percent or more — as more data is written to the NAND flash memory and as applications accessing the device vary the read-to-write ratio, said Greg Goelz, vice president of marketing at Pliant Technology.
How are vendors addressing the problem? Over-provisioning is one approach. Another is wear-leveling algorithms that spread writes across all the memory cells in round-robin fashion: if you are only using a portion of your total capacity, new writes land in previously unused cells before any older cell is rewritten.
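The wear-leveling idea can be sketched in a few lines. This is a toy model, not any vendor's actual controller logic: each write to a logical address retires and erases the block it previously occupied, then lands in whichever free block has the fewest erase cycles, so wear spreads evenly even when one address is hammered.

```python
# Toy wear-leveling sketch (an illustrative model, not a real
# controller algorithm). Writes always go to the least-worn free
# block, spreading erase cycles across all cells.
class ToyFlash:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # erases per physical block
        self.mapping = {}                      # logical addr -> physical block

    def write(self, logical_addr):
        # Retire the old block for this address: erase it so it can be
        # reused later (erase is the slow, wear-inducing step).
        old = self.mapping.pop(logical_addr, None)
        if old is not None:
            self.erase_counts[old] += 1
        # Pick the least-worn block not currently holding live data.
        free = set(range(len(self.erase_counts))) - set(self.mapping.values())
        target = min(free, key=lambda b: self.erase_counts[b])
        self.mapping[logical_addr] = target
        return target

flash = ToyFlash(num_blocks=8)
for _ in range(100):
    flash.write(0)          # hammer a single logical address
print(flash.erase_counts)   # wear is spread nearly evenly across blocks
```

Without the leveling step, all 99 erases would hit one physical block; with it, no block ends up more than one erase ahead of any other.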
More in the article, which does a good job summarizing the problem and the ways vendors are addressing it.