Problem Statement: Limits of Linux File Systems


Henry Newman writes that Linux file systems have scalability issues that are not being adequately addressed. But just how big is the problem? When he met up recently with Jeff Layton, Enterprise Technologist for HPC at Dell, the two came up with a plan to test the limits and define the problem in a series of feature articles.

We both agreed that the problem with large file systems is the metadata scan rate. Let’s say you have 100 million files in your file system and the scan rate of the file system is 5,000 inodes per second. If you had a crash, the fsck could take 20,000 seconds, or about 5.5 hours. If you are a business, you would lose most of the day waiting on fsck to complete. THIS IS NOT ACCEPTABLE. Today, a 100-million-file file system should not take that much time, given the speed of networks and the processing power in systems. Add to this the fact that a single file server could easily support 100 users, and 1 million files per user is a lot, but not a crazy number. The other issue is that we do not know what the scan rate actually is for large file systems with large file counts. What if the number is not 5,000 but 2,000? That pushes the fsck to 50,000 seconds, or nearly 14 hours. Yikes for that business. With enterprise 3.5 inch disk drives capable of between 75 and 150 IOPS per drive, 20 drives should be able to achieve at least 1,500 IOPS. The question is what percentage of that hardware bandwidth fsck can actually achieve for the two file systems.
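To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python using the example figures above (100 million files, scan rates of 5,000 and 2,000 inodes per second, and 20 drives at 75 IOPS each). The function names are illustrative only, not part of any fsck tooling, and the numbers are the article’s examples rather than measured results.

```python
# Back-of-the-envelope fsck time estimate using the article's example numbers.
# Illustrative figures only, not measured results.

def fsck_time_hours(file_count: int, scan_rate_inodes_per_sec: float) -> float:
    """Estimated fsck time, assuming one inode scanned per file."""
    return file_count / scan_rate_inodes_per_sec / 3600.0

def aggregate_iops(drives: int, iops_per_drive: float) -> float:
    """Raw aggregate IOPS if every drive delivers its rated random-I/O rate."""
    return drives * iops_per_drive

files = 100_000_000  # 100 million files

for rate in (5_000, 2_000):  # inodes scanned per second
    print(f"scan rate {rate:>5}/s -> fsck ~{fsck_time_hours(files, rate):.1f} hours")

# 20 enterprise 3.5" drives at the low end of the 75-150 IOPS range
print(f"20 drives x 75 IOPS -> {aggregate_iops(20, 75):.0f} IOPS minimum")
```

Running this prints roughly 5.6 hours at 5,000 inodes per second and 13.9 hours at 2,000, alongside the 1,500 IOPS floor for the 20-drive configuration, which is the gap the planned tests aim to quantify.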

As CTO (and CEO) of Instrumental, Henry is one tenacious fellow when it comes to tackling tough storage problems at scale. I’m looking forward to watching this series of features unfold. Read the Full Story.
