Whither ZFS results?


Joe Landman made a remark, in a post on ZFS performance testing on his company’s hardware, that I found interesting:

So we have Solaris 10 installed on a JackRabbit-M. According to Sun’s license, as I have learned last night, we cannot report benchmark results without permission from Sun. Sad, but this is how they wish to govern information flow around their product.

I’m sure the whole post is interesting, but I have a storage blind spot (at this point one must really suspect that I’m just willfully ignoring storage). The part that caught my eye is that Sun, by many other measures an open company (a CEO who blogs, relatively raw videos shot straight out to the interwebs, podcasts and blog posts from deep within the company), is so tightly controlling information about a core product.

Unfortunately it makes one think about what else they might only appear to be open about, even if such thinking isn’t justified. Seems like a silly mistake for Sun to pursue this path. Perhaps they’ll correct it.


  1. […] zfs un-benchmarking – “Our rationale for testing was to finally get some numbers that we can provide to users/customers about real zfs performance. There is a huge amount of (largely uncontested) information (emanating mainly from Sun and its agents) that zfs is a very fast file system. We want to test this, on real, live hardware, and report. Well, we can’t do the latter due to Sun’s licensing, but we did do the former. Paraphrasing Mark Twain: ‘Rumors of zfs’s performance have been greatly exaggerated.’” When Joe Landman blogs about performance, I take what he has to say seriously, but given the stability problems he notes, I wonder whether, as he suggests, driver issues are a factor here and we’re not seeing a generic ZFS issue. (Seen at InsideHPC.) […]


  1. John

    As I have learned, Sun doesn’t have this requirement on the OpenSolaris bits. So we can test performance with this. Discussions on and offline with others suggest that we will get better performance, and likely more help with that than with the Solaris version.

    ZFS is a very interesting file system. It is probably the easiest RAID setup I have done to date, but mdadm + mkfs isn’t that far off, and could be wrapped in a script to make it just as simple to use. The interesting aspects of ZFS are the checksumming, the error detection, and a few other capabilities. This is not something that an mdadm + mkfs script can replicate as easily. ZFS is quite (badly) overhyped, but if you can work your way past that, there are real and interesting elements there that are worth taking a look at.
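    To make the comparison above concrete, here is a minimal sketch of the two setups side by side. The device names, pool name, and mount point are hypothetical, and both halves need root privileges and real (or loopback) block devices, so treat this as an illustrative admin fragment rather than a runnable script.

    ```shell
    #!/bin/sh
    # Hypothetical disks -- substitute your own. Run as root.
    DISKS="/dev/sdb /dev/sdc /dev/sdd"

    # ZFS: one command creates the pool, lays out single-parity RAID-Z,
    # creates the file system, and mounts it at /tank. Block-level
    # checksumming and error detection come along for free.
    zpool create tank raidz $DISKS

    # Linux md equivalent: assemble a RAID-5 array, make a file system
    # on it, and mount it -- three steps instead of one, easily wrapped
    # in a script, but with no block-level checksumming.
    mdadm --create /dev/md0 --level=5 --raid-devices=3 $DISKS
    mkfs -t xfs /dev/md0
    mount /dev/md0 /mnt/tank
    ```

    The checksumming shows up operationally in commands like `zpool scrub tank`, which walks the pool verifying every block against its checksum; that is the part an mdadm + mkfs wrapper cannot replicate as easily.
    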

    The issue I wanted to handle was how to tune our hardware for best zfs performance, for a customer who wanted to run the Sun official version. What annoyed me is that I cannot post the results of the tests, so we cannot compare these to other tests on (literally) this hardware.

    Unfortunately, due to its license (and the patents), ZFS will likely never find its way into the Linux kernel. A shame, but things like btrfs are GPL and will likely take its place (and, given the GPL heritage, be available for many more OSes). Ceph and GlusterFS are also very interesting technologies in varying stages of completeness. GlusterFS in particular is robust enough for use (and we have customers using it).

    Again, I hope we can get ZFS performing well, and we would certainly like to see the Linux community using it. The ZFS-on-FUSE project is out there, but I am not sure how active it is right now; I see commits from a few months ago in the repository. Hopefully more people can contribute to this effort.