New BeeGFS Release Targets High-Availability Storage


Today Fraunhofer ITWM and ThinkParQ announced a major new release of the BeeGFS parallel file system. Now available as a free download, BeeGFS version 2015.03-r1 brings enterprise features including built-in storage server high availability based on replication with self-healing and support for access control lists (ACLs), along with a number of performance and usability improvements.

BeeGFS (formerly FhGFS) is a parallel cluster file system developed with a strong focus on performance and designed for very easy installation and management. If I/O-intensive workloads are your problem, BeeGFS is the solution.

The primary focus of the 2015.03-r1 release is the introduction of high availability for storage servers as an enterprise feature. Data is replicated across different storage servers, so an automatic failover takes place when a storage server fails, and self-healing resynchronizes the server once it comes back online.
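To make the mechanism concrete, here is a minimal Python sketch of the general replication-with-failover idea; this is not BeeGFS's actual implementation, and the class and target names are purely illustrative.

```python
# Toy model (not BeeGFS code): two storage targets form a mirror pair;
# writes are replicated to both, reads fail over to the surviving copy,
# and a returning target is resynchronized before serving again.

class Target:
    def __init__(self, name):
        self.name = name
        self.online = True
        self.chunks = {}  # chunk_id -> data

class MirrorGroup:
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def write(self, chunk_id, data):
        # Replicate to every online member of the group.
        for t in (self.primary, self.secondary):
            if t.online:
                t.chunks[chunk_id] = data

    def read(self, chunk_id):
        # Automatic failover: try the primary first, then the buddy.
        for t in (self.primary, self.secondary):
            if t.online and chunk_id in t.chunks:
                return t.chunks[chunk_id]
        raise OSError(f"no online replica holds chunk {chunk_id}")

    def resync(self, returning, source):
        # Self-healing: copy over whatever was written while down.
        for chunk_id, data in source.chunks.items():
            returning.chunks.setdefault(chunk_id, data)
        returning.online = True

a, b = Target("storage01"), Target("storage02")
group = MirrorGroup(a, b)
group.write("c1", b"hello")
a.online = False                # storage01 fails
group.write("c2", b"world")     # file system stays writable
assert group.read("c2") == b"world"
group.resync(a, b)              # storage01 returns and is healed
assert a.chunks["c2"] == b"world"
```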

Additionally, this release adds BeeOND (BeeGFS on demand) as a stand-alone package. BeeOND provides a tool that can create a complete BeeGFS instance on the fly with a single command. While this functionality was essentially usable in previous releases, its usability has now been improved considerably. The BeeOND package contains not only tools to create and tear down a file system, but also tools to perform a parallel copy of data between file systems (e.g. between a global cluster storage and a per-job BeeOND instance).
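The actual copy tooling ships with the BeeOND package; purely for illustration, the underlying pattern of a parallel tree copy between two mounted file systems can be sketched in Python along these lines (the paths are placeholders):

```python
# Illustrative sketch of a parallel copy between two mounted file
# systems (e.g. global storage -> a per-job BeeOND mount): many
# concurrent streams keep a parallel file system busy.
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

def parallel_copy(src_root, dst_root, workers=8):
    """Replicate the tree under src_root into dst_root,
    copying files concurrently."""
    jobs = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            dst_dir = os.path.join(dst_root, rel)
            os.makedirs(dst_dir, exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                jobs.append(pool.submit(shutil.copy2, src,
                                        os.path.join(dst_dir, name)))
    for job in jobs:
        job.result()  # re-raise any copy error

# e.g. parallel_copy("/global/project/input", "/mnt/beeond/input")
```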

Besides that, BeeGFS now supports extended attributes as well as access control lists (ACLs), as illustrated in the short example below. Another general usability improvement, complementing the optional GUI-based setup, is the introduction of new setup tools that allow command-line configuration of the BeeGFS services without the need to edit configuration files.

To optimize the performance of multi-target storage servers under high load, the storage server worker threads are now grouped and dedicated to individual storage targets, resulting in better balance and fairness when handling parallel client requests. Other performance improvements include a new aggressive low-latency mode, metadata access optimizations, and enhancements for many-core servers. For a complete list, please refer to the changelog document that accompanies the release packages.
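As a quick illustration of the new extended-attribute support, a short Python snippet can verify that user attributes stick on a mounted file system; these are standard-library calls available on Linux, and the mount path is only a placeholder.

```python
import os

path = "/mnt/beegfs/example.txt"   # placeholder BeeGFS mount point
open(path, "w").close()            # create an empty test file

# Attach and read back a user-namespace extended attribute.
os.setxattr(path, "user.project", b"climate-sim")
print(os.getxattr(path, "user.project"))   # b'climate-sim'
print(os.listxattr(path))                  # ['user.project']

# POSIX ACLs are stored by the Linux kernel in the "system"
# namespace (e.g. system.posix_acl_access).
```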

As before, the file system client's native kernel module remains compatible with a wide range of Linux kernel versions, starting with 2.6.18, and the new release adds support for the recent 4.0 kernel.
