PSSC Labs Updates CBeST Cluster Management Software


Today PSSC Labs announced it has refreshed its CBeST (Complete Beowulf Software Toolkit) cluster management package. CBeST is already a proven platform, deployed on over 2200 PowerWulf Clusters to date, and with this refresh PSSC Labs is adding a host of new features and upgrades to ensure users have everything needed to manage, monitor, maintain and upgrade their HPC clusters.

“PSSC Labs is unique in that we manufacture all of our own hardware and develop our own cluster management toolkits in house. While other companies simply cobble together third party hardware and software, PSSC Labs custom builds every HPC cluster to achieve performance and reliability boosts of up to 15%,” said Alex Lesser, Vice President of PSSC Labs. “Our highly skilled and deeply knowledgeable engineers can modify every CBeST component to complement the customer’s unique hardware specifications and computing needs, and are here to provide responsive support for the lifetime of the product. The end result is a superior, ready-to-run HPC solution at a cost-effective price.”

The CBeST software stack is integrated into PSSC Labs’ PowerWulf Clusters to deliver a preconfigured solution with all the necessary hardware, network settings and cluster management software prior to shipping. Thanks to its component-based design, CBeST is the most flexible cluster management software package available.

New CBeST Version 4 features include:

  • Support for CentOS 7 & Red Hat 7
    • Previous versions of CBeST only supported CentOS 6 and Red Hat 6
  • Diskless Compute Node Support
    • Cost — Because the compute nodes have no disks, the cost is reduced. The budget typically allocated for traditional hard disks/SSDs can either be saved entirely or reinvested into other areas of the cluster (network storage, additional RAM, or even extra compute nodes).
    • Stability — Hard drives are the most failure-prone component. Eliminating them also removes the biggest potential point of failure from each compute node.
    • Performance — Since the operating system runs in a minimal footprint of RAM as opposed to a hard drive, performance is generally superior.
    • Security — Some companies and government agencies have IT security requirements for the disposal of failed storage devices. Diskless compute nodes eliminate this issue.
    • Management/Provisioning — Compute node software can be managed from a single chroot (change root) environment, which also makes software changes/upgrades very simple to test: back up the existing image, make the changes, and reboot the nodes. If something goes wrong, revert to the backup and reboot again to restore the nodes to their previous state.
  • Support for the latest high-speed network fabrics
    • Intel Omni-Path (56 Gbps & 100 Gbps) network backplane
    • Mellanox EDR InfiniBand (100 Gbps) network backplane
    • Higher-speed network fabrics allow faster computational speed and better overall cluster performance
  • Support for the latest processor and coprocessor technologies including:
    • Intel Xeon Phi
    • NVIDIA P100 GPU
    • Altera FPGAs
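
The chroot-based image workflow described above can be sketched in a few shell commands. This is a minimal illustration only: the image path, the demo setup, and the commented `chroot`/`pdsh` commands are assumptions for the example, not CBeST’s actual tooling.

```shell
#!/bin/sh
# Sketch of the back-up / modify / reboot cycle for a diskless node image.
set -e

# Demo setup: a temporary directory stands in for the real chroot image.
IMAGE=$(mktemp -d)
echo "kernel-3.10" > "$IMAGE/version"
BACKUP="${IMAGE}.bak"

# 1. Back up the existing image before changing anything.
cp -a "$IMAGE" "$BACKUP"

# 2. Make changes inside the image (in production this might be, e.g.,
#    `chroot "$IMAGE" yum -y install <package>` -- an assumed example).
echo "kernel-3.18" > "$IMAGE/version"

# 3. Reboot the compute nodes so they pick up the modified image
#    (e.g. with a parallel shell such as pdsh -- also an assumption).

# If the upgrade misbehaves: revert to the backup and reboot again.
rm -rf "$IMAGE"
cp -a "$BACKUP" "$IMAGE"
cat "$IMAGE/version"
```

Because the nodes hold no local state, the revert step is nothing more than restoring one directory tree on the head node and rebooting.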

Offering support for these new processor and co-processor technologies widens the range of computational problems that can be solved using PowerWulf Clusters. Support for Xeon Phi and NVIDIA P100 GPUs is key because they are often central to deep learning, machine learning and artificial intelligence applications.

Every PowerWulf HPC Cluster with CBeST includes a one-year unlimited phone/email support package (additional years of support are available). Prices for a custom-built PowerWulf solution start at $20,000.
