Supermicro Launches Total Solution for Lustre on ZFS

“With Supermicro’s 90 top-load 3.5-inch hot-swap bay JBOD as the storage core of our Lustre Pod Cluster, we maximize performance, density and capacity and simplify serviceability for massive-scale HA storage deployments. Combining our preconfigured, validated 2U SuperStorage OSS, 1U Ultra SuperServer with Intel Enterprise Edition for Lustre software, and global service and support, Supermicro has the Total Solution for Lustre ready for HPC, Genomics and Big Data.”

Seagate Adopts Intel Enterprise Edition for Lustre

Today Seagate announced it will incorporate Intel Enterprise Edition for Lustre (IEEL), a big data software platform, into its market-leading ClusterStor storage architecture for high-performance computing. The move will strengthen Seagate’s HPC data storage product line and provide customers with an additional choice of Lustre parallel file systems to help drive advancements in the HPC and big data market.

Video: Seagate Exascale HPC Storage

“Traditionally, storage systems have relied on brute force rather than intelligent design to deliver the required throughput, but the current trend is to design balanced systems that fully utilize the back-end storage and other related components. These new systems need fine-grained power control all the way down to individual disk drives, as well as tools for continuous monitoring and management. In addition, the storage solutions of tomorrow need to support multiple tiers, including back-end archiving systems supported by HSM, as well as multiple file systems if required. This presentation provides a short update on where Seagate HPC storage is today.”
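
The HSM-backed tiering mentioned in the talk is something Lustre already exposes through the standard `lfs hsm_*` commands. Below is a minimal sketch, assuming a mounted Lustre file system with an HSM coordinator and copytool already configured; the file path is hypothetical. It archives a cold file to the back-end tier and then releases its data blocks from the fast tier, while the file stays visible in the namespace.

```python
#!/usr/bin/env python3
"""Minimal sketch of HSM-driven tiering on a Lustre client.

Assumes: a mounted Lustre file system with an HSM coordinator and a
running copytool; the file path below is hypothetical.
"""
import subprocess

COLD_FILE = "/mnt/lustre/results/run042.dat"  # hypothetical path

def lfs(*args):
    """Run an `lfs` subcommand and return its stdout."""
    result = subprocess.run(["lfs", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# Copy the file's data out to the archive tier (tape, object store, ...).
# Note: archiving is asynchronous; in practice you would poll
# `lfs hsm_state` until the "archived" flag appears before releasing.
lfs("hsm_archive", COLD_FILE)

# Drop the data blocks from the fast OST tier once archived; the file
# remains in the namespace and is transparently restored on next read.
lfs("hsm_release", COLD_FILE)

# Inspect the HSM state flags (e.g. "exists archived released").
print(lfs("hsm_state", COLD_FILE))
```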

Hewlett Packard Enterprise Expands HPC Portfolio

“As high performance and webscale applications become mainstream, HPE’s continued focus on this market is yielding positive results for our customers,” said Bill Mannel, vice president and general manager, HPC, Big Data and IoT Servers, HPE. “Already, more than a third of the HPC market is using HPE compute platforms to enhance scientific and business innovation and gain a competitive edge. Today’s announcement reinforces our commitment to delivering new infrastructure solutions that satisfy our customers’ insatiable need for massive compute power to fuel new applications and unlock the value of their data.”

The Lustre Parallel File System—A Landscape of Topics and Insight from the Community

Since its beginnings in 1999 as a project at Carnegie Mellon University, Lustre, the high performance parallel file system, has come a long, long way. Designed from the outset with a focus on performance and scalability, it is now part of nearly every High Performance Computing (HPC) cluster on the Top500.org list of the fastest computers in the world: it is present in 70 percent of the top 100 and in nine of the top ten. That’s an achievement for any developer, or community of developers in Lustre’s case, to be proud of. Learn what the HPC community is saying about Lustre.
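
Much of that scalability comes from striping: a single file's data is spread across multiple Object Storage Targets (OSTs) so many clients can read and write it in parallel. As a rough illustration (the directory path and stripe parameters here are made up for the example), the standard `lfs` utility controls the layout:

```python
#!/usr/bin/env python3
"""Rough illustration of controlling Lustre file striping.

Assumes a mounted Lustre file system; the directory path and stripe
parameters are made up for the example.
"""
import subprocess

TARGET_DIR = "/mnt/lustre/checkpoints"  # hypothetical directory

# Stripe new files in this directory across 8 OSTs in 4 MiB chunks,
# so large sequential I/O is spread over 8 storage servers in parallel.
subprocess.run(
    ["lfs", "setstripe", "--stripe-count", "8",
     "--stripe-size", "4m", TARGET_DIR],
    check=True,
)

# Show the layout that newly created files will inherit.
subprocess.run(["lfs", "getstripe", TARGET_DIR], check=True)
```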

Job of the Week: Lustre Software Integrator at GDH

GDH Consulting in Albuquerque is seeking a Lustre File Systems HPC Software Integrator in our Job of the Week. “A systems programmer with proficient coding experience in C is required to participate in the implementation of research file systems and related systems software. The qualified candidate will have the opportunity to work on innovative approaches to data movement on large-scale Linux clusters, and to develop new strategies for attacking the challenging issues in the I/O arena that arise from the requirement that the file system scale and be robust.”

OpenSFS Releases Lustre 2.8.0 for LUG 2016 Conference

Today the Open Scalable File Systems (OpenSFS) community announced the release of Lustre 2.8.0, the fastest and most scalable parallel file system. OpenSFS, founded in 2010 to advance Lustre development, is the premier non-profit organization promoting the use of Lustre and advancing its capabilities through coordinated releases of the Lustre file system.

Podcast: Intel Moves HPC Forward with Broadwell Family of Xeon Processors

In this podcast, Rich Brueckner interviews Hugo Saleh, Director of Marketing for the Intel High Performance Computing Platform Group. They discuss the new Intel® Xeon® processor E5-2600 v4 product family, based upon the Broadwell microarchitecture, and the first processor within Intel® Scalable System Framework (Intel® SSF). Hugo describes how the new processors improve HPC performance and examines the impact of Intel® SSF on vastly different procurements, ranging from the massive 200-petaflops Aurora system to small and large enterprise clusters as well as scientific systems.

SGI Provides Total with Improved Modeling to Support Decision Making

Total, one of the largest integrated oil and gas companies in the world, announced it is boosting the compute power of its SGI Pangea supercomputer with an additional 4.4 petaflops provided by a new SGI ICE X system based on the Intel Xeon processor. Purchased last year, the new SGI system is now in production and will allow Total to determine optimal extraction methods more quickly. The SGI supercomputer allows Total to improve complex modeling of the subsurface and to simulate the behavior of reservoirs, reducing the time and costs associated with discovering and extracting energy reserves.

Seagate and LANL to Heat Up Data Archiving For Supercomputers

Seagate Technology and Los Alamos National Laboratory are researching a new storage tier to enable massive data archiving for supercomputing. The joint effort is aimed at determining innovative new ways to keep massive amounts of stored data available for rapid access, while also minimizing power consumption and improving the quality of data-driven research. Under a Cooperative Research and Development Agreement, Seagate and Los Alamos are working together on power-managed disk and software solutions for deep data archiving, which represents one of the biggest challenges faced by organizations that must juggle increasingly massive amounts of data using very little additional energy.