Lustre Software for Intel Cloud

Even the largest HPC clusters can experience degradation due to poor I/O performance. This occurs as massive amounts of data and increasingly large individual files combine with limited disk drive hardware capacity to cause significant bottlenecks. Lustre is an open source parallel file system that improves the overall scalability and performance of HPC clusters. It provides cluster client nodes with shared access to file system data in parallel, greatly increasing throughput and performance. Lustre is the most widely used HPC storage system in the world, with parallel storage capabilities used by over 50% of HPC deployments, and it can scale to tens of thousands of clients.
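
To make that parallelism concrete, here is a minimal sketch (not from the article) of striping a file across several Lustre object storage targets with the standard lfs utility, driven from Python. The mount point /mnt/lustre, the stripe count, and the stripe size are illustrative assumptions.

```python
# Minimal sketch: stripe a new file across multiple Lustre OSTs so that
# parallel writes from client nodes are spread over several storage targets.
# Assumes a Lustre client mounted at /mnt/lustre (hypothetical path) and the
# standard `lfs` user utility on the PATH.
import subprocess

LUSTRE_FILE = "/mnt/lustre/scratch/output.dat"  # hypothetical location

# Create the file with a layout of 8 stripes, 4 MiB each (illustrative values).
subprocess.run(
    ["lfs", "setstripe", "-c", "8", "-S", "4M", LUSTRE_FILE],
    check=True,
)

# Show the resulting layout; clients writing disjoint byte ranges of this
# file now talk to several OSTs in parallel.
subprocess.run(["lfs", "getstripe", LUSTRE_FILE], check=True)
```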

InsideHPC Guide to Lustre Solutions for Business

In this Guide, we take a look at what an HPC solution like Lustre can deliver for a broad community of business and commercial organizations struggling with the challenge of big data and demanding storage growth.

Video: Parallel I/O Best Practices

In this video from the 2016 Blue Waters Symposium, Andriy Kot from NCSA presents: Parallel I/O Best Practices.
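
As a flavor of the guidance such talks typically cover, below is a minimal sketch of one widely cited best practice: writing a single shared file with collective MPI-IO rather than one file per rank. It assumes mpi4py and NumPy are available; the file name and block size are made up for illustration.

```python
# Sketch of a common parallel I/O pattern: every rank writes its own
# contiguous block of one shared file using collective MPI-IO.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

block = np.full(1024, rank, dtype=np.float64)   # this rank's data
offset = rank * block.nbytes                    # byte offset in the shared file

fh = MPI.File.Open(comm, "shared_output.dat",
                   MPI.MODE_WRONLY | MPI.MODE_CREATE)
fh.Write_at_all(offset, block)                  # collective write
fh.Close()
```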

Call for Participation: Lustre User Group at NCI in Australia

NCI in Australia has issued its Call for Participation for the Down-Under version of the 2016 Lustre User Group. The event will be held Sept. 7-8 on the campus of The Australian National University in Canberra, ACT, Australia. “LUG 2016 will be a dynamic two-day workshop that will explore improvements in the performance and flexibility of the Lustre file system for supporting diverse workloads. This will be a great opportunity for the Lustre community to discuss the challenges associated with enhancing Lustre for diverse applications, the technological advances necessary, and the associated ecosystem.”

Video: Matching the Speed of SGI UV with Multi-rail LNet for Lustre

Olaf Weber from SGI presented this talk at LUG 2016. “In collaboration with Intel, SGI set about creating support for multiple network connections to the Lustre filesystem, with multi-rail support. With Intel Omni-Path and EDR InfiniBand driving to 200Gb/s or 25GB/s per connection, this capability will make it possible to start moving data between a single SGI UV node and the Lustre file system at over 100GB/s.”
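
For context, the quoted figures work out as follows; the snippet below only does the unit conversion and aggregation, and the rail counts are assumptions rather than numbers from the talk.

```python
# Back-of-the-envelope arithmetic behind the multi-rail claim above.
link_gbps = 200                 # Omni-Path / EDR-class link speed, gigabits/s
link_gBps = link_gbps / 8       # = 25 GB/s per connection

for rails in range(1, 7):
    print(rails, "rails ->", rails * link_gBps, "GB/s aggregate")
# Five or more such rails on a single SGI UV node would exceed 100 GB/s.
```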

Superior Performance Commits Kyoto University to CPUs Over GPUs

In this special guest feature, Rob Farber writes that a study done by Kyoto University Graduate School of Medicine shows that code modernization can help Intel Xeon processors outperform GPUs on machine learning code. “The Kyoto results demonstrate that modern multicore processing technology now matches or exceeds GPU machine-learning performance, but equivalently optimized software is required to perform a fair benchmark comparison. For historical reasons, many software packages like Theano lacked optimized multicore code as all the open source effort had been put into optimizing the GPU code paths.”
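
As a rough illustration of what “equivalently optimized” CPU code can mean in practice, here is a hedged sketch of enabling Theano’s multicore (OpenMP) CPU path via its configuration flags; the thread count and flag values are assumptions, not settings from the Kyoto study.

```python
# Sketch: point Theano at its CPU backend with OpenMP enabled so that
# supported ops use all cores on the node. Values are illustrative only.
import os

os.environ["OMP_NUM_THREADS"] = "32"   # assumed core count for the node
os.environ["THEANO_FLAGS"] = "device=cpu,floatX=float32,openmp=True"

import theano
print("OpenMP enabled:", theano.config.openmp)
```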

SGI Powers Energy Research at UFRJ in Brazil

The Federal University of Rio de Janeiro is embarking on ground-breaking energy research powered by a new SGI system. The high performance computing system will be installed through SGI's valued partner, Versatus HPC.

Call for Lustre Presentations: LAD’16 in Paris

The LAD’16 conference has issued its Call for Presentations. Hosted jointly by CEA, EOFS, and OpenSFS, the Lustre Administrator & Developer Conference will take place Sept. 20-21 in Paris.

Supermicro Showcases Intel Xeon Phi and Nvidia P100 Solutions at ISC 2016

At ISC 2016, Supermicro debuted its latest innovations in HPC architectures and technologies: a 2U 4-node server supporting the new Intel Xeon Phi processors (formerly code-named Knights Landing) with an integrated or external Intel Omni-Path fabric option, together with an associated 4U/tower development workstation; a 1U SuperServer supporting up to 4 GPUs, including the next-generation P100 GPU; a Lustre high performance file system; and a 1U 48-port top-of-rack network switch with 100Gbps Intel Omni-Path Architecture (OPA). Together these provide an HPC cluster solution offering excellent bandwidth, latency, and message rate that is highly scalable and easily serviceable.

Interview: Seagate Powers Four TOP10 HPC Sites at ISC 2016

“So one of the things that we’ve really been very proud of, in terms of our progress, particularly in EMEA over the last 12 months, is that we’ve deployed a number of really significant systems. If you remember back when we were together at SC15 in Austin, one of the big pieces of news that we were very proud of was our presence in the top 10: four of those systems are actually powered by Seagate. Even more impressive is that 100% of the newest systems are powered by Seagate. When you peel that layer back just a little bit further, three of those four systems are actually from Europe and the Middle East.”