In this video, Eric Barton from Intel describes how the company is leveraging Fast Forward Collectives to improve Lustre RAS.
In this Chip Chat podcast, Mike Bernhardt, Community Evangelist for HPC and Technical Computing at Intel, discusses the importance of code modernization as we move into multi- and many-core systems in the HPC field. According to Bernhardt, markets as diverse as oil and gas, financial services, and health and life sciences can see a dramatic performance improvement in their code through parallelization.
“The single most important truth about high-performance computing (HPC) over the next decade is that it will have a more profound societal impact with each passing year. The issues that HPC systems address are among the most important facing humanity: disease research and medical treatment; climate modelling; energy discovery; nutrition; new product design; and national security. In short, the pace of change and of enhancements in HPC performance – and its positive impact on our lives – will only grow.”
Over at Admin HPC, Intel’s Jeff Layton writes that understanding how data makes its way from the application to storage devices is key to understanding how I/O works and that monitoring the lowest level of the I/O stack, the block driver, is a crucial part of this overall understanding of I/O patterns.
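On Linux, one low-overhead window into the block layer that Layton describes is the per-device counter file /proc/diskstats. The sketch below parses a sample line in that file's documented format; the sample values and the field subset chosen are illustrative, not taken from the article.

```python
# Hedged sketch: reading block-layer I/O counters in the /proc/diskstats
# format (Linux Documentation/admin-guide/iostats.rst). A sample line is
# embedded so the example is self-contained; on a real system you would
# read the file itself and diff counters over an interval.

SAMPLE = "   8       0 sda 12000 340 980000 4500 8000 120 512000 3600 0 5100 8100"

def parse_diskstats_line(line):
    """Extract a few key counters from one /proc/diskstats line."""
    fields = line.split()
    return {
        "device": fields[2],
        "reads_completed": int(fields[3]),
        "sectors_read": int(fields[5]),     # sectors are 512 bytes here
        "writes_completed": int(fields[7]),
        "sectors_written": int(fields[9]),
    }

stats = parse_diskstats_line(SAMPLE)
read_mib = stats["sectors_read"] * 512 / 2**20
print(stats["device"], f"{read_mib:.1f} MiB read")
```

Sampling these counters twice and subtracting gives per-interval throughput, which is essentially what tools like iostat report.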
As an example of what you can do with key-value storage and how simple it can be, Seagate has created a new storage drive called Kinetic that you address using REST-like commands such as get, put, and delete. A simple open-source library then allows you to develop I/O libraries so that applications can perform I/O to/from the drives. Some object storage solutions such as Swift have already been ported to use the Kinetic drives. Ceph is also developing a version that can use Kinetic drives. Other object-based storage systems such as Lustre and Gluster could theoretically use this technology as well.
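To make the get/put/delete semantics concrete, here is a minimal in-memory sketch of a key-value store interface. This is purely illustrative: the class and method names are hypothetical, and real Kinetic drives are addressed over the network through vendor client libraries, not a Python dict.

```python
# Illustrative sketch only: mimics the get/put/delete semantics of a
# key-value drive such as Seagate Kinetic with an in-memory dict.
from typing import Optional

class KeyValueStore:
    """Hypothetical stand-in for a key-value drive client."""

    def __init__(self):
        self._data = {}

    def put(self, key: bytes, value: bytes) -> None:
        # Store (or overwrite) the value under the given key.
        self._data[key] = value

    def get(self, key: bytes) -> Optional[bytes]:
        # Return the stored value, or None if the key is absent.
        return self._data.get(key)

    def delete(self, key: bytes) -> bool:
        # Remove the key; report whether anything was deleted.
        return self._data.pop(key, None) is not None

store = KeyValueStore()
store.put(b"object/0001", b"payload bytes")
print(store.get(b"object/0001"))   # b'payload bytes'
print(store.delete(b"object/0001"))  # True
print(store.get(b"object/0001"))   # None
```

An object store like Swift or Ceph layers naming, replication, and placement on top of exactly this kind of flat key-value interface, which is why porting them to Kinetic drives is feasible.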
“Working in close collaboration with Intel Labs Parallel Computing Lab, performing a series of architecture-aware optimizations, the team was able to scale the complexity of science and sustained performance to an unprecedented level. SeisSol sustained 8.6 PFLOPS (double precision), almost equivalent to 8.6 quadrillion calculations per second, when processing seismic wave phenomena using half of the Tianhe-2 supercomputer.”
In the course of this talk, Intel’s Raj Hazra unveils details of the Knights Landing architecture including the new Omni Scale Fabric, an integrated, high performance interconnect designed for CPU to CPU communications. “The industry ecosystem needs to work together to tackle challenges in system architecture, programming models, and energy efficiency – all while lowering the thresholds for broader user access and usability.”