
Teratec Forum to Put Spotlight on Big Data & HPC

The TERATEC Forum has posted the agenda for its 11th annual meeting. The event takes place June 28-29 in Palaiseau, France. “TERATEC brings together top international experts in high performance numerical design, simulation and Big Data, making it the major event in France and in Europe in this domain.”

MarFS – A Near-POSIX Namespace Leveraging Scalable Object Storage

David Bonnie from LANL presented this talk at the 2016 MSST Conference. “As we continue to scale system memory footprint, it becomes more and more challenging to scale the long-term storage systems with it. Scaling tape access for bandwidth becomes increasingly challenging and expensive when single files are in the many-terabyte to petabyte range. Object-based scale-out systems can handle the bandwidth requirements we have, but are not ideal for storing very large files as single objects. MarFS sidesteps this limitation, while still leveraging the large pool of object storage systems already in existence, by striping large files across many objects.”

Intel® Enterprise Edition for Lustre* Software—High-performance File System

Intel has been working on a new design philosophy for HPC systems called Intel® Scalable System Framework (Intel® SSF), an approach designed to enable sustained, balanced performance in HPC as the community pushes towards the Exascale computing era. Central to Intel SSF performance is the Lustre* scalable, parallel file system (PFS). Intel® Enterprise Edition for Lustre software (Intel® EE for Lustre software) is the Intel distribution of the well-known PFS, which is used by the majority of the fastest supercomputers around the world.

Storage Performance Modeling for Future Systems

In this video from the 2016 MSST Conference, Yoonho Park from IBM presents: Storage Performance Modeling for Future Systems. “The burst buffer is an intermediate, high-speed layer of storage that is positioned between the application and the parallel file system (PFS), absorbing the bulk data produced by the application at a rate a hundred times higher than the PFS, while seamlessly draining the data to the PFS in the background.”

Video: Accelerating Code at the GPU Hackathon in Delaware

In this video from the GPU Hackathon at the University of Delaware, attendees tune their code to accelerate their application performance. The 5-day intensive GPU programming Hackathon was held in collaboration with Oak Ridge National Lab (ORNL). “Thanks to a partnership with NASA Langley Research Center, Oak Ridge National Laboratory, National Cancer Institute, National Institutes of Health (NIH), Brookhaven National Laboratory and the UD College of Engineering, UD students had access to the world’s second largest supercomputer — the Titan — to help solve real-world problems.”

Job of the Week: HPC Pre-Sales Engineer at SGI

SGI is seeking an HPC Pre-Sales Engineer in our Job of the Week. “The HPC Pre-Sales Engineer role provides in-depth technical and architectural expertise in Federal Sales opportunities in the DC area, primarily working with DOD and Civilian Agencies. As the primary technical interface with the customer, you must be able to recognize customer needs, interpret them and produce comprehensive solutions.”

Register for ISC 2016 by May 11 for Early Bird Discounts

There is still time to take advantage of Early Bird registration rates for ISC 2016. You can save over 45 percent off the on-site registration rates if you sign up by May 11. “ISC 2016 takes place June 19-23 in Frankfurt, Germany. With an expected attendance of 3,000 participants from around the world, ISC will also host 146 exhibitors from industry and academia.”

Superfacility – How New Workflows in the DOE Office of Science are Changing Storage Requirements

Katie Antypas from NERSC presented this talk at the 2016 MSST conference. Katie is the Project Lead for the NERSC-8 system procurement, a project to deploy NERSC’s next-generation supercomputer in mid-2016. “The system, named Cori (after Nobel Laureate Gerty Cori), will be a Cray XC system featuring 9,300 Intel Knights Landing processors. The Knights Landing processors will have over 60 cores with 4 hardware threads each and a 512-bit vector unit. It will be crucial that users can exploit both thread and SIMD vectorization to achieve high performance on Cori.”

Peta-Exa-Zetta: Robert Wisniewski and the Growth of Compute Power

While much noise is being made about the race to exascale, building productive supercomputers really comes down to people and ingenuity. In this special guest feature, Donna Loveland profiles supercomputer architect Robert Wisniewski from Intel. “In combining the threading and memory challenges, there’s an increased need for the hardware to perform synchronization operations, especially intranode ones, efficiently. With more threads utilizing less memory with wider parallelism, it becomes important that they synchronize among themselves efficiently and have access to efficient atomic memory operations. Applications also need to be vectorized to take advantage of the wider FPUs on the chip. While much of the vectorization can be done by compilers, application developers can follow design patterns that aid the compiler’s task.”

New Report Charts Future Directions for NSF Advanced Computing Infrastructure

A newly released report commissioned by the National Science Foundation (NSF) and conducted by the National Academies of Sciences, Engineering, and Medicine examines priorities and associated trade-offs for advanced computing investments and strategy. “We are very pleased with the National Academies’ report and are enthusiastic about its helpful observations and recommendations,” said Irene Qualters, NSF Advanced Cyberinfrastructure Division Director. “The report has had a wide range of thoughtful community input and review from leaders in our field. Its timing and content give substance and urgency to NSF’s role and plans in the National Strategic Computing Initiative.”