In this video from the LAD’14 Conference in Reims, Nathan Rutman from Seagate presents: Exascale: A Long Look at Lustre Limitations.
“Exascale levels of computing pose many system- and application-level computational challenges. Mellanox as a provider of end-to-end communication services is progressing the foundation of the InfiniBand architecture to meet the exascale challenges. This presentation will focus on recent technology improvements which significantly improve InfiniBand’s scalability, performance, and ease of use.”
“The business case for exascale in O&G is extremely compelling, and — as anyone who has read Daniel Yergin’s “The Prize” will appreciate — goes to the very core of why IOCs exist. In the search for oil and gas in the Gulf of Mexico — one of the richest hydrocarbon basins in the world that continues to reinvent itself for exploration plays — the biggest prizes lie in ultra-deep water. In a deeply submerged area about 300 miles southwest of New Orleans and extending into Mexican waters, rock formations from the Paleogene period, also known as the Lower Tertiary, represent the leading edge of deep-water oil discovery.”
Today insideHPC announced that the organization is seeking nominations for the HPC Vanguard Award. Launched by The Exascale Report in 2013, the award recognizes critical leaders in the HPC community’s strategic push to achieve exascale levels of supercomputing performance. “The HPC Vanguard Award recognizes leadership in driving the HPC community forward,” said Rich Brueckner, President of insideHPC. “As the name suggests, Vanguards consistently push the envelope and are always open to new, innovative thinking.”
“In terms of the hardware, one of the biggest successes surely was to make the Intel Xeon Phi boot via the Extoll network. This might not sound so special, but for the DEEP project it is – because this basically is the essential milestone for proving our architectural concept: The Cluster-Booster approach. In traditional heterogeneous architectures the accelerators cannot boot without a host CPU. Our aim was to develop a cluster – made up of usual CPUs – and a booster – made up of accelerators – that can both act autonomously while being interconnected via two networks.”
NERSC has accepted a selection of key DOE science projects into its NERSC Exascale Scientific Applications Program (NESAP), a collaborative effort in which NERSC will partner with code teams to prepare for the NERSC-8 Cori manycore architecture. NESAP represents an important opportunity for researchers to prepare application codes for the new architecture and to help advance […]
In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the new Cray CS-Storm supercomputer based on Nvidia GPUs. After that, the discussion turns to exascale investment recommendations coming out of a new report from a Department of Energy Task Force.
A new report on the problems and opportunities that will drive the need for next-generation HPC has been released by the Task Force on High Performance Computing of the Secretary of Energy Advisory Board. Commissioned by the Secretary of Energy, Dr. Ernest J. Moniz, the report includes recommendations on where the DOE and the NNSA should invest to deliver the next class of leading-edge machines by the middle of the next decade.
In an unprecedented collaboration, eight national laboratories will apply supercomputing resources to a new climate study with the National Center for Atmospheric Research. The project, called Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application of fully coupled, state-of-the-science Earth system models for scientific and energy applications.