“Exascale levels of computing pose many system- and application-level computational challenges. Mellanox Technologies, Inc., as a provider of end-to-end communication services, is advancing the foundation of the InfiniBand architecture to meet these exascale challenges. This presentation will focus on recent technology improvements that significantly improve InfiniBand’s scalability, performance, and ease of use.”
“This talk will focus on challenges in designing software libraries and middleware for upcoming exascale systems with millions of processors and accelerators. Two application domains, scientific computing and Big Data, will be considered. For the scientific computing domain, we will discuss challenges in designing runtime environments for MPI and PGAS (UPC and OpenSHMEM) programming models, taking into account support for multi-core processors, high-performance networks, GPGPUs, and the Intel MIC.”
Over at the ISC Blog, Dr. Mirko Rahn from Fraunhofer writes that Partitioned Global Address Space (PGAS) approaches have become a hot topic in the exascale computing domain. While much work remains to be done in this area, the EC-funded EPiGRAM project has identified the gaps that must be filled when attempting to master the exascale challenge with PGAS.
“We really need to re-examine the requirements that will lead us all the way up to being able to support exascale deployments. One of these absolute requirements is CPU-fabric integration, because the performance that’s needed, the density, and the power are all areas that have to be vastly improved to support exascale deployments.”
“When OpenACC first appeared, it made sense to use this forum to experiment with new approaches while the use of GPUs in HPC was evolving rapidly, with the expectation that the best ideas would then be reintroduced into OpenMP. But OpenMP and OpenACC now seem to be diverging. Indeed, a comparison of OpenACC and OpenMP on the OpenACC web site says that ‘efforts so far to include support for GPUs in the OpenMP specification are — in the opinions of many involved — at best insufficient, and at worst misguided.’”
“HPC I/O problems have not been resolved so far, and the future of exascale is full of uncertainties. The good news is that we have detected an appetite for change in both the storage and the application communities. In addition, a wilderness of new hardware will be arriving, such as deeper hierarchies of storage devices, storage-class memories, and large numbers of cores per node. This new hardware may contribute parts of the solution, but it will also bring new issues to the forefront, requiring storage and application architects to revisit some ideas used so far.”
In this video from the OpenFabrics International Developer Workshop 2014, SETI@home co-founder Dan Werthimer presents: Petaflop Radio Astronomy Signal Processing and the CASPER Collaboration. As a bellwether for exascale, radio astronomy projects like CASPER and the SKA telescope are pushing the limits of high performance computing.
“OpenFabrics Alliance members are working to streamline the verbs interface and make it more efficient. Since this is an open-source effort, it will take some time, but the consensus is that there is too much overhead, both in the depth of the call stack and in the size of the associated data structures, for verbs to scale well in the exascale era.”
“This talk describes an experimental methodology, ParalleX, that addresses exascale challenges through a change in the fundamental model of parallel computation from communicating sequential processes (e.g., MPI) to an innovative synthesis of concepts involving message-driven work-queue execution in the context of a global address space.”