Today the HPC Advisory Council announced the HPCAC Spain Conference 2015. The event takes place Sept. 22 at the Campus Diagonal Nord in Barcelona, Spain.
“Named in honor of Professor Emeritus John R. Rice, the Rice supercomputer will be used by researchers to develop new treatments for cancer, improve crop yields to better feed the planet, engineer quieter aircraft, study global climate change and probe the origins of the universe.”
Built with HP compute nodes, the Rice supercomputer is powered by 10-core Intel Xeon E5 processors, 64 GB of memory per node, and 56Gb/s FDR InfiniBand from Mellanox. Rice nodes are now running Red Hat Enterprise Linux 6 with Moab and TORQUE for job management.
In this slidecast, Pavel Shamis from ORNL and Gilad Shainer from Mellanox announce the UCX (Unified Communication X) framework. “UCX is a collaboration between industry, laboratories, and academia to create an open-source, production-grade communication framework for data-centric and HPC applications.”
“Learn about extensions that enable efficient use of Partitioned Global Address Space (PGAS) Models like OpenSHMEM and UPC on supercomputing clusters with NVIDIA GPUs. PGAS models are gaining attention for providing shared memory abstractions that make it easy to develop applications with dynamic and irregular communication patterns. However, the existing UPC and OpenSHMEM standards do not allow communication calls to be made directly on GPU device memory. This talk discusses simple extensions to the OpenSHMEM and UPC models to address this issue.”
“We present results for a platform consisting of an NVM Express SSD, a CAPI accelerator card, and a software stack running on a POWER8 system. We show how the threading of the POWER8 CPU can be used to move data from the SSD to the CAPI card at very high speeds and implement accelerator functions inside the CAPI card that can process the data at these speeds.”
“E4 Computer Engineering has introduced ARKA, the first server solution based on a 64-bit ARM SoC dedicated to HPC. The compute node is boosted by discrete NVIDIA K20 GPU cards, with 10Gb Ethernet and FDR InfiniBand networks implemented by default. In this presentation, the hardware configuration of the compute node is described in detail. The unique capabilities of the ARM+GPU+IB combination are described, including many synthetic benchmarks and application tests with particular attention to molecular dynamics software.”
“ConnectX-4 EDR 100Gb/s with CAPI support tightly integrates with the POWER CPU at the local bus level and provides faster access between the POWER CPU and the network device. We will discuss the latest interconnect advancements that maximize application performance and scalability on OpenPOWER architecture, including enhanced flexible connectivity with the latest Mellanox ConnectX-3 Pro Programmable Network Adapter.”