Today One Stop Systems announced its High Density Compute Accelerator (HDCA) with AMD’s newly released S9170 GPUs. The 3U High Density Compute Accelerator (CA16003) provides up to 84 Tflops of peak single precision (SP) performance, 42 Tflops of peak double precision (DP) performance, and 512GB of GPU memory using AMD FirePro S9170 GPU accelerators.
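The aggregate figures above are consistent with a chassis holding 16 S9170 GPUs. A minimal arithmetic sketch, assuming AMD's published per-GPU figures of roughly 5.24 Tflops SP, 2.62 Tflops DP, and 32GB of memory (the GPU count and per-GPU numbers are inferred, not stated in the announcement):

```python
# Hypothetical sanity check of the HDCA aggregate specs,
# assuming 16 AMD FirePro S9170 GPUs per 3U chassis.
GPUS = 16
SP_PER_GPU_TFLOPS = 5.24   # peak single precision per S9170 (assumed)
DP_PER_GPU_TFLOPS = 2.62   # peak double precision per S9170 (assumed)
MEM_PER_GPU_GB = 32        # on-board memory per S9170

total_sp = GPUS * SP_PER_GPU_TFLOPS   # ~83.8 Tflops, rounds to "up to 84"
total_dp = GPUS * DP_PER_GPU_TFLOPS   # ~41.9 Tflops, rounds to "42"
total_mem = GPUS * MEM_PER_GPU_GB     # 512 GB of GPU memory

print(total_sp, total_dp, total_mem)
```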
Today IBM, along with Nvidia and two U.S. Department of Energy National Laboratories, announced a pair of Centers of Excellence for supercomputing – one at Lawrence Livermore National Laboratory and the other at Oak Ridge National Laboratory. The collaborations are in support of IBM’s supercomputing contract with the U.S. Department of Energy. They will enable advanced, large-scale scientific and engineering applications, both in support of DOE missions and for the Summit and Sierra supercomputer systems, which are to be delivered to Oak Ridge and Lawrence Livermore, respectively, in 2017 and to be operational in 2018.
In this podcast, the Radio Free HPC team looks at how the KatRisk startup is using GPUs on the Titan supercomputer to calculate global flood maps. “KatRisk develops event-based probabilistic models to quantify portfolio aggregate losses and exceedance probability curves. Their goal is to develop models that fully correlate all sources of flood loss including explicit consideration of tropical cyclone rainfall and storm surge.”
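An exceedance probability (EP) curve, as mentioned in the quote, gives the probability that annual losses meet or exceed each loss level. A minimal sketch of building an empirical EP curve from simulated annual losses — the lognormal data and all parameters here are synthetic stand-ins, not KatRisk's model:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for catastrophe-model output: 10,000 simulated
# annual aggregate portfolio losses (in $M, synthetic data)
annual_losses = rng.lognormal(mean=3.0, sigma=1.0, size=10_000)

# Sort losses from largest to smallest; the i-th largest loss
# is exceeded with empirical probability i / N
losses_desc = np.sort(annual_losses)[::-1]
exceedance_prob = np.arange(1, losses_desc.size + 1) / losses_desc.size

# Read a return-period loss off the curve, e.g. the
# 1-in-100-year loss corresponds to an EP of 1%
idx = np.searchsorted(exceedance_prob, 0.01)
print(f"1-in-100-year loss: ${losses_desc[idx]:.1f}M")
```

Production models layer event rates, correlation across perils, and portfolio structure on top of this ranking step, but the sort-and-rank construction is the core of an empirical EP curve.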
Today IBM announced that the company is now offering Nvidia Tesla K80 GPU accelerators on bare metal cloud servers. With the new offering, IBM Cloud is bringing high-speed performance to the SoftLayer cloud infrastructure, enabling companies to build supercomputing clusters without having to expand their existing technology infrastructure.
“The AMD FirePro S9170 server GPU can accelerate complex workloads in scientific computing, data analytics, or seismic processing, wielding an industry-leading 32GB of memory. We designed the new offering for supercomputers to achieve massive compute performance while maximizing available power budgets.”
Today Nvidia updated its GPU-accelerated deep learning software to improve training performance. With new releases of DIGITS and cuDNN, the software provides significant performance enhancements to help data scientists create more accurate neural networks through faster model training and more sophisticated model design.