Today GENCI announced a collaboration with IBM aimed at speeding up the path to exascale computing. “The collaboration, planned to run for at least 18 months, focuses on readying complex scientific applications for systems under development expected to achieve more than 100 petaflops, a solid step forward on the path to exascale. Working closely with supercomputing experts from IBM, GENCI will have access to some of the most advanced high performance computing technologies stemming from the rapidly expanding OpenPOWER ecosystem.”
Today the InfiniBand Trade Association (IBTA) announced the completion of the first Plugfest for RDMA over Converged Ethernet (RoCE) solutions and the publication of the RoCE Interoperability List on the IBTA website. Fifteen member companies participated, bringing their RoCE adapters, cables and switches to the event for testing. Products that successfully passed the testing have been added to the RoCE Interoperability List.
Today Mellanox announced that its EDR 100Gb/s InfiniBand solutions have been selected by the KTH Royal Institute of Technology for use in its PDC Center for High Performance Computing. Mellanox’s robust and flexible EDR InfiniBand solution offers higher interconnect speed, lower latency and smart acceleration capabilities to maximize efficiency, and will enable the PDC Center to achieve world-leading data center performance across a variety of applications, including advanced modeling of climate change, brain function and protein-drug interactions.
In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make the LHC possible. “The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter.”
Today Norway’s Dolphin Interconnect Solutions demonstrated a record low latency of 300 nanoseconds at IDF 2015. Dolphin achieved this record by adding Intel Xeon Non-Transparent Bridging (NTB) support to its existing PCI Express network product. In addition, Dolphin announced a new PCIe 3.0 host adapter, the PXH810 Host Adapter, which achieves 540 nanoseconds of latency at 64 Gbps wire speed.
The big-memory “Blacklight” system at the Pittsburgh Supercomputing Center will be retired on August 15 to make way for the new “Bridges” supercomputer. “Built by HP, Bridges will feature multiple nodes with as much as 12 terabytes each of shared memory, equivalent to unifying the RAM in 1,536 high-end notebook computers. This will enable it to handle the largest memory-intensive problems in important research areas such as genome sequence assembly, machine learning and cybersecurity.”
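As a rough sanity check of that comparison (assuming about 8 GiB of RAM per high-end notebook, a figure the announcement does not state), the arithmetic lines up:

```python
# Back-of-envelope check of the "1,536 notebooks" comparison for one Bridges node.
node_memory_gib = 12 * 1024       # 12 TiB of shared memory per node, expressed in GiB
notebook_ram_gib = 8              # assumed RAM per high-end notebook (not stated in the announcement)
print(node_memory_gib / notebook_ram_gib)  # -> 1536.0
```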
“Data centric workloads are growing in importance in high-performance computing and, in an industry that has been dominated by a handful of technologies for several years, this has led users to look for new technologies which better suit such jobs. The demand now is for low power, high memory, and I/O-intensive solutions, so there is a growing niche which can be addressed by solutions which are less focused on Flops performance.”
Today the California Institute for Telecommunications and Information Technology (Calit2) and Australia’s Academic and Research Network (AARNet) announced a partnership to connect Australian researchers to the US Pacific Research Platform (PRP), a next generation data sharing network linking research universities and supercomputing centers at unprecedented speeds.