“STFC Hartree Centre needed a powerful, flexible server system that could drive research in energy efficiency as well as economic impact for its clients. By extending its System x platform with NeXtScale System, Hartree Centre can now move toward exascale computing, support sustainable energy use and help its clients gain a competitive advantage.” Sophisticated data processes are now integral to all areas of research and business. Whether you are new to supercomputing, data analytics and cognitive techniques, or are already using them, Hartree’s easy-to-use portfolio of advanced computing facilities, software tools and know-how can help you achieve better research outcomes faster and at lower cost than traditional research methods.
In this special guest feature from Scientific Computing World, Cray’s Barry Bolding gives some predictions for the supercomputing industry in 2017. “2016 saw the introduction or announcement of a number of new and innovative processor technologies from leaders in the field such as Intel, Nvidia, ARM, AMD, and even from China. In 2017 we will continue to see capabilities evolve, but as the demand for performance improvements continues unabated and CMOS struggles to drive performance improvements, we’ll see processors becoming more and more power hungry.”
“The multidisciplinary research team and computational facilities – including MareNostrum – make BSC an international centre of excellence in e-Science. Since its establishment in 2005, BSC has developed an active role in fostering HPC in Spain and Europe as an essential tool for international competitiveness in science and engineering. The center manages the Red Española de Supercomputación (RES), and is a hosting member of the Partnership for Advanced Computing in Europe (PRACE) initiative.”
“Phase one at CINECA, an academic consortium, was completed in May 2016 – coming in at 1.7 Petaflops, which at the time was the largest Intel Omni-Path Fabric system in the world. Lenovo and CINECA are pleased to announce the delivery and installation of phase two, a 3,600-node Intel Xeon Phi processor-based system interconnected with 100Gb Intel Omni-Path fabric – delivering 6.2 Petaflops of performance.”
In this video, CoolIT Systems CEO & CTO, Geoff Lyon, and STULZ ATS President, Joerg Desler, discuss high-density Chip-to-Atmosphere™ data center liquid cooling solutions for organizations big or small. When integrated, CoolIT Systems’ DCLC™ solutions can capture 85% or more of the servers’ heat directly into liquid. Complementing DCLC™, STULZ precision air cooling products capture the balance of the lower-density heat. A considerable benefit arises when the total heat energy from both systems is consolidated, transported outside and then dissipated or recaptured for reuse – to heat nearby buildings, for example.
Designed specifically with researchers in mind, the Birmingham Environment for Academic Research (BEAR) Cloud will augment an already rich set of IT services at the University of Birmingham and will be used by academics across all disciplines, from Medicine to Archaeology, and Physics to Theology. “We are very proud of the new system, but building a research cloud isn’t easy,” said Simon Thompson, Research Computing Infrastructure Architect in IT Services at the University of Birmingham. “We challenged a range of carefully-selected partners to provide the underlying technology.”
“The Lenovo HPC organization is delighted to welcome DDN into our HPC Innovation and Benchmark center and strengthen our close collaboration,” said Rick Koopman, EMEA Technical Lead for HPC at Lenovo DCG. “With DDN’s high-performance storage and Lustre filesystem solution, customers can easily facilitate proof of concept and benchmarking activities at our HPC Innovation center and more quickly determine the best solution for their needs. We are excited to support our HPC customers and partners in this way.”
Today the University of Iceland unveiled a new supercomputer that will boost research in a range of scientific areas. Manufactured by Lenovo, the cluster was funded by the Research Infrastructure Fund Iceland with matching funds from the University of Iceland and Reykjavik University.
“Cavium ThunderX has significant differentiation in the 64-bit ARM market as Cavium is the first ARMv8 vendor to deliver dual-socket support with a full ARMv8.1 implementation and a significant advantage in CPU cores, with 48 cores per socket. In addition, ThunderX supports large memory capacity (512GB per socket, 1TB in a 2S system) with excellent memory bandwidth and low memory latency. ThunderX also includes multiple 10GbE/40GbE network interfaces delivering excellent IO throughput. These features enable ThunderX to deliver the core performance and scale-out capability that the HPC market requires.”
Researchers from across University College London are now benefitting from “Grace,” a new 181 Teraflop HPC system named in honor of pioneering computer scientist Grace Hopper. Designed and integrated by OCF in the UK, the Grace cluster integrates Lenovo and DDN technology to provide HPC services alongside UCL’s existing HPC machines, Legion and Emerald.