In this WGRZ video, researchers describe supercomputing at the Center for Computational Research at the University at Buffalo. “The Center’s extensive computing facilities, which are housed in a state-of-the-art 4000 sq ft machine room, include a generally accessible (to all UB researchers) Linux cluster with more than 8000 processor cores and QDR InfiniBand, a subset (32) of which contain (64) NVIDIA Tesla M2050 “Fermi” graphics processing units (GPUs).”
Today SURFsara in the Netherlands announced that it will expand the capacity of its Cartesius national supercomputer in the second half of 2016. Upgraded to 1.8 Petaflops, the Bull sequana system will enable researchers to work on more complex models for climate research, water management, medical treatment, clean-energy research, noise reduction, and product and process optimization.
Today Atos announced that the French CEA and its industrial partners at the Centre for Computing Research and Technology (CCRT) have invested in a new 1.4 petaflop Bull supercomputer. “Three times more powerful than the current computer at CCRT, the new system will be installed in the CEA’s Very Large Computing Centre in Bruyères-le-Châtel, France, mid-2016 to cover expanding industrial needs. Named COBALT, the new Intel Xeon-based supercomputer will be powered by over 32,000 compute cores and storage capacity of 2.5 Petabytes with a throughput of 60 GB/s.”
In this week’s Industry Perspective, Katie Garrison of One Stop Systems explains how GPUltima allows HPC professionals to create a highly dense compute platform that delivers a petaflop of performance at greatly reduced cost and space requirements, providing the compute power needed to quickly process the data generated by intensive applications.
The HPC Advisory Council Stanford Conference 2016 has posted its speaker agenda. The event will take place Feb 24-25, 2016 on the Stanford University campus at the new Jen-Hsun Huang Engineering Center. “The HPC Advisory Council Stanford Conference 2016 will focus on High-Performance Computing usage models and benefits, the future of supercomputing, latest technology developments, best practices and advanced HPC topics. In addition, there will be a strong focus on new topics such as Machine Learning and Big Data. The conference is open to the public free of charge and will bring together system managers, researchers, developers, computational scientists and industry affiliates.”
The 2016 OpenFabrics Workshop has extended the deadline for its Call for Sessions to Feb. 1, 2016. The event takes place April 4-8, 2016 in Monterey, California. “The Workshop is the premier event for collaboration between OpenFabrics Software (OFS) producers and those whose systems and applications depend on the technology. Every year, the workshop generates lively exchanges among Alliance members, developers and users who all share a vested interest in high performance networks.”
“The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
Sean Hefty from Intel presented this talk at the Intel HPC Developer Conference at SC15. “OpenFabrics Interfaces (OFI) is a framework focused on exporting fabric communication services to applications. OFI is best described as a collection of libraries and applications used to export fabric services. The key components of OFI are: application interfaces, provider libraries, kernel services, daemons, and test applications. Libfabric is a core component of OFI. It is the library that defines and exports the user-space API of OFI, and is typically the only software that applications deal with directly. It works in conjunction with provider libraries, which are often integrated directly into libfabric.”
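To make the libfabric description concrete, here is a minimal C sketch (not taken from the talk itself) that uses the library’s fi_getinfo() discovery call to enumerate the fabric providers available on a system and print their names. It assumes libfabric is installed and the program is linked with -lfabric.

#include <stdio.h>
#include <rdma/fabric.h>

int main(void)
{
    struct fi_info *info, *cur;

    /* Ask libfabric for all fabric interfaces it can find;
       passing NULL hints returns every available provider. */
    int ret = fi_getinfo(FI_VERSION(1, 0), NULL, NULL, 0, NULL, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo failed: %s\n", fi_strerror(-ret));
        return 1;
    }

    /* Walk the linked list of results; each entry names the
       provider library (e.g. sockets, verbs) that backs it. */
    for (cur = info; cur; cur = cur->next)
        printf("provider: %s, fabric: %s\n",
               cur->fabric_attr->prov_name,
               cur->fabric_attr->name);

    fi_freeinfo(info);
    return 0;
}

Real applications follow this same pattern with non-NULL hints describing the capabilities they need, letting libfabric select whichever provider library best matches the request.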
Today the National Center for Atmospheric Research announced that it has selected SGI to build one of the world’s most advanced compute systems, which will be used to develop models for predicting the impact of climate change and severe weather events on both a global and local scale. As part of a new procurement coming online in 2017, an SGI ICE XA system named “Cheyenne” will perform some of the world’s most data-intensive calculations for weather and climate modeling, improving resolution and precision by orders of magnitude. As a result, NCAR’s scientists will provide more actionable projections about the impact of climate change for specific regions and assist agencies throughout the world in developing more accurate weather predictions on a local and global scale.
Fans of High Performance Data Analytics got a boost today with news that the High-Performance Big Data (HiBD) team at Ohio State University has released RDMA-Apache-Spark 0.9.1.