Intel has opened a second parallel computing center at the San Diego Supercomputer Center (SDSC), at the University of California, San Diego. The focus of this new engagement is on earthquake research, including detailed computer simulations of major seismic activity that can be used to better inform and assist disaster recovery and relief efforts.
From the announcement of the Aurora supercomputer coming to the Argonne Leadership Computing Facility (ALCF): “Aurora’s revolutionary architecture features Intel’s HPC scalable system framework and 2nd generation Intel Omni-Path Fabric. The system will have a combined total of over 8 Petabytes of on-package high-bandwidth memory and persistent memory, connected and communicating via a high-performance system fabric to achieve landmark throughput. The nodes will be linked to a dedicated burst buffer and a high-performance parallel storage solution. A second system, named Theta, will be delivered in 2016. Theta will be based on Intel’s second-generation Xeon Phi processor and will serve as an early production system for the ALCF.”
Today SURFsara in the Netherlands announced that it will expand the capacity of its Cartesius national supercomputer in the second half of 2016. With an upgrade to 1.8 Petaflops, the Bull sequana system will enable researchers to work on more complex models for climate research, water management, improving medical treatment, research into clean energy, noise reduction, and product and process optimization.
In this video from the 2015 Hot Chips Conference, Mike Hutton from Altera presents: Stratix 10: Altera’s 14nm FPGA Targeting 1GHz Performance. “Stratix 10 FPGAs and SoCs deliver breakthrough advantages in performance, power efficiency, density, and system integration: advantages that are unmatched in the industry. Featuring the revolutionary HyperFlex core fabric architecture and built on the Intel 14 nm Tri-Gate process, Stratix 10 devices deliver 2X core performance gains over previous-generation high-performance FPGAs with up to 70% lower power.”
Today Atos announced that the French CEA and its industrial partners at the Centre for Computing Research and Technology, CCRT, have invested in a new 1.4 petaflop Bull supercomputer. “Three times more powerful than the current computer at CCRT, the new system will be installed in the CEA’s Very Large Computing Centre in Bruyères-le-Châtel, France, mid-2016 to cover expanding industrial needs. Named COBALT, the new Intel Xeon-based supercomputer will be powered by over 32,000 compute cores and storage capacity of 2.5 Petabytes with a throughput of 60 GB/s.”
In this video, researchers describe the Jetstream project at Indiana University. Jetstream is a user-friendly cloud environment designed to give researchers access to interactive computing and data analysis resources on demand, whenever and wherever they want to analyze their data. It will provide a library of virtual machines designed for discipline-specific scientific analysis. Software creators and researchers will also be able to create their own customized virtual machines or their own private computing systems within Jetstream.
In this podcast, the Radio Free HPC team looks at the Top Technology Stories for High Performance Computing in 2015. “From 3D XPoint memory to Co-Design Architecture and NVM Express, these new approaches are poised to have a significant impact on supercomputing in the near future.” We also take a look at the most-shared stories from 2015.
As multi-socket and then multi-core systems became the standard, the Message Passing Interface (MPI) emerged as one of the most popular programming models for applications that run in parallel across many sockets and cores. Shared-memory programming interfaces such as OpenMP, meanwhile, let developers exploit the shared memory within each server. Combining the two has traditionally meant maintaining two different programming models in the same application. The MPI 3.0 standard addresses this with a new interprocess shared memory extension (MPI SHM), which lets MPI ranks on the same node access a common region of memory directly.
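As a rough illustration of the MPI SHM pattern, here is a minimal C sketch, assuming an MPI 3.0 implementation (such as MPICH or Open MPI) and compiled with mpicc: ranks on the same node split into a shared-memory communicator, allocate a shared window, and then read each other's data with plain loads instead of explicit message passing.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split COMM_WORLD into per-node communicators: every rank in
       shm_comm can share physical memory with the others. */
    MPI_Comm shm_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &shm_comm);

    int shm_rank, shm_size;
    MPI_Comm_rank(shm_comm, &shm_rank);
    MPI_Comm_size(shm_comm, &shm_size);

    /* Allocate a shared window in which each rank owns one int. */
    int *my_slot;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            shm_comm, &my_slot, &win);

    /* Write into our own slot; a neighbor will later read it with an
       ordinary load, no MPI_Send/MPI_Recv needed within the node. */
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    *my_slot = world_rank;
    MPI_Win_sync(win);          /* make the store visible */
    MPI_Barrier(shm_comm);      /* wait until everyone has written */
    MPI_Win_sync(win);          /* pick up the others' stores */

    /* Query the base address of the next rank's segment on this node. */
    int neighbor = (shm_rank + 1) % shm_size;
    MPI_Aint seg_size;
    int disp_unit;
    int *neighbor_slot;
    MPI_Win_shared_query(win, neighbor, &seg_size, &disp_unit,
                         (void *)&neighbor_slot);

    printf("rank %d reads %d from its node neighbor\n",
           world_rank, *neighbor_slot);

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Comm_free(&shm_comm);
    MPI_Finalize();
    return 0;
}
```

In a hybrid code, the same MPI ranks would still use conventional point-to-point or collective calls for traffic between nodes; MPI SHM simply replaces intra-node copies with direct loads and stores, which is what makes it an alternative to mixing MPI with OpenMP.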
“Upgrading legacy HPC systems relies as much on the requirements of the user base as it does on the budget of the institution buying the system. There is a gamut of technology and deployment methods to choose from, and the picture is further complicated by infrastructure such as cooling equipment, storage, and networking, all of which must fit into the available space. However, in most cases it is the requirements of the codes and applications being run on the system that ultimately define the choice of architecture when upgrading a legacy system. In the most extreme cases, these requirements can restrict the available technology, effectively locking an HPC center into a single technology, or restricting the adoption of new architectures because of the added complexity associated with code modernization, or porting existing codes to new technology platforms.”
Today SGI Japan announced that the Nagaoka University of Technology has selected SGI UV 300, SGI UV 30EX, and SGI Rackable servers, along with SGI InfiniteStorage 5600, for its next integrated education and research high-performance computing system. With a tenfold performance increase over the previous system, the new supercomputer will start operation on March 1, 2016.