
UW Projects Awarded 42 Million Core Hours on Yellowstone Supercomputer

“A new supercomputer, dubbed Cheyenne, is expected to be operational at the beginning of 2017. The new high-performance computer will be a 5.34-petaflop system, meaning it can carry out 5.34 quadrillion calculations per second. It will be capable of more than 2.5 times the amount of scientific computing performed by Yellowstone.”

Podcast: Molly Rector from DDN on the Changing Face of HPC Storage

In this Graybeards Podcast, Molly Rector from DDN describes how HPC storage technologies are mainstreaming into the enterprise space. “In HPC there are 1000s of compute cores that are crunching on PB of data. For Oil&Gas companies, it’s seismic and wellhead analysis; with bio-informatics it’s genomic/proteomic analysis; and with financial services, it’s economic modeling/backtesting trading strategies. For today’s enterprises such as retailers, it’s customer activity analytics; for manufacturers, it’s machine sensor/log analysis; and for banks/financial institutions, it’s credit/financial viability assessments. Enterprise IT might not have 1000s of cores at their disposal just yet, but it’s not far off. Molly thinks one way to help enterprise IT is to provide a SuperComputer as a service (ScaaS?) offering, where top 10 supercomputers can be rented out by the hour, sort of like a supercomputing compute/data cloud.”

MultiLevel Parallelism with Intel Xeon Phi

“The combination of MPI and OpenMP is a topic that many developers have explored in order to determine the optimal approach. Whether to use OpenMP for outer loops with MPI within, or to create separate MPI processes and use OpenMP within each, can lead to varying levels of performance. In most cases, determining which method will yield the best results requires a deep understanding of the application, not just a rearrangement of directives.”

OSC to Deploy New Dell Supercomputer in Ohio

Today the Ohio Supercomputer Center (OSC) announced plans to boost scientific and industrial discovery and innovation with a powerful new supercomputer from Dell. To be deployed later this year, the new system is part of a $9.7 million investment that received approval from the State Controlling Board in January.

SC16 Workshop Proposals Due Feb. 14

SC16 is now accepting proposals for full- and half-day workshops. “SC16 will include full- and half-day workshops that complement the overall Technical Program events, with the goal of expanding the knowledge base of practitioners and researchers in a particular subject area. These workshops provide a focused, in-depth venue for presentations, discussion and interaction. Workshop proposals are peer-reviewed academically with a focus on submissions that inspire deep and interactive dialogue in topics of interest to the HPC community.”

Second Intel Parallel Computing Center Opens at SDSC

Intel has opened a second parallel computing center at the San Diego Supercomputer Center (SDSC), at the University of California, San Diego. The focus of this new engagement is on earthquake research, including detailed computer simulations of major seismic activity that can be used to better inform and assist disaster recovery and relief efforts.

ExaNeSt European Consortium to Develop Exascale Architecture

In this special guest feature, Robert Roe from Scientific Computing World reports that a new Exascale computing architecture using ARM processors is being developed by a European consortium of hardware and software providers, research centers, and industry partners. Funded by the European Union’s Horizon 2020 research program, a full prototype of the new system is expected to be ready by 2018.

Video: Theta & Aurora – Big Systems for Big Science

“Aurora’s revolutionary architecture features Intel’s HPC scalable system framework and 2nd generation Intel Omni-Path Fabric. The system will have a combined total of over 8 Petabytes of on-package high-bandwidth memory and persistent memory, connected and communicating via a high-performance system fabric to achieve landmark throughput. The nodes will be linked to a dedicated burst buffer and a high-performance parallel storage solution. A second system, named Theta, will be delivered in 2016. Theta will be based on Intel’s second-generation Xeon Phi processor and will serve as an early production system for the ALCF.”

2016 OpenPOWER Summit Announces Speaker Agenda

Today, the OpenPOWER Foundation announced the lineup of speakers for the OpenPOWER Summit 2016, taking place April 5-8 at NVIDIA’s GPU Technology Conference (GTC) at the San Jose Convention Center. The Summit will bring together dozens of technology leaders from the OpenPOWER Foundation to showcase the latest advancements in the OpenPOWER ecosystem, including collaborative hardware, software and application developments – all designed to revolutionize the data center.

Video: Supercomputing at the University at Buffalo

In this WGRZ video, researchers describe supercomputing at the Center for Computational Research at the University at Buffalo. “The Center’s extensive computing facilities, which are housed in a state-of-the-art 4000 sq ft machine room, include a generally accessible (to all UB researchers) Linux cluster with more than 8000 processor cores and QDR Infiniband, a subset (32) of which contain (64) NVIDIA Tesla M2050 “Fermi” graphics processing units (GPUs).”