

Compute Canada to Renew National Research Computing Infrastructure


Today Compute Canada announced funding to renew and consolidate the Canadian national platform for advanced research computing. Compute Canada and its regional partners ACENET, Calcul Quebec, Compute Ontario, and WestGrid will receive close to $75 million through the Canada Foundation for Innovation’s Cyberinfrastructure Initiative competition launched in 2014.

BSC and CEA to Collaborate on HPC Innovation


Today the Barcelona Supercomputing Center (BSC) and the French Alternative Energies and Atomic Energy Commission (CEA) announced plans to collaborate on HPC research and technology innovation. Both organizations have signed an agreement to help promote “a globally competitive HPC value chain and flagship industry”, echoing the European Union strategy in the domain.

Nested Parallelism


The benefits of nested parallelism for highly threaded applications can be determined and quantified. As the number of cores in both the host CPU (Intel Xeon) and the coprocessor (Intel Xeon Phi) continues to increase, much thought must be given to minimizing the thread overhead when many threads need to be synchronized, as well as to the memory access patterns of each processor core. Tasks that can be spread across an entire system to exploit the algorithm’s parallelism should be mapped to NUMA nodes to make them more efficient.
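The two-level structure described above can be illustrated with a minimal Python sketch (on Xeon and Xeon Phi this would typically be done with nested OpenMP regions, but the shape is the same): an outer pool of threads handles independent tasks, and each task fans out to its own inner pool. All function names here are hypothetical, for illustration only.

```python
from concurrent.futures import ThreadPoolExecutor

def inner_work(chunk):
    # Inner level: each thread processes one chunk of its task's data.
    return sum(x * x for x in chunk)

def outer_task(data, inner_threads=2):
    # Split this task's data into chunks and fan out to an inner pool.
    # In practice the inner pool would be pinned to one NUMA node.
    chunks = [data[i::inner_threads] for i in range(inner_threads)]
    with ThreadPoolExecutor(max_workers=inner_threads) as inner:
        return sum(inner.map(inner_work, chunks))

def run_nested(datasets, outer_threads=4):
    # Outer level: one thread per independent task.
    with ThreadPoolExecutor(max_workers=outer_threads) as outer:
        return list(outer.map(outer_task, datasets))

results = run_nested([list(range(10)), list(range(5))])
```

The point of the nesting is that synchronization happens mostly inside each small inner pool rather than across all threads in the system, which is exactly the overhead the article cautions about.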

Video: Argonne Presents HPC Plans at ISC 2015


“In April 2015, the U.S. Department of Energy announced a $200 million supercomputing investment coming to Argonne National Laboratory. As the third of three Coral supercomputer procurements, the deal will comprise an 8.5 Petaflop “Theta” system based on Knights Landing in 2016 and a much larger 180 Petaflop “Aurora” supercomputer in 2018. Intel will be the prime contractor on the deal, with sub-contractor Cray building the actual supercomputers.”

Should Users Reset Performance Expectations for Exascale?


“Exascale computers are going to deliver only one or two per cent of their theoretical peak performance when they run real applications; and both the people paying for, and the people using, such machines need to have realistic expectations about just how low a percentage of the peak performance they will obtain.”

Obama Establishes National Strategic Computing Initiative


Today President Obama issued an Executive Order establishing the National Strategic Computing Initiative (NSCI) to ensure the United States continues to lead in high performance computing over the coming decades.

Intel’s Raj Hazra on the Convergence of HPC & Big Data at ISC 2015


In this video from ISC 2015, Intel’s Raj Hazra explores how new innovations and Intel’s Scalable System Framework approach can maximize the potential in the new HPC era. Raj also shares details of upcoming Intel technologies, products and ecosystem collaborations that are powering these breakthroughs and ensuring technical computing continues to fulfill its potential as a scientific and industrial tool for discovery and innovation.

Video: CSCS Focuses on Sustainability at ISC 2015


In this video from ISC 2015, Michele De Lorenzi sits down with Rich Brueckner from insideHPC to discuss the latest updates from the conference and how the CSCS booth is constructed to reflect Switzerland’s focus on sustainability.

Radio Free HPC Looks at 3D XPoint Non-Volatile Memory


In this video, the Radio Free HPC team looks at the newly announced 3D XPoint technology from Intel and Micron. “3D XPoint ushers in a new class of non-volatile memory that significantly reduces latencies, allowing much more data to be stored close to the processor and accessed at speeds previously impossible for non-volatile storage.”

SGI UV 300H 20-Socket Appliance Now SAP HANA Certified


Today SGI announced that its SGI UV 300H is now SAP-certified to run the SAP HANA platform in controlled availability at 20 sockets, delivering up to 15 terabytes (TB) of in-memory computing capacity in a single node.