NERSC Dungeon Session Speeds Code for Cori Supercomputer

Six application development teams from NERSC gathered at Intel in early August for a marathon “dungeon session” designed to help them tune their codes for the next-generation Intel Xeon Phi Knights Landing manycore architecture and NERSC’s new Cori supercomputer. “We try to prepare ahead of time to bring the types of problems that can only be solved with the experts at Intel and Cray present—deep questions about the architecture and how applications use the Xeon Phi processor. It’s all geared toward optimizing the codes to run on the new manycore architecture and on Cori.”

IIT Bombay Opens the First OpenPOWER Research Facility

Today IBM announced the opening of the first OpenPOWER Research Facility (OPRF) at the Indian Institute of Technology Bombay. The OPRF will help drive the country’s National Knowledge Network initiative to interconnect all institutions of higher learning and research with a high-speed data communication network, facilitating knowledge sharing and collaborative research and innovation. “Open collaboration is driving the next wave of innovation across the entire system stack, allowing clients and organizations to develop customized solutions to capitalize on today’s emerging workloads,” said Monica Aggarwal, Vice President, India Systems Development Lab (ISDL), IBM Systems. “The OPRF will enable Indian companies, universities and government organizations to build technologies indigenously using the high-performance POWER processor, helping to drive the national IT agenda of India,” she added.

Radio Free HPC Looks at IDF 2016

In this podcast, the Radio Free HPC team reviews the recent 2016 Intel Developer Forum. “How will Intel return to growth in the face of a declining PC market? At IDF, they put the spotlight on IoT and Machine Learning. With new threats rising from the likes of AMD and Nvidia, will Chipzilla make the right moves? Tune in to find out.”

Nvidia Donates DGX-1 Machine Learning Supercomputer to OpenAI Non-profit

This week Nvidia CEO Jen-Hsun Huang hand-delivered one of the company’s new DGX-1 Machine Learning supercomputers to the OpenAI non-profit in San Francisco. “The DGX-1 is a huge advance,” OpenAI Research Scientist Ilya Sutskever said. “It will allow us to explore problems that were completely unexplored before, and it will allow us to achieve levels of performance that weren’t achievable.”

Taming Heterogeneity in HPC – The DEEP-ER take

Norbert Eicker from the Jülich Supercomputing Centre presented this talk at the SAI Computing Conference in London. “The ultimate goal is to reduce the burden on the application developers. To this end DEEP/-ER provides a well-accustomed programming environment that saves application developers from some of the tedious and often costly code modernization work. Confining this work to code-annotation as proposed by DEEP/-ER is a major advancement.”

Making Life Easier with Altair Data Center GPU Management Tool

Altair’s new Data Center GPU Management Tool is now available to Nvidia HPC customers. With the wide adoption of Graphics Processing Units (GPUs), customers addressing vital work in fields including artificial intelligence, deep learning, self-driving cars, and virtual reality now have the ability to improve the speed and reliability of their computations through a new technology collaboration with Altair to integrate PBS Professional.

Video: Intel Sneak Peek at Knights Mill Processor for Machine Learning

In this video from the 2016 Intel Developer Forum, Diane Bryant describes the company’s efforts to advance Machine Learning and Artificial Intelligence. Along the way, she offers a sneak peek at the Knights Mill processor, the next generation of Intel Xeon Phi slated for release sometime in 2017. “Now you can scale your machine learning and deep learning applications quickly – and gain insights more efficiently – with your existing hardware infrastructure. Popular open frameworks newly optimized for Intel, together with our advanced math libraries, make Intel Architecture-based platforms a smart choice for these projects.”

Intel Xeon Phi Coprocessor Design

“The major functionality of the Intel Xeon Phi coprocessor is a chip that does the heavy computation. The current version utilizes up to 16 channels of GDDR5 memory. An interesting note is that up to 32 memory devices can be used, by using both sides of the motherboard to hold the memory. This doubles the effective memory availability as compared to more conventional designs.”
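
As a back-of-the-envelope illustration of that doubling, the sketch below works through the capacity arithmetic. The per-device capacity (a 2 Gb GDDR5 part) is an assumption chosen for the example and is not stated above.

    # Rough capacity arithmetic for the memory layout described above.
    # The 2 Gb per-device figure is an assumption for illustration only.
    CHANNELS = 16                 # GDDR5 memory channels on the coprocessor
    DEVICES_PER_CHANNEL = 2       # devices mounted on both sides of the board
    DEVICE_CAPACITY_GBIT = 2      # assumed capacity of one GDDR5 device (gigabits)

    devices = CHANNELS * DEVICES_PER_CHANNEL          # 32 devices in total
    capacity_gbyte = devices * DEVICE_CAPACITY_GBIT / 8

    print(f"{devices} devices -> {capacity_gbyte:.0f} GB on the card")
    # Populating only one side of the board halves the device count,
    # and with it the effective capacity -- the comparison the quote draws.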

Video: Parallel I/O Best Practices

In this video from the 2016 Blue Waters Symposium, Andriy Kot from NCSA presents: Parallel I/O Best Practices.

Podcast: Supercomputing Better Soybeans

In this TACC podcast, researchers describe how XSEDE supercomputing resources are helping them grow a better soybean through the SoyKB project based at the University of Missouri-Columbia. “The way resequencing is conducted is to chop the genome in many small pieces and see the many, many combinations of small pieces,” said Xu. “The data are huge, millions of fragments mapped to a reference. That’s actually a very time consuming process. Resequencing data analysis takes most of our computing time on XSEDE.”
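
To make the mapping step concrete, here is a toy sketch of locating short sequenced fragments (reads) in a reference sequence by exact substring search. It only illustrates why the cost grows with the number of reads and the reference length; the actual SoyKB pipeline uses specialized aligners that tolerate sequencing errors, and the sequences and function name below are made up for the example.

    # Toy read mapping: find where each short fragment occurs in a reference.
    # Real resequencing pipelines use dedicated aligners (with indexing and
    # mismatch tolerance); this only sketches the fragment-to-reference idea.
    def map_reads(reference: str, reads: list[str]) -> dict[str, int]:
        """Return the first match position of each read in the reference, or -1."""
        return {read: reference.find(read) for read in reads}

    if __name__ == "__main__":
        reference = "ATGCGTACGTTAGCATGCGTAC"       # stand-in reference sequence
        reads = ["GTACGT", "AGCATG", "TTTTTT"]     # stand-in sequenced fragments
        for read, pos in map_reads(reference, reads).items():
            print(f"{read} -> position {pos}")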