175 Teams to Compete in ASC16 Student Supercomputer Challenge

The ASC Student Supercomputer Challenge (ASC16) Training kicked off in Beijing on January 26. First initiated and organized in China, ASC16 has gained support from experts and technology organizations in the US, Europe, and Asia. With a goal of inspiring more innovative applications in various fields, it has attracted growing talent to supercomputing and has greatly promoted communication within the worldwide supercomputing community. Within five years, the ASC Student Supercomputer Challenge has become the world’s largest supercomputing hackathon.

E4 Computer Engineering Collaborates with Ci on HPC Technologies

Today Centerprise International (Ci) in the UK announced a collaboration with E4 Computer Engineering to develop next-generation datacenter technologies for HPC. “This is an exciting development for both companies, as it combines the specialist knowledge of E4 in the field of high performance computing with our considerable experience in building quality, customized hardware solutions and our expansive reach in the UK IT channel,” said Jeremy Nash, Centerprise Sales Director.

Call for Submissions: EuroMPI in Edinburgh

EuroMPI has issued its Call for Submissions. The aim of this conference is to bring together all of the stakeholders involved in developments and applications related to the Message Passing Interface (MPI). The preeminent meeting for users, developers, and researchers to interact and discuss new developments and applications of message-passing parallel computing, EuroMPI takes place Sept. 25-28 in Edinburgh.
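
For readers new to the topic, here is a minimal sketch of the kind of MPI program the conference centers on: each rank simply reports its identity. It assumes an MPI implementation and compiler wrapper (e.g. mpicc) are available, and is meant only as an illustration of the programming model, not material from the conference itself.

```c
/* Minimal MPI sketch: every rank prints its rank and the total rank count.
 * Build with an MPI compiler wrapper (e.g. mpicc) and launch with mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks     */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut the runtime down     */
    return 0;
}
```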

Video: Using Google Compute Engine Pre-Emptible VMs for Cancer Research

In this video from the HPC in the Cloud Educational Series, Marco Novaes, Solutions Engineer with the Google Cloud Platform team explains how the Broad Institute was able to use Google Pre-Emptible VMs to leverage over 50,000 cores to advance cancer research. “Cancer researchers saw value in a highly-complex genome analysis, but even though they already had powerful processing systems in-house, running the analysis would take months or more. We thought this would be a perfect opportunity to utilize Google Compute Engine’s Preemptible VMs to further their cancer research, which was a natural part of our mission. And now that Preemptible VMs are generally available, we’re excited to tell you about this work.”

Video: MCDRAM (High Bandwidth Memory) on Knights Landing

“Intel’s next-generation Xeon Phi processor family x200 product (code-named Knights Landing) introduces a new memory technology: a high-bandwidth, on-package memory called Multi-Channel DRAM (MCDRAM), in addition to traditional DDR4. MCDRAM is a high-bandwidth (~4x more than DDR4), low-capacity (up to 16GB) memory packaged with the Knights Landing silicon. MCDRAM can be configured as a third-level cache (memory-side cache), as a distinct NUMA node (allocatable memory), or somewhere in between. With the different memory modes in which the system can be booted, it becomes very challenging from a software perspective to determine which mode best suits an application.”
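
In flat mode, where MCDRAM appears as an allocatable NUMA node, one common approach is to place selected buffers in high-bandwidth memory explicitly. The sketch below is a hedged illustration using the hbwmalloc interface of Intel's open-source memkind library (assumed to be installed and linked with -lmemkind); it is not taken from the talk itself. In cache mode, by contrast, no code changes are needed.

```c
/* Sketch: explicit MCDRAM allocation in flat mode via memkind's hbwmalloc
 * interface, falling back to regular DDR4 when no high-bandwidth memory
 * is exposed as allocatable memory. */
#include <hbwmalloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t n = 1 << 20;
    double *buf;
    int have_hbw = (hbw_check_available() == 0);  /* 0 means HBM present */

    if (have_hbw)
        buf = hbw_malloc(n * sizeof(double));     /* allocate from MCDRAM */
    else
        buf = malloc(n * sizeof(double));         /* fall back to DDR4    */
    if (!buf)
        return 1;

    for (size_t i = 0; i < n; i++)
        buf[i] = (double)i;
    printf("first element: %f\n", buf[0]);

    if (have_hbw)
        hbw_free(buf);
    else
        free(buf);
    return 0;
}
```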

The Death and Life of Traditional HPC

The consensus of the panel was that making full use of Intel SSF requires system thinking at the highest level. This entails deep collaboration with the company’s application end-user customers as well as with its OEM partners, who have to design, build and support these systems at the customer site. Mark Seager commented: “For the high-end we’re going after density and (solving) the power problem to create very dense solutions that, in many cases, are water-cooled going forward. We are also asking how can we do a less dense design where cost is more of a driver.” In the latter case, lower end solutions can relinquish some scalability features while still retaining application efficiency.

Call for Papers: Supercomputing Frontiers in Singapore

The Supercomputing Frontiers 2016 conference has issued its Call for Papers. Held in conjunction with the launch of the new Singapore National Supercomputing Center, the event takes place March 15-18. “Supercomputing Frontiers 2016 is Singapore’s annual international conference that provides a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important global trends and substantial innovations in supercomputing. You are invited to submit a 4-page extended abstract by February 8, 2016.”

Agenda Posted for HPC Advisory Council Stanford Conference 2016

The HPC Advisory Council Stanford Conference 2016 has posted its speaker agenda. The event will take place Feb 24-25, 2016 on the Stanford University campus at the new Jen-Hsun Huang Engineering Center. “The HPC Advisory Council Stanford Conference 2016 will focus on High-Performance Computing usage models and benefits, the future of supercomputing, latest technology developments, best practices and advanced HPC topics. In addition, there will be a strong focus on new topics such as Machine Learning and Big Data. The conference is open to the public free of charge and will bring together system managers, researchers, developers, computational scientists and industry affiliates.”

OpenFabrics Workshop Extends Call for Sessions Deadline to Feb 1

The 2016 OpenFabrics Workshop has extended the deadline for its Call for Sessions to Feb. 1, 2016. The event takes place April 4-8, 2016 in Monterey, California. “The Workshop is the premier event for collaboration between OpenFabrics Software (OFS) producers and those whose systems and applications depend on the technology. Every year, the workshop generates lively exchanges among Alliance members, developers and users who all share a vested interest in high performance networks.”

Video: Microsoft Azure for Engineering Analysis and Simulation

Tejas Karmarkar from Microsoft presented this talk at SC15. “Azure provides on-demand compute resources that enable you to run large parallel and batch compute jobs in the cloud. Extend your on-premises HPC cluster to the cloud when you need more capacity, or run work entirely in Azure. Scale easily and take advantage of advanced networking features such as RDMA to run true HPC applications using MPI to get the results you want, when you need them.”
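
As a rough illustration of the tightly coupled MPI workloads that benefit from RDMA-capable interconnects, the sketch below is the classic two-rank ping-pong micro-benchmark in C. It is generic MPI code, not Azure-specific, and assumes an MPI implementation is available; it simply measures the average round-trip time of a small message between two ranks.

```c
/* Sketch: two-rank MPI ping-pong, a standard micro-benchmark for
 * low-latency fabrics. Run with exactly two ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    int rank, size, i;
    char msg = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0)
            fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    double t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&msg, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&msg, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round-trip time: %g us\n", (t1 - t0) / iters * 1e6);

    MPI_Finalize();
    return 0;
}
```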