Adopting Parallelism… is Mandatory


In this special guest feature, Intel’s Raj Hazra describes how the new Intel Parallel Computing Centers will help promote parallel computing worldwide.

We live in exciting times. Moore’s Law, combined with architectural innovations, has now enabled the next era in computing – the many-core era of energy-efficient performance through pervasive parallelism. As we look toward reaching exascale high-performance computing by the end of the decade, we have to ensure that the hardware doesn’t get there … and leave applications behind. This means not only taking today’s applications and optimizing them on parallel architectures such as the Intel Xeon and, more recently, the Intel Xeon Phi processors, but also co-designing future application software and future versions of these architectures to achieve optimal performance at all levels – from the computing node to the entire system – and at all scales – from high-performance personal workstations to very large capability “supercomputers”.

However, no one company, developer, community, government, or industry can make this change happen alone. It will take many collaborative efforts. To jump-start the effort, Intel today is announcing our Intel® Parallel Computing Center program and an open call for proposals from experts who want to collaborate with us and tackle this opportunity and challenge head-on.

Through these centers, Intel hopes to accelerate the creation of open-standard, portable, scalable parallel applications by combining computational science, hardware, programmer tools, compilers, and libraries with domain knowledge and expertise.

More information on the program can be found at http://software.intel.com/academic. We encourage interested collaborators to join us in this endeavor by downloading our Request for Proposal and submitting proposals during our initial submission period.

Our first Intel Parallel Computing Centers have a long history of collaborating with Intel and are committed to our vision. The first five centers are CINECA, Purdue University, the Texas Advanced Computing Center at the University of Texas (TACC), the University of Tennessee, and Zuse Institut Berlin (ZIB).

Each center is working on exciting and impactful projects. Some of the projects that the centers are tackling are:

CINECA is a nonprofit consortium made up of Italian universities and institutions, hosting one of the largest public Italian computing centers, with EMEA and worldwide visibility. CINECA has deep expertise in parallel codes, specifically materials modeling codes. The initial project targets the parallelization of codes such as Quantum ESPRESSO, an integrated open-source suite of computer codes for electronic-structure calculations and nanoscale materials modeling.

Purdue is initially focusing on optimizing the performance of the NEMO scientific simulation suite of software tools. The NanoElectronics MOdeling tool, NEMO, is used in nanoelectronics research to better understand how electrons flow through nano-scale devices, such as next-generation transistors. NEMO is utilized by many of the world’s largest semiconductor companies, including Intel.

The University of Tennessee has projects that will target two important life-science codes: BLAST and GROMACS. Additionally, they have a project developing MAGMA MIC, a new generation of highly optimized linear algebra libraries for the Intel® MIC architecture.

One of the projects TACC is taking on is the Memory Access Centric Performance Optimization tool, or MACPO, which generates memory traces of important data structures by code segment. These memory traces are processed to determine the access and reuse patterns of data in each thread for each structure, allowing new levels of parallel code optimization. Over time we expect this tool to have broad impact.
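To give a flavor of the kind of reuse analysis such trace processing performs, here is a minimal, hypothetical Python sketch. It is not MACPO’s actual code or interface; the function name, trace format, and addresses are assumptions for illustration only. The sketch computes the reuse distance of each access in a single thread’s memory trace, i.e., how many distinct addresses were touched since the previous access to the same address.

```python
# Hypothetical sketch only – not MACPO's actual code or interface.
# Computes reuse distances from a per-thread memory access trace:
# for each access, how many distinct addresses were touched since the
# previous access to that same address (None on first access).
from collections import OrderedDict

def reuse_distances(trace):
    last_seen = OrderedDict()          # addresses, most recently used last
    distances = []
    for addr in trace:
        if addr in last_seen:
            keys = list(last_seen.keys())
            # distinct addresses accessed since addr's previous use
            distances.append(len(keys) - 1 - keys.index(addr))
            last_seen.move_to_end(addr)
        else:
            distances.append(None)     # first touch of this address
            last_seen[addr] = True
    return distances

# Example: a small, strided trace for one data structure in one thread.
trace = [0x100, 0x108, 0x110, 0x100, 0x108, 0x110]
print(reuse_distances(trace))          # [None, None, None, 2, 2, 2]
```

Short reuse distances suggest data that stays resident in cache between uses; long distances flag structures that may benefit from blocking, padding, or a different thread-level data decomposition.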

ZIB’s Research Center for Many-core High-Performance Computing will foster the uptake of current and next-generation Intel many- and multicore technology in high-performance computing and big data analytics. They are focusing on a diverse set of codes, including VASP, which is targeted at atomic-scale materials modeling.

Together we can accelerate the pace of discovery in the fields of energy, finance, manufacturing, life sciences, weather, and beyond.