

Register Now for the SDSC Summer Institute on HPC

“This year’s workshop continues SDSC’s strategy of bringing high-performance computing to what is known as the ‘long tail’ of science, i.e., providing resources to a larger and more diverse number of modest-sized computational research projects that represent, in aggregate, a tremendous amount of scientific research and discovery. SDSC has developed and hosted Summer Institute workshops for well over a decade.”

Job of the Week: HPC System Administrator at Embry-Riddle Aeronautical University

Embry-Riddle Aeronautical University in Daytona Beach is seeking a High Performance Computing System Administrator in our Job of the Week. “The HPC Specialist is responsible for technical systems management, administration, and support for the high-performance computing (HPC) cluster environments. This includes all configuration, authentication, networking, storage, interconnect, and software usage & installation of HPC Clusters. The position is highly technical and directly impacts the daily operational functions of the above environments.”

GTC to Feature 90 Sessions on HPC and Supercomputing

Accelerated computing continues to gain momentum. This year the GPU Technology Conference will feature 90 sessions on HPC and Supercomputing. “Sessions will focus on how computational and data science are used to solve traditional HPC problems in healthcare, weather, astronomy, and other domains. GPU developers can also connect with innovators and researchers as they share their groundbreaking work using GPU computing.”

Intel MPI Library 2017 Focuses on Intel Multi-core/Many-Core Clusters

With the release of Intel Parallel Studio XE 2017, the focus is on making applications perform better on Intel architecture-based clusters. Intel MPI Library 2017, a fully integrated component of Intel Parallel Studio XE 2017, implements the high-performance MPI-3.1 specification on multiple fabrics. It enables programmers to quickly deliver the best parallel performance, even when changing or upgrading to new interconnects, without requiring changes to the software or operating environment.
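
As a sketch of what that fabric independence means in source code, here is a minimal MPI-3.1 program: nothing in it names an interconnect, since with Intel MPI the fabric is selected at runtime (for example via the I_MPI_FABRICS environment variable) rather than in the code.

    // allreduce.cpp -- minimal MPI-3.1 example; the source is fabric-agnostic.
    // Build (Intel MPI): mpiicpc allreduce.cpp -o allreduce
    // Run: mpirun -n 4 ./allreduce   (fabric chosen at runtime, not in code)
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each rank contributes its rank number; the collective sums them.
        int local = rank, global = 0;
        MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("Sum of ranks 0..%d = %d\n", size - 1, global);

        MPI_Finalize();
        return 0;
    }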

Intel DAAL Accelerates Data Analytics and Machine Learning

Intel DAAL is a high-performance library specifically optimized for big data analysis on the latest Intel platforms, including Intel® Xeon® and Intel® Xeon Phi™. It provides the algorithmic building blocks for all stages of data analysis in offline, batch, streaming, and distributed processing environments. It was designed for efficient use across the popular data platforms and APIs in use today, including MPI, Hadoop, Spark, R, MATLAB, Python, C++, and Java.
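
DAAL's actual C++ interfaces aside, the streaming (online) mode it supports follows a general pattern worth seeing in miniature: fold each arriving data block into a small running state, then finalize. The sketch below illustrates that pattern with a hand-rolled streaming mean/variance (Welford's algorithm); the StreamingStats class is illustrative, not part of DAAL.

    // streaming_stats.cpp -- illustrates the online-processing pattern
    // (not DAAL's API): consume data block by block, keeping only a
    // small running state instead of the full dataset.
    #include <cstdio>
    #include <vector>

    class StreamingStats {          // hypothetical name, for illustration only
        long long n_ = 0;
        double mean_ = 0.0, m2_ = 0.0;
    public:
        // Welford's update: numerically stable single-pass mean/variance.
        void update(const std::vector<double>& block) {
            for (double x : block) {
                ++n_;
                double delta = x - mean_;
                mean_ += delta / n_;
                m2_ += delta * (x - mean_);
            }
        }
        double mean() const { return mean_; }
        double variance() const { return n_ > 1 ? m2_ / (n_ - 1) : 0.0; }
    };

    int main() {
        StreamingStats stats;
        // Data arrives in blocks, as it would from a file, socket, or stream.
        stats.update({1.0, 2.0, 3.0});
        stats.update({4.0, 5.0, 6.0});
        std::printf("mean = %.3f, variance = %.3f\n",
                    stats.mean(), stats.variance());
        return 0;
    }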

Tutorial on In-Network Computing: SHARP Technology for MPI Offloads

“Increased system size and a greater reliance on utilizing system parallelism to achieve computational needs require innovative system architectures to meet the simulation challenges. As a step toward a new class of network co-processors, intelligent network devices that manipulate data traversing the data-center network, SHARP technology is designed to offload collective operation processing to the network. This tutorial will provide an overview of SHARP technology, its integration with MPI, the SHARP software components, and a live example of running MPI collectives.”
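
One point worth underscoring: SHARP's offload is transparent to application code, which keeps calling standard MPI collectives while the reduction work moves into the switches. A simple latency probe like the following, plain MPI with nothing SHARP-specific in the source, is the usual way to observe the effect when the offload is enabled in the underlying library.

    // collective_latency.cpp -- times MPI_Allreduce; with a collective
    // offload such as SHARP enabled in the MPI library, the same binary
    // should show lower latency, with no source changes.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 1000;
        double in = 1.0, out = 0.0;

        MPI_Barrier(MPI_COMM_WORLD);          // synchronize before timing
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; ++i)
            MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            std::printf("avg MPI_Allreduce latency: %.3f us\n",
                        (t1 - t0) / iters * 1e6);

        MPI_Finalize();
        return 0;
    }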

Designing HPC & Deep Learning Middleware for Exascale Systems

DK Panda from Ohio State University presented this deck at the 2017 HPC Advisory Council Stanford Conference. “This talk will focus on challenges in designing runtime environments for exascale systems with millions of processors and accelerators to support various programming models. We will focus on MPI, PGAS (OpenSHMEM, CAF, UPC and UPC++) and Hybrid MPI+PGAS programming models by taking into account support for multi-core, high-performance networks, accelerators (GPGPUs and Intel MIC), virtualization technologies (KVM, Docker, and Singularity), and energy-awareness. Features and sample performance numbers from the MVAPICH2 libraries will be presented.”
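
For readers unfamiliar with the PGAS side of that list, here is a minimal OpenSHMEM sketch of the model: each processing element (PE) writes directly into a peer's symmetric memory with a one-sided put, with no matching receive on the other side. This is a generic OpenSHMEM example under stated assumptions, not MVAPICH2-specific code.

    // pgas_put.cpp -- minimal OpenSHMEM sketch of the PGAS one-sided model.
    // Build with an OpenSHMEM wrapper compiler (e.g. oshcc).
    #include <shmem.h>
    #include <cstdio>

    int main() {
        shmem_init();
        int me = shmem_my_pe();
        int npes = shmem_n_pes();

        // Symmetric variable: exists at the same address on every PE.
        static int inbox = -1;

        int msg = me;                              // payload: my own PE number
        int neighbor = (me + 1) % npes;
        shmem_int_put(&inbox, &msg, 1, neighbor);  // one-sided write to neighbor

        shmem_barrier_all();                       // make all puts visible
        std::printf("PE %d received %d\n", me, inbox);

        shmem_finalize();
        return 0;
    }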

Five Ways Scale-Up Systems Save Money and Improve TCO

The move away from the traditional single-processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single-core applications need to be modified to use extra processors (and accelerators). Unfortunately, there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.
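
In practice, the nearest thing to a portable compromise is the hybrid model: MPI to scale out across nodes, and threads (OpenMP here) to scale up within each node. A minimal sketch of that two-level pattern:

    // hybrid.cpp -- common scale-out + scale-up compromise: MPI across
    // nodes, OpenMP threads across the cores within each node.
    #include <mpi.h>
    #include <omp.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        // Request thread support, since OpenMP threads coexist with MPI.
        int provided = 0;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        double local = 0.0;

        // Scale up: threads split this rank's share of the work.
        #pragma omp parallel for reduction(+ : local)
        for (int i = 0; i < n; ++i)
            local += 1.0 / n;             // trivially sums to 1.0 per rank

        // Scale out: ranks combine their partial results across nodes.
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("total = %.3f (one per rank)\n", total);

        MPI_Finalize();
        return 0;
    }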

Programming for High Performance Processors

“Managing the work on each node can be referred to as domain parallelism. During the run of the application, the work assigned to each node can generally be isolated from the other nodes. Each node can work on its own and needs little communication with other nodes to perform its work. The tool the developer needs for this is MPI, though applications can also take advantage of frameworks such as Hadoop and Spark (for big data analytics). Managing the work for each core or thread requires control one level down. This type of work typically involves a large number of independent tasks that must share data between them.”
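
The within-node level described above, many independent tasks operating on shared data, maps naturally onto a tasking runtime. Below is a minimal OpenMP-tasks sketch of that idea (the variable names are illustrative):

    // node_tasks.cpp -- within-node level: independent tasks over shared
    // data, in contrast to the MPI domain decomposition used between nodes.
    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> data(8, 1.0);     // data visible to every task

        #pragma omp parallel
        #pragma omp single                    // one thread spawns the tasks
        {
            for (std::size_t i = 0; i < data.size(); ++i) {
                #pragma omp task firstprivate(i) shared(data)
                data[i] *= 2.0;               // each task updates its own slot
            }
            #pragma omp taskwait              // join before reading results
        }

        double sum = 0.0;
        for (double v : data) sum += v;
        std::printf("sum = %.1f (expected 16.0)\n", sum);
        return 0;
    }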

What’s Next for HPC? A Q&A with Michael Kagan, CTO of Mellanox

As an HPC technology vendor, Mellanox is in the business of providing the leading-edge interconnects that drive many of the world’s fastest supercomputers. To learn more about what’s new for SC16, we caught up with Michael Kagan, CTO of Mellanox. “Moving InfiniBand beyond EDR to HDR is critical not only for HPC, but also for the numerous industries that are adopting AI and Big Data to make real business sense out of the amount of data available and that we continue to collect on a daily basis.”