
Lenovo Updates LiCO Tools to Accelerate AI Deployment

Over at the Lenovo Blog, Dr. Bhushan Desam writes that the company just updated its LiCO tools to accelerate AI deployment and development for Enterprise and HPC implementations. “LiCO simplifies resource management and makes launching AI training jobs in clusters easy. LiCO currently supports multiple AI frameworks, including TensorFlow, Caffe, Intel Caffe, and MXNet. Additionally, multiple versions of those AI frameworks can easily be maintained and managed using Singularity containers. This consequently provides agility for IT managers to support development efforts for multiple users and applications simultaneously.”

NVIDIA Releases PGI 2018 Compilers and Tools

Today NVIDIA announced the availability of PGI 2018. “PGI is the default compiler on many of the world’s fastest computers including the Titan supercomputer at Oak Ridge National Laboratory. PGI production-quality compilers are for scientists and engineers using computing systems ranging from workstations to the fastest GPU-powered supercomputers.”

Univa Open Sources Project Tortuga for Moving HPC Workloads to the Cloud

Today Univa announced the contribution of its Navops Launch product to the open source community as Project Tortuga under an Apache 2.0 license. The free and open code is designed to help accelerate the transition of enterprise HPC workloads to the cloud. “Having access to more software that applies to a broad set of applications like high performance computing is key to making the transition to the cloud successful,” said William Fellows, Co-Founder and VP of Research, 451 Research. “Univa’s contribution of Navops Launch to the open source community will help with this process, and hopefully be an opportunity for cloud providers to contribute and use Tortuga as the on-ramp for HPC workloads.”

Accelerating HPC Applications on NVIDIA GPUs with OpenACC

Doug Miles from NVIDIA gave this talk at the Stanford HPC Conference. “This talk will include an introduction to the OpenACC programming model, provide examples of its use in a number of production applications, explain how OpenACC and CUDA Unified Memory working together can dramatically simplify GPU programming, and close with a few thoughts on OpenACC future directions.”

The Mont-Blanc project: Updates from the Barcelona Supercomputing Center

Filippo Mantovani from BSC gave this talk at the GoingARM workshop at SC17. “Since 2011, Mont-Blanc has pushed the adoption of Arm technology in High Performance Computing, deploying Arm-based prototypes, enhancing the system software ecosystem, and projecting the performance of current systems to develop new, more powerful and less power-hungry HPC platforms based on Arm SoCs. In this talk, Filippo introduces the latest Mont-Blanc system, called Dibona, designed and integrated by Bull/ATOS, the coordinator and industrial partner of the project.”

State of Linux Containers

Christian Kniep from Docker Inc. gave this talk at the Stanford HPC Conference. “This talk will recap the history of and what constitutes Linux Containers, before laying out how the technology is employed by various engines and what problems these engines have to solve. Afterward, Christian will elaborate on why the advent of standards for images and runtimes moved the discussion from building and distributing containers to orchestrating containerized applications at scale.”

Podcast: Open MPI for Exascale

In this Let’s Talk Exascale podcast, David Bernholdt from ORNL discusses the Open MPI for Exascale project, which focuses on the communication infrastructure of MPI, the Message Passing Interface, a widely used standard for interprocess communication in parallel computing. “Because applications may make millions or billions of short calls to the MPI library during the course of an execution, even small performance improvements can have a significant overall impact on the application runtime.”

Rigetti Computing Releases Forest 1.3 Quantum Software Platform

Rigetti Computing has released a new version of Forest, their quantum software platform. Forest 1.3 offers upgraded developer tools, improved stability, and faster execution. “Starting today, researchers using Forest will be upgraded to version 1.3, which provides better tools for optimizing and debugging quantum programs. The upgrade also provides greater stability in our quantum processor (QPU), which will let researchers run more powerful quantum programs. Forest is the easiest and most powerful way to build quantum applications today. We believe the combination of one of the most powerful gate-model quantum computers, cutting-edge classical hardware, and our unique hybrid classical/quantum architecture creates the clearest and shortest path toward the demonstration of unequivocal quantum advantage.”

Intel MKL Compact Matrix Functions Attain Significant Speedups

The latest version of the Intel® Math Kernel Library (MKL) offers vectorized “compact” functions for general and specialized computations on groups of small matrices. These functions store the matrices of a batch in an interleaved, SIMD-friendly layout and perform true SIMD (single instruction, multiple data) matrix computations, providing significant performance benefits over traditional approaches that exploit multithreading but rely on standard data formats.

Flow Graph Analyzer – Speed Up Your Applications

Using the Intel® Advisor Flow Graph Analyzer (FGA), applications such as those needed for autonomous driving can be designed and implemented on top of high-performance software and hardware. Underneath FGA sit Intel Threading Building Blocks (TBB) flow graphs, which take advantage of the multiple cores available on virtually all systems today.