ORiGAMI – Oak Ridge Graph Analytics for Medical Innovation

Rangan Sukumar from ORNL presented this talk at the HPC User Forum in Tucson. “ORiGAMI is a tool for discovering and evaluating potentially interesting associations and creating novel hypotheses in medicine. ORiGAMI will help you ‘connect the dots’ across 70 million knowledge nuggets published in 23 million papers in the medical literature. The tool works on a ‘Knowledge Graph’ derived from SEMANTIC MEDLINE, published by the National Library of Medicine, integrated with scalable software that enables term-based, path-based, meta-pattern and analogy-based reasoning principles.”
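ORiGAMI's reasoning runs at ORNL against the full SEMANTIC MEDLINE graph, so the snippet below is only a toy sketch of the path-based idea: enumerate short paths between two concepts that are not directly linked and surface the intermediate mechanisms as candidate hypotheses. The graph data here is illustrative (Swanson's classic fish-oil/Raynaud's example), not output from ORiGAMI.

```python
# Toy illustration of path-based reasoning over a small knowledge graph.
# Concept names and edges are illustrative, not data from SEMANTIC MEDLINE.
import networkx as nx

kg = nx.Graph()
kg.add_edge("fish oil", "blood viscosity", predicate="reduces")
kg.add_edge("blood viscosity", "Raynaud's disease", predicate="associated_with")
kg.add_edge("fish oil", "platelet aggregation", predicate="inhibits")
kg.add_edge("platelet aggregation", "Raynaud's disease", predicate="associated_with")

# Path-based reasoning: list short paths linking two concepts that never
# co-occur directly, exposing candidate intermediate mechanisms (hypotheses).
source, target = "fish oil", "Raynaud's disease"
for path in nx.all_simple_paths(kg, source, target, cutoff=2):
    links = [kg.edges[a, b]["predicate"] for a, b in zip(path, path[1:])]
    print(" -> ".join(f"{n} [{p}]" for n, p in zip(path, links)) + f" -> {path[-1]}")
```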

Monitoring Power Consumption with the Intelligent Platform Management Interface

“NWPerf is software that can measure and collect a wide range of performance data about an application or set of applications running on a cluster. With minimal impact on performance, NWPerf can gather historical information that can then be used in a visualization package. The data collected includes power consumption, using the Intelligent Platform Management Interface (IPMI) for the Intel Xeon processor and the libmicmgmt API for the Intel Xeon Phi coprocessor. Once the data is collected, data-extraction mechanisms make it possible to examine the power used across the cluster while the application is running.”
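NWPerf is a cluster-wide collector, but the basic IPMI power sample it builds on can be illustrated in a few lines. The sketch below shells out to ipmitool's DCMI power reading and parses the instantaneous wattage; it assumes ipmitool is installed and the BMC supports DCMI, and it is not NWPerf's actual collection code.

```python
# Minimal sketch: sample node power via IPMI DCMI using ipmitool.
# Assumes ipmitool is installed and the BMC supports DCMI power readings;
# an illustration of the data source, not NWPerf itself.
import re
import subprocess
import time

def read_power_watts():
    out = subprocess.run(
        ["ipmitool", "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Typical line: "Instantaneous power reading:   212 Watts" (format may vary)
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    for _ in range(5):                 # take a few one-second samples
        print(f"{time.time():.0f} {read_power_watts()} W")
        time.sleep(1)
```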

Modernizing Materials Code at OSC’s Intel Parallel Computing Center

A research team at the Ohio Supercomputer Center (OSC) is beginning the task of modernizing a computer software package that leverages large-scale, 3-D modeling to research fatigue and fracture analyses, primarily in metals. “The research is a result of OSC being selected as an Intel Parallel Computing Center. The Intel PCC program provides funding to universities, institutions and research labs to modernize key community codes used across a wide range of disciplines to run on current state-of-the-art parallel architectures. The primary focus is to modernize applications to increase parallelism and scalability through optimizations that leverage cores, caches, threads and vector capabilities of microprocessors and coprocessors.”
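The article includes no code, but the kind of modernization the Intel PCC program targets, replacing scalar loops with forms that map onto threads and vector units, can be shown in miniature. The NumPy sketch below is a generic example of that pattern, not OSC's fatigue and fracture software.

```python
# Generic illustration of the scalar-loop vs. vectorized pattern that code
# modernization targets; not taken from OSC's fatigue/fracture codes.
import numpy as np

strain = np.random.rand(1_000_000)
modulus = 200e9  # illustrative elastic modulus, Pa

# Scalar-style loop: one element at a time, hard to map onto vector units.
stress_loop = np.empty_like(strain)
for i in range(strain.size):
    stress_loop[i] = modulus * strain[i]

# Vectorized form: the whole-array operation is dispatched to optimized,
# SIMD-friendly kernels.
stress_vec = modulus * strain

assert np.allclose(stress_loop, stress_vec)
```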

Video: Software-Defined Networking on InfiniBand Fabrics

A design for virtual Ethernet networks over InfiniBand is described. The virtual Ethernet networks are implemented as overlays on the IB network and are managed flexibly in software: virtual networks can be created, removed, and assigned to servers dynamically. A virtual network can exist entirely on the IB fabric, or it can have an uplink connecting it to physical Ethernet through a gateway. The virtual networks are represented on the servers by virtual network interfaces, which can be used with para-virtualized I/O, SR-IOV, and non-virtualized I/O. This technology has many uses: communication between applications that are not IB-aware, communication between IB-connected servers and Ethernet-connected servers, and multi-tenancy for cloud environments. It can be used in conjunction with OpenStack, for example for tenant networks.
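The talk does not name a management API, so purely as a hypothetical sketch of the lifecycle described above (virtual networks created, removed, and assigned to servers, with an optional Ethernet gateway uplink), the data model might look like this; every name and method here is illustrative, not a real SDN interface.

```python
# Hypothetical model of the lifecycle described in the talk: virtual Ethernet
# networks overlaid on an IB fabric, with optional gateway uplinks.
# Names and methods are illustrative only, not a real SDN API.
from dataclasses import dataclass, field
from typing import Dict, Optional, Set

@dataclass
class VirtualNetwork:
    name: str
    uplink_gateway: Optional[str] = None            # physical Ethernet gateway, if any
    members: Set[str] = field(default_factory=set)  # servers holding a virtual NIC

class FabricManager:
    """Hypothetical manager mirroring the create/remove/assign operations."""

    def __init__(self) -> None:
        self.networks: Dict[str, VirtualNetwork] = {}

    def create(self, name: str, uplink_gateway: Optional[str] = None) -> None:
        self.networks[name] = VirtualNetwork(name, uplink_gateway)

    def remove(self, name: str) -> None:
        self.networks.pop(name, None)

    def assign(self, name: str, server: str) -> None:
        self.networks[name].members.add(server)

# An IB-only tenant network, and one bridged to physical Ethernet via a gateway.
mgr = FabricManager()
mgr.create("tenant-a")                        # exists entirely on the IB fabric
mgr.create("tenant-b", uplink_gateway="gw0")  # uplinked to physical Ethernet
mgr.assign("tenant-a", "node001")
```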

Manage Reproducibility of Computational Workflows with Docker Containers and Nextflow

“Research computational workflows consist of several pieces of third-party software and, because of their experimental nature, frequent changes and updates are commonly necessary, raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow pipelines to be packaged in an isolated and self-contained manner. This presentation will introduce our experience deploying genomic pipelines with Docker containers at the Center for Genomic Regulation (CRG). I will discuss how we implemented it, the main issues we faced, and the pros and cons of using Docker in an HPC environment, including a benchmark of the impact of container technology on the performance of the executed applications.”
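The CRG work drives containers through Nextflow, but the underlying idea, running each pipeline step inside a pinned image so the software stack travels with the workflow, can be sketched in a tool-agnostic way. The image tag and command below are placeholders, not the CRG pipelines.

```python
# Minimal sketch of containerized pipeline steps: run each tool inside a
# pinned Docker image so its dependencies travel with the workflow.
# Image and command are placeholders, not the CRG genomic pipelines.
import subprocess

def run_step(image, command, workdir):
    """Run one pipeline step inside a Docker container, mounting the data dir."""
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{workdir}:/data",   # bind-mount input/output directory
            "-w", "/data",              # work inside the mounted directory
            image,
        ] + command,
        check=True,
    )

# Example: a hypothetical alignment step pinned to an exact image tag.
run_step("biocontainers/bwa:v0.7.17_cv1", ["bwa", "index", "reference.fa"], "/tmp/run1")
```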

Nvidia Launches Tesla P100 Hyperscale Accelerator

“Our greatest scientific and technical challenges — finding cures for cancer, understanding climate change, building intelligent machines — require a near-infinite amount of computing performance,” said Jen-Hsun Huang, CEO and co-founder, NVIDIA. “We designed the Pascal GPU architecture from the ground up with innovation at every level. It represents a massive leap forward in computing performance and efficiency, and will help some of the smartest minds drive tomorrow’s advances.”

Texas A&M is the Latest Intel Parallel Computing Center

Texas A&M University’s High Performance Research Computing (HPRC) center is the latest Intel® Parallel Computing Center. “HPRC is proud to be recognized as an Intel Parallel Computing Center,” said Honggao Liu, director of High Performance Research Computing. “At HPRC we use high-performance computing to unite experts in numerous fields of study. This grant and multi-disciplinary project will allow us to better understand and solve issues within this critical software.”

Allinea Taps the Power of GPUs for High Performance Code

Today Allinea announced plans to showcase its software tools for developing and optimizing high performance code at the GPU Technology Conference April 4-7 in San Jose. The company will highlight the best practices required to unleash the potential performance within the latest generation of NVIDIA GPUs for a wide range of software applications.

Tutorial on the EasyBuild Framework

Kenneth Hoste from Ghent University presented this tutorial at the Switzerland HPC Conference. “One unnecessarily time-consuming task for HPC user support teams is installing software for users. Due to the advanced nature of a supercomputing system (think: multiple modern multi-core microprocessors, possibly alongside co-processors like GPUs, a high-performance network interconnect, bleeding-edge compilers and libraries, etc.), compiling the software from source on the actual operating system and system architecture on which it is going to be used is typically highly preferred over using readily available binary packages that were built in a generic way.”
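EasyBuild drives such from-source builds from small specification files (easyconfigs) written in Python syntax. The fragment below is a minimal sketch of what one looks like; the field values are illustrative rather than a tested recipe, so check the easybuilders repositories for real ones.

```python
# Minimal sketch of an EasyBuild easyconfig (a Python-syntax spec file).
# Field values are illustrative, not a tested recipe.
easyblock = 'ConfigureMake'            # generic configure / make / make install build

name = 'zlib'
version = '1.2.8'

homepage = 'http://www.zlib.net/'
description = "zlib compression library, built from source for this system"

# Toolchain: which compilers, MPI and math libraries to build with.
toolchain = {'name': 'foss', 'version': '2016a'}

sources = [SOURCE_TAR_GZ]              # EasyBuild template, resolves to zlib-1.2.8.tar.gz
source_urls = ['http://zlib.net/fossils/']

moduleclass = 'lib'
```

Given such a file, an `eb` invocation with dependency resolution enabled (the `--robot` option) builds the package and its dependencies and generates the matching environment module.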

Video: Shifter – Containers in HPC Environments

“Containers wrap up software with all of its dependencies in packages that can be executed anywhere. This can be especially useful in HPC environments where, often, getting the right combination of software tools to build applications is a daunting task. However, typical container solutions such as Docker are not a perfect fit for HPC environments. Instead, Shifter is a better fit, as it has been built from the ground up with HPC in mind. In this talk, we show you what Shifter is and how to leverage your current Docker environment to run your applications with Shifter.”
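The basic workflow, as deployed at sites like NERSC, is to pull a Docker image and convert it to Shifter's flattened format, then launch the application inside it. The command names and image spec below are assumptions based on that deployment; exact flags and batch-system integration differ per site, so consult your center's documentation.

```python
# Sketch of the Shifter workflow, driven from Python for illustration.
# Command names (shifterimg, shifter) follow NERSC's deployment; exact flags
# and site integration (e.g. a Slurm --image option) may differ per site.
import subprocess

image = "docker:ubuntu:16.04"   # illustrative Docker Hub image spec

# Pull the Docker image and convert it into Shifter's image format.
subprocess.run(["shifterimg", "pull", image], check=True)

# Run a command inside the converted image on the HPC system.
subprocess.run(["shifter", f"--image={image}", "cat", "/etc/os-release"], check=True)
```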