ISC 2017 Workshop Preview: Optimizing Linux Containers for HPC & Big Data Workloads

Christian Kniep is hosting a half-day Linux Container Workshop on Optimizing IT Infrastructure and High-Performance Workloads on June 23 in Frankfurt. “Docker, as the dominant flavor of Linux Containers, continues to gain momentum within datacenters all over the world. It can benefit legacy infrastructure by leveraging its lower overhead compared to traditional, hypervisor-based virtualization. But there is more to Linux Containers – and to Docker in particular – which this workshop will explore.”

Liqid Showcases Composable Infrastructure for GPUs at GTC 2017

“The Liqid Composable Infrastructure (CI) Platform is the first solution to support GPUs as a dynamic, assignable, bare-metal resource. With the addition of graphics processing, the Liqid CI Platform delivers the industry’s most fully realized approach to composable infrastructure architecture. With this technology, disaggregated pools of compute, networking, data storage and graphics processing elements can be deployed on demand as bare-metal resources and instantly repurposed when infrastructure needs change.”

A Seat at the Table – The Value of Women in High-Performance Computing

It’s fair to say that women continue to be underrepresented in STEM, but the question is whether there is a systemic bias making it difficult for women to join and succeed in tech industries, or whether the tech industry has simply failed to motivate and persuade women to join. Intel’s Figen Ulgen shares her view.

Avere Systems Powers BioTeam Test Lab at TACC

“In cooperation with vendors and TACC, BioTeam uses the lab to evaluate solutions for its clients by standing up, configuring, and testing new infrastructure under conditions relevant to life sciences, in order to deliver on its mission of providing objective, vendor-agnostic solutions to researchers. The life sciences community is producing increasingly large amounts of data from sources ranging from laboratory analytical devices to research and patient data, putting IT organizations under pressure to support these growing workloads.”

Introduction to Parallel Programming with OpenACC – Part 2

In this video, Michael Wolfe from PGI continues his series of tutorials on parallel programming. “The second in a series of short videos to introduce you to parallel programming with OpenACC and the PGI compilers, using C++ or Fortran. You will learn by example how to build a simple example program, how to add OpenACC directives, and how to rebuild the program for parallel execution on a multicore system. To get the most out of this video, you should download the example programs and follow along on your workstation.”
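For readers who want a feel for the workflow before watching, here is a minimal sketch of an OpenACC-annotated SAXPY loop in C++. The program, problem size, and build flags are illustrative assumptions, not material taken from the tutorial itself:

    // saxpy.cpp - illustrative OpenACC sketch (not from the PGI tutorial).
    // Build for multicore with, e.g.: pgc++ -acc -ta=multicore saxpy.cpp
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;                     // assumed problem size
        std::vector<float> x(n, 1.0f), y(n, 2.0f);
        const float a = 3.0f;
        float *xp = x.data(), *yp = y.data();

        // Ask the compiler to parallelize this loop across cores; without
        // the -acc flag the directive is ignored and the loop runs serially.
        #pragma acc parallel loop
        for (int i = 0; i < n; ++i)
            yp[i] = a * xp[i] + yp[i];

        std::printf("y[0] = %f\n", y[0]);          // expect 5.000000
        return 0;
    }

The same source compiles unchanged as a serial program with any C++ compiler; this directive-based approach is what lets a single codebase target serial, multicore, and GPU execution.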

OCF Deploys 600 Teraflop Cluster at University of Bristol

OCF in the UK has deployed a new 600 teraflop supercomputer at the University of Bristol. Designed, integrated, and configured by OCF, the system is the largest of any UK university by core count. “Early benchmarking is showing that the new system is three times faster than our previous cluster.”

One Stop Systems Showcases HPC as a Service at GTC 2017

In this video from GTC 2017, Jaan Mannik from One Stop Systems describes the company’s new HPC as a Service offering. As makers of high-density GPU expansion chassis, One Stop Systems designs and manufactures high performance computing systems that revolutionize the data center by increasing speed to the Internet while reducing cost and impact on the infrastructure.

HPE Introduces the World’s Largest Single-memory Computer

Hewlett Packard Enterprise today introduced the world’s largest single-memory computer, the latest milestone in The Machine research project. “The prototype unveiled today contains 160 terabytes (TB) of memory, capable of simultaneously working with the data held in every book in the Library of Congress five times over—or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing.”
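For scale, a rough back-of-the-envelope check of that comparison, assuming HPE’s figure of approximately 160 million books:

    160 TB ÷ 160,000,000 books ≈ 1 MB per book

which is roughly the size of the plain text of a typical book.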

DEEP-ER Project Paves the Way to Future Supercomputers

“The DEEP-ER project has created far-reaching impact. Its results have led to widespread innovation and substantially reinforced the position of European industry and academia in HPC. We are more than happy to be granted the opportunity to continue our DEEP projects’ journey and generalize the Cluster-Booster approach to create a truly Modular Supercomputing system,” says Prof. Dr. Thomas Lippert, Head of Jülich Supercomputing Centre and Scientific Coordinator of the DEEP-ER project.

D-Wave Lands $50M Funding for Next Generation Quantum Computers

Today D-Wave Systems announced that it has received up to $50 million in funding from PSP Investments, bringing D-Wave’s total funding to approximately US$200 million. The new capital is expected to enable D-Wave to deploy its next-generation quantum computing system with more densely connected qubits, as well as platforms and products for machine learning applications. “This commitment from PSP Investments is a strong validation of D-Wave’s leadership in quantum computing,” said Vern Brownell, CEO of D-Wave. “While other organizations are researching quantum computing and building small prototypes in the lab, the support of our customers and investors enables us to deliver quantum computing technology for real-world applications today. In fact, we’ve already demonstrated practical uses of quantum computing with innovative companies like Volkswagen. This new investment provides a solid base as we build the next generation of our technology.”