The High Performance Virtual Computer for Graphics-Intensive Applications

Many industries deploy graphics-intensive applications on single-user workstations with individual GPU resources. For those who have switched to a virtualization-based environment, many legacy desktop virtualization platforms can't support high-end GPUs or multiple-GPU configurations. Together with partners like Cisco and One Stop Systems, London-based tech start-up ebb3 has created the High Performance Virtual Computer (HPVC) to tackle this issue, with the aim of creating the fastest-performing solution in the world.

NYU Advances Robotics with Nvidia DGX-1 Deep Learning Supercomputer

In this video, NYU researchers describe their plans to advance deep learning with their new Nvidia DGX-1 AI supercomputer. “The DGX-1 is going to be used in just about every research project we have here,” said Yann LeCun, founding director of the NYU Center for Data Science and a pioneer in the field of AI. “The students here can’t wait to get their hands on it.”

Datacenter Efficiencies Through Innovative Cooling

Datacenters designed for High Performance Computing (HPC) applications are more difficult to design and construct than those intended for more basic enterprise applications. Organizations building these datacenters need to be aware of, and design for, systems that are expected to run at or near maximum performance for the entire lifecycle of the servers.

2016 Intel HPC Developer Conference Addresses In-Demand Topics

Supercomputing developers and experts from around the globe will converge on Salt Lake City, Utah for the 2016 Intel® HPC Developer Conference on November 12-13 – just prior to SC ’16. Conference attendance is free; however, those interested in attending should register quickly, as Intel is expecting a big response, reflecting the broadening demand for HPC learning opportunities among technical developers. Read on to learn about the incredible presenter lineup this year.

Video: Cycle Computing Works with Dell to Deliver More Science for More Users

In this video from ISC 2016, Tim Carroll describes how Cycle Computing is working with Dell Technologies to deliver more science for more users. Cycle Computing’s CycleCloud software suite is the leading cloud orchestration, provisioning, and data management platform for Big Compute, Big Data, and large technical computing applications running on any public, private, or internal environment.

Slidecast: For AMD, It’s Time to ROCm!

“AMD has been away from the HPC space for a while, but now they are coming back in a big way with an open software approach to GPU computing. The Radeon Open Compute Platform (ROCm) was born from the Boltzmann Initiative announced last year at SC15. Now available on GitHub, the ROCm Platform brings a rich foundation to advanced computing by better integrating the CPU and GPU to solve real-world problems.”
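
To give a flavor of what GPU computing on ROCm looks like, below is a minimal vector-add sketch written against the HIP runtime that ships with ROCm. It is an illustrative example only, not code from the ROCm repositories; it assumes a working hipcc toolchain and a supported GPU, and the kernel and variable names are hypothetical.

// vec_add.cpp - build with: hipcc vec_add.cpp -o vec_add
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

// Simple element-wise addition kernel: c[i] = a[i] + b[i]
__global__ void vec_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers
    float *da, *db, *dc;
    hipMalloc(&da, bytes);
    hipMalloc(&db, bytes);
    hipMalloc(&dc, bytes);
    hipMemcpy(da, ha, bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb, bytes, hipMemcpyHostToDevice);

    // Launch with 256 threads per block and enough blocks to cover n elements
    hipLaunchKernelGGL(vec_add, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);

    hipMemcpy(hc, dc, bytes, hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    hipFree(da); hipFree(db); hipFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Because HIP mirrors the CUDA runtime API closely, code like this can typically be ported between AMD and NVIDIA GPUs with little or no change, which is part of the open-platform appeal described above.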

Co-design for Data Analytics And Machine Learning

The big data analytics market has seen rapid growth in recent years. Part of this trend includes the increased use of machine learning (deep learning) technologies. Indeed, machine learning speed has been drastically increased through the use of GPU accelerators. The issues facing the HPC market are similar to those in the analytics market — efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.

The Future of HPC Application Management in a Post Cloud World

The prevalence of cloud computing has changed the HPC landscape, necessitating HPC management tools that can manage and simplify complex environments in order to optimize flexibility and speed. Altair’s new solution, PBS Cloud Manager, makes it easy to build and manage HPC application stacks.

Network Co-design as a Gateway to Exascale

Achieving better scalability and performance at Exascale will require full data reach. Without this capability, onload architectures force all data to move to the CPU before allowing any analysis. The ability to analyze data everywhere means that every active component in the cluster will contribute to the computing capabilities and boost performance. In effect, the interconnect will become its own “CPU” and provide in-network computing capabilities.

Facilitate HPC Deployments with Reference Designs for Intel Scalable System Framework

With the Intel Scalable System Framework Architecture Specification and Reference Designs, the company is making it easier to accelerate the time to discovery through high-performance computing. The Reference Architectures (RAs) and Reference Designs take Intel Scalable System Framework to the next step: deploying it in ways that will allow users to confidently run their workloads and allow system builders to innovate and differentiate their designs.