Video: Towards the Decoding of the Human Brain

Katrin Amunts from Jülich presented this keynote at the PASC17 conference. “The human brain has a multi-level organization and high complexity. New approaches are necessary to decode the brain with its 86 billion nerve cells, each with 10,000 connections. 3D Polarized Light Imaging, for example, elucidates the connectional architecture at the level of axons while keeping the topography of the whole organ; it results in data sets of several petabytes per brain, which should be actively accessible while minimizing their transport. The Human Brain Project is creating a cutting-edge HPC and HPDA infrastructure to address such challenges, including cloud-based collaboration and development platforms with databases, workflow systems, petabyte storage, and supercomputers.”

Bright Computing Steps up with Cloud Bursting to Azure at ISC 2017

In this video from ISC 2017, Bill Wagner from Bright Computing describes the company’s new capabilities for Cloud Bursting to Microsoft Azure. “Cloud bursting from an on-premises cluster to Microsoft Azure offers companies an efficient, cost-effective, secure and flexible way to add additional resources to their HPC infrastructure. Bright’s integration with Azure also gives our clients the ability to build an entire off-premises cluster for compute-intensive workloads in the Azure cloud platform.”

Andrew Jones wraps up ISC 2017

In this special guest feature, Andrew Jones from NAG offers his perspective on ISC 2017. How he came to know such things is a mystery, as Mr. Jones did not attend the show this year. One thing is for sure: from now on, I’m going to assume his agents are everywhere.

Supercomputing the Secrets of the Snake Genome at TACC

Researchers at the University of Texas at Arlington are using TACC supercomputers to study the unique traits of snake evolution. Led by assistant professor of biology Todd Castoe, the team is exploring the genomes of snakes and lizards to answer critical questions about these creatures’ evolutionary history. For instance, how did they develop venom? How do they regenerate their organs? And how do evolutionarily-derived variations in genes lead to variations in how organisms look and function? “Some of the most basic questions drive our research. Yet trying to understand the genetic explanations of such questions is surprisingly difficult considering most vertebrate genomes, including our own, are made up of literally billions of DNA bases that can determine how an organism looks and functions,” says Castoe. “Understanding these links between differences in DNA and differences in form and function is central to understanding biology and disease, and investigating these critical links requires massive computing power.”

AFRL Taps IBM to Build Brain-Inspired AI Supercomputer

Today IBM announced they are collaborating with the U.S. Air Force Research Laboratory (AFRL) on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery.

One Stop Systems Showcases GPU and Flash Appliances at ISC

This week in Frankfurt, One Stop Systems showcased all-new HPC appliances at ISC 2017. “One Stop Systems offers a wide variety of GPU and Flash appliances to support a range of customer needs,” said Steve Cooper, CEO of One Stop Systems. “As GPU and flash technology continue to improve, OSS products are immediately able to accommodate the newest and most powerful GPUs and the highest capacity flash cards. Now we’re offering customers the opportunity to lease time on our HPC appliances through SkyScale. SkyScale utilizes OSS GPU appliances to build, configure, and manage dedicated systems strategically located in maximum-security facilities, allowing customers to focus on results while minimizing capital equipment investment.”

Deep Learning Frameworks Get a Performance Benefit from Intel MKL Matrix-Matrix Multiplication

Intel® Math Kernel Library 2017 (Intel® MKL 2017) includes new GEMM kernels that are optimized for various skewed matrix sizes. The new kernels take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and achieve high GEMM performance on multicore and many-core Intel® architectures, particularly for the matrix shapes arising from deep neural networks.
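To illustrate the kind of workload these kernels target (this is a hedged sketch, not Intel’s benchmark code), a fully connected deep-learning layer reduces to a single GEMM call on a tall-skinny (“skewed”) matrix. In Python, NumPy’s matrix multiply dispatches to the underlying BLAS, which is Intel MKL when NumPy is built against it; the shapes below are illustrative, not taken from Intel’s measurements:

```python
import numpy as np

# Skewed shapes typical of a fully connected DNN layer:
# many samples (rows) against comparatively few output features.
# These sizes are illustrative assumptions, not Intel benchmark shapes.
batch, in_features, out_features = 2048, 784, 100

activations = np.random.rand(batch, in_features).astype(np.float32)
weights = np.random.rand(in_features, out_features).astype(np.float32)

# This matmul lowers to one SGEMM call in the linked BLAS library
# (Intel MKL when NumPy is built against it), where the optimized
# AVX-512 kernels described above would be used on supported CPUs.
outputs = activations @ weights

print(outputs.shape)
```

Frameworks such as TensorFlow and Caffe benefit the same way: their dense and convolutional layers ultimately funnel through these GEMM kernels.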

Inspur Unveils GX4 AI Accelerator

Today at ISC 2017, Inspur unveiled the GX4, a new flexible and highly scalable AI accelerator box. The GX4 decouples coprocessor resources (GPU, Xeon Phi, and FPGA) from the CPU, expands computing power on demand, and provides highly flexible support for a variety of GPU-accelerated AI applications. It follows the release of the ultra-high-density AI supercomputer AGX-2 last month at GTC 2017 in California. According to Jay Zhang from Inspur, the GX4 “addresses the major differences in AI deep-learning training models, using a flexible expansion method to support different levels of AI training models while effectively lowering energy consumption and delays. The GX4 provides a flexible and innovative AI computing solution for companies and research organizations engaged in artificial intelligence across the world.”

Dr. Eng Lim Goh on HPE’s Recent PathForward Award for Exascale Computing

In this video from ISC 2017, Dr. Eng Lim Goh from HPE discusses the company’s recent PathForward award as well as the challenges of designing energy efficient Exascale systems. After that, he gives his unique perspective on HPE’s “The Machine” architecture for memory-driven computing. “The work funded by PathForward will include development of innovative memory architectures, higher-speed interconnects, improved reliability systems, and approaches for increasing computing power without prohibitive increases in energy demand.”

NVIDIA Powers Top 13 Most Energy Efficient Supercomputers

Today NVIDIA announced that the NVIDIA Tesla AI supercomputing platform powers the top 13 measured systems on the new Green500 list of the world’s most energy-efficient high performance computing systems. All 13 use NVIDIA Tesla P100 data center GPU accelerators, including four systems based on the NVIDIA DGX-1 AI supercomputer. “NVIDIA also released performance data illustrating that NVIDIA Tesla GPUs have improved performance for HPC applications by 3X over the Kepler architecture released two years ago. This significantly boosts performance beyond what would have been predicted by Moore’s Law, even before it began slowing in recent years.”