Job of the Week: Senior Memory Systems Architect at NVIDIA

NVIDIA in Silicon Valley is seeking a Senior Memory Systems Architect in our Job of the Week. “NVIDIA is building the world’s fastest highly-parallel processing systems, period. Our high-bandwidth multi-client memory subsystems are blazing new territory with every generation. As we increase levels of parallelism, bandwidth and capacity, we are presented with design challenges exacerbated by clients with varying but simultaneous needs such as real-time, low latency, and high-bandwidth. In addition, we are adding improved virtualization and programming model capabilities.”

Stanford HPC Conference Returns to Palo Alto in February

Today the HPC Advisory Council announced that its 2018 Stanford HPC Conference will take place February 20-21, 2018 at Stanford University. The annual California-based conference draws experts from all over the world for two days of thought-leadership talks and immersive tutorials focusing on emerging trends, with extensive coverage of AI, Data Sciences, HPC, Machine Learning and more. “The Stanford Conference is an intimate gathering of the global HPC community who come together to collaborate and innovate the way to the future,” said Steve Jones, Director of Stanford’s High Performance Computing Center. “SMEs, mentors, students, peers and professionals, representing a diverse range of disciplines, interests and industry, are drawn to the conference to learn from each other and leave collectively inspired to contribute to making the world a better place.”

EuroExa Project Puts Europe on the Road to Exascale

In this special guest feature from Scientific Computing World, Robert Roe writes that the EuroExa project has Europe on the road to exascale computing. “Ultimately, the goals for exascale computing projects are focused on delivering and supporting an exascale-class supercomputer, but the benefits have the potential to drive future developments far beyond the small number of potential exascale systems. Projects such as EuroExa and the Exascale Computing Project in the US could have far-reaching benefits for smaller-scale HPC systems.”

Slidecast: HPC and the Cloud – Announcing the Ellexus Container Checker

In this slidecast, Dr. Rosemary Francis describes the new Ellexus Container Checker, a pioneering cloud-based tool that provides visibility into the inner workings of Docker containers. “Container Checker will help people using cloud platforms to quickly detect problems within their containers before they are let loose on the cloud to potentially waste time and compute spend. Estimates suggest that up to 45% of cloud spend is wasted due in part to unknown application activity and unsuitable storage decisions, which is what we want to help businesses tackle.”

Experiences in providing secure multi-tenant Lustre access to OpenStack

In this video from the DDN User Group at SC17, Dr. Peter Clapham from Wellcome Trust Sanger Institute presents: Experiences in providing secure multi-tenant Lustre access to OpenStack. “If you need 10,000 cores to perform an extra layer of analysis in an hour, you have to scale a significant cluster to get answers quickly. You need a real solution that can address everything from very small to extremely large data sets.”

Quantum Launches Scale-out NAS for High-Value and Data-Intensive Workloads

“There is a gap in the market between NAS systems designed for enterprise data management and HPC solutions designed for data-intensive workloads,” said Molly Presley, vice president, Global Marketing, Quantum. “Xcellis Scale-out NAS fills this gap with the features needed by enterprises and the performance required by HPC in a single solution. Xcellis uniquely delivers capacity with the economics of tape and cloud and integrated AI for advanced data insights and can even support traditional block storage demands within the same platform.”

MeDiCI – How to Withstand a Research Data Tsunami

Jake Carroll from The Queensland Brain Institute gave this talk at the DDN User Group. “The Metropolitan Data Caching Infrastructure (MeDiCI) project is a data storage fabric developed and trialled at UQ that delivers data to researchers where needed at any time. The “magic” of MeDiCI is it offers the illusion of a single virtual data centre next door even when it is actually distributed over potentially very wide areas with varying network connectivity.”

Speeding Data Transfer with ESnet’s Petascale DTN Project

Researchers at the DOE are looking to dramatically increase their data transfer capabilities with the Petascale DTN project. “The collaboration, named the Petascale DTN project, also includes the National Center for Supercomputing Applications (NCSA) at the University of Illinois in Urbana-Champaign, a leading center funded by the National Science Foundation (NSF). Together, the collaboration aims to achieve regular disk-to-disk, end-to-end transfer rates of one petabyte per week between major facilities, which translates to achievable throughput rates of about 15 Gbps on real world science data sets.”
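As a quick sanity check on those figures (my own arithmetic, not from the article): sustaining a disk-to-disk transfer of one petabyte per week works out to an average of roughly 13 Gbps, so an achievable rate of about 15 Gbps leaves some headroom for protocol overhead, retransfers, and scheduling gaps.

```python
# Back-of-the-envelope check: one petabyte per week as a sustained bit rate.
PETABYTE_BITS = 1e15 * 8          # 1 PB (decimal) expressed in bits
SECONDS_PER_WEEK = 7 * 24 * 3600  # 604,800 seconds

avg_gbps = PETABYTE_BITS / SECONDS_PER_WEEK / 1e9
print(f"1 PB/week = {avg_gbps:.1f} Gbps sustained")  # ~13.2 Gbps
```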

New Avere FXT Edge Filer Doubles Performance, Capacity, and Bandwidth for Challenging Workloads

Today Avere Systems introduced its top-of-the-line FXT Edge filer, the FXT 5850. Designed for high data-growth industries, the new FXT enables customers to speed time to market, produce higher-quality output and modernize their IT infrastructure with both cloud and advanced networking technologies.

“Our customers in the fields of scientific research, financial services, media and entertainment and others are nearing the limits of the modern data center with ever-increasing workload demands,” said Jeff Tabor, Senior Director of Product Management and Marketing at Avere Systems. “Built on Avere’s enterprise-proven file system technology, FXT 5850 delivers unparalleled performance and capacity to support the most compute-intensive environments and help our customers accelerate their businesses.”

Video: DDN Applied Technologies, Performance and Use Cases

James Coomer gave this talk at the DDN User Group at SC17. “Our technological and market leadership comes from our long-term investments in leading-edge research and development, our relentless focus on solving our customers’ end-to-end data and information management challenges, and the excellence of our employees around the globe, all relentlessly focused on delivering the highest levels of satisfaction to our customers. To meet these ever-increasing requirements, users are rapidly adopting DDN’s best-of-breed high-performance storage solutions for end-to-end data management from data creation and persistent storage to active archives and the Cloud.”