

Video: Inside the Award-Winning Stanford Living Heart Project

In this video, Staffan Hansson from AdvaniaDC chats with Wolfgang Gentzsch from The UberCloud about the award-winning Stanford Living Heart Project, why the partnership was so successful, and his thoughts on HPC in the cloud and what it means for the future of research. “The Stanford LHP project is simulating cardiac arrhythmia, which can be an undesirable and potentially lethal side effect of drugs.”

Stanford HPC Conference Returns to Palo Alto in February

Today the HPC Advisory Council announced that its 2018 Stanford HPC Conference will take place February 20-21, 2018 at Stanford University. The annual California-based conference draws experts from all over the world for two days of thought leadership talks and immersive tutorials focusing on emerging trends, with extensive coverage of AI, Data Sciences, HPC, Machine Learning, and more. “The Stanford Conference is an intimate gathering of the global HPC community who come together to collaborate and innovate the way to the future,” said Steve Jones, Director of Stanford’s High Performance Computing Center. “SMEs, mentors, students, peers and professionals, representing a diverse range of disciplines, interests and industry, are drawn to the conference to learn from each other and leave collectively inspired to contribute to making the world a better place.”

Video: How to Get Best-in-Class HPC Performance with Intel Xeon Skylake

“HPC cloud services built on the latest Intel architecture, the Skylake Xeon processor, are now powering the C5 compute-intensive instance at AWS and can serve as your next-generation HPC platform. Hear how customers are starting to consider hybrid strategies to increase productivity and lower their capital expenditure and maintenance costs. Also learn how to adapt this model to meet the increasing HPC and data analytics needs of your applications with the new technologies incorporated into the platform.”

Rescale Demonstrates Easy HPC Cloud Management at SC17

In this video from SC17, Peter Lyu from Rescale demonstrates how the company brings HPC workloads to the cloud. “Rescale helps customers shift from complex on-premise workflows to an easy-to-use, web-based engineering SaaS workflow. Rescale supports all types of workflows, including multidisciplinary exploration, optimization, design of experiments, and more.”

Job of the Week: HPC Systems Engineer at Taos

Taos is immediately hiring an HPC Systems Engineer for a cutting-edge tech company in Sunnyvale, CA! We’re changing the face of some of the most innovative companies with our diverse solution offerings, exceptional talent, and thought leadership. Our clients look to us first for advice, insight, and support, driving us to relentlessly focus on customer success.

Rescale Brings HPC Workloads to the Cloud at SC17

In this video from SC17, Gabriel Broner from Rescale describes how the company brings HPC workloads to the cloud. “Rescale offers HPC in the cloud for engineers and scientists, delivering computational performance on demand. Using the latest hardware architecture at cloud providers and supercomputing centers, Rescale enables users to extend their on-premise systems with optimized HPC in the cloud.”

Slidecast: HPC and the Cloud – Announcing the Ellexus Container Checker

In this slidecast, Dr. Rosemary Francis describes the new Ellexus Container Checker, a pioneering cloud-based tool that provides visibility into the inner workings of Docker containers. “Container Checker will help people using cloud platforms to quickly detect problems within their containers before they are let loose on the cloud, where they could waste time and compute spend. Estimates suggest that up to 45% of cloud spend is wasted, due in part to unknown application activity and unsuitable storage decisions, which is what we want to help businesses tackle.”

Baidu Deploys AMD EPYC Single Socket Platforms for ‘ABC’ Datacenters

“Baidu’s mission is to make a complex world simpler through technology, and we are constantly looking to discover and apply the latest cutting-edge technologies, innovations, and solutions to business. AMD EPYC processors provide Baidu with a new level of energy-efficient and powerful computing capability.”

MeDiCI – How to Withstand a Research Data Tsunami

Jake Carroll from The Queensland Brain Institute gave this talk at the DDN User Group. “The Metropolitan Data Caching Infrastructure (MeDiCI) project is a data storage fabric developed and trialled at UQ that delivers data to researchers where needed at any time. The ‘magic’ of MeDiCI is that it offers the illusion of a single virtual data centre next door, even when the data is actually distributed over potentially very wide areas with varying network connectivity.”

New Avere FXT Edge Filer Doubles Performance, Capacity, and Bandwidth for Challenging Workloads

Today Avere Systems introduced its top-of-the-line FXT Edge filer, the FXT 5850. Designed for high data-growth industries, the new FXT enables customers to speed time to market, produce higher-quality output, and modernize their IT infrastructure with both cloud and advanced networking technologies.

“Our customers in the fields of scientific research, financial services, media and entertainment, and others are nearing the limits of the modern data center with ever-increasing workload demands,” said Jeff Tabor, Senior Director of Product Management and Marketing at Avere Systems. “Built on Avere’s enterprise-proven file system technology, the FXT 5850 delivers unparalleled performance and capacity to support the most compute-intensive environments and help our customers accelerate their businesses.”