Video: HPE Goes for Mission to Mars with Supercomputer Launch

“Traveling to Mars and further destinations will require more sophisticated computing capabilities to cut down on communication latencies and ensure astronauts’ survival, but existing computing resources are limited and incapable of extended periods of uptime. Settled aboard the SpaceX Dragon Spacecraft, the Spaceborne Computer is a year-long experiment from HPE and NASA—roughly the amount of time it will take to get to Mars—which will test a supercomputer’s ability to function in the harsh conditions of space.”

HPC Analyst Crossfire at ISC 2017

In this video from ISC 2017, Addison Snell from Intersect360 Research fires back at industry leaders with hard-hitting questions about the state of the HPC industry. “Listen in as visionary leaders from the supercomputing community comment on forward-looking trends that will shape the industry this year and beyond.”

Intel Xeon Scalable Platform Enables Major Advancements for HPC

Described by Intel as the biggest platform advancement in a decade, the new Intel Xeon Scalable Platform accelerates high-performance computing workloads and provides new capabilities to advance emerging fields like artificial intelligence and self-driving vehicles.

Video: HPE Powers 1 Petaflop QURIOSITY Supercomputer at BASF

“In today’s data-driven economy, high performance computing plays a pivotal role in driving advances in space exploration, biology and artificial intelligence,” said Meg Whitman, President and Chief Executive Officer, Hewlett Packard Enterprise. “We expect this supercomputer to help BASF perform prodigious calculations at lightning fast speeds, resulting in a broad range of innovations to solve new problems and advance our world.”

Advancing AI Capabilities with Next-Generation HPC Solutions

HPE and NVIDIA are delivering IT solutions with superhuman intelligence to harness the full power of AI and pioneer the next generation of HPC systems. In this special guest feature, HPE’s Vineeth Ram explores the possibilities of advancing AI capabilities with next-generation HPC solutions. “HPE is excited to complement our purpose-built systems innovation for Deep Learning and AI with the unique, industry leading strengths of the NVIDIA V100 technology architecture to accelerate insights and intelligence for our customers.”

Nick Nystrom Named Interim Director of PSC

Nick Nystrom has been appointed Interim Director of the Pittsburgh Supercomputing Center. Nystrom succeeds Michael Levine and Ralph Roskies, who have been co-directors of PSC since its founding in 1985. “During the interim period, Nystrom will oversee PSC’s state-of-the-art research into high-performance computing, data analytics, science and communications, working closely with Levine and Roskies to ensure a smooth and seamless transition.”

Video: ddR – Distributed Data Structures in R

“A few weeks ago, we revealed ddR (Distributed Data-structures in R), an exciting new project started by R-Core, Hewlett Packard Enterprise, and others that provides a fresh new set of computational primitives for distributed and parallel computing in R. The package sets the seed for what may become a standardized and easy way to write parallel algorithms in R, regardless of the computational engine of choice.”

insideBIGDATA Guide to Artificial Intelligence & Deep Learning

This guide to artificial intelligence explains the differences between AI, machine learning, and deep learning, and examines the intersection of AI and HPC. To learn more about AI and HPC, download this guide.

Deep Learning in the Spotlight at ISC

This year the ISC conference dedicated an entire day, Wednesday, June 21, to deep learning, discussing recent advances in artificial intelligence based on deep learning technology. However, it was not just the conference sessions where deep learning dominated the conversation: the show floor of the exhibition hosted many new products dedicated to optimizing HPC hardware for deep learning and AI workloads.

How HPE is Approaching Exascale with Memory-Driven Computing

In this video from ISC 2017, Mike Vildibill of Hewlett Packard Enterprise describes why we need Exascale and how the company is pushing forward with Memory-Driven Computing. “At the heart of HPE’s exascale reference design is Memory-Driven Computing, an architecture that puts memory, not processing, at the center of the computing platform to realize a new level of performance and efficiency gains. HPE’s Memory-Driven Computing architecture is a scalable portfolio of technologies that Hewlett Packard Labs developed via The Machine research project. On May 16, 2017, HPE unveiled the latest prototype from this project, the world’s largest single-memory computer.”