WekaIO Unveils Industry’s First Cloud-native Scalable File System

Today WekaIO, a venture-backed high-performance cloud storage software company, emerged from stealth to introduce the industry’s first cloud-native scalable file system, which delivers unprecedented performance to applications and scales to exabytes of data in a single namespace. Headquartered in San Jose, CA, WekaIO has developed the first software platform that harnesses flash technology to create a high-performance, parallel, scale-out file storage solution for both on-premises servers and public clouds.
“Data is at the heart of every business, but many industries are hurt by the performance limitations of their storage infrastructure,” said Michael Raam, president and CEO of WekaIO. “We are heralding a new era of storage, having developed a true scale-out data infrastructure that puts independent, on-demand capacity and performance control into the hands of our customers. It’s exciting to be part of a company that delivers a true revolution for the storage industry.”

NEC’s Aurora Vector Engine & Advanced Storage Speed HPC & Machine Learning at ISC 2017

In this video from ISC 2017, Oliver Tennert from NEC Deutschland GmbH introduces the company’s advanced technologies for HPC and Machine Learning. “Today NEC Corporation announced that it has developed data processing technology that accelerates the execution of machine learning on vector computers by more than 50 times in comparison to Apache Spark technologies.”

OCF Builds POWER8 Supercomputer for Atomic Weapons Establishment in the UK

High Performance Computing integrator OCF is supporting scientific research at the UK Atomic Weapons Establishment (AWE), with the design, testing and implementation of a new HPC cluster and a separate big data storage system. “The new HPC system is built on IBM’s POWER8 architecture and a separate parallel file system, called Cedar 3, built on IBM Spectrum Scale. In early benchmark testing, Cedar 3 is operating 10 times faster than the previous high-performance storage system at AWE. Both server and storage systems use IBM Spectrum Protect for data backup and recovery.”

Top Weather and Climate Sites run on DDN Storage

“DDN’s unique ability to handle tough application I/O profiles at speed and scale gives weather and climate organizations the infrastructure they need for rapid, high-fidelity modeling,” said Laura Shepard, senior director of product marketing, DDN. “These capabilities are essential to DDN’s growing base of weather and climate organizations, which are at the forefront of scientific research and advancements – from whole climate atmospheric and oceanic modeling to hurricane and severe weather emergency preparedness to the use of revolutionary, new, high-resolution satellite imagery in weather forecasting.”

DDN Drives Discoveries at Van Andel Research Institute

“Deploying DDN’s end-to-end storage solution has allowed us to elevate the standard of protection, increase compliance and push the boundaries of science on a single, highly scalable storage platform,” said Ramjan. “We’ve also saved hundreds of thousands of dollars by centralizing the storage of our data-intensive research and a dozen data-hungry scientific instruments on DDN. With all these advantages it is easy to see why DDN is core to our operation and a major asset to our scientists.”

Video: Lenovo Powers Manufacturing Innovation at Hartree Centre

“STFC Hartree Centre needed a powerful, flexible server system that could drive research in energy efficiency as well as economic impact for its clients. By extending its System x platform with NeXtScale System, Hartree Centre can now move toward exascale computing, support sustainable energy use and help its clients gain a competitive advantage.” Sophisticated data processes are now integral to all areas of research and business. Whether you are new to discovering the potential of supercomputing, data analytics and cognitive techniques, or are already using them, Hartree’s easy-to-use portfolio of advanced computing facilities, software tools and know-how can help you create better research outcomes that are also faster and cheaper than traditional research methods.

DDN and IBM Team on 50 TB per Day, Inter-Continental Active Archive Solution

Today DDN announced that Yahoo! JAPAN has deployed an active archive system jointly developed by DDN and IBM Japan. The new system allows Yahoo! JAPAN to cache dozens of petabytes of data from its OpenStack Swift storage solution in a Japan-based data center and transfer data to a U.S.-based data center at a rate of 50 TB per day – enabling energy cost savings of 74 percent, thanks to lower energy rates in the United States versus Japan, while ensuring fast data access regardless of location.

HPC Bear Cloud to Power Research at University of Birmingham

Designed specifically with researchers in mind, the Birmingham Environment for Academic Research (BEAR) Cloud will augment an already rich set of IT services at the University of Birmingham and will be used by academics across all disciplines, from Medicine to Archaeology, and Physics to Theology. “We are very proud of the new system, but building a research cloud isn’t easy,” said Simon Thompson, Research Computing Infrastructure Architect in IT Services at the University of Birmingham. “We challenged a range of carefully-selected partners to provide the underlying technology.”

DDN Powers High Performance Data Storage Fabric at University of Queensland

“The University’s researchers are making landmark discoveries in fields spanning human heritable disease, cancer, agriculture and biofuels manufacture – and they depend on our IT team to provide them with the fastest, most efficient data storage and compute systems to support their data-heavy work,” said Professor David Abramson, University of Queensland Research Computing Center director. “Our IBM, SGI (DMF) and DDN-based data fabric allows us to deliver ultra-fast multi-site data access without requiring any extra intervention from researchers and helps us to ensure our scientists can focus their time on potentially life-saving discoveries.”

Consolidating Storage for Scientific Computing

In this special guest feature from Scientific Computing World, Shailesh M Shenoy from the Albert Einstein College of Medicine in New York discusses the challenges faced by large medical research organizations in the face of ever-growing volumes of data. “In short, our challenge was that we needed the ability to collaborate within the institution and with colleagues at other institutes – we needed to maintain that fluid conversation that involves data, not just the hypotheses and methods.”