Making Storage Bigger on the Inside

This sponsored post from HPE delves into how tools like the HPE Data Management Framework (DMF) work to make HPC storage “bigger on the inside” and streamline data workflows. “DMF seamlessly moves data between tiers, whether they’re ‘hot’ tiers based on flash storage, ‘warm’ tiers based on hard drives or ‘cold’ tiers based on tape.”
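The tier assignment described above can be sketched in a few lines. This is a hypothetical illustration of the general hot/warm/cold placement idea, not the actual DMF policy engine or API; the thresholds and names are assumptions chosen for the example.

```python
import time

# Illustrative tier thresholds: how long a file may sit idle before it
# belongs in a slower (and cheaper) tier. These values are assumptions.
TIERS = [
    ("hot", 0),        # flash storage: recently accessed
    ("warm", 86400),   # hard drives: idle for more than a day
    ("cold", 604800),  # tape: idle for more than a week
]

def pick_tier(last_access, now=None):
    """Return the tier a file belongs in, given its last access timestamp."""
    now = time.time() if now is None else now
    idle = now - last_access
    tier = "hot"
    # Thresholds are ordered ascending, so the last one the idle time
    # meets or exceeds wins.
    for name, threshold in TIERS:
        if idle >= threshold:
            tier = name
    return tier
```

A real policy engine would also consider file size, ownership, and explicit pinning, and would migrate data asynchronously rather than on lookup.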

Improving Speed, Scalability and the Customer Experience with In-Memory Data Grids

Over the last decade, the new anytime, anywhere, personalized experience has driven query and transaction volumes up 10 to 1000x. It has created 50x more data about customers, products, and interactions. It has also shrunk the response times customers expect from days or hours to seconds or less. Download the new report from GridGain to learn how in-memory computing and in-memory data grids are tackling today’s data storage challenges. 
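The latency win an in-memory data grid provides comes largely from the read-through caching pattern: serve reads from RAM, fall back to the backing database only on a miss. The sketch below is a minimal, single-node illustration of that pattern with invented names; real grids such as GridGain add partitioning, replication, transactions and SQL on top.

```python
class InMemoryGrid:
    """Toy read-through key/value cache; names are illustrative only."""

    def __init__(self, loader):
        self._store = {}       # in-RAM key/value store (the fast path)
        self._loader = loader  # backing database lookup (the slow path)

    def get(self, key):
        # Serve from memory when possible; on a miss, load from the
        # backing store and cache the result for subsequent reads.
        if key not in self._store:
            self._store[key] = self._loader(key)
        return self._store[key]

    def put(self, key, value):
        # Write into the grid; propagating to the database (write-through
        # or write-behind) is omitted for brevity.
        self._store[key] = value
```

With query volumes up 10 to 1000x, the point of the pattern is that repeated reads of the same key never touch the database again, turning second-scale lookups into sub-millisecond ones.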

Quantum Drives High Performance Storage at SC17

In this video from SC17, Molly Presley from Quantum describes how the company’s high performance storage systems power HPC. “So why have an autonomous car at a Supercomputing show? The answer is Big Data. The Autonomous Stuff vehicle in this video is actually a rolling software development platform equipped with sensors that generate a whopping 30 Terabytes of data per day. Now just imagine if there were millions of vehicles on the road generating this kind of data. Only HPC could deal with that problem at scale. Companies like Quantum are stepping up to help solve this big data problem in the vehicle, at the edge, and in the datacenter.”

Increasing the Efficiency of Storage Systems

Have you ever wondered why your HPC installation is not performing as you had envisioned? You ran small simulations. You spec’d out the CPU speed, the network speed and the disk drive speed. You optimized your application and are taking advantage of new architectures. But now, as you scale the installation, you realize that the storage system is not performing as expected. Why? You bought the latest disk drives and expected even better than linear performance compared with the last storage system you purchased. Read how you can increase the efficiency of your storage system.

Supercomputers for All

Supercomputers may date back to the 1960s, but it is only recently that their vast processing power has begun to be harnessed by industry and commerce, to design safer cars, build quieter aeroplanes, speed up drug discovery, and subdue the volatility of the financial markets. The need for powerful computers is growing, says Catherine Rivière […]

Science and Industry using Supercomputers

This paper is intended for readers interested in High Performance Computing (HPC) in general, in the performance development of HPC systems since their beginnings in the 1970s and, above all, in HPC applications past, present and future. Readers do not need to be supercomputer experts.

Dell HPC General Research Computing

This white paper provides information on the latest Dell HPC General Research Computing Solution based on Dell 13th Generation servers. The solution supports new-generation, Intel Xeon E5-2600 v3-based PowerEdge servers targeted to provide optimal performance and dense compute power.

Dell HPC NFS Storage Solution

This white paper describes the Dell NFS Storage Solution – High Availability configurations (NSS6.0-HA) with Dell PowerEdge 13th generation servers. It presents a comparison among all available NSS-HA offerings so far, and provides performance results for a configuration with a storage system providing 480TB of raw capacity.

Lustre Dell Storage for HPC with Intel

The following sections of this paper describe the Lustre File System and the Dell Storage for HPC with Intel EE for Lustre solution, followed by performance analysis and conclusions. Appendix A provides a benchmark command reference.

Design of Projects using HPC

With high-performance computing, construction firms and architects have the tools to design more efficient, comfortable and safer buildings by subjecting prototypes to thorough robustness simulations, including detailed analysis of smoke hazards and countermeasures.