Three Questions to Ensure Your HPC Success

Successful HPC depends on choosing an architecture that addresses both application and institutional needs. Finding a simple path to leading-edge HPC and data analytics is not difficult if you consider the capabilities and limitations of the various approaches in terms of performance, scaling, ease of use, and time to solution. Careful analysis of the three questions below will help lead to a successful and cost-effective HPC solution.

Video: Versity HSM – Archiving to Objects

In this video from the 2016 MSST Conference, Harriet Coverston from Versity presents: Versity – Archiving to Objects. “Introducing Versity Storage Manager, an enterprise-class storage virtualization and archiving system that runs on Linux. It offers comprehensive data management for tiered storage environments and the ability to preserve and protect your data forever, with maximum protection at a minimum cost. Versity supports nearly unlimited volumes of storage and offers the most robust archive policy engine on the market.”

Register for ISC 2016 by May 11 for Early Bird Discounts

There is still time to take advantage of Early Bird registration rates for ISC 2016. You can get over 45 percent off the on-site registration rate if you sign up by May 11. “ISC 2016 takes place June 19-23 in Frankfurt, Germany. With an expected attendance of 3,000 participants from around the world, ISC will also host 146 exhibitors from industry and academia.”

HPC and the HDC Datacenter

In this special guest feature from Scientific Computing World, Darren Watkins from Virtus Data Centres explains the importance of building a data centre from the ground up to support the requirements of HPC users, while maximizing productivity and energy efficiency. “The reality for many IT users is they want to run analytics that, with the growth of data, have become too complex and time critical for normal enterprise servers to handle efficiently.”

Video: SGI Production Supercomputing

Mark Seamans from SGI presented this talk at the HPC User Forum in Tucson. “As the trusted leader in high performance computing, SGI helps companies find answers to the world’s biggest challenges. Our commitment to innovation is unwavering and focused on delivering market leading solutions in Technical Computing, Big Data Analytics, and Petascale Storage. Our solutions provide unmatched performance, scalability and efficiency for a broad range of customers.”

Video: Cloud for the “Missing Middle”

Leo Reiter from Nimbix presented this deck at the HPC User Forum. “Nimbix is a pure high performance computing cloud built for volume, speed and simplicity. We give people the tools and the processing power to solve their biggest, toughest problems. We give you the freedom to imagine new possibilities, to test the limits of reality, and to model the future. For most workloads, Nimbix is far less expensive than building, running and maintaining your own supercomputer. It’s also more efficient at spinning up, executing, completing the job and delivering your results — which saves you time and money. And our user-friendly platform means you invest less in development and infrastructure.”

RCE Podcast Looks at the Impala Project

In this RCE Podcast, Marcel Kornacker from Cloudera describes the Impala project. Impala brings scalable parallel database technology to Hadoop, enabling users to issue low-latency SQL queries to data stored in HDFS and Apache HBase without requiring data movement or transformation. Impala is integrated with Hadoop to use the same file and data formats, metadata, security and resource management frameworks used by MapReduce, Apache Hive, Apache Pig and other Hadoop software.
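For a sense of how this looks in practice, here is a minimal sketch of issuing a query through impyla, an open-source Python DB-API client for Impala; the hostname and table name are placeholders, and the default HiveServer2 port of 21050 is assumed:

```python
# Minimal sketch: querying Impala with the impyla DB-API client
# (pip install impyla). The hostname and table name are placeholders.
from impala.dbapi import connect

# Impala daemons accept HiveServer2 client connections on port 21050 by default.
conn = connect(host='impala-daemon.example.com', port=21050)
cursor = conn.cursor()

# The SQL runs directly against data already stored in HDFS/HBase;
# no data movement or transformation step is required first.
cursor.execute('SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page LIMIT 10')
for row in cursor.fetchall():
    print(row)

cursor.close()
conn.close()
```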

Intersect360 Publishes New Report on the Hyperscale Market

Today Intersect360 Research published a new report on the hyperscale market. “This report provides definitions, segmentations, and dynamics of the hyperscale market and describes its scope, the end-user applications it touches, and the market drivers and dampers for future growth. It is the foundational report for the Intersect360 Research hyperscale market advisory service.”

Exxact to Distribute NVIDIA DGX-1 Deep Learning System

The NVIDIA DGX-1 features up to 170 teraflops of half-precision (FP16) peak performance, eight Tesla P100 GPU accelerators with 16GB of memory per GPU, a 7TB SSD deep learning cache, and an NVLink Hybrid Cube Mesh interconnect. Packaged with fully integrated hardware and easily deployed software, it is the world’s first system built specifically for deep learning, powered by NVIDIA’s Pascal-based Tesla P100 accelerators interconnected via NVLink. NVIDIA designed the DGX-1 to meet the never-ending computing demands of artificial intelligence, and claims it can deliver the throughput of 250 CPU-based servers in a single box.
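As a quick sanity check on that headline number, 170 teraflops is simply the per-GPU FP16 peak of the Tesla P100 (roughly 21.2 teraflops for the SXM2 part, per NVIDIA's published spec) multiplied across the eight GPUs:

```python
# Back-of-the-envelope check of the DGX-1's headline FP16 figure.
# Assumes the published ~21.2 TFLOPS FP16 peak of a Tesla P100 (SXM2).
NUM_GPUS = 8
FP16_TFLOPS_PER_GPU = 21.2

peak = NUM_GPUS * FP16_TFLOPS_PER_GPU
print(f"{peak:.1f} TFLOPS")  # 169.6 TFLOPS, marketed as "up to 170 teraflops"
```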

Video: Exploiting HPC Technologies to Accelerate Big Data Processing

“This talk will present RDMA-based designs using OpenFabrics Verbs and heterogeneous storage architectures to accelerate multiple components of Hadoop (HDFS, MapReduce, RPC, and HBase), Spark, and Memcached. It will also provide an overview of the associated RDMA-enabled software libraries being designed and publicly distributed as part of the HiBD project.”
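As context for the Verbs layer the talk builds on, here is a minimal sketch of touching OpenFabrics Verbs from Python via the pyverbs bindings that ship with rdma-core. This is an assumption-laden illustration, not HiBD code: the device name is a placeholder, and pyverbs module and attribute names may differ between rdma-core versions.

```python
# Minimal sketch of the OpenFabrics Verbs layer via pyverbs (ships with
# rdma-core). NOT HiBD code; 'mlx5_0' is a placeholder device name, and
# the pyverbs API may vary between rdma-core versions.
import pyverbs.device as d
from pyverbs.pd import PD

# Enumerate the RDMA-capable devices visible to the Verbs stack.
for dev in d.get_device_list():
    print(dev.name)

# Open a device context and allocate a protection domain -- the starting
# point for registering memory regions used in zero-copy RDMA transfers.
ctx = d.Context(name='mlx5_0')
pd = PD(ctx)
```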