New HP Apollo 4000 Systems Fulfill Booming Big Data Analytics & Object Storage Requirements

High Performance Computing and Big Data analytics touch us every day. We each rely on daily weather forecasts, banking and financial information, scientific and health analyses, and thousands of other activities that involve HPC and Big Data analysis.

Bare Metal to Application Ready in Less Than a Day

There is a big push to decrease the complexity of setting up and managing HPC clusters in the data center. This IBM webinar, “Bare Metal to Application Ready in Less Than a Day,” provides excellent tips for preparing for and managing the complexity of an HPC cluster.

Intel to Purchase Altera for $16.7 Billion

Intel announced plans to buy Altera, a maker of programmable logic semiconductors, for $16.7 billion, strengthening its presence in the datacenter market.

Open Computing Benefits Many Industry Segments

The Open Compute Project is a way for organizations to increase computing power while lowering the costs associated with hyper-scale computing. This article is the fourth in a series from insideHPC showcasing the benefits of open computing to specific industries.

IBM Platform Computing – Ready to Run Clusters in the Cloud

Demands from users running applications in scientific, technical, financial, or research areas can easily outstrip the capabilities of in-house server clusters. IT departments have to anticipate compute and storage needs for their most demanding users, which can lead to extra CAPEX and OPEX spending once the workload changes.

ClusterStor Solution for HPC and Big Data

The ClusterStor SDA is built on Seagate’s successful ClusterStor family of high-performance storage solutions for HPC and Big Data, providing unmatched file system performance, optimized productivity, and the HPC industry’s highest levels of efficiency, reliability, availability, and serviceability. Taking full advantage of the Lustre file system, Seagate ClusterStor is designed for massive scale-out performance and capacity, supporting from hundreds to tens of thousands of client compute nodes and delivering data-intensive workload throughput from several GB/sec to over 1 TB/sec.

Seagate ClusterStor Secure Data Appliance

The Seagate ClusterStor Secure Data Appliance (SDA) is the HPC industry’s first ICD-503-certified scale-out secure storage system. It consolidates multiple previously isolated systems, maintains data security, enforces security access controls, segregates data at different security levels, and provides audit trails, all in a single scale-out file system with proven linear performance and storage scalability.

Inside Lustre Hierarchical Storage Management (HSM)

Different levels of importance are always assigned to the various data files in a computer system, especially in a very large system storing petabytes of data. To maximize use of the highest-speed storage, Hierarchical Storage Management (HSM) was developed to keep data within easy reach of users while storing it at the appropriate speed and price.
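The tiering idea behind HSM can be sketched as a simple placement policy: recently used data stays on fast, expensive storage, while cold data migrates to cheap, slow storage. The tier names and the 30-day threshold below are illustrative assumptions, not part of Lustre HSM or any particular product:

```python
from dataclasses import dataclass

FAST_TIER = "ssd"      # high-speed, high-cost tier (assumed name)
ARCHIVE_TIER = "tape"  # low-speed, low-cost tier (assumed name)
HOT_THRESHOLD_DAYS = 30  # illustrative cutoff for "hot" data

@dataclass
class FileRecord:
    name: str
    days_since_access: int

def choose_tier(record: FileRecord) -> str:
    """Keep recently accessed files on fast storage; archive the rest."""
    if record.days_since_access <= HOT_THRESHOLD_DAYS:
        return FAST_TIER
    return ARCHIVE_TIER

files = [FileRecord("forecast.nc", 2), FileRecord("2009_logs.tar", 400)]
placement = {f.name: choose_tier(f) for f in files}
# placement == {"forecast.nc": "ssd", "2009_logs.tar": "tape"}
```

A real HSM implementation adds transparent recall (a user opening an archived file triggers an automatic copy back to the fast tier), which is what keeps the tiering invisible to users.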

Developing Multilevel Security (MLS) Framework

This is the third article in a series designed to address the needs of government and business for collaborative and secure information sharing within a Multilevel Security (MLS) framework. Learn the key elements of a workable MLS framework.

Deploying Collaborative Multilevel Security at Big Data and HPC Scale

This article series is the first to explore the Seagate ClusterStor™ Secure Data Appliance, which is designed to address government and business enterprise needs for collaborative, secure information sharing within a Multilevel Security (MLS) framework at Big Data and HPC scale. Compared to prior methods, it delivers vast cost savings through reduced capital equipment and networks, as well as reduced operational complexity, floor space, weight, power, and cooling, while satisfying today’s requirements for performance, collaborative secure data sharing, and availability.