“The architecture of the ClusterStor product line builds on Seagate’s (formerly Xyratex’s) 25 years of experience with storage systems and on Lustre, a parallel, open-source, high-performance file system commonly used for scalable computing.”
The Seagate ClusterStor Secure Data Appliance (SDA) is the HPC industry’s first scale-out secure storage system officially certified under ICD-503 to consolidate multiple previously isolated systems. It maintains data security, enforces security access controls, segregates data at different security levels, and provides audit trails, all in a single scale-out file system with proven linear performance and storage scalability.
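The article does not describe the SDA’s internal enforcement mechanism, but segregating data at different security levels is classically modeled by the Bell-LaPadula rules (“no read up, no write down”). The sketch below is purely illustrative; the `Level` names and functions are hypothetical and not part of any Seagate API:

```python
from enum import IntEnum

class Level(IntEnum):
    # Hypothetical clearance/classification levels, low to high.
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def can_read(subject: Level, obj: Level) -> bool:
    # Bell-LaPadula "no read up": a subject may read only
    # objects at or below its own clearance.
    return subject >= obj

def can_write(subject: Level, obj: Level) -> bool:
    # Bell-LaPadula "no write down": a subject may write only
    # at or above its own level, preventing leaks downward.
    return subject <= obj
```

For example, a SECRET-cleared user may read CONFIDENTIAL data but may not write results into an UNCLASSIFIED file, which is what keeps data segregated when multiple security levels share one file system.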
Different levels of importance are always assigned to the various data files in a computer system, especially in a very large system storing petabytes of data. To maximize use of the highest-speed storage, Hierarchical Storage Management (HSM) was developed to keep data readily accessible to users while storing it on a tier with the appropriate speed and price.
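The tiering idea behind HSM can be sketched as a toy age-based placement policy: recently used data stays on fast storage, colder data migrates to cheaper tiers. The tier names and thresholds here are illustrative assumptions, not Seagate’s actual policy:

```python
# Hypothetical storage tiers, fastest and most expensive first.
TIERS = ["flash", "disk", "tape"]

def pick_tier(seconds_since_last_access: float) -> str:
    # A minimal HSM-style policy: hot data stays on flash,
    # warm data moves to disk, cold data is archived to tape.
    if seconds_since_last_access < 3600:           # accessed within the hour
        return "flash"
    if seconds_since_last_access < 30 * 86400:     # accessed within 30 days
        return "disk"
    return "tape"
```

A real HSM also weighs file size, quotas, and recall cost, but the core decision is this kind of mapping from access pattern to tier.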
As you’ve increasingly seen in news headlines, secure access to shared data is not only an issue for Federal and local government agencies and the Intelligence Community – it has also become an issue for business enterprises needing to protect their intellectual property and other sensitive business data while engaging on a global scale with their partners and contractors.
The white paper, Inside the Lustre File System, describes the inner workings of Lustre in a way that is easy to understand, yet is technical enough for many users and systems administrators. Lustre is a mature and stable file system that has consistently been able to respond to the needs of organizations that require high performance throughput and expanding capacity.
This article series is the first to explore the Seagate ClusterStor™ Secure Data Appliance, which is designed to address the needs of government and business enterprises for collaborative and secure information sharing within a Multilevel Security (MLS) framework at Big Data and HPC scale. Compared to prior methods, this delivers significant cost savings through reduced capital equipment and networks, as well as reduced operational complexity, floor space, weight, power, and cooling, while satisfying today’s requirements for performance, collaborative secure data sharing, and availability.
In this special guest feature from Scientific Computing World, Robert Roe writes that the era of data-centric HPC is upon us. He then investigates how data storage companies are rising to the challenge. In August 2014, a ‘Task Force on High Performance Computing’ reported to the US Department of Energy that data-centric computing will be […]