Today Bull Information Systems announced two major new international agreements that reflect the company’s momentum in the HPC market.
“The hybrid ActiveStor 16 system blends cost-efficient disk-based storage with additional flash capacity and metadata performance, making the platform an optimal choice for PanFS 6.0 with RAID 6+ data protection. Depending on configuration, ActiveStor 16 ships with up to 122.4 TB of capacity per 4U enclosure, providing more than 1.2 PB per rack. In production, ActiveStor 16 will achieve up to 150 GB per second of data throughput, with capacity that scales beyond 12 PB in a single file system. Manageability is simple, and initial systems can be deployed in less than 10 minutes.”
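The per-rack figure follows directly from the per-enclosure capacity. A quick back-of-the-envelope check, assuming ten 4U enclosures per standard 42U rack (the enclosure count per rack is our assumption, not stated in the announcement):

```python
# Sanity check of the ActiveStor 16 capacity figures.
# Assumption (not in the announcement): ten 4U enclosures fit in a
# standard 42U rack, leaving 2U for networking or management gear.
TB_PER_ENCLOSURE = 122.4
ENCLOSURES_PER_RACK = 10  # assumed

rack_capacity_pb = TB_PER_ENCLOSURE * ENCLOSURES_PER_RACK / 1000
print(f"Capacity per rack: {rack_capacity_pb:.3f} PB")  # → 1.224 PB
```

That works out to 1.224 PB, consistent with the quoted "more than 1.2 PB per rack."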
“For those who haven’t been following the details of one of DOE’s more recent procurement rounds, the NERSC-8 and Trinity request for proposals (RFP) explicitly required that all vendor proposals include a burst buffer to address the capability of multi-petaflop simulations to dump tremendous amounts of data in very short order. The target use case is for petascale checkpoint-restart, where the memory of thousands of nodes (hundreds of terabytes of data) needs to be flushed to disk in an amount of time that doesn’t dominate the overall execution time of the calculation.”
In this video from the DDN User Group at ISC’14, Satoshi Matsuoka from the Tokyo Institute of Technology presents: A Look at Big Data in HPC. “HPC has been dealing with big data for all of its existence. But it turns out that the recent commercial emphasis on big data has coincided with a fundamental change in the sciences as well. As scientific instruments and facilities produce large amounts of data at an unprecedented rate, the HPC community is reacting by revisiting architecture, tools, and services to address this growth in data.”