Have spinning disk drives joined the Grateful Dead? That is the contention in this video, where industry analyst Mark Peters from ESG dons his tie-dyed shirt and declares that flash is the new storage of choice in the datacenter.
“It’s been nearly three years since Intel acquired Whamcloud and its Lustre engineering team. With Intel’s recent announcement that Lustre will power the 2018 Aurora supercomputer at Argonne, we took the opportunity to catch up with Brent Gorda, general manager of Intel’s High Performance Data Division.”
“The Cray XC series DataWarp applications I/O accelerator technology delivers a balanced and cohesive system architecture from compute to storage. It allocates storage dynamically in either private (dedicated) or shared modes. Storage performance quality of service can be provided to individual applications, based on the user’s policies. While leveraging Cray’s proven domain expertise in storage, the DataWarp accelerator can be used as a global storage cache for parallel file systems (PFS) such as Lustre, General Parallel File System (GPFS) and PanFS.”
“Comet is really all about providing high-performance computing to a much larger research community – what we call ‘HPC for the 99 percent’ – and serving as a gateway to discovery,” said SDSC Director Michael Norman, the project’s principal investigator. “Comet has been specifically configured to meet the needs of researchers in domains that have not traditionally relied on supercomputers to solve their problems.”
Today the Square Kilometre Array (SKA) Organization announced that it is teaming up with Amazon Web Services (AWS) to use cloud computing to explore ever-increasing amounts of astronomy data. To kick things off, they just issued a Call for Proposals for AstroCompute in the Cloud, a grant program to accelerate the development of innovative tools and techniques for processing, storing and analyzing the global astronomy community’s vast amounts of astronomical data.
“In this talk, Seagate presents details on its efforts and achievements in improving Hadoop performance on Lustre, including: a summary of why and how HDFS and Lustre differ, and how those differences affect Hadoop performance on Lustre compared to HDFS; Hadoop ecosystem benchmarks and best practices on HDFS and Lustre; Seagate’s open-source efforts to enhance Lustre performance on ‘diskless’ compute nodes, involving core Hadoop source code modification (and the unexpected results); and general takeaways on running Hadoop on Lustre faster.”
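Running Hadoop over Lustre rather than HDFS typically means pointing Hadoop at a POSIX file system via the `file://` scheme, since Lustre appears to compute nodes as an ordinary mounted file system. The fragment below is a minimal, hypothetical sketch of that approach (it is not Seagate's actual configuration); the `/mnt/lustre` mount point is an assumption and would differ per site.

```xml
<!-- core-site.xml (illustrative): use a Lustre mount instead of HDFS.
     /mnt/lustre is a hypothetical mount point; adjust for your site. -->
<configuration>
  <property>
    <!-- file:/// tells Hadoop to use the local (POSIX) file system,
         which here is the shared Lustre mount on every node -->
    <name>fs.defaultFS</name>
    <value>file:///</value>
  </property>
  <property>
    <!-- keep temporary/shuffle data on the shared Lustre file system -->
    <name>hadoop.tmp.dir</name>
    <value>/mnt/lustre/hadoop/tmp</value>
  </property>
</configuration>
```

Because every node sees the same namespace through Lustre, this setup sidesteps HDFS block replication entirely, which is one of the behavioral differences the talk compares.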
“For a period of time, it didn’t look like flash drives were going to decrease in price very much. Flash cell technology is limited to around 20nm because of cost and complexity considerations, but manufacturers have found ways around the limitation. Rather than decrease the feature size, they now store more bits per cell (TLC) and have started to create 3D flash chips. This combination, plus the growth in flash storage sales, has driven down the price per gigabyte.”
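The arithmetic behind that claim is simple to sketch: capacity per die scales with bits-per-cell and with the number of stacked 3D layers, so cost per gigabyte falls even when the feature size stays fixed. The numbers below (cell count, die cost, layer count) are purely hypothetical, chosen only to illustrate the scaling.

```python
# Illustrative sketch (hypothetical numbers): how bits-per-cell (TLC)
# and 3D layer stacking multiply flash capacity per die, driving down
# the cost per gigabyte without shrinking the feature size.

def die_capacity_gb(cells_per_layer, bits_per_cell, layers=1):
    """Capacity of one flash die in gigabytes (1 GB = 1e9 bytes)."""
    total_bits = cells_per_layer * bits_per_cell * layers
    return total_bits / 8 / 1e9

DIE_COST_USD = 4.00    # hypothetical manufacturing cost per die
CELLS = 128e9          # hypothetical cells per layer

configs = [
    ("SLC, planar (1 bit/cell)", die_capacity_gb(CELLS, 1)),
    ("TLC, planar (3 bits/cell)", die_capacity_gb(CELLS, 3)),
    ("TLC, 32-layer 3D", die_capacity_gb(CELLS, 3, layers=32)),
]

for name, gb in configs:
    print(f"{name}: {gb:.0f} GB/die -> ${DIE_COST_USD / gb:.4f}/GB")
```

With these toy numbers, moving from planar SLC to 32-layer TLC multiplies capacity per die 96x, and cost per gigabyte drops proportionally, which is the mechanism the quote describes.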