Metadata makes scientific data much easier to find, for an individual user and for an entire organization alike. A file system may keep track of only a file's location, owner, and various timestamps. A sophisticated metadata system can store significantly more information along with the data itself, enabling more efficient workflows within an organization and better collaboration.
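The difference is easy to sketch. Below, a minimal Python example contrasts the handful of attributes a file system tracks with a richer sidecar metadata record; the file names and metadata fields (instrument, experiment, units) are illustrative assumptions, not any particular metadata system's schema.

```python
import json
import os
import time

# Write a small data file to inspect.
path = "results.csv"
with open(path, "w") as f:
    f.write("run,energy\n1,-42.7\n")

# A file system records only basics: size, owner, timestamps.
st = os.stat(path)
print("size:", st.st_size, "modified:", time.ctime(st.st_mtime))

# A metadata system can attach arbitrary, searchable context alongside
# the data itself -- sketched here as a sidecar JSON record with
# hypothetical field names.
metadata = {
    "file": path,
    "instrument": "beamline-7",
    "experiment": "phase-2 scan",
    "units": {"energy": "eV"},
}
with open(path + ".meta.json", "w") as f:
    json.dump(metadata, f, indent=2)

# Colleagues can now locate the data by what it means,
# not just by where it lives on disk.
with open(path + ".meta.json") as f:
    print(json.load(f)["experiment"])
```

In a real deployment the sidecar record would live in an indexed catalog rather than next to each file, but the principle is the same: the richer the metadata, the less an organization depends on one person remembering where the data is.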
ICHEC is using DDN’s Infinite Memory Engine (IME) to accelerate read and write speeds in order to test the ICHEC codes used by Tullow Oil, Africa’s leading independent oil company. In this talk, Browne shows how IME accelerated current workflows and reduced run time, opening the door to significantly faster resource discovery.
“By working with ThinkParQ, we have been able to leverage one of the best and highest performance storage systems for scale-out deployment,” said Dr. Joseph Landman, CEO of Scalable Informatics. “When testing a write-dominated workload using fio, IOR, and io-bm, a single rack of FastPath Unison with BeeGFS running on spinning disks sustained in excess of 40 GB/s for multi-terabyte sized writes, far outside of cache. This level of performance comes from the combination of FastPath Unison hardware design, the Scalable Informatics Operating System (SIOS), and the excellent BeeGFS filesystem.”
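For readers unfamiliar with fio, workloads like the one described are expressed as small INI-style job files. The fragment below is a sketch of a write-dominated, multi-terabyte sequential job sized to defeat caching; the parameters and mount path are illustrative assumptions, not Scalable Informatics’ actual test configuration.

```ini
; write-heavy sequential job, sized to exceed any cache tier
[multi-tb-write]
rw=write
bs=1m
size=1t              ; per-job data set; multi-terabyte in aggregate
numjobs=8
direct=1             ; O_DIRECT: bypass the page cache
ioengine=libaio
directory=/mnt/beegfs
group_reporting
```

Running `fio` against such a job file on the parallel file system’s mount point reports aggregate bandwidth, which is how figures like the 40 GB/s number above are typically measured.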
“100% Flash in the Datacenter? It won’t happen any time soon. Many (most?) tier one workloads will be moved to flash, of course, but data is piling up so quickly that it’s highly unlikely you will be seeing a 100% flash datacenter any time soon. It will take a few years to have about 10-20% of data stored on flash, and the rest will remain on huge hard disks (cheap 10+ TB hard disks will soon be broadly available, for example).”
Over at TOP500.org, Bernd Mohr writes that Europe’s Human Brain Project will have a main production system located at the Juelich Supercomputing Centre. “The HBP supercomputer will be built in stages, with an intermediate “pre-exascale” system on the order of 50 petaflops planned for the 2016-18 timeframe. Full brain simulations are expected to require exascale capabilities, which, according to most potential suppliers’ roadmaps, are likely to be available in approximately 2021-22.”
In this special guest feature from Scientific Computing World, Robert Roe writes that the era of data-centric HPC is upon us. He then investigates how data storage companies are rising to the challenge. In August 2014, a ‘Task Force on High Performance Computing’ reported to the US Department of Energy that data-centric computing will be […]