Today DataDirect Networks announced DDN Flashscale, a new family of all-flash scale-out and scale-up storage solutions that delivers high performance and capacity in a cost-optimized, feature-rich platform designed for Enterprise Big Data and Analytics, Web Scale Cloud, and HPC environments. “DDN Flashscale’s fast embedded PCIe fabric delivers full native performance and extremely low latency from 48 NVMe or 72 SAS SSDs, or any mix, while offering cost-optimized, sub-$1/GB all-flash storage up to 576TB, 6 million IOPS and 60GB/s per 4U node.”
Researchers are using the Magnus supercomputer at the Pawsey Centre to explore the mysteries of two shipwrecks involved in Australia’s greatest naval disaster. “The process of generating 3D models from the photographs we’ve taken is very computationally intensive. Processing half a million photographs with our conventional techniques, on our standard computers, would take about a thousand years, so we needed to do something to bring that time down to something achievable.”
Rob Peglar from Micron presented this talk at the 2016 MSST Conference. “The growing demands of mobile computing and data centers continue to drive the need for high-capacity, high-performance NAND flash technology. With planar NAND nearing its practical scaling limits, delivering to those requirements has become more difficult with each generation. Enter our 3D NAND technology, which uses an innovative process architecture to provide 3X the capacity of planar NAND technologies while providing better performance and reliability. System designers who build products like laptops, mobile devices and servers can take advantage of 3D NAND’s unprecedented performance to meet the rising data movement needs for businesses and consumers.”
In this podcast, the Radio Free HPC team looks at the news highlights for the week leading up to Friday the 13th of May, 2016. Highlights include a 25 Petaflop Fujitsu supercomputer coming to Japan, an OpenPOWER Summit coming to Europe, and fighting the Zombie Apocalypse with HPC.
A new paper outlining NERSC’s Burst Buffer Early User Program and the center’s pioneering efforts in recent months to test drive the technology using real science applications on Cori Phase 1 has won the Best Paper award at this year’s Cray User Group (CUG) meeting.
In this video, Oklahoma State Director of HPC Dana Brunson describes how the Cowboy supercomputer powers research. “High performance computing is often used for simulations that may be too big, too small, too fast, too slow, too dangerous or too costly. Another thing it’s used for involves data. You may remember the Human Genome Project: it took nearly a decade and cost a billion dollars. These sorts of things can now be done over a weekend for under a thousand dollars. Our current supercomputer is named Cowboy; it was funded by a 2011 National Science Foundation grant and it has been serving us very well.”
Today SGI announced the deployment of its largest SGI UV 300 supercomputer to date at The Genome Analysis Centre (TGAC) in the UK. As one of the largest Intel SSD for PCIe deployments worldwide, TGAC’s new supercomputing platform gives the research institute access to the next generation of SGI UV technology for genomics. This will enable TGAC researchers to store, categorize and analyze more genomic data in less time, decoding living systems and answering crucial biological questions. “The combination of processor performance, memory capacity and one of the largest deployments of Intel SSD storage worldwide makes this a truly powerful computing platform for the life sciences.”
Today Fujitsu announced an order for a 25 Petaflop supercomputer system from the University of Tokyo and the University of Tsukuba. Powered by Intel Knights Landing processors, the “T2K Open Supercomputer” will be deployed at the Joint Center for Advanced High-Performance Computing (JCAHPC), which the two universities jointly operate. “The new supercomputer will be an x86 cluster system consisting of 8,208 of the latest FUJITSU Server PRIMERGY x86 servers running next-generation Intel Xeon Phi processors. Due to be completely operational in December 2016, the system is expected to be Japan’s highest-performance supercomputer.”
David Bonnie from LANL presented this talk at the 2016 MSST Conference. “As we continue to scale system memory footprint, it becomes more and more challenging to scale the long-term storage systems with it. Scaling tape access for bandwidth becomes increasingly challenging and expensive when single files are in the many terabytes to petabyte range. Object-based scale out systems can handle the bandwidth requirements we have, but are also not ideal to store very large files as objects. MarFS sidesteps this while still leveraging the large pool of object storage systems already in existence by striping large files across many objects.”
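The core idea described above, striping a single very large file across many fixed-size objects so that bandwidth scales with the number of object servers, can be sketched in a few lines. This is a simplified illustration only, not MarFS code; the chunk size and object naming scheme here are hypothetical.

```python
# Simplified sketch of striping a large file across fixed-size objects.
# NOT MarFS code: the naming scheme and chunk size are illustrative only.

def stripe_file(data: bytes, object_size: int, prefix: str = "bigfile") -> dict:
    """Split a byte stream into fixed-size objects, as a scale-out
    object store would receive them. Each chunk becomes an independently
    addressable object, so reads and writes can hit many servers in parallel."""
    objects = {}
    for offset in range(0, len(data), object_size):
        key = f"{prefix}.obj{offset // object_size:06d}"
        objects[key] = data[offset:offset + object_size]
    return objects

def reassemble(objects: dict) -> bytes:
    """Concatenate objects in key order to recover the original file."""
    return b"".join(objects[key] for key in sorted(objects))
```

Because each object is addressed independently, a petabyte-scale file never exceeds any single object-size limit, and aggregate bandwidth grows with the number of objects read concurrently.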
Today’s High Performance Computing (HPC) systems offer the ability to model everything from proteins to galaxies. The insights and discoveries offered by these systems are nothing short of astounding. Indeed, the ability to process, move, and store data at unprecedented levels, often reducing jobs from weeks to hours, continues to move science and technology forward at an accelerating pace. This article series offers those considering HPC, both users and managers, guidance when considering the best way to deploy an HPC solution.