Tamara Kolda from Sandia gave this Invited Talk at SC16. “Scientists are drowning in data. The scientific data produced by high-fidelity simulations and high-precision experiments are far too massive to store. For instance, a modest simulation on a 3D grid with 500 grid points per dimension, tracking 100 variables for 100 time steps, yields 5 TB of data. Working with such massive data is unwieldy, and it may not be retained for future analysis or comparison. Data compression is a necessity, but there are surprisingly few options available for scientific data.”
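The 5 TB figure checks out with quick back-of-the-envelope arithmetic, assuming 4-byte single-precision values (the precision isn't stated in the quote, so that byte size is an assumption):

```python
# Sketch of the storage estimate from the talk, assuming float32 values.
grid_points = 500 ** 3                 # 3D grid, 500 points per dimension
variables = 100                        # variables tracked per grid point
time_steps = 100                       # snapshots retained

values = grid_points * variables * time_steps   # total values stored
bytes_total = values * 4                        # 4 bytes per float32
terabytes = bytes_total / 1e12                  # decimal terabytes

print(f"{terabytes:.0f} TB")  # prints "5 TB"
```

At double precision (8 bytes) the same run would produce 10 TB, which is why compression, or discarding data outright, becomes unavoidable at even modest scales.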
In this podcast, the Radio Free HPC team honors the Festivus tradition of the annual Airing of Grievances. Our random gripes include: the need for a better HPC benchmark suite, the missed opportunity for ARM servers, the skittish battery in the new MacBook Pro, and the lack of an industry standards body for cloud computing.
“The SAGE project, which incorporates research and innovation in hardware and enabling software, will significantly improve the performance of data I/O and enable computation and analysis to be performed closer to the data, wherever it resides in the architecture, drastically minimizing data movement between compute and data storage infrastructures. With a seamless view of data throughout the platform, incorporating multiple tiers of storage from memory to disk to long-term archive, it will enable APIs and programming models to easily use such a platform and to efficiently apply the data analytics techniques best suited to the problem space.”
“As financial institutions acknowledge the importance of data as an asset and as they continue to deploy sophisticated analytics to realize the benefit of that asset, we will begin to see some achieve competitive advantages in this fast-paced marketplace. Of course, these organizations also face continued regulatory scrutiny and disruptive changes to technology that can pose challenges,” said Michael Hay, Vice President and Chief Engineer at Hitachi Data Systems. “Together with our partner Maxeler Technologies, Hitachi Data Systems can help our customers address business demands, stay compliant and transform data into information, insight and opportunities to win.”
Today Nimbus Data announced the award of a patent for its non-blocking all-flash architecture. Nimbus Data’s Parallel Memory Architecture scales capacity and performance linearly within each ExaFlash system, offering latency and throughput up to 6x faster than scale-up designs. “Conventional HDD-centric architectures employed by the majority of all-flash array vendors trap flash performance behind legacy shared bus and scale-up designs,” stated Thomas Isakovich, CEO and Founder. “Now patented, Nimbus Data’s Parallel Memory Architecture overcomes the limitations of generic off-the-shelf servers, capturing the full performance potential of all-flash technology.”
Today the HPC Advisory Council announced key dates for its 2017 international conference series in the USA and Switzerland. The conferences are designed to attract community-wide participation, industry leading sponsors and subject matter experts. “HPC is constantly evolving and reflects the driving force behind many medical, industrial and scientific breakthroughs using research that harnesses the power of HPC and yet, we’ve only scratched the surface with respect to exploiting the endless opportunities that HPC, modeling, and simulation present,” said Gilad Shainer, chairman of the HPC Advisory Council. “The HPCAC conference series presents a unique opportunity for the global HPC community to come together in an unprecedented fashion to share, collaborate, and innovate our way into the future.”
“I think one of the things that resellers like about us is that we never take a reseller deal directly. As a channel-first company, we always drive as much business through channel as our customers allow,” said Philip Crocker, senior director of channel marketing and sales enablement at Panasas. “In addition, we have a high-quality yet low-certification entry cost to the program. We also allow 24 x 7 x 365 access to field sales engineers, are highly responsive to partners and have zero sales friction. Specifically, the Accelerate program is compensation-neutral for our sales representatives and distributors.”
“DDN leads the market in large-scale object-based storage with individual customer installations of more than 500 billion objects in production,” said Kurt Kuckein, director of product management, DDN. “DDN’s high-performance, massively scalable WOS object storage platform offers superior multi-site collaboration, big data archive capabilities and storage efficiencies that make it ideal for a wide range of use cases. In a year marked by rapid growth, new customers and new markets for DDN, we are excited to cap 2016 with IDC’s validation of our global leadership in object storage.”
“MEGWARE has been building high-quality BeeGFS turnkey solutions since the first days of its release back in 2009. Over the years, we have seen an outstanding level of customer satisfaction, with systems that regularly exceeded customer expectations in throughput, manageability and technical support. Thus, we are proud to have MEGWARE as the world’s first BeeGFS Platinum partner,” says Sven Breuner, CEO of ThinkParQ, the company behind BeeGFS.
Today DDN announced that it has partnered with Synergy Solutions Management to offer organizations access to a first-of-its-kind facility in North America where users can plan, design and test video surveillance and high performance computing solutions and conduct training. The Synergy Innovations Lab, located near Vancouver, Canada, provides a fully-equipped testing lab that allows users to evaluate solutions within a mixed workload environment.