Today the HPC Advisory Council announced that 12 university teams from around the world will compete in the HPCAC-ISC 2016 Student Cluster Competition at the ISC 2016 conference next June in Frankfurt.
“What we’re showcasing this year is – what we’re jokingly calling – face-melting performance. What we’re trying to do is make extreme performance available at a very aggressive price point, and at a very aggressive space point, for end users. So, what we’ve been doing and what we’ve been working on for the past couple of months has been, basically, building an NVMe-type unit. This NVMe unit connects flash devices through a PCIe interface to the processor complex.”
In this video from SC15, Bill Mannel from HPE, Charlie Wuischpard from Intel, and Nick Nystrom from the Pittsburgh Supercomputing Center discuss their collaboration for High Performance Computing. Early next year, Hewlett Packard Enterprise will deploy the Bridges supercomputer based on Intel technology for breakthrough data centric computing at PSC. “Welcome to Bridges, a new concept in HPC – a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users. It is a richly connected set of interacting systems offering a flexible mix of gateways (web portals), Hadoop and Spark ecosystems, batch processing and interactivity.”
Genome sequencing is a technology that can take advantage of the growing capability of today’s modern HPC systems. Dell is leading the charge in the area of personalized medicine by providing highly tuned systems to perform genomic sequencing and data management. The whitepaper, The InsideHPC Guide to Genomics, is an overview of how Dell is providing state-of-the-art solutions to the life science industry.
Dan Stanzione from TACC presented this talk at the DDN User Group at SC15. “TACC is an advanced computing research center that provides comprehensive advanced computing resources and support services to researchers in Texas and across the USA. The mission of TACC is to enable discoveries that advance science and society through the application of advanced computing technologies. Specializing in high performance computing, scientific visualization, data analysis & storage systems, software, research & development and portal interfaces, TACC deploys and operates advanced computational infrastructure to enable computational research activities of faculty, staff, and students of UT Austin.”
The HPC Advisory Council Stanford Conference 2016 has issued its Call for Participation. The event will take place Feb 24-25, 2016 on the Stanford University campus at the new Jen-Hsun Huang Engineering Center. “The HPC Advisory Council Stanford Conference 2016 will focus on High-Performance Computing usage models and benefits, the future of supercomputing, latest technology developments, best practices and advanced HPC topics. In addition, there will be a strong focus on new topics such as Machine Learning and Big Data. The conference is open to the public free of charge and will bring together system managers, researchers, developers, computational scientists and industry affiliates.”
“We have enabled virtualization for HPC, but it’s important to bring the benefits of virtualization to end researchers in a way they can use it, right? So what we have done is we have created the solution plus VMware High-Performance Analytics, which allows researchers to author their own workloads. They can collaborate on a workload, they can clone it, and then they can share it with other researchers. And they can modify their workload – they can fine-tune it.”
The computational requirements for weather forecasting are driven by the need for higher resolution models for more accurate and extended forecasts. In addition, more physics and chemistry processes are included in the models so we can observe the very fine features of weather behavior. These models operate on 3D grids that encompass the globe. The closer the points on the grid are to each other, the more accurate the results.
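To make the resolution-versus-cost relationship concrete, here is a rough back-of-the-envelope sketch (the grid dimensions, vertical level count, and resolutions are illustrative assumptions, not figures from any particular forecast model):

```python
# Illustrative sketch: how horizontal resolution drives grid size
# in a global 3D weather model. All numbers are hypothetical.
EARTH_CIRCUMFERENCE_KM = 40_000  # approximate equatorial circumference

def global_grid_points(horizontal_res_km, vertical_levels=50):
    """Rough count of points for a simple lat/lon-style global grid."""
    east_west = EARTH_CIRCUMFERENCE_KM // horizontal_res_km
    north_south = (EARTH_CIRCUMFERENCE_KM // 2) // horizontal_res_km
    return east_west * north_south * vertical_levels

coarse = global_grid_points(20)  # 20 km spacing -> 100 million points
fine = global_grid_points(10)    # 10 km spacing -> 400 million points
print(coarse, fine, fine // coarse)
```

Halving the grid spacing quadruples the horizontal point count, and in practice also forces a smaller time step, so the total compute cost grows closer to eightfold – which is why higher-resolution, longer-range forecasts demand ever-larger HPC systems.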
Last week at SC15, Rambus announced that it has partnered with Los Alamos National Laboratory (LANL) to evaluate elements of its Smart Data Acceleration (SDA) Research Program. The SDA platform has been deployed at LANL to improve the performance of in-memory databases, graph analytics, and other Big Data applications.