Archives for June 2016

Video: Analyst Crossfire from ISC 2016

In this lively panel discussion from ISC 2016, moderator Addison Snell asks visionary leaders from the supercomputing community to comment on forward-looking trends that will shape the industry this year and beyond.

Seeking Submissions for the SC16 Impact Showcase

“Organizations that are currently employing high performance computing to advance their competitiveness and innovation in the global marketplace can highlight their compelling, novel, real-world applications at SC16’s HPC Impact Showcase. The Showcase is designed to introduce attendees to the many ways that HPC matters in our world, through testimonials from companies large and small. Rather than a technical deep dive into how they use or manage their HPC environments, their stories are meant to tell how their companies are adopting and embracing HPC and how it is improving their businesses; the Showcase is not a venue for marketing presentations. Last year’s line-up included presentations on topics ranging from battling Ebola to design work at Rolls-Royce. Whether you are new to HPC or a long-time professional, you are sure to learn something new and exciting in the HPC Impact Showcase.”

Mellanox Technology Accelerates the World’s Fastest Supercomputer

Today Mellanox announced that the company’s interconnect technology accelerates the world’s fastest supercomputer, housed at the supercomputing center in Wuxi, China. The new number-one system delivers 93 Petaflops (nearly three times the performance of the previous top system), connecting nearly 41,000 nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is key to its world-leading performance, scalability, and efficiency, connecting the largest number of nodes and CPU cores within a single supercomputer.
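As a quick sanity check on those figures, the arithmetic below uses the node and core counts reported for the system on the June 2016 TOP500 list (40,960 nodes and 10,649,600 cores, with the previous number-one system rated at roughly 33.9 Petaflops); these specific values come from the TOP500 entry, not from the Mellanox announcement itself.

\[
\frac{10{,}649{,}600\ \text{cores}}{40{,}960\ \text{nodes}} = 260\ \text{cores per node},
\qquad
\frac{93\ \text{PF}}{33.9\ \text{PF}} \approx 2.7\times
\]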

Calyos Demonstrates Water-Free Cooling at ISC 2016

In this video from ISC 2016, Olivier de Laet from Calyos describes the company’s innovative cooling technology for high performance computing. “The HPC industry faces the challenge of ever-increasing cooling requirements. While liquid cooling looks to be the best solution, what if you could achieve the same efficiencies without using water and pumps? Enter Calytronics, cooling technology that is as simple as a heat pipe and as performant as liquid cooling.”

Industries That Need Flexible HPC

Organizations that implement high-performance computing technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, significant computing capability is critical to creating innovative products and carrying out leading-edge research. No two HPC installations are the same. “For maximum return, budget, software requirements, performance and customization all must be considered before installing and operating a successful environment.”

Video: How HPC Unlocks Competitive Advantage

In this video, Addison Snell from Intersect360 Research shares how HPC can unlock innovations for a competitive advantage. “Dell HPC solutions are deployed across the globe as the computational foundation for industrial, academic and governmental research critical to scientific advancement and economic and global competitiveness. With the richness of the Dell enterprise portfolio, HPC customers are increasingly relying on Dell HPC experts to provide integrated, turnkey solutions and services resulting in enhanced performance, reliability and simplicity. Customers benefit by engaging with Dell as a single source for total solution design, delivery and ongoing support.”

Mellanox and PNNL to Collaborate on Exascale System

Today Mellanox announced a joint technology collaboration with Pacific Northwest National Laboratory (PNNL) to architect, design and explore technologies for future Exascale platforms. The agreement will explore the advanced capabilities of Mellanox interconnect technology while focusing on a new generation of in-network computing architecture and the laboratory application requirements. This collaboration will also enable the DOE lab, through its Center for Advanced Technology Evaluation (CENATE), and Mellanox to effectively explore new software and hardware synergies that can drive high performance computing to the next level.

Altair Releases PBS Pro Source Code

Open source licensing for Altair’s market-leading HPC workload manager, PBS Professional, is now available. PBS Pro development communities are now forming and the full-core open source version of PBS Pro can be downloaded at www.pbspro.org. “Our intent is to continuously push the boundaries of HPC to pursue exascale computing through active participation with the HPC community,” says James R. Scapa, Altair’s Founder, Chairman, and CEO. “Working together toward common goals will allow for resources to be applied more efficiently. Our dual-licensing platform will encourage public and private sector collaboration to advance globally relevant topics including Big Data, cloud computing, advanced manufacturing, energy, life sciences, and the inexorable move toward a connected world through the Internet of Things.”
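For readers who want to kick the tires on the newly opened code, the sketch below shows one way to submit a job to a PBS Pro complex from Python. It is only a rough illustration, assuming the PBS Pro client commands (notably qsub) are installed and on the PATH and that a default queue is configured; the job name, resource requests, and walltime are placeholder values, not recommendations.

#!/usr/bin/env python3
"""Minimal sketch: write a PBS Pro job script and submit it with qsub.

Assumes the PBS Pro client commands are installed and a default queue
exists; the resource requests below are illustrative placeholders.
"""
import subprocess
import tempfile

# A tiny job script using standard PBS Pro directives: a job name,
# one chunk of 4 CPUs and 4 GB of memory, a 5-minute walltime, and
# merged stdout/stderr.
JOB_SCRIPT = """#!/bin/bash
#PBS -N hello_pbs
#PBS -l select=1:ncpus=4:mem=4gb
#PBS -l walltime=00:05:00
#PBS -j oe

echo "Job $PBS_JOBID running on $(hostname)"
"""


def main() -> None:
    # qsub reads the job script from a file, so stage it in a temp file.
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(JOB_SCRIPT)
        script_path = f.name

    # On success, qsub prints the new job identifier (e.g. "1234.server").
    result = subprocess.run(
        ["qsub", script_path], capture_output=True, text=True, check=True
    )
    print("Submitted:", result.stdout.strip())


if __name__ == "__main__":
    main()

Once submitted, the job can be inspected with qstat and removed with qdel, the same commands long familiar from the commercial releases.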

Intel to Distribute SUSE High Performance Computing Stack

“The SUSE and Intel collaboration on Intel HPC Orchestrator and OpenHPC puts this power within reach of a whole new range of industries and enterprises that need data-driven insights to compete and advance. This is an industry-changing approach that will rapidly accelerate HPC innovation and advance the state of the art in a way that creates real-world benefits for our customers and partners.”

Context Matters: Distributed Graph Algorithms and Runtime Systems

In this video from the PASC16 conference, Andrew Lumsdaine from Indiana University presents: Context Matters: Distributed Graph Algorithms and Runtime Systems. “The increasing complexity of the software/hardware stack of modern supercomputers makes understanding the performance of modern massive-scale codes difficult. Distributed graph algorithms (DGAs) are at the forefront of that complexity, pushing the envelope with their massive irregularity and data dependency. We analyze the existing body of research on DGAs to assess how technical contributions are linked to experimental performance results in the field. We distinguish algorithm-level contributions related to graph problems from “runtime-level” concerns related to communication, scheduling, and other low-level features necessary to make distributed algorithms work. We show that the runtime is an integral part of DGAs’ experimental results, but it is often ignored by the authors in favor of algorithm-level contributions.”