IBM Rolls Out All-Flash Storage for Cognitive Workloads

“The DS8880 All-Flash family is targeted at users that have experienced poor storage performance due to latency, low server utilization, high energy consumption, low system availability and high operating costs. These same users have been listening, learning and understand the data value proposition of being a cognitive business,” said Ed Walsh, general manager, IBM Storage and Software Defined Infrastructure. “In the coming year we expect an awakening by companies to the opportunity that cognitive applications, and hybrid cloud enablement, bring them in a data driven marketplace.”

D-Wave Releases Open Quantum Software Environment

“Just as a software ecosystem helped to create the immense computing industry that exists today, building a quantum computing industry will require software accessible to the developer community,” said Bo Ewald, president, D-Wave International Inc. “D-Wave is building a set of software tools that will allow developers to use their subject-matter expertise to build tools and applications that are relevant to their business or mission. By making our tools open source, we expand the community of people working to solve meaningful problems using quantum computers.”

Selecting HPC Network Technology

“With three primary network technology options widely available, each with advantages and disadvantages in specific workload scenarios, the choice of solution partner that can deliver the full range of choices together with the expertise and support to match technology solution to business requirement becomes paramount.”

Exascale Computing: A Race to the Future of HPC

In this week’s Sponsored Post, Nicolas Dube of Hewlett Packard Enterprise outlines the future of HPC and the role and challenges of exascale computing in this evolution. The HPE approach to exascale is geared to breaking the dependencies that come with outdated protocols. Exascale computing will allow users to process data, run systems, and solve problems at a totally new scale, which will become increasingly important as the world’s problems grow ever larger and more complex.

OpenFabrics Alliance Workshop 2017 – Call for Sessions Open

Each year the OpenFabrics Alliance (OFA) hosts a workshop devoted to advancing the state of the art in networking. “One secret to the enduring success of the workshop is the OFA’s emphasis on hosting an interactive, community-driven event. To continue that trend, we are once again reaching out to the community to create a rich program that addresses topics important to the networking industry. We’re looking for proposals for workshop sessions.”

Video: Livermore HPC Takes Aim at Cancer

In this video, Jonathan Allen from LLNL describes how Lawrence Livermore’s supercomputers are playing a crucial role in advancing cancer research and treatment. “A historic partnership between the Department of Energy (DOE) and the National Cancer Institute (NCI) is applying the formidable computing resources at Livermore and other DOE national laboratories to advance cancer research and treatment. Announced in late 2015, the effort will help researchers and physicians better understand the complexity of cancer, choose the best treatment options for every patient, and reveal possible patterns hidden in vast patient and experimental data sets.”

Oak Ridge Plays Key Role in Exascale Computing Project

Oak Ridge National Laboratory reports that its experts are playing leading roles in the DOE’s recently established Exascale Computing Project (ECP), a multi-lab initiative responsible for developing the strategy, aligning the resources, and conducting the R&D necessary to achieve the nation’s imperative of delivering exascale computing by 2021. “ECP’s mission is to ensure all the necessary pieces are in place for the first exascale systems – an ecosystem that includes applications, software stack, architecture, advanced system engineering and hardware components – to enable fully functional, capable exascale computing environments critical to scientific discovery, national security, and a strong U.S. economy.”

Apply Now for Summer of HPC 2017 in Barcelona

“The PRACE Summer of HPC is an outreach and training program that offers summer placements at top High Performance Computing centers across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants will spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualization or video of their results.”

Understanding Cities through Computation, Data Analytics, and Measurement

“For many urban questions, however, new data sources will be required with greater spatial and/or temporal resolution, driving innovation in the use of sensors in mobile devices as well as embedding intelligent sensing infrastructure in the built environment. Collectively, these data sources also hold promise to begin to integrate computational models associated with individual urban sectors such as transportation, building energy use, or climate. Catlett will discuss the work that Argonne National Laboratory and the University of Chicago are doing in partnership with the City of Chicago and other cities through the Urban Center for Computation and Data, focusing in particular on new opportunities related to embedded systems and computational modeling.”

Intel HPC Orchestrator Powers Research at University of Pisa

In this video, Maurizio Davini from the University of Pisa describes how the University works with Dell EMC and Intel to test new technologies and to integrate and optimize HPC systems with Intel HPC Orchestrator software. “We believe these two companies are at the forefront of innovation in high performance computing,” said University CTO Davini. “We also share a common goal of simplifying HPC to support a broader range of users.”