The first annual International Workshop on Performance Portable Programming Models for Accelerators has issued its Call for Papers. Known as P^3MA, the workshop will provide a forum that brings together researchers, vendors, users and developers to explore aspects of heterogeneous computing and its various tools and techniques.
“A new supercomputer, dubbed Cheyenne, is expected to be operational at the beginning of 2017. The new high-performance computer will be a 5.34-petaflop system, meaning it can carry out 5.34 quadrillion calculations per second. It will be capable of more than 2.5 times the amount of scientific computing performed by Yellowstone.”
In this Graybeards Podcast, Molly Rector from DDN describes how HPC storage technologies are mainstreaming into the enterprise space. “In HPC there are 1000s of compute cores that are crunching on PB of data. For Oil&Gas companies, it’s seismic and wellhead analysis; with bio-informatics it’s genomic/proteomic analysis; and with financial services, it’s economic modeling/backtesting trading strategies. For today’s enterprises such as retailers, it’s customer activity analytics; for manufacturers, it’s machine sensor/log analysis; and for banks/financial institutions, it’s credit/financial viability assessments. Enterprise IT might not have 1000s of cores at their disposal just yet, but it’s not far off. Molly thinks one way to help enterprise IT is to provide a SuperComputer as a service (ScaaS?) offering, where top 10 supercomputers can be rented out by the hour, sort of like a supercomputing compute/data cloud.”
“Combining MPI and OpenMP is a topic many developers have explored in search of an optimal solution. Whether to use OpenMP for outer loops with MPI within, or to create separate MPI processes and use OpenMP within each, can lead to different levels of performance. In most cases, determining which method will yield the best results requires a deep understanding of the application, and not just rearranging directives.”
Today the Ohio Supercomputer Center (OSC) announced plans to boost scientific and industrial discovery and innovation with a powerful new supercomputer from Dell. To be deployed later this year, the new system is part of a $9.7 million investment that received approval from the State Controlling Board in January.
Intel has opened a second parallel computing center at the San Diego Supercomputer Center (SDSC), at the University of California, San Diego. The focus of this new engagement is on earthquake research, including detailed computer simulations of major seismic activity that can be used to better inform and assist disaster recovery and relief efforts.
In this special guest feature, Robert Roe from Scientific Computing World reports that a new exascale computing architecture using ARM processors is being developed by a European consortium of hardware and software providers, research centers, and industry partners. Funded by the European Union’s Horizon 2020 research program, a full prototype of the new system is expected to be ready by 2018.
Today, the OpenPOWER Foundation announced the lineup of speakers for the OpenPOWER Summit 2016, taking place April 5-8 at NVIDIA’s GPU Technology Conference (GTC) at the San Jose Convention Center. The Summit will bring together dozens of technology leaders from the OpenPOWER Foundation to showcase the latest advancements in the OpenPOWER ecosystem, including collaborative hardware, software and application developments – all designed to revolutionize the data center.
Today Pointwise announced the latest release of its meshing software featuring updated native interfaces to computational fluid dynamics (CFD) and geometry codes. Pointwise Version 17.3 R5 also includes geometry import and export to the native file format of Pointwise’s geometry kernel and a variety of bug fixes.
Today SURFsara in the Netherlands announced it will expand the capacity of its Cartesius national supercomputer in the second half of 2016. With an upgrade to 1.8 petaflops, the Bull sequana system will enable researchers to work on more complex models for climate research, water management, improving medical treatment, research into clean energy, noise reduction, and product and process optimization.