“Cavium ThunderX has significant differentiation in the 64-bit ARM market: Cavium is the first ARMv8 vendor to deliver dual-socket support with a full ARMv8.1 implementation, and its 48 cores per socket give it a significant advantage in CPU core count. ThunderX also supports large memory capacity (512GB per socket, 1TB in a 2S system) with excellent memory bandwidth and low memory latency, and it includes multiple 10/40 GbE network interfaces delivering excellent IO throughput. These features enable ThunderX to deliver the core performance and scale-out capability that the HPC market requires.”
Allinea Software reports that the company is helping weather and climate researchers adapt advanced weather models to better exploit today’s technology and prepare for future platforms. The company will address leading climatologists and meteorologists on best practices for scalable code development at the 4th ENES HPC Workshop, April 6-7. The session will draw on the application of Allinea’s tools at more than 20 weather and climate customers worldwide.
Today Allinea announced plans to champion what it sees as a key survival message for the energy industry when it exhibits at the Rice Oil and Gas HPC Conference in Houston next week. “We’ll be underlining to geophysicists at the conference the real commercial gains to be had from focusing on code performance,” said Robert Rick, Allinea’s VP of Sales, Americas. “HPC is helping the industry to operate more efficiently. The next step for this market is to use code optimization to speed up the valuable seismic imaging and reservoir modeling processes, which are now essential to this industry.”
In this special guest feature, Robert Roe from Scientific Computing World reports that a European consortium of hardware and software providers, research centers, and industry partners is developing a new Exascale computing architecture based on ARM processors. Funded by the European Union’s Horizon 2020 research program, the project expects to have a full prototype of the new system ready by 2018.
Today a European consortium announced a step toward Exascale computing with the ExaNeSt project. Funded by the Horizon 2020 initiative, ExaNeSt plans to build its first straw-man prototype in 2016. The consortium consists of twelve partners, each with expertise in a core technology needed to reach Exascale. ExaNeSt takes the sensible, integrated approach of co-designing the hardware and software, enabling the prototype to be evaluated against real-life applications and to scale and mature through this decade and beyond.
Today Allinea announced that Oak Ridge National Laboratory has deployed its code performance profiler, Allinea MAP, at scale on the Titan supercomputer. Allinea MAP enables developers of software for supercomputers of all sizes to produce faster code; its deployment on Titan will help developers use the system’s 299,008 CPU cores and 18,688 GPUs more efficiently. Software teams at Oak Ridge are also preparing for the arrival of the next-generation Summit pre-Exascale system, which will be capable of over 150 petaflops when it arrives in 2018.
Today Allinea reports that developers at Roxar Software Solutions, part of Emerson Process Management, used Allinea Forge to increase the performance of their next-generation Tempest MORE reservoir simulator by 30 percent.
Today Allinea released version 6.0 of its HPC development tool suite, Allinea Forge and Performance Reports. Building on its commitment to serving the scientific HPC community, Allinea demonstrated the new features at SC15 last month in Austin.
“Just as representative benchmarks like HPCG are set to replace Linpack, so a focus on software is taking over. From industry analysts to users at SC15, we heard that software is the number one challenge and the number one opportunity to have world-class impact.”
In this video from SC15, Rich Brueckner from insideHPC talks to contestants in the Student Cluster Competition. Using hardware on loan from various vendors along with Allinea performance tools, nine teams went head-to-head to build the fastest HPC cluster.