The University of Houston (UH) is adding a new, state-of-the-art supercomputer to its arsenal of research tools. With 1,860 compute cores, the new Opuntia cluster will be used primarily for scientific and engineering work. The acquisition of this new system marks the start of a new era of supercomputing not only for the University of […]
The Barcelona Supercomputing Center (BSC) and Intel have renewed their research collaboration agreement at the Intel-BSC Exascale Laboratory in Barcelona. Now funded through 2017, the lab focuses on the software and the extraordinary levels of parallelism that will be needed to use future Intel-architecture-based supercomputers.
“Powered by Intel’s Xeon E5-2600 v3 processor, Penguin Computing’s Tundra OpenHPC platform delivers density, performance, and serviceability for the most demanding customers. Built to be compatible with Open Compute Open Rack specifications, the Tundra OpenHPC platform provides customers with a powerful and compact HPC server designed to reduce infrastructure costs when moving to the next generation of technology.”
In this Chip Chat podcast, Mike Bernhardt, the Community Evangelist for HPC and Technical Computing at Intel, discusses the importance of code modernization as we move into multi- and many-core systems. Markets as diverse as oil and gas, financial services, and health and life sciences can see a dramatic performance improvement in their code through parallelization.
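The modernization effort described here typically starts by exposing independent units of work so they can run across many cores at once. A minimal sketch of that idea, using Python's standard `multiprocessing` pool (the `simulate_cell` workload and all names are hypothetical illustrations, not taken from the podcast):

```python
from multiprocessing import Pool

def simulate_cell(i):
    # Hypothetical stand-in for an independent per-element computation,
    # e.g. one grid cell in an oil-and-gas reservoir simulation.
    total = 0.0
    for k in range(1, 1000):
        total += (i % 7 + 1) / k
    return total

def run_serial(n):
    # Legacy-style loop: one core, one cell at a time.
    return [simulate_cell(i) for i in range(n)]

def run_parallel(n, workers=4):
    # Modernized version: the same independent cells are farmed out
    # to a pool of worker processes; map preserves result order.
    with Pool(workers) as pool:
        return pool.map(simulate_cell, range(n))

if __name__ == "__main__":
    n = 64
    # Identical work, identical results -- only the scheduling changed.
    assert run_parallel(n) == run_serial(n)
```

The key property is that each `simulate_cell(i)` depends only on its own input, so the loop can be restructured without changing results; real codes often need algorithmic redesign to reach this shape.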
This week Russian supercomputing vendor RSC Group announced that it is now a top-10 supercomputing vendor according to the current edition of the TOP500 list. In fact, RSC is the only Russian developer and manufacturer of HPC systems in that leading group, which is ranked by the number of deployed supercomputers.
“The notion of High Performance Computing is evolving over time, so what was deemed a leadership-class computer five years ago is a little bit obsolete. We are talking about evolution not only in the hardware but also in the programming models, because there are more and more cores available. Orchestrating the calculations in a way that can effectively take advantage of parallelism takes a lot of thinking and a lot of redesign of the algorithms behind the calculations.”