Archives for 2014

Piz Daint and Piz Dora: Productive, Heterogeneous Supercomputing

“The Cray XC30 system at CSCS, which includes “Piz Daint”, the most energy-efficient petascale supercomputer in operation today, has been extended with additional multi-core CPU cabinets (aka “Piz Dora”). In this heterogeneous system we unify a variety of high-end computing services – extreme-scale compute, data analytics, pre- and post-processing, as well as visualization – that are all important parts of the scientific workflow.”

Job of the Week: HPC Software Engineer at Intel in Oregon

Intel in Oregon is seeking an HPC Software Engineer in our Job of the Week.

Will Containerization Eat Configuration Management?

Over at QNIB, Christian Kniep writes that his latest presentation examines the intersection of Docker, containerization, and configuration management. “In my humble opinion, Configuration Management might become a niche. As hard as it sounds.”

Helping Scientists with System Management Software

In this special guest feature from Scientific Computing World, Tom Wilkie writes that while end-user scientists and engineers fear the complexity of running jobs in HPC, there are software toolkits available to help.

Slidecast: UK HPC in 2015

In this slidecast, Julian Fielden from OCF describes the outlook for HPC in the UK for 2015.

How the Human Brain Project will Push Supercomputing

Over at TOP500.org, Bernd Mohr writes that Europe’s Human Brain Project will have a main production system located at the Juelich Supercomputing Centre. “The HBP supercomputer will be built in stages, with an intermediate “pre-exascale” system on the order of 50 petaflops planned for the 2016-18 timeframe. Full brain simulations are expected to require exascale capabilities, which, according to most potential suppliers’ roadmaps, are likely to be available in approximately 2021-22.”

Machine Learning: What Computational Researchers Need to Know

Nvidia GPUs are powering a revolution in machine learning. With the rise of deep learning algorithms, in particular deep convolutional neural networks, computers are learning to see, hear, and understand the world around us in ways never before possible.

Podcast: Coding Illini Wins Parallel Universe Computing Challenge

In this Chip Chat podcast, Mike Bernhardt, the Community Evangelist for HPC and Technical Computing at Intel, discusses the importance of code modernization as we move into multi- and many-core systems. Markets as diverse as oil and gas, financial services, and health and life sciences can see a dramatic performance improvement in their code through parallelization.
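As a rough illustration of the kind of loop-level parallelization the podcast refers to (not an example from the podcast itself), the sketch below uses OpenMP in C; the function and array names are hypothetical:

    #include <stdio.h>
    #include <omp.h>

    #define N 10000000

    /* Hypothetical kernel: scale and accumulate an array.
       The OpenMP pragma spreads the loop across all available cores,
       with the reduction clause combining the per-thread partial sums. */
    double scaled_sum(const double *x, double a, long n) {
        double total = 0.0;
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < n; i++) {
            total += a * x[i];
        }
        return total;
    }

    int main(void) {
        static double data[N];
        for (long i = 0; i < N; i++) data[i] = 1.0;
        printf("sum = %f (max threads: %d)\n",
               scaled_sum(data, 2.0, N), omp_get_max_threads());
        return 0;
    }

Compiled with an OpenMP-enabled compiler (for example, gcc -fopenmp), a serial loop like this can scale across the cores of a multi- or many-core node, which is the essence of the code modernization effort discussed above.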

New IPCC at CERN Pushes Code Modernization

“The work covered by this IPCC within the GeantV project aims at providing, in the first year, the first GEANT-V version that is vectorisable and thread-wise scalable on Intel Architectures, demonstrating a speedup of between 5x and 10x over the scalar version on a simplified example.”

Europe Gets Why Big Data Needs Networks

In this special guest feature from Scientific Computing World, Tom Wilkie writes that the way Europe has joined up its networks not only supports supercomputing on the continent, but also offers a model for international cooperation that might have lessons for the development of next-generation technology.