A New Direction in HPC System Fabric: Intel’s Omni-Path Architecture

In this special guest feature, John Kirkley writes that Intel is using its new Omni-Path Architecture as a foundation for supercomputing systems that will scale to 200 Petaflops and beyond. “With its ability to scale to tens and eventually hundreds of thousands of nodes, the Intel Omni-Path Architecture is designed for tomorrow’s HPC workloads. The platform has its sights set squarely on Exascale performance while supporting more modest, but still demanding, future HPC implementations.”

Interview: Intel Taking Lustre into New Markets

“It’s been nearly three years since Intel acquired Whamcloud and its Lustre engineering team. With Intel’s recent announcement that Lustre will power the 2018 Aurora supercomputer at Argonne, we took the opportunity to catch up with Brent Gorda, general manager of Intel’s High Performance Data Division.”

Interview: Intel’s Alan Gara Discusses the 180 Petaflop Aurora Supercomputer

In this interview, Intel’s Alan Gara describes the Aurora system, a 180 Petaflop supercomputer coming to Argonne. “The Aurora system is based on the second generation of Omni-Path. This is an Intel interconnect that we’ve been developing for some time now, and we’re really excited about the capabilities and scalability we expect it to bring to high performance computing.”

Video: Growth of Lustre Adoption and Intel’s Continued Commitment

“We are now working with over 100 channel partners globally. You can get access to Intel Lustre from almost everyone who sells storage or compute worldwide, and we’re expanding this to include software partners and cloud partners. We want to create the best product possible out of this open source technology, make it available economically to channel partners, and enable you to go after the hugely expanding markets of cloud and big data while not giving up on HPC.”

Intel’s Diane Bryant on Bringing Diversity to the Tech Sector

How does a woman break through the glass ceiling? In this video from the Re/Code Conference, Intel’s Diane Bryant discusses the pathway to diversity in the tech sector.

Video: How Aurora Will Usher in a New Era for HPC

“The selection of Intel to deliver the Aurora supercomputer is validation of our unique position to lead a new era in HPC,” said Raj Hazra, vice president, Data Center Group and general manager, Technical Computing Group at Intel. “Intel’s HPC scalable system framework enables balanced, scalable and efficient systems while extending the ecosystem’s decades of software investment to future generations. We look forward to the numerous scientific discoveries and the far-reaching impacts on society that Aurora will enable.”

EPFL Unleashes Deneb, a Scalable HPC Architecture for Researchers and Students

EPFL has launched Deneb, a scalable HPC architecture for researchers and students. “Working with ClusterVision and Intel has proved to be a wise and productive decision for EPFL as we continue to grow our computational capabilities to support our many researchers and students.”

HPC’s Future Lies in Remote Visualization

Remote visualization sits at the intersection of cloud, big data, and high performance computing. The ability to explore complex data sets over nothing more than a mobile phone’s data rate is not some fantasy of the future; it is a reality here and now.

Podcast: Coding Illini Wins Parallel Universe Computing Challenge

In this Chip Chat podcast, Mike Bernhardt, the Community Evangelist for HPC and Technical Computing at Intel, discusses the importance of code modernization as we move into multi- and many-core systems. Markets as diverse as oil and gas, financial services, and health and life sciences can see a dramatic performance improvement in their code through parallelization.
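
To make the idea concrete, here is a minimal, hypothetical sketch of the kind of change code modernization involves: exposing loop-level parallelism so a computation scales across multi- and many-core processors. The example uses OpenMP in C; the workload (a midpoint-rule estimate of pi) is invented purely for illustration and is not drawn from any of the codes or markets mentioned above.

```c
/* A minimal sketch of loop-level parallelization with OpenMP.
   The workload (midpoint-rule estimate of pi) is invented for
   illustration; real efforts target an application's hot spots. */
#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(void) {
    const int n = 100000000;   /* number of quadrature intervals */
    double sum = 0.0;

    /* One directive spreads the iterations across all cores and
       safely combines the per-thread partial sums into one total. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        double x = (i + 0.5) / (double)n;
        sum += 4.0 / (1.0 + x * x);
    }

#ifdef _OPENMP
    printf("threads available: %d\n", omp_get_max_threads());
#endif
    printf("pi is approximately %.10f\n", sum / n);
    return 0;
}
```

Compiled with an OpenMP-aware compiler (for example, gcc -fopenmp), the annotated loop runs across all available cores; compiled without the flag, the same source still builds and runs serially, which is why pragma-based parallelization is often a low-risk first step in modernizing legacy code.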

Interview: Advancing Computational Chemistry with NWChem

“The notion of High Performance Computing is evolving over time, so what was deemed a leadership-class computer five years ago is now a little bit obsolete. We are talking about evolution not only in the hardware but also in the programming models, because more and more cores are available. Orchestrating the calculations in a way that effectively takes advantage of that parallelism takes a lot of thinking and a lot of redesign of the algorithms behind the calculations.”