SC17 Session Preview: “Taking the Nanoscale to the Exascale”

Brian Ban continues his series of SC17 Session Previews with a look at an invited talk on nanotechnology. “This talk will focus on the challenges that computational chemistry faces in taking the equations that model the very small (molecules and the reactions they undergo) to efficient and scalable implementations on the very large computers of today and tomorrow.”

Firing up a Continent with HPC

In this special guest feature from Scientific Computing World, Nox Moyake describes the process of entrenching and developing HPC in South Africa. “The CHPC currently has about 1,000 users; most are in academia, with others in industry. The centre supports research across a number of domains and participates in a number of grand international projects such as the CERN and SKA projects.”

How Manufacturing will Leap Forward with Exascale Computing

In this special guest feature, Jeremy Thomas from Lawrence Livermore National Lab writes that exascale computing will be a vital boost to the U.S. manufacturing industry. “This is much bigger than any one company or any one industry. If you consider any industry, exascale is truly going to have a sizeable impact, and if a country like ours is going to be a leader in industrial design, engineering and manufacturing, we need exascale to keep the innovation edge.”

NERSC lends a hand to 2017 Tapia Conference on Diversity in Computing

The recent Tapia Conference on Diversity in Computing in Atlanta brought together some 1,200 undergraduate and graduate students, faculty, researchers and professionals in computing from diverse backgrounds and ethnicities to learn from leading thinkers, present innovative ideas and network with peers.

Paving the Way for Theta and Aurora

In this special guest feature, John Kirkley writes that Argonne is already building code for its future Theta and Aurora supercomputers based on Intel Knights Landing. “One of the ALCF’s primary tasks is to help prepare key applications for two advanced supercomputers. One is the 8.5-petaflops Theta system based on the upcoming Intel® Xeon Phi™ processor, code-named Knights Landing (KNL) and due for deployment this year. The other is a larger 180-petaflops Aurora supercomputer scheduled for 2018 using Intel Xeon Phi processors, code-named Knights Hill. A key goal is to solidify libraries and other essential elements, such as compilers and debuggers, that support the systems’ current and future production applications.”

Creating an Exascale Ecosystem Under the NSCI Banner

“We expect NSCI to run for the next two decades. It’s a bit audacious to start a 20-year project in the last 18 months of an administration, but one of the things that gives us momentum is that we are not starting from a clean sheet of paper. There are many government agencies already involved, and what we’re really doing is increasing their coordination and collaboration. We will also be working very hard over the next 18 months to build momentum and establish new working relationships with academia and industry.”

The Death and Life of Traditional HPC

The consensus of the panel was that making full use of Intel SSF requires system thinking at the highest level. This entails deep collaboration with the company’s application end-user customers as well as with its OEM partners, who have to design, build and support these systems at the customer site. Mark Seager commented: “For the high end we’re going after density and (solving) the power problem to create very dense solutions that, in many cases, are water-cooled going forward. We are also asking how we can do a less dense design where cost is more of a driver.” In the latter case, lower-end solutions can relinquish some scalability features while still retaining application efficiency.

PSC’s Bridges Supercomputer Brings HPC to a New Class of Users

The democratization of HPC got a major boost last year with the announcement of an NSF award to the Pittsburgh Supercomputing Center. The $9.65 million grant for the development of Bridges, a new supercomputer designed to serve a wide variety of scientists, will open the door to users who have not had access to HPC until now. “Bridges is designed to close three important gaps: bringing HPC to new communities, merging HPC with Big Data, and integrating national cyberinfrastructure with campus resources. To do that, we developed a unique architecture featuring Hewlett Packard Enterprise (HPE) large-memory servers including HPE Integrity Superdome X, HPE ProLiant DL580, and HPE Apollo 2000. Bridges is interconnected by Intel Omni-Path Architecture fabric, deployed in a custom topology for Bridges’ anticipated workloads.”

New GPUs accelerate HPC applications

In the past few years, accelerated computing has become strategically important for a wide range of applications. To gain performance on a variety of codes, hardware and software developers have concentrated their efforts on creating systems that can accelerate certain applications by a significant amount compared to what was previously possible.

insideBIGDATA Guide to Scientific Research

Daniel Gutierrez, Managing Editor of insideBIGDATA, has put together a terrific Guide to Scientific Research. The goal of this paper is to provide a road map for scientific researchers wishing to capitalize on the rapid growth of big data technology for collecting, transforming, analyzing, and visualizing large scientific data sets.