AI Breakthroughs and Initiatives at the Pittsburgh Supercomputing Center

Nick Nystrom and Paola Buitrago from PSC gave this talk at the HPC User Forum in Milwaukee. “The Bridges supercomputer at PSC offers the possibility for experts in fields that never before used supercomputers to tackle problems in Big Data and answer questions based on information that no human would live long enough to study by reading it directly.”

Nick Nystrom Named Interim Director of PSC

Nick Nystrom has been appointed Interim Director of the Pittsburgh Supercomputing Center. Nystrom succeeds Michael Levine and Ralph Roskies, who have been co-directors of PSC since its founding in 1985. “During the interim period, Nystrom will oversee PSC’s state-of-the-art research into high-performance computing, data analytics, science and communications, working closely with Levine and Roskies to ensure a smooth and seamless transition.”

Upgraded Bridges Supercomputer Now in Production

“Bridges’ new nodes add large-memory and GPU resources that enable researchers who have never used high-performance computing to easily scale their applications to tackle much larger analyses,” says Nick Nystrom, principal investigator in the Bridges project and Senior Director of Research at PSC. “Our goal with Bridges is to transform researchers’ thinking from ‘What can I do within my local computing environment?’ to ‘What problems do I really want to solve?’”

Intel Omni-Path Architecture Fabric, the Choice of Leading HPC Institutions

Intel Omni-Path Architecture (Intel OPA) volume shipments began a mere nine months ago, in February of this year, but Intel's high-speed, low-latency fabric for HPC has covered significant ground around the globe, including integration in HPC deployments that made the June 2016 Top500 list. Intel's fabric accounts for 48 percent of the installations running 100 Gbps fabrics on that list, and Intel expects a significant increase in Top500 deployments, including one that could land among the top ten machines.

Chameleon Testbed Blazes New Trails for Cloud HPC at TACC

“It’s often a challenge to test the scalability of system software components before a large deployment, particularly if you need low-level hardware access,” said Dan Stanzione, Executive Director at TACC and a Co-PI on the Chameleon project. “Chameleon was designed for just this sort of case – when your local test hardware is inadequate and you are testing something that would be difficult to test in the commercial cloud, like replacing the available file system. Projects like Slash2 can use Chameleon to make tomorrow’s cloud systems better than today’s.”

Bridges Supercomputer to Power Research at North Carolina School of Science and Mathematics

Today XSEDE announced it has awarded 30,000 core-hours of supercomputing time on the Bridges supercomputer to the North Carolina School of Science and Mathematics (NCSSM). Funded with a $9.65M NSF grant, Bridges contains a large number of research-grade software packages for science and engineering, including codes for computational chemistry, computational biology, and computational physics, along with specialty codes such as computational fluid dynamics. “NCSSM research students often pursue interdisciplinary research projects that involve computational and/or laboratory work in chemistry, physics, and other fields,” said Jon Bennett, instructor of physics and faculty mentor for physics research. “The availability of supercomputer computational resources would greatly expand the range and depth of projects that are possible for these students.”

Bridges Supercomputer Enters Production at PSC

“Bridges has enabled early scientific successes, for example in metagenomics, organic semiconductor electrochemistry, genome assembly in endangered species, and public health decision-making. Over 2,300 users currently have access to Bridges for an extremely wide range of research spanning neuroscience, machine learning, biology, the social sciences, computer science, engineering, and many other fields.”

Hewlett Packard Enterprise Rolls Out Software Defined HPC Platform

Today, Hewlett Packard Enterprise (HPE) introduced new high-performance computing solutions that aim to accelerate HPC adoption by enabling faster time-to-value and increased competitive differentiation through better parallel processing performance, reduced complexity and deployment time. These innovations include: HPE Core HPC Software Stack with HPE Insight Cluster Management Utility v8.0: Designed to meet the needs of […]

PSC Celebrates 30 Years of Supercomputing

The Pittsburgh Supercomputing Center (PSC) celebrated its 30th anniversary last week. “The beginning of PSC’s fourth decade will see the center with two new supercomputers—the NSF-funded Bridges system, already operational and due for completion this fall, and an Anton 2 molecular dynamics simulation system, provided at no charge by D. E. Shaw Research with operational funding from the National Institutes of Health, also to be hosted at PSC beginning this fall.”

Building Bridges to the Future

“The Pittsburgh Supercomputing Center recently added Bridges to its lineup of world-class supercomputers. Bridges is designed with uniquely flexible, interoperating capabilities to empower research communities that have not previously used HPC and to enable new data-driven insights, while also providing exceptional performance to traditional HPC users. It converges the best of High Performance Computing (HPC), High Performance Data Analytics (HPDA), machine learning, visualization, Web services, and community gateways in a single architecture.”