• The Pending Age of Exascale

    In this special guest feature from Scientific Computing World, Robert Roe looks at advances in exascale computing and the impact of AI on HPC development. “There is a lot of co-development; AI and HPC are not mutually exclusive. They both need high-speed interconnects and very fast storage. It just so happens that AI functions better on GPUs. HPC has GPUs in abundance, so they mix very well.” [READ MORE…]

Featured Stories

  • GCS Sponsors University Teams for ISC19 Student Cluster Competition

    The Gauss Centre for Supercomputing is sponsoring the two German student teams for the Student Cluster Competition at ISC 2019 in Frankfurt. The teams, representing the University of Hamburg and Heidelberg University, are among the 14 student teams from Asia, North America, Africa, and Europe that will go head to head on the exhibition show floor. [READ MORE…]

  • GPU Hackathon gears up for Future Perlmutter Supercomputer

    NERSC recently hosted its first user hackathon to begin preparing key codes for the next-generation architecture of the Perlmutter system. Over four days, experts from NERSC, Cray, and NVIDIA worked with application code teams to help them gain new understanding of the performance characteristics of their applications and optimize their codes for the GPU processors in Perlmutter. “By starting this process early, the code teams will be well prepared for running on GPUs when NERSC deploys the Perlmutter system in 2020.” [READ MORE…]

  • Video: Exascale Deep Learning for Climate Analytics

    Thorsten Kurth and Josh Romero gave this talk at the GPU Technology Conference. “We’ll discuss how we scaled the training of a single deep learning model to 27,360 V100 GPUs (4,560 nodes) on the OLCF Summit HPC System using the high-productivity TensorFlow framework. This talk is targeted at deep learning practitioners who are interested in learning what optimizations are necessary for training their models efficiently at massive scale.” [READ MORE…]

Featured Resource

Five Essential Strategies for Successful HPC Clusters

Successful HPC clusters are powerful assets for an organization. However, these systems are complex and must be built and managed properly to realize their potential. Otherwise, your ability to meet implementation deadlines, quickly identify and resolve problems, perform updates and maintenance, accommodate new application requirements, and adopt strategic new technologies will be jeopardized. Download the new white paper from Bright Computing that explores key strategies for HPC clusters.

Industry Perspectives

  • Big Compute Podcast: HPC and Genomics to Revolutionize Medicine

    In this Big Compute Podcast episode, Gabriel Broner hosts Mark Borodkin, COO of Bionano Genomics, to discuss how genomics and HPC enable doctors and researchers to diagnose complex diseases and prescribe unique personalized treatments based on individual variations of the DNA. “We decided to abstract HPC in the cloud through Rescale and our software. All our customers need to know is that the solution offers the security and performance they need, and they don’t need to learn the new jargon of cloud.” [READ MORE…]

  • Epic 2018 HPC Road Trip begins at Idaho National Lab

    In this special guest feature, Dan Olds from OrionX begins his Epic HPC Road Trip series with a stop at Idaho National Laboratory. “The fast approach of SC18 gave me an idea: why not drive from my home base in Beaverton, Oregon, to Dallas, Texas and stop at national labs along the way? I love a good road trip, and what could be better than a 5,879-mile drive with visits to supercomputer users mixed in?” [READ MORE…]

Featured from insideBIGDATA

  • Unleash AI for Impact
    In this special guest feature, Brian D’Alessandro, Director of Data Science at SparkBeyond, discusses how AI adoption is a learning curve: exploring opportunities within the technology further extends its potential to enable transformation and generate impact. AI can shape workflows to drive efficiency and growth, automate other workflows, and create new business models. […]

Editor’s Choice

  • HPE to Acquire Cray for $1.3 Billion

    Today HPE announced that the company has entered into a definitive agreement to acquire Cray for approximately $1.3 billion. “This pending deal will bring together HPE, the global HPC market leader, and Cray, whose Shasta architecture is under contract to power America’s two fastest supercomputers in 2021,” said Steve Conway from Hyperion Research. “The Cray addition will boost HPE’s ability to pursue high-end procurements and will speed the combined company’s development of next-generation technologies that will benefit HPC and AI-machine learning customers at all price points.” [READ MORE…]

  • Achieving ExaOps with the CoMet Comparative Genomics Application

    Wayne Joubert’s talk at the HPC User Forum described how researchers at the US Department of Energy’s Oak Ridge National Laboratory (ORNL) achieved a record throughput of 1.88 ExaOps on the CoMet algorithm. As the first science application to run at the exascale level, CoMet achieved this remarkable speed analyzing genomic data on the recently launched Summit supercomputer. [READ MORE…]

  • Video: Can FPGAs compete with GPUs?

    John Romein from ASTRON gave this talk at the GPU Technology Conference. “We’ll discuss how FPGAs are changing as a result of new technology such as the OpenCL high-level programming language, hard floating-point units, and tight integration with CPU cores. Traditionally, energy-efficient FPGAs were considered notoriously difficult to program and unsuitable for complex HPC applications. We’ll compare the latest FPGAs to GPUs, examining the architecture, programming models, programming effort, performance, and energy efficiency by considering some real applications.” [READ MORE…]

  • EuroHPC – The EU Strategy in High Performance Computing

    Thomas Skordas from the European Commission gave this talk at EuroHPC Summit Week in Poland. “The EuroHPC Summit Week brings together relevant European supercomputing stakeholders and decision makers, allowing them to share on the one hand their needs and future expectations and on the other hand the latest technological developments, to define synergies and participate in shaping the future of European supercomputing.” [READ MORE…]

  • Design Work Completed for SKA Telescope Supercomputer

    An international group of scientists led by the University of Cambridge has finished designing the ‘brain’ of the Square Kilometre Array (SKA), the world’s largest radio telescope. When complete, the SKA will enable astronomers to monitor the sky in unprecedented detail and survey the entire sky much faster than any system currently in existence. “We estimate SDP’s total compute power to be around 250 PFlops – that’s 25% faster than IBM’s Summit, the current fastest supercomputer in the world.” [READ MORE…]
