How Intel is Fostering HPC in the Cloud

“Cloud computing offers a potential solution by allowing people to create and access computing resources on demand. Yet meeting the complex software demands of an HPC application can be quite challenging in a cloud environment. In addition, running HPC workloads on virtualized infrastructure may result in unacceptable performance penalties for some workloads. Because of these issues, relatively few organizations have run production HPC workloads in either private or public clouds.”

How Intel is Driving the Convergence of HPC & AI

“The emerging AI community on HPC infrastructure is critical to achieving the vision of AI,” said Pradeep Dubey, Intel Fellow. “Machines that don’t just crunch numbers, but help us make better and more informed complex decisions. Scalability is the key to AI-HPC, so scientists can address the big compute and big data challenges facing them and make sense of the wealth of measured, modeled, and simulated data that is now available to them.”

Podcast: Announcing the New CryptoSuper500 List

Today OrionX Research announced that Bitcoin and BTC.com top the first release of the CryptoSuper500 list. The list recognizes cryptocurrency mining as a new form of supercomputing and tracks the top mining pools. “The growth of the cryptocurrency market has put the spotlight on emerging decentralized applications, the new ways in which they are funded, and the software stack on which they are built.” Cryptocurrency technologies include blockchain, consensus algorithms, digital wallets, and utility and security tokens.

Slidecast: BigDL Open Source Machine Learning Framework for Apache Spark

In this video, Beenish Zia from Intel presents: BigDL Open Source Machine Learning Framework for Apache Spark. “BigDL is a distributed deep learning library for Apache Spark*. Using BigDL, you can write deep learning applications as Scala or Python* programs and take advantage of the power of scalable Spark clusters. This article introduces BigDL, shows you how to build the library on a variety of platforms, and provides examples of BigDL in action.”

Parabricks and SkyScale Raise the Performance Bar for Genomic Analysis

“In the modern world of genomics, where analysis of tens of thousands of genomes is required for research, the cost per genome and the number of genomes processed per unit time are critical parameters. Parabricks’ adaptation of the GATK4 Best Practices workflows running seamlessly on SkyScale’s Accelerated Cloud provides unparalleled price and throughput efficiency to help unlock the power of the human genome.”

Video: The Separation of Concerns in Code Modernization

In this video, Larry Meadows from Intel describes why modern processors require modern coding techniques. With vectorization and threading for code modernization, you can realize the full potential of Intel Scalable Processors. “In many ways, code modernization is inevitable. Even edge devices nowadays have multiple physical cores, and even a single-core machine will have hyperthreads. Keeping those cores busy and fed with data with Intel programming tools is the best way to speed up your applications.”
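The vectorization side of the modernization Meadows describes can be sketched with a minimal, hypothetical example (using NumPy rather than Intel's own tools): the same element-wise computation written as an explicit scalar loop and as a single vectorized operation that the underlying SIMD-capable runtime can execute over many elements at once.

```python
import numpy as np

def saxpy_scalar(a, x, y):
    # Scalar loop: processes one element per iteration.
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    # Vectorized form: one array expression over all elements,
    # letting the library exploit SIMD units and avoid Python-level
    # loop overhead.
    return a * np.asarray(x) + np.asarray(y)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
print(saxpy_scalar(2.0, x, y))            # [12.0, 24.0, 36.0]
print(saxpy_vectorized(2.0, x, y).tolist())  # [12.0, 24.0, 36.0]
```

The `saxpy_*` names are illustrative, not from the talk; the point is that expressing work over whole arrays, rather than element by element, is what lets compilers and libraries keep wide vector units fed.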

Predictions for SC18: A change in climate for HPC?

In this special guest feature, Dr. Rosemary Francis from Ellexus offers up her predictions for SC18 in Dallas. “It’s almost time for SC18 and this year it’s a biggie. Here is what we expect to hear about at SC18 as the Ellexus team treads the show floor.”

In-Network Computing Technology to Enable Data-Centric HPC and AI Platforms

Mellanox Technologies’ Gilad Shainer explores one of the biggest tech transitions over the past 20 years: the transition from CPU-centric data centers to data-centric data centers, and the role of in-network computing in this shift. “The latest technology transition is the result of a co-design approach, a collaborative effort to reach Exascale performance by taking a holistic system-level approach to fundamental performance improvements. As the CPU-centric approach has reached the limits of performance and scalability, the data center architecture focus has shifted to the data, and how to bring compute to the data instead of moving data to the compute.”

SC18 Plenary to Focus on HPC & AI on Nov. 12

Over at the SC18 Blog, SC Insider writes that the upcoming Conference Plenary session will examine the potential for advanced computing to help mitigate human suffering and elevate our capacity to protect the most vulnerable. “In this SC18 plenary session, you will hear from innovators who are redefining how we predict and prevent humanitarian crises by leveraging advanced computing. The session is the conference kick-off event, and will be followed by the Exhibitor Opening Gala on Monday night, Nov. 12.”

Radio Free HPC Previews the SC18 Student Cluster Competition

In this podcast, Radio Free HPC previews the SC18 Student Cluster Competition. “The Competition was developed in 2007 to provide an immersive high performance computing experience to undergraduate and high school students. With sponsorship from hardware and software vendor partners, student teams design and build small clusters, learn designated scientific applications, apply optimization techniques for their chosen architectures, and compete in a non-stop, 48-hour challenge at the SC conference to complete a real-world scientific workload, showing off their HPC knowledge for conference attendees and judges.”