

Virtual SC20 Retrospective: Thinkers’ Thoughts on HPC Today and Tomorrow

Check out this assemblage’s reflections on last week’s virtual SC20. Their thoughts range from big-picture insights on how the event reflects the state of HPC to the keynotes, sessions and announcements they found particularly notable (and worth going back and watching if you missed them). Among the topics covered: the convergence of the Top500 and Green500 supercomputer lists, the emerging earmarks of machine learning HPC workloads, and how supercomputing predictions made at SC in 2006 stack up against predictions for 2035.

MBX Debuts Reference Platforms for Mixed Reality Deployments

Libertyville, IL – MBX Systems, a manufacturer of purpose-built and deployment-ready hardware devices for technology companies, today unveiled a line of portable and rackmount reference platforms optimized to deliver next-generation fidelity and performance for mixed reality applications. The new MBX Varion systems are sized to support a variety of training needs, utilize NVIDIA GPUs and Intel CPUs to push […]

insideHPC Guide to QCT Platform-on-Demand Designed for Converged Workloads

Not too long ago, building a converged environment spanning two domains – High Performance Computing (HPC) and Artificial Intelligence (AI) – required spending heavily on proprietary systems and software in the hope that they would scale as business demands changed. As we’ll see in this insideHPC technology guide, by relying on open source software and the latest high-performance, low-cost system architectures, it is possible to build scalable hybrid on-premises solutions that satisfy the needs of converged HPC/AI workloads while remaining robust and easily manageable.

IBM and AMD in ‘Confidential Computing’ Agreement for Cloud and AI Acceleration

IBM and AMD today announced a multi-year development agreement to enhance the security and artificial intelligence offerings of both companies. The effort builds on open-source software, open standards and open system architectures to drive Confidential Computing in the cloud (see below) and support accelerators across high-performance computing and enterprise critical capabilities such as virtualization and […]

Be (More) Wrong Faster – Dumbing Down Artificial Intelligence with Bad Data

In this white paper, our friends over at Profisee discuss how Master Data Management (MDM) will put your organization on the fast track to automating processes and decisions while minimizing resource requirements and eliminating the risks of feeding AI and ML data that is not fully trusted. In turn, your digital business transformation will be accelerated and your competitive edge will be rock solid.

PNNL’S CENATE Taps ML to Guard DOE Supercomputers Against Illegitimate Workloads

Pacific Northwest National Lab sent along this article today by PNNL’s Allan Brettman, who writes about the advanced techniques used by the lab’s Center for Advanced Technology Evaluation (CENATE) “to judge HPC workload legitimacy that is as stealthy as an undercover detective surveying the scene through a two-way mirror.” This includes machine learning methods, such […]

Sandia: Material in House Paint Could Spur ‘Technology Revolution’

A new method for making non-volatile computer memory may have solved a problem that has been holding back machine learning, and it has the potential to revolutionize technologies like voice recognition, image processing and autonomous driving, according to a team of researchers at Sandia National Laboratories. Working with collaborators from the University of Michigan, the Sandia team published […]

Composable Supercomputing Optimizes Hardware for AI-driven Data Calculation

In this sponsored post, our friend John Spiers, Chief Strategy Officer at Liqid, discusses how composable disaggregated infrastructure (CDI) is emerging as a solution to the roadblocks facing high performance computing. CDI orchestration software dynamically composes GPUs, NVMe SSDs, FPGAs, networking, and storage-class memory into software-defined bare metal servers on demand, enabling unparalleled resource utilization and previously impossible performance for AI-driven data analytics.

Get Your HPC Cluster Productive Faster

In this sponsored post from our friends over at Quanta Cloud Technology (QCT), we see that by simplifying the deployment process from weeks or longer down to days and providing pre-built software packages, organizations can become productive in far less time. Resources can then go toward valuable services that enable more research, rather than toward bringing up an HPC cluster. By using the services QCT offers, HPC systems can achieve a better return on investment (ROI).

Nvidia to Acquire Arm for $40 Billion – Jensen Huang Comments on ‘the Next Major Computing Platform’

Nvidia and SoftBank Group Corp. (SBG) have announced an agreement under which Nvidia will acquire Arm Limited from SBG and the SoftBank Vision Fund in a transaction valued at $40 billion. As part of Nvidia, “Arm will continue to operate its open-licensing model while maintaining the global customer neutrality that has been foundational to its […]