Get Your HPC Cluster Productive Faster

In this sponsored post from our friends over at Quanta Cloud Technology (QCT), we see how simplifying the deployment process from weeks or longer to days, along with preparing pre-built software packages, can make organizations productive in a much shorter time. Resources can then go toward more valuable services that enable more research, rather than toward bringing up an HPC cluster. By using the services QCT offers, HPC systems can achieve a better return on investment (ROI).

Supermicro and Preferred Networks (PFN) Collaborate to Develop the World’s Most Efficient Supercomputer

Supermicro and Preferred Networks (PFN) collaborated to develop the world’s most efficient supercomputer, earning the #1 position on the Green500 list. This supercomputer, the MN-3, comprises Intel® Xeon® CPUs and MN-Core™ boards developed by Preferred Networks. In this white paper, read more about the collaboration and how this record-setting supercomputer was developed.

Using AI to See What Eye Doctors Can’t

This white paper explains how Voxeleron, a leader in delivering advanced ophthalmic image analysis and machine learning solutions, is extending ophthalmology’s diagnostic horizons with image analysis based on artificial intelligence (AI) models, trained using Dell Precision workstations with NVIDIA GPUs.

Why developers are turning to ultra-powerful workstations for more creative freedom at less cost

This white paper from Dell Technologies discusses why developers are turning to ultra-powerful workstations for more creative freedom at less cost. Research shows that companies large and small are using powerful workstations with even more powerful graphics processing units (GPUs) as integral parts of their artificial intelligence infrastructure.

IDC: Led by U.S., Global AI Spending Will More than Double by 2024

Those predicting another “AI winter” will have a quarrel on their hands with industry analyst firm International Data Corp., which forecasts global spending on AI will more than double over the next four years, from $50.1 billion in 2020 to more than $110 billion in 2024. According to IDC’s Worldwide Artificial Intelligence Spending Guide, AI […]

An Adaptive Platform for Converged HPC/AI Workloads

In this sponsored post from our friends over at Quanta Cloud Technology (QCT), we learn that, drawing on many years of experience with numerous customers, the company has found converged HPC and AI environments can benefit customers while remaining flexible enough to meet their workload demands. A proven approach to making customers’ HPC and AI systems productive quickly is to deliver all the components together, integrated and tested, with components selected for the known and anticipated workloads and optimized for those requirements.

Rugged COTS Platform Takes On Fast-Changing Needs of Self-Driving Trucks

This white paper by Advantech, “Rugged COTS Platform Takes On Fast-Changing Needs of Self-Driving Trucks,” discusses how the fast-changing needs of autonomous vehicles are forcing compute platforms to evolve. Advantech and Crystal Group are teaming up to power that evolution, with AV trends, compute requirements, and a rugged COTS philosophy converging to drive breakthrough innovation in self-driving truck designs.

Scientists Look to Exascale and Deep Learning for Developing Sustainable Fusion Energy

Scientists from Princeton Plasma Physics Laboratory are leading an Aurora ESP project that will leverage AI, deep learning, and exascale computing power to advance fusion energy research. “With a suite of the world’s most powerful path-to-exascale supercomputing resources at their disposal, William Tang and colleagues are developing models of disruption mitigation systems (DMS) to increase warning times and work toward eliminating major interruption of fusion reactions in the production of sustainable clean energy.”

GPCNeT or GPCNoT?

In this special guest feature, Gilad Shainer from Mellanox Technologies writes that the new GPCNeT benchmark is actually a measure of relative performance under load rather than a measure of absolute performance. “When it comes to evaluating high-performance computing systems or interconnects, there are much better benchmarks available for use. Moreover, the ability to benchmark real workloads is obviously a better approach for determining system or interconnect performance and capabilities. The drawbacks of GPCNeT benchmarks can be much more than its benefits.”