Zero to an ExaFLOP in Under a Second

In this sponsored post, Matthew Ziegler at Lenovo discusses today’s metric for the raw speed of compute. Much like racing cars, servers run “time trials” to gauge their performance on a given workload. There are more SPEC and web benchmarks out there for servers than there are racetracks and drag strips. Perhaps the most important measure is the raw calculating throughput a system delivers: FLOPS, or Floating-Point Operations Per Second.
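As a rough illustration of how that number is estimated (the figures below are illustrative assumptions, not drawn from the post), theoretical peak FLOPS is usually computed as

    peak FLOPS ≈ sockets × cores per socket × clock rate × FLOPs per core per cycle

so a hypothetical two-socket server with 64 cores per socket, a 2.0 GHz clock, and 32 double-precision FLOPs per core per cycle peaks at about 2 × 64 × 2.0×10^9 × 32 ≈ 8.2 teraFLOPS. Measured benchmarks such as LINPACK then report what fraction of that theoretical peak a real workload actually sustains.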

Accelerate Your Applications with ROCm

In this sponsored post, our friends over at AMD discuss how the ROCm platform is designed so that a wide range of developers can build accelerated applications. An entire ecosystem has been created, allowing developers to focus on developing their leading-edge applications.
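The post stays at the ecosystem level, but to give a flavor of what an accelerated application looks like under ROCm, here is a minimal sketch of a HIP vector-add in C++, the single-source style that ROCm’s hipcc compiler builds for AMD GPUs (the kernel, sizes, and launch parameters are illustrative choices, not taken from the post):

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Each GPU thread adds one pair of elements.
    __global__ void vadd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
        float *da, *db, *dc;
        hipMalloc((void**)&da, n * sizeof(float));
        hipMalloc((void**)&db, n * sizeof(float));
        hipMalloc((void**)&dc, n * sizeof(float));
        hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

        // 256 threads per block is an illustrative choice, not a ROCm requirement.
        int threads = 256, blocks = (n + threads - 1) / threads;
        hipLaunchKernelGGL(vadd, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);

        hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);  // expect 3.0
        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }

Built with something like hipcc vadd.cpp -o vadd, the same source can also be compiled for NVIDIA GPUs through HIP’s CUDA backend, which is the portability point the ROCm ecosystem is making.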

Accelerate Your Development of GPU Based Innovative Applications

In this sponsored post by our friends over at AMD, we take a deep dive into how GPUs have become an essential component for innovative organizations that require the highest-performing clusters, whether one server or thousands of servers. Many High-Performance Computing (HPC) and Machine Learning (ML) applications have demonstrated tremendous performance gains by using one or more GPUs in conjunction with powerful CPUs. Over 25% of the systems on the Top500 list of the most powerful supercomputers on the planet use accelerators, specifically GPUs, to achieve teraflop and petaflop speeds.

Smaller, faster and cooler: Innovations in Dell Precision mobile workstations surmount the latest thermal challenges

This white paper describes how, with the latest Precision models, Dell continues to pioneer smarter thermals for mobile workstations, allowing processor and GPU innovations to evolve so you can be more productive. To get the peak performance your demanding workloads require in easily portable sizes, Dell Technologies offers Precision mobile workstations that best meet your needs.

Brightskies Deploys Open Source RTM Application for Easy Optimization across Multiple Architectures

In this sponsored post on behalf of Intel, we see that in today’s high-performance computing applications, many different pieces of hardware can perform data-centric functions. With diverse accelerators entering the market, programming for multiple architectures has become a significant barrier for software developers.
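The post does not name the programming model at this point, but Intel’s answer to that barrier is the oneAPI toolchain built around SYCL, in which a single C++ source can target CPUs, GPUs, and other accelerators. As a minimal, illustrative SYCL 2020 sketch (not code from the Brightskies RTM application), one kernel is written once and dispatched to whatever device the runtime selects:

    #include <sycl/sycl.hpp>
    #include <iostream>

    int main() {
        constexpr size_t N = 1 << 20;
        sycl::queue q;  // default selector: picks a GPU, CPU, or other device at runtime

        // Unified shared memory visible to both host and device.
        float* a = sycl::malloc_shared<float>(N, q);
        float* b = sycl::malloc_shared<float>(N, q);
        for (size_t i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // The same single-source kernel runs unchanged on any supported architecture.
        q.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
            a[i] += b[i];
        }).wait();

        std::cout << "a[0] = " << a[0] << " on "
                  << q.get_device().get_info<sycl::info::device::name>() << "\n";

        sycl::free(a, q);
        sycl::free(b, q);
    }

Retargeting the code to a different accelerator is then a matter of device selection at runtime rather than a rewrite, which is exactly the development barrier the post describes.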

Paradigm Change: Reinventing HPC Architectures with In-Package Optical I/O

In this white paper, our friends over at Ayar Labs discuss an important paradigm change: reinventing HPC architectures with in-package optical I/O. The introduction of in-package optical I/O technology helps HPC centers accelerate the slope of compute progress needed to tackle ever-growing scientific problem sizes and HPC/AI convergence. Ayar Labs expects its technology not only to extend the traditional architecture and put the HPC industry back on track, but also to produce an inflection point that fundamentally changes the slope of the compute performance efficiency curve. The key will be enabling converged HPC/AI centers to build systems with disaggregated CPUs, GPUs, FPGAs, and custom ASICs interconnected on equal footing.

Inspur Introduces Leading Designs of NVIDIA A100 Tensor Core GPU Servers for AI and HPC

In this special guest feature, our friends over at Inspur write about how the company is delivering new servers that address the most demanding performance requirements of companies implementing AI and ML in their workflows. Reducing the Total Cost of Ownership (TCO) while increasing the productivity of their teams is critical for CIOs and Line of Business leadership.

Interview: Mark Papermaster, CTO and EVP, Technology and Engineering, AMD

In this interview, Mark Papermaster, CTO and EVP, Technology and Engineering from AMD describes the company’s presence in the HPC space along with new trends in the industry. At a higher level, Mark also offers his views of the semiconductor industry in general as well as areas of innovation that AMD plans to cultivate. The discussion then turns to the exascale era of computing.

From Forty Days to Sixty-five Minutes without Blowing Your Budget Thanks to GigaIO FabreX

In this sponsored post, Alan Benjamin, President and CEO of GigaIO, discusses how the ability to attach a group of resources to one server, run the job(s), and then reallocate the same resources to other servers is the obvious solution to a growing problem: AI and HPC applications are changing at an incredible rate, driving the need for ever-faster GPUs and FPGAs to take advantage of new software updates and newly developed applications.

Fast Track your AI Workflows

In this special guest feature, our friends over at Inspur write that new, highly compute-intensive workloads often require accelerators. Accelerators can speed up computation and allow AI and ML algorithms to run in real time. Inspur is a leading supplier of solutions for HPC and AI/ML workloads.