Optical I/O Takes Center Stage at SC23

[SPONSORED GUEST ARTICLE] Integration of optical I/O with an FPGA is just the tip of the iceberg of a new vision: enabling HPC/AI architectural advances through ubiquitous optical interconnects for every piece of compute silicon. If you are attending SC23 November 12-17, be sure to visit Ayar Labs in booth #228 for an exclusive look at the future….

Kickstart Your Business to the Next Level with AI Inferencing

[SPONSORED GUEST ARTICLE] Check out this article from HPE (with NVIDIA). The need to accelerate AI initiatives is real and widespread across all industries. The ability to integrate and deploy AI inferencing with pre-trained models can reduce development time with scalable, secure solutions that would revolutionize how easily you can….

Federated GPU Infrastructure for AI Workflows

[SPONSORED GUEST ARTICLE] With the explosion of use cases such as Generative AI and ML Ops driving tremendous demand for the most advanced GPUs and accelerated computing platforms, there’s never been a better time to explore the “as-a-service” model to help get started quickly. What could take months of shipping delays and massive CapEx investments can be yours on demand….

HPC, AI, ML and Edge Solutions Drive Duos Railcar Inspection System Powered by Dell Technologies and Kalray

Industries such as railways are moving from traditional inspection methods to using AI and ML to perform automated inspection of railcars. Data streaming from the edge at high rates requires the compute power of an HPC cluster, storage and advanced analytics to return results in real time….

Improving Product Quality with AI-based Video Analytics: HPE, NVIDIA and Relimetrics Automate Quality Control in European Manufacturing Facility

Manufacturers are using the power of AI and video analytics to enable better quality control and traceability of quality issues, bringing them one step closer to achieving zero defects and reducing the downstream impacts of poor….

NVIDIA L4 GPU Breakthrough Data Center Universal Accelerator for Efficient Video, AI, and Graphics

The NVIDIA Ada Lovelace architecture L4 Tensor Core GPU is NVIDIA’s most compact data center accelerator for use in mainstream PCIe-based servers and is an ideal means of adding GPU acceleration to CPU-based systems. Delivering universal acceleration and energy efficiency for video, AI, virtual […]

PNY Now Offers NVIDIA RTX 6000 Ada Generation for High Performance Computing (HPC) Workloads

The latest generation of graphics processing units (GPUs) from NVIDIA, based on its Ada Lovelace architecture, is optimized for high performance computing (HPC) workloads. The NVIDIA RTX™ 6000 Ada Generation, available from PNY, is designed….

Overcoming Challenges to Deep Learning Infrastructure

With use cases like computer vision, natural language processing, predictive modeling, and much more, deep learning (DL) provides the kinds of far-reaching applications that change the way technology can impact human existence. The possibilities are limitless, and we’ve just scratched the surface of its potential. There are three significant obstacles for you to be aware of when designing a deep learning infrastructure: scalability, customizing for each workload, and optimizing workload performance.

Recent Results Show HBM Can Make CPUs the Desired Platform for AI and HPC

Third-party performance benchmarks show CPUs with HBM2e memory now have sufficient memory bandwidth and computational capability to match GPU performance on many HPC and AI workloads. Recent Intel and third-party benchmarks provide hard evidence that the upcoming Intel® Xeon® processors codenamed Sapphire Rapids, with high bandwidth memory (HBM2e) and Intel® Advanced Matrix Extensions, can match the performance of GPUs for many AI and HPC workloads.

Successfully Deploy Composable Infrastructure on the Edge to Improve HPC and AI Outside of Traditional Data Centers

Emerging CDI technologies allow you to achieve the cost and availability benefits of cloud computing using on-premises networking equipment. You also benefit from extreme flexibility, being able to dynamically recompose systems and support nearly any workload. Thanks to innovative engineering, these benefits are now available on the edge.