In this special guest feature, Braden Cooper, Product Marketing Manager at One Stop Systems (OSS), suggests that AI inferencing platforms must process data in real time to make the split-second decisions required to maximize effectiveness. Without compromising the size of the data set, the best way to scale model training speed is to add modular data processing nodes.
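The scaling approach the teaser describes maps onto the standard data-parallel training pattern, where each added node works on a shard of the full data set so throughput grows without shrinking the data. Below is a minimal, hedged sketch of that pattern, assuming a PyTorch environment launched with torchrun; the model, dataset, and hyperparameters are placeholders, not OSS's actual stack.

```python
# Minimal data-parallel training sketch (illustrative only, not OSS's stack).
# Launch with: torchrun --nproc_per_node=4 train.py
# Each process trains on its own shard of the dataset, so adding nodes/GPUs
# scales throughput without reducing the size of the data set.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU clusters

    # Placeholder dataset and model; a real workload would stream field data.
    data = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
    sampler = DistributedSampler(data)  # hands each rank a distinct shard
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    model = DDP(torch.nn.Linear(16, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shard assignment each epoch
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # DDP all-reduces gradients here
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Because every rank sees a different shard and gradients are averaged across ranks, adding nodes shortens time-to-train while the aggregate data set stays at full size.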
Practical Hardware Design Strategies for Modern HPC Workloads
Many new technologies used in High Performance Computing (HPC) have made new application areas possible. Advances like multi-core CPUs, GPUs, NVMe storage, and others have created application verticals including accelerator-assisted HPC, GPU-based deep learning, fast storage and parallel file systems, and big data analytics. In this special insideHPC technology guide sponsored by our friends over at Tyan, we look at practical hardware design strategies for modern HPC workloads.
Video: PCI Express 6.0 Specification to Reach 64 GigaTransfers/sec
In this video, PCI-SIG President and Board Member Al Yanes shares an overview of the PCI Express 5.0 and 6.0 specifications. “With the PCIe 6.0 specification, PCI-SIG aims to answer the demands of such hot markets as Artificial Intelligence, Machine Learning, networking, communication systems, storage, High-Performance Computing, and more.”
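For context on the headline number: 64 GT/s is the per-lane transfer rate, and each generation doubles the rate of the one before, so raw link bandwidth scales with lane count. A quick back-of-the-envelope calculation follows; it covers raw rates only and ignores FLIT/encoding overhead, so delivered throughput is somewhat lower.

```python
# Back-of-the-envelope PCIe raw bandwidth: GT/s per lane * lanes / 8 bits per byte.
# Ignores encoding/FLIT overhead, so real-world throughput is somewhat lower.
GT_PER_LANE = {"PCIe 4.0": 16, "PCIe 5.0": 32, "PCIe 6.0": 64}

def raw_gb_per_s(gen: str, lanes: int = 16) -> float:
    """Raw unidirectional bandwidth in GB/s for a link of the given width."""
    return GT_PER_LANE[gen] * lanes / 8

for gen in GT_PER_LANE:
    print(f"{gen} x16: {raw_gb_per_s(gen):.0f} GB/s per direction")
# PCIe 6.0 x16 -> 128 GB/s per direction (~256 GB/s bidirectional)
```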
‘AI on the Fly’: Moving AI Compute and Storage to the Data Source
The impact of AI is just starting to be realized across a broad spectrum of industries. Tim Miller, Vice President Strategic Development at One Stop Systems (OSS), highlights a new approach — ‘AI on the Fly’ — where specialized high-performance accelerated computing resources for deep learning training move to the field near the data source. Moving AI computation to the data is another important step in realizing the full potential of AI.