oneAPI is an open, unified, cross-architecture programming model for CPUs and accelerators (GPUs, FPGAs, and others), backed by an open industry effort with support from more than 100 organizations. Based on standards, the programming model simplifies software development and delivers uncompromised performance for accelerated computing without proprietary lock-in, while enabling the integration of existing code.
Los Alamos Claims Quantum Machine Learning Breakthrough: Training with Small Amounts of Data
Researchers at Los Alamos National Laboratory today announced a quantum machine learning “proof” they say shows that training a quantum neural network requires only a small amount of data, “[upending] previous assumptions stemming from classical computing’s huge appetite for data in machine learning, or artificial intelligence.” The lab said the theorem has direct applications, including […]
MLPerf: Latest Results Highlight ‘More Capable ML Training’
Open engineering consortium MLCommons has released new results from MLPerf Training v2.0, which measures how fast various platforms train machine learning models. The organization said the latest MLPerf Training results “demonstrate broad industry participation and up to 1.8X greater performance, ultimately paving the way for more capable intelligent systems….” As it has done with previous […]
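For context, MLPerf Training scores are time-to-train measurements: the clock runs from the start of training until the model first reaches a fixed quality target, so a speedup score reflects how quickly a platform hits that target. The following Python fragment is a minimal sketch of that time-to-train idea, not MLCommons’ actual harness; the callables and the accuracy target are hypothetical stand-ins:

```python
import time

# Hypothetical quality target; MLPerf sets a specific threshold per benchmark
# (e.g., a top-1 accuracy target for the image-classification task).
ACCURACY_TARGET = 0.759

def time_to_train(model, train_one_epoch, evaluate, max_epochs=100):
    """Measure wall-clock time until `model` reaches the quality target.

    `model`, `train_one_epoch`, and `evaluate` are placeholder callables
    standing in for a real training and evaluation pipeline.
    """
    start = time.perf_counter()
    for epoch in range(max_epochs):
        train_one_epoch(model)
        accuracy = evaluate(model)
        if accuracy >= ACCURACY_TARGET:
            return time.perf_counter() - start, epoch + 1
    raise RuntimeError("quality target not reached within max_epochs")
```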
Azure Adopts AMD Instinct MI200 GPU for Large-Scale AI Training
SANTA CLARA, Calif., May 26, 2022 — Microsoft has announced the use of AMD Instinct MI200 GPU accelerators for large-scale AI training workloads. Microsoft also announced it is working with the PyTorch Core team and AMD data center software team to optimize the performance and developer experience for customers running PyTorch on Microsoft Azure. AMD […]
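One reason the PyTorch collaboration matters: on ROCm builds of PyTorch, AMD GPUs such as the Instinct MI200 are exposed through the same `torch.cuda` device API used on NVIDIA hardware, so most existing training code runs without changes. A minimal sketch under that assumption (the linear model and synthetic batch are placeholders):

```python
import torch
import torch.nn as nn

# ROCm builds of PyTorch report AMD GPUs through the torch.cuda API,
# so device selection is identical to the NVIDIA path.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(512, 10).to(device)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on synthetic data, to show the device round-trip.
inputs = torch.randn(64, 512, device=device)
labels = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
print(f"device: {device}, loss: {loss.item():.4f}")
```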
Rice Univ. Researchers Claim 15x AI Model Training Speed-up Using CPUs
Reports are circulating in AI circles that researchers from Rice University claim a breakthrough in AI model training acceleration – without using accelerators. Running AI software on commodity x86 CPUs, the Rice computer science team says neural networks can be trained 15x faster than platforms utilizing GPUs. If valid, the new approach would be a double boon for organizations implementing AI strategies: faster model training using less costly processors.
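The report does not describe the Rice team’s training method, so no attempt is made to reproduce it here; as a hedged illustration of how a CPU-versus-GPU training-speed comparison might be timed (placeholder model and data, PyTorch assumed):

```python
import time
import torch
import torch.nn as nn

def benchmark_training(device_name, steps=50):
    """Time a fixed number of training steps on the given device.

    Placeholder model and data; a fair comparison would run the same
    network and dataset to the same quality target on both platforms.
    """
    device = torch.device(device_name)
    model = nn.Sequential(
        nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)
    ).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    inputs = torch.randn(256, 1024, device=device)
    labels = torch.randint(0, 10, (256,), device=device)

    start = time.perf_counter()
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(model(inputs), labels).backward()
        optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
    return time.perf_counter() - start

print(f"CPU: {benchmark_training('cpu'):.2f}s for 50 steps")
if torch.cuda.is_available():
    print(f"GPU: {benchmark_training('cuda'):.2f}s for 50 steps")
```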