
New NEC AI Supercomputer at Technische Universität Dresden

Düsseldorf, 22 April, 2021 – NEC Deutschland GmbH has announced that Technische Universität Dresden has put a new High Performance Computing (HPC) cluster into full operation. The new HPC cluster is operated by the Center for Information Services and High Performance Computing (ZIH) at TU Dresden for the Germany-wide AI competence centre ScaDS.AI […]

Arm Releases Details on 2 Neoverse Platforms and Mesh Interconnect for HPC, ML

Arm Holdings this morning released information on two new compute platforms and an interconnect, first announced last September, for HPC, machine learning and other workloads: The Arm Neoverse V1 platform is a new computing tier for Arm and the first Arm-designed core to support the Scalable Vector Extension (SVE), delivering 50 percent more performance for HPC and […]

Scality and HPE Launch Object Storage Software for Kubernetes

SAN FRANCISCO – April 27, 2021 – Scality today introduced ARTESCA, a lightweight, enterprise-grade, cloud-native object storage solution designed for the Kubernetes era. Supported on HPE all-flash and hybrid data storage servers, ARTESCA addresses use cases from the edge to the core to the cloud, with emphasis on cloud-native, AI/ML, big data analytics and in-memory […]

insideHPC Guide to Perfectly Tailored AI, ML or HPC Environment

In this insideHPC technology guide, “How Expert Design Engineering and a Building Block Approach Can Give You a Perfectly Tailored AI, ML or HPC Environment,” we will present things to consider when building a customized supercomputer-in-a-box system with the help of experts from Silicon Mechanics. When considering a large, complex system, such as a high-performance computing […]

Rice Univ. Researchers Claim 15x AI Model Training Speed-up Using CPUs

Reports are circulating in AI circles that researchers from Rice University claim a breakthrough in AI model training acceleration – without using accelerators. Running AI software on commodity x86 CPUs, the Rice computer science team says neural networks can be trained 15x faster than on platforms utilizing GPUs. If valid, the new approach would be a double boon for organizations implementing AI strategies: faster model training using less costly microprocessors.

5 Considerations When Building an AI / GPU Cluster

AI continues to change the way many organizations conduct their work and research. Deep learning applications are constantly evolving, and organizations are adapting to new technologies to improve their performance and capabilities. Companies that fail to adapt to these emerging technologies run the risk of falling behind the competition. At PSSC Labs we want to make sure that doesn’t happen to you. There is a lot going on in the world of AI and even more to think about when building a GPU-heavy AI server or cluster system. This article presents five essential elements of an AI/GPU computing environment.

Nvidia Romps in Latest MLPerf AI Benchmark Results

What has become Nvidia’s regular romp through MLPerf AI benchmarks continued with today’s release of the latest performance measurements across a range of inference workloads, including computer vision, medical imaging, recommender systems, speech recognition and natural language processing. The latest benchmark round received submissions from 17 organizations and includes 1,994 performance and 862 power efficiency […]

Human Brain Project and EBRAINS: Call to Build Services for Sensitive Data

April 21, 2021 – The Human Brain Project and its research infrastructure EBRAINS are looking for partners to help them offer compliant data solutions to the scientific community. In a new call, institutions and companies can apply for funding of up to 1 million euros to support this work. On Friday, May 7, 11:00 to […]

Cerebras Systems: 2.5 Trillion-plus Transistors in 7nm-based 2nd Gen Wafer Scale Engine

Maker of the world’s largest microprocessors, Cerebras Systems, today unveiled what it said is the largest AI chip, the Wafer Scale Engine 2 (WSE-2) — successor to the first WSE introduced in 2019. The 7nm-based WSE-2 exceeds Cerebras’ previous world record with a chip that has 2.6 trillion transistors and 850,000 AI-optimized cores. By comparison, […]

CEA-Leti Announces EU Project to Mimic Multi-Timescale Processing of Biological Neural Systems

GRENOBLE, France – April 20, 2021 – CEA-Leti announced today the launch of an EU project to develop a novel class of algorithms, devices and circuits that reproduce the multi-timescale processing of biological neural systems. The results will be used to build neuromorphic computing systems that can efficiently process real-world sensory signals and natural time-series data […]