Why Hardware Acceleration Is The Next Battleground In Processor Design

In this special guest feature, Theodore Omtzigt from Stillwater Supercomputing writes that as workloads specialize due to scale, hardware-accelerated solutions will continue to be cheaper than approaches that use general-purpose components. “If you’re a CIO who manages integrations of third-party hardware and software, be aware of new hardware acceleration technologies that can reduce the cost of service delivery by orders of magnitude.”

Announcing Google’s New TPU Dev Board for Machine Learning on the Edge

Google just launched Coral, a beta platform for building intelligent devices with local AI. To enable this initiative, Google is making an edge version of its Tensor Processing Unit available for sale for the first time. “Our first hardware components feature the new Edge TPU, a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps, in a power efficient manner.”
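To make that concrete, here is a minimal, hedged sketch of how such a model could be invoked through the TensorFlow Lite runtime with the Edge TPU delegate; the model filename is a placeholder, not a file from the announcement, and the delegate library path assumes a Linux install.

```python
# Minimal sketch: MobileNet-style classification on an Edge TPU via the
# TensorFlow Lite runtime. Assumes the tflite_runtime package and the
# Edge TPU delegate library (libedgetpu.so.1 on Linux) are installed.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path='mobilenet_v2_edgetpu.tflite',  # hypothetical compiled model
    experimental_delegates=[load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Edge TPU models take quantized uint8 input; a dummy frame stands in
# for a real camera image here.
frame = np.zeros(inp['shape'], dtype=np.uint8)
interpreter.set_tensor(inp['index'], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out['index'])
print('top class:', int(np.argmax(scores)))
```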

Video: TensorFlow for HPC?

In this podcast, Peter Braam looks at how the TensorFlow framework could be used to accelerate high performance computing. “Google has developed TensorFlow, a truly complete platform for ML. The performance of the platform is amazing, and it begs the question if it will be useful for HPC in a similar manner that GPUs heralded a revolution.”
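As a hedged illustration of the idea (our example, not from the talk), TensorFlow's array operations can express a classic HPC kernel directly. The sketch below runs Jacobi relaxation steps on a 2-D grid and lets TensorFlow place the computation on whatever accelerator is available:

```python
# Sketch: using TensorFlow as a generic array-computing engine for an
# HPC-style stencil -- Jacobi iteration on a 2-D grid with zero boundary.
import tensorflow as tf

def jacobi_step(u):
    # Replace each interior point by the average of its four neighbors.
    interior = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                       u[1:-1, :-2] + u[1:-1, 2:])
    return tf.pad(interior, [[1, 1], [1, 1]])  # zero Dirichlet boundary

step = tf.function(jacobi_step)  # trace once, then run as a compiled graph
u = tf.random.uniform((1024, 1024))
for _ in range(100):
    u = step(u)
print(float(tf.reduce_max(u)))
```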

Micron Joins CERN openlab

Last week at SC18, Micron announced that the company has joined CERN openlab, a unique public-private partnership, by signing a three-year agreement. Under the agreement, Micron will provide CERN with advanced next-generation memory solutions to further machine learning capabilities for high-energy physics experiments at the laboratory. Micron memory solutions that incorporate neural network capabilities will be tested in the data-acquisition systems of experiments at CERN.

Video: Quantum Computing and Quantum Supremacy at Google

John Martinis from Google presents: Quantum Computing and Quantum Supremacy. “The goal of the Google Quantum AI lab is to build a quantum computer that can be used to solve real-world problems. Our strategy is to explore near-term applications using systems that are forward compatible to a large-scale universal error-corrected quantum computer. In order for a quantum processor to be able to run algorithms beyond the scope of classical simulations, it requires not only a large number of qubits…”
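For a feel of the programming model these processors target, here is a minimal sketch (our example, not from the talk) using Cirq, Google's open-source quantum framework, to build and simulate a two-qubit entangling circuit:

```python
# Sketch: a Bell-pair circuit in Cirq, run on the classical simulator.
import cirq

q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(q0),             # put q0 into superposition
    cirq.CNOT(q0, q1),      # entangle q0 and q1
    cirq.measure(q0, q1, key='m'),
)
result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key='m'))  # roughly even counts of 00 and 11
```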

Google Goes for Quantum Supremacy with 72-Qubit Bristlecone Chip

Over at the Google Research Blog, Julian Kelly writes that the company has developed a new 72-qubit quantum processor called Bristlecone. “We are cautiously optimistic that quantum supremacy can be achieved with Bristlecone, and feel that learning to build and operate devices at this level of performance is an exciting challenge!”
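To see why a 72-qubit device matters for supremacy claims, here is a back-of-envelope calculation (our illustration, not from the post) of the memory a classical machine would need to hold the full state vector, at 16 bytes per complex amplitude:

```python
# Memory for a dense n-qubit state vector: 2**n complex amplitudes,
# 16 bytes each (two 64-bit floats).
for n in (30, 49, 72):
    print(f'{n} qubits -> {2**n * 16 / 1e12:.3g} TB')
```

At 72 qubits the state vector would occupy roughly 7.6 x 10^10 TB, far beyond what any classical machine can store for exact simulation.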

Agenda Posted for April HPC User Forum in Tucson

The HPC User Forum has posted its speaker agenda for its upcoming meeting in Tucson. Hosted by Hyperion Research, the event takes place April 16-18 at Loews Ventana Canyon. “The April meeting will explore the status and prospects for quantum computing and the use of HPC for environmental research, especially natural disasters such as earthquakes and the recent California wildfires. As always, the meeting will also look at new developments in HPDA-AI, cloud computing and other areas of continuing interest to the HPC community. A special session will look at the growing field of processors and accelerators supporting HPC systems.”

USRA Upgrades D-Wave Quantum Computer to 2000 Qubits

Today the Universities Space Research Association (USRA) announced it has upgraded its current quantum annealing computer to a D-Wave 2000Q system. The computer offers the promise of solving challenging problems in a variety of applications, including machine learning, scheduling, diagnostics, medicine, and biology, among others.
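As a hedged sketch of how work is posed to a quantum annealer like the 2000Q (our example, not from the announcement), a problem is stated as a QUBO, a quadratic objective over binary variables; here the dimod reference sampler from D-Wave's open-source Ocean tools stands in for real hardware:

```python
# Sketch: a 2-variable QUBO, E(x) = -x0 - x1 + 2*x0*x1, whose minima
# (energy -1) are x = (0, 1) and x = (1, 0).
import dimod

qubo = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
sampler = dimod.ExactSolver()  # brute-force stand-in for the annealer
best = sampler.sample_qubo(qubo).first
print(best.sample, best.energy)
```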

Video: What is Wrong with Convolutional Neural Nets?

Geoffrey Hinton from the University of Toronto gave this talk at the Vector Institute. “What is Wrong with ‘standard’ Convolutional Neural Nets? They have too few levels of structure: Neurons, Layers, and Whole Nets. We need to group neurons in each layer in ‘capsules’ that do a lot of internal computation and then output a compact result.”
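One concrete ingredient of the capsules idea is the “squash” nonlinearity from Sabour, Frosst, and Hinton's dynamic-routing paper; the sketch below (our reconstruction, not code from the talk) scales each capsule's output vector to length below 1 so that vector length can encode the probability that the entity is present:

```python
# Sketch: the capsule "squash" nonlinearity,
# squash(v) = (|v|^2 / (1 + |v|^2)) * (v / |v|).
import numpy as np

def squash(v, eps=1e-9):
    # v: capsule output vectors, shape (..., dim)
    norm_sq = np.sum(v * v, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + eps)

caps = np.random.randn(10, 8)                 # 10 capsules, 8-D outputs
print(np.linalg.norm(squash(caps), axis=-1))  # all lengths < 1
```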

Google becomes STEM-Trek Supporter for PEARC17 Student Program

Today STEM-Trek announced that Google, Inc. is a STEM-Trek Platinum supporter of the PEARC17 Student Program. The donation will increase the number of students who can participate in the Practice & Experience in Advanced Research Computing conference, which will be held July 9-13 in New Orleans.