Ayar Labs Joins DARPA PIPES Project as Intel Optical IO Provider

Optical startup Ayar Labs has been selected as Intel’s optical I/O solution partner for Intel’s recently awarded DARPA PIPES research project. “The goal of PIPES (Photonics in the Package for Extreme Scalability) is to develop integrated optical I/O solutions co-packaged with next-generation FPGA/CPU/GPU and accelerators in Multi-Chip Packages (MCP) to provide extreme data rates (input/output) at ultra-low power over much longer distances than supported by current technology. In the first phase of the project, the Ayar Labs TeraPHY chiplet will be co-packaged with an Intel FPGA using the AIB (Advanced Interface Bus) interface and Intel’s EMIB silicon-bridge packaging.”

Intel Unveils New GPU Architecture and oneAPI Software Stack for HPC and AI

Today at SC19, Intel unveiled its new GPU architecture optimized for HPC and AI as well as an ambitious new software initiative called oneAPI that represents a paradigm shift from today’s single-architecture, single-vendor programming models. “HPC and AI workloads demand diverse architectures, ranging from CPUs, general-purpose GPUs and FPGAs, to more specialized deep learning NNPs which Intel demonstrated earlier this month,” said Raja Koduri, senior vice president, chief architect, and general manager of architecture, graphics and software at Intel. “Simplifying our customers’ ability to harness the power of diverse computing environments is paramount, and Intel is committed to taking a software-first approach that delivers unified and scalable abstraction for heterogeneous architectures.”
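
oneAPI centers on Data Parallel C++ (DPC++), which builds on ISO C++ and the Khronos SYCL standard. As a rough sketch of what a unified abstraction for heterogeneous architectures looks like in source code, the example below expresses one vector-addition kernel that the SYCL runtime can offload to whichever device it selects (CPU, GPU, or other accelerator). Exact header paths and API details vary across oneAPI releases, so treat this as illustrative rather than canonical.

    // Minimal DPC++/SYCL sketch: the same kernel source can run on a CPU,
    // GPU, or other accelerator, depending on the device the runtime selects.
    // Header path varies by release (older releases use <CL/sycl.hpp>).
    #include <sycl/sycl.hpp>
    #include <cstdio>

    int main() {
        constexpr size_t n = 1024;
        sycl::queue q;  // default selector picks an available device

        // Unified shared memory visible to both host and device
        float *a = sycl::malloc_shared<float>(n, q);
        float *b = sycl::malloc_shared<float>(n, q);
        float *c = sycl::malloc_shared<float>(n, q);
        for (size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // One data-parallel kernel, written once, offloaded to the selected device
        q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            c[i] = a[i] + b[i];
        }).wait();

        std::printf("device: %s, c[0] = %f\n",
                    q.get_device().get_info<sycl::info::device::name>().c_str(),
                    c[0]);

        sycl::free(a, q); sycl::free(b, q); sycl::free(c, q);
        return 0;
    }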

Enabling FPGAs

Field Programmable Gate Arrays (FPGAs) are an exciting technology that lets hardware designers create new digital circuits through a programming environment. Unlike fixed-function hardware, which is designed once, or software, which must conform to an existing hardware architecture, an FPGA lets developers describe a custom circuit tailored to a specific problem.
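
Intel’s oneAPI initiative described above treats FPGAs as one more target for the same DPC++/SYCL code. The sketch below shows the single-work-item kernel style commonly used when compiling for FPGAs; the fpga_emulator_selector_v used to pick the FPGA emulator is an Intel-specific extension whose header and name have shifted between oneAPI releases, so this is an assumption-laden illustration, not a definitive recipe.

    // Illustrative DPC++ kernel in the style typically used for FPGAs:
    // a single work-item whose loop the compiler can pipeline into hardware.
    // The FPGA emulator selector is an Intel extension; its header and name
    // differ between oneAPI releases (shown here as in recent documentation).
    #include <sycl/sycl.hpp>
    #include <sycl/ext/intel/fpga_extensions.hpp>
    #include <vector>
    #include <cstdio>

    int main() {
        constexpr size_t n = 256;
        std::vector<float> in(n, 3.0f), out(n, 0.0f);

        // Target the FPGA emulator so the design can be tested without hardware
        sycl::queue q{sycl::ext::intel::fpga_emulator_selector_v};

        {
            sycl::buffer<float> in_buf(in), out_buf(out);
            q.submit([&](sycl::handler &h) {
                sycl::accessor a(in_buf, h, sycl::read_only);
                sycl::accessor b(out_buf, h, sycl::write_only);
                // single_task: one work-item; the loop is pipelined by the FPGA compiler
                h.single_task([=]() {
                    for (size_t i = 0; i < n; ++i)
                        b[i] = a[i] * 2.0f;
                });
            });
        }  // buffer destruction copies results back to the host vectors

        std::printf("out[0] = %f\n", out[0]);
        return 0;
    }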

How Intel FPGAs Power Azure Deep Learning

Microsoft Azure CTO Mark Russinovich recently disclosed major advances in Microsoft’s hyperscale deployment of Intel field programmable gate arrays (FPGAs). These advances have resulted in what Microsoft describes as the industry’s fastest public cloud network, as well as new technology for accelerating Deep Neural Networks (DNNs), which replicate “thinking” in a manner conceptually similar to the human brain.