Accelerating Finite Element Analysis with Intel Xeon Phi

With the introduction of the Intel Scalable System Framework, the Intel Xeon Phi processor can speed up Finite Element Analysis significantly. Using highly tuned math libraries such as the Intel Math Kernel Library (Intel MKL), FEA applications can execute their math routines in parallel across the processor's many cores.
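
As a rough sketch (not taken from the article), the kind of dense kernel an FEA solver might hand to Intel MKL is a matrix-matrix multiply, which the library's threaded BLAS runs across all available cores. The matrix size and thread count below are assumptions for illustration only.

    #include <stdio.h>
    #include <stdlib.h>
    #include <mkl.h>   /* Intel MKL: cblas_dgemm, mkl_set_num_threads */

    int main(void) {
        const int n = 1024;   /* assumed problem size for illustration */
        double *A = malloc(sizeof(double) * n * n);
        double *B = malloc(sizeof(double) * n * n);
        double *C = malloc(sizeof(double) * n * n);
        for (int i = 0; i < n * n; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

        mkl_set_num_threads(68);   /* assumption: one thread per core on a 68-core part */

        /* C = 1.0 * A * B + 0.0 * C, executed in parallel by MKL's threaded BLAS */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);

        printf("C[0] = %f\n", C[0]);
        free(A); free(B); free(C);
        return 0;
    }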

Machine Learning and the Intel Xeon Phi Processor

“With up to 72 processing cores, the Intel Xeon Phi processor x200 can accelerate applications tremendously. Each core contains two Advanced Vector Extensions (AVX-512) vector processing units, which speed up floating point performance. This is important for machine learning applications, which in many cases use the Fused Multiply-Add (FMA) instruction.”
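
To illustrate why FMA matters (this sketch is not from the article), the inner loop of a dot product, a building block of many machine-learning kernels, maps directly onto the AVX-512 fused multiply-add intrinsic available on the Xeon Phi x200. Compile with -xMIC-AVX512 (icc) or -mavx512f (gcc).

    #include <immintrin.h>  /* AVX-512F intrinsics */

    /* Dot product of two float arrays; n is assumed to be a multiple of 16. */
    float dot_fma(const float *a, const float *b, int n) {
        __m512 acc = _mm512_setzero_ps();
        for (int i = 0; i < n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);
            __m512 vb = _mm512_loadu_ps(b + i);
            /* Fused multiply-add: acc = va * vb + acc in a single instruction */
            acc = _mm512_fmadd_ps(va, vb, acc);
        }
        return _mm512_reduce_add_ps(acc);  /* horizontal sum of the 16 lanes */
    }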

CCIX Open Acceleration Framework to Coherently Share Data with Accelerators

Today AMD, ARM, Huawei, IBM, Mellanox, Qualcomm, and Xilinx announced a collaboration to bring the CCIX high-performance open acceleration framework to data centers. The companies are collaborating on the specification for the new Cache Coherent Interconnect for Accelerators (CCIX). For the first time in the industry, a single interconnect technology specification will ensure that processors using different instruction set architectures (ISAs) can coherently share data with accelerators and enable efficient heterogeneous computing – significantly improving compute efficiency for servers running data center workloads.

2016 Predictions from Radio Free HPC

In this podcast, the Radio Free HPC team makes their tech predictions for 2016. Will secure firmware be the key differentiator for HPC vendors? Will this be the year of FPGAs? And could we see a 100 Petaflop machine on the TOP500 before the year ends?

New GPUs Accelerate HPC Applications

In the past few years, accelerated computing has become strategically important for a wide range of applications. To gain performance on a variety of codes, hardware and software developers have concentrated their efforts on creating systems that accelerate certain applications by a significant amount compared to what was previously possible.

Load Balancing Using OpenMP 4.0

The OpenMP 4.0 standard now allows portions of an application to be offloaded, in order to take better advantage of many-core accelerators such as the Intel Xeon Phi coprocessor.
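
A minimal sketch of that offload model, assuming a simple vector-add kernel rather than anything from a real application: the target directive sends the region to an attached device, and the map clauses control data movement between host and device.

    #include <stdio.h>

    #define N 100000

    int main(void) {
        float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* OpenMP 4.0: offload this region to an attached device such as a
           Xeon Phi coprocessor; map clauses copy x in and y in/out. */
        #pragma omp target map(to: x[0:N]) map(tofrom: y[0:N])
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] += x[i];

        printf("y[0] = %f\n", y[0]);
        return 0;
    }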

This Week in HPC: Startup Unveils Storage-Centric Architecture and PayPal Meets Moonshot

In this episode of This Week in HPC, Michael Feldman and Addison Snell from Intersect360 Research discuss the new Fortissimo Foundation from A3Cube, a clustered, pervasive, global direct-remote I/O access system. For more details, check out our A3Cube Slidecast over at insideBIGDATA. After that, they look at PayPal’s use of TI KeyStone DSP processors for systems intelligence. By analyzing its chaotic real-time server data, PayPal is getting real-time, organized, intelligent results with extreme energy efficiency using HP’s Moonshot servers.

Nvidia Rolls Out Second-Generation Maxwell GPUs

Nvidia has introduced the new GM204 GPU based on the second-generation of the Maxwell architecture. And while the device is designed for advanced gaming graphics, it also makes for a great CUDA development platform for HPC.

NSF Study to Probe Advantages of FPGAs for Deep Learning in Computer Vision

The National Science Foundation is sponsoring a preliminary study to demonstrate the performance and power advantages of FPGAs over GPUs for Deep Learning in Computer Vision.

Eurotech Combines 64-bit ARM CPUs and NVIDIA GPUs

Today Eurotech announced that the company has teamed up with AppliedMicro Circuits Corporation and NVIDIA to develop a unique HPC system architecture that combines extreme density and best-in-class energy efficiency.