Programming Many Tasks for Many Cores

“Tasks keep the CPUs busy. When a core is working, rather than waiting for work to be sent to it, the application progresses towards its conclusion. A caveat to all of this is to remember that tasking and threading models remain on the system they were created on. Tasks that use a shared memory space only work within the shared memory segment that the processing cores can reach. Shared memory on the CPU side of the system is separate from the shared memory on the coprocessor. The threads created will remain on the part of the system where they started.”
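The sketch below (my own illustration, not code from the article) uses standard OpenMP tasking in C to show the first point: one thread generates tasks and whichever cores are idle pull them, so the cores stay busy rather than waiting for work. The helper process_chunk and the sizes are hypothetical. The same stay-where-created rule described above applies if such a region is offloaded: tasks created on the coprocessor can only touch the coprocessor's shared memory.

```c
/* Minimal OpenMP tasking sketch (illustrative, not from the article).
 * One thread creates the tasks; idle cores pull and execute them, so
 * cores keep working instead of waiting for work to be handed to them.
 * All tasks here share the host's memory; tasks created inside an
 * offload region would likewise be confined to the coprocessor. */
#include <stdio.h>
#include <omp.h>

static void process_chunk(double *data, int start, int len)
{
    for (int i = start; i < start + len; i++)
        data[i] = data[i] * 2.0 + 1.0;   /* stand-in for real work */
}

int main(void)
{
    enum { N = 1 << 20, CHUNK = 1 << 14 };
    static double data[N];

    #pragma omp parallel
    #pragma omp single                    /* one thread creates the tasks... */
    {
        for (int start = 0; start < N; start += CHUNK) {
            #pragma omp task firstprivate(start)   /* ...idle cores run them */
            process_chunk(data, start, CHUNK);
        }
    }                                     /* implicit barrier: all tasks done */

    printf("data[0] = %f, max OpenMP threads = %d\n",
           data[0], omp_get_max_threads());
    return 0;
}
```

Build with an OpenMP-capable compiler (for example, gcc -fopenmp); the implicit barrier at the end of the parallel region guarantees every task has finished before the result is printed.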

Video: Rolling Out the New Intel Xeon Phi Processor at ISC 2016

In this video from ISC 2016, Barry Davis from Intel describes the company’s brand new Intel Xeon Phi Processor and how it fits into the Intel Scalable System Framework. “Eliminate node bottlenecks, simplify your code modernization and build on a power-efficient architecture with the Intel Xeon Phi™ processor, a foundational element of Intel Scalable System Framework. The bootable host processor offers an integrated architecture for powerful, highly parallel performance that will pave your path to deeper insight, innovation and impact for today’s most-demanding High Performance Computing applications, including Machine Learning. Supported by a comprehensive technology roadmap and robust ecosystem, the Intel Xeon Phi processor is a future-ready solution that maximizes your return on investment by using open standards code that is flexible, portable and reusable.”

Hewlett Packard Enterprise Rolls Out Software Defined HPC Platform

Today, Hewlett Packard Enterprise (HPE) introduced new high-performance computing solutions that aim to accelerate HPC adoption by enabling faster time-to-value and increased competitive differentiation through better parallel processing performance, reduced complexity and deployment time. These innovations include: HPE Core HPC Software Stack with HPE Insight Cluster Management Utility v8.0: Designed to meet the needs of […]

Cray Adds Intel Xeon Phi Processor to Flagship Line of Supercomputers

Today Cray introduced new performance breakthroughs that will provide customers with the fastest Cray XC supercomputers and Cray Sonexion storage systems to date. “Our customers are taking on increasingly complex computational problems that are expanding the boundaries of supercomputing and storage performance capabilities,” said Ryan Waite, Cray’s senior vice president of products. “We partner closely with our customers to understand their unique requirements and deliver new systems that deliver peak performance. For many of our customers, Intel Xeon Phi processors and Lustre parallel file systems are critical components of their supercomputing infrastructure. Our close collaboration with Intel helps to ensure our Intel Xeon Phi processor-based solutions scale to the most demanding performance requirements and our close partnership with Seagate helps scale Lustre to new levels of performance and stability.”

Supermicro Launches Wide Range of HPC Solutions at ISC 2016

Supermicro is showcasing its High Performance Computing solutions at ISC 2016 this week in Frankfurt, Germany. “With Supermicro HPC solutions, deep learning, engineering, and scientific fields can scale out compute clusters to accelerate their most demanding workloads and achieve fastest time-to-results with maximum performance per watt, per square foot, and per dollar,” said Charles Liang, President and CEO of Supermicro. “With our latest innovations incorporating Intel Xeon Phi processors in a performance and density optimized Twin architecture, 8-socket scalable servers, 100Gbps OPA switch for high bandwidth connectivity, and high-performance NVMe for Lustre based storage, our customers can accelerate their applications and innovations to address the most complex real world problems.”

Cray Announces Big Wins at ISC 2016

Today Cray announced that the company has been awarded new contracts for its Cray XC40 supercomputer, two Cray CS400 cluster supercomputers, a Cray Urika-GX agile analytics platform, and its DataWarp applications I/O accelerator to customers in Japan, the United Kingdom, and the United States.

Univa Grid Engine Supports New Intel Xeon Phi Processor

Today Univa announced the release of Univa Grid Engine Version 8.4.0 with preview support for the Intel Xeon Phi processor (formerly code-named “Knights Landing”), enabling enterprises to launch and control jobs on Intel Xeon Phi processor-based systems. The update simplifies running and managing applications on Intel Xeon Phi processor-based clusters.

EXTOLL Network Chip Enables Network-attached Accelerators

Today EXTOLL in Germany released its new TOURMALET high-performance network chip for HPC. “The key demands of HPC are high bandwidth, low latency, and high message rates. The TOURMALET PCI-Express gen3 x16 board shows an MPI latency of 850ns and a message rate of 75M messages per second. The message rate value is CPU-limited, while TOURMALET is designed for well above 100M msg/s.”
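For context on what an “MPI latency” figure of this kind means, here is a minimal ping-pong sketch in C (my own illustration, not EXTOLL’s benchmark): two ranks bounce a one-byte message back and forth, and the one-way latency is taken as half the average round-trip time. The iteration count and payload size are arbitrary choices.

```c
/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Half the average round-trip time of a tiny message between two
 * ranks is how figures like "850 ns MPI latency" are usually quoted. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 10000;
    char buf = 0;                       /* 1-byte payload */
    MPI_Status st;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {                /* rank 0 sends, then waits for the echo */
            MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {         /* rank 1 echoes the message back */
            MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("one-way latency ~ %.0f ns\n",
               (t1 - t0) / (2.0 * iters) * 1e9);

    MPI_Finalize();
    return 0;
}
```

Run it with two ranks (for example, mpirun -np 2 ./pingpong); vendor-quoted numbers come from tuned versions of essentially this loop.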

ASRock Rack to Showcase 2U & 3U HPC Platforms at ISC 2016

Today ASRock Rack announced plans to showcase its 2U and 3U systems for the HPC market at ISC 2016. “First of all, ASRock Rack is showing its new product 3U16N, which is by far the highest-density microserver featuring Intel Xeon D processors. With multiple computing nodes, this microserver can easily handle intensive critical tasks at low power consumption.”

Helping the Compiler Speed Intel Xeon Phi

Vectorizing code for the Intel Xeon Phi coprocessor is similar in many ways to vectorizing code for the main CPU, and the performance improvement from coding smartly and using the available tools can be tremendous, since the coprocessor’s extra-wide vector processing units reward well-vectorized loops with very large gains. “Although it is time consuming to look at each and every loop in a large application, by doing so, and both telling the compiler what to do, and letting the compiler do its work, performance increases can be quite large, leading to shorter run times and/or more complete results.”
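As a concrete illustration of “telling the compiler what to do” (my sketch, not code from the article), the loop below uses restrict-qualified pointers to rule out aliasing and an OpenMP simd pragma to assert that the loop is safe to vectorize; the function name saxpy is just a conventional example.

```c
/* Illustrative vectorization sketch (not from the article).
 * restrict tells the compiler x and y do not overlap, and the OpenMP
 * simd pragma asserts the loop has no loop-carried dependences, so
 * the compiler can emit packed SIMD instructions for the wide vector
 * units instead of scalar code. */
#include <stddef.h>

void saxpy(size_t n, float a,
           const float *restrict x, float *restrict y)
{
    #pragma omp simd            /* compile with -fopenmp or -fopenmp-simd */
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

Most compilers can emit a vectorization report (for example, GCC’s -fopt-info-vec or the Intel compiler’s optimization report) to confirm which loops were actually vectorized, which is how the loop-by-loop review described in the quote is usually checked.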