BioScience gets a Boost Through Order-of-Magnitude Computing Gains

Intel is working with leaders in the field to eliminate today’s data processing bottlenecks. In this guest post from Intel, the company explores how BioScience is getting a leg up from order-of-magnitude computing progress. “Intel’s framework is designed to make HPC simpler, more affordable, and more powerful.”

Jülich to Build 5 Petaflop Supercomputing Booster with Dell

Today Intel and the Jülich Supercomputing Centre, together with ParTec and Dell, announced plans to develop and deploy a next-generation modular supercomputing system. Leveraging the experience and results gained in the EU-funded DEEP and DEEP-ER projects, in which three of the partners have been strongly engaged, the group will develop the mechanisms required to augment JSC’s JURECA cluster with a highly scalable component named “Booster,” based on Intel’s Scalable System Framework (Intel SSF).

Video: Intel Scalable System Framework

Gary Paek from Intel presented this talk at the HPC User Forum in Austin. “Traditional high performance computing is hitting a performance wall. With data volumes exploding and workloads becoming increasingly complex, the need for a breakthrough in HPC performance is clear. Intel Scalable System Framework provides that breakthrough. Designed to work from small clusters to the world’s largest supercomputers, Intel SSF provides scalability and balance for both compute- and data-intensive applications, as well as machine learning and visualization. The design moves everything closer to the processor to improve bandwidth, reduce latency, and allow you to spend more time processing and less time waiting.”

Facilitate HPC Deployments with Reference Designs for Intel Scalable System Framework

With the Intel Scalable System Framework Architecture Specification and Reference Designs, the company is making it easier to accelerate the time to discovery through high-performance computing. The Reference Architectures (RAs) and Reference Designs take Intel Scalable System Framework to the next step: deploying it in ways that will allow users to confidently run their workloads and allow system builders to innovate and differentiate their designs.

Hewlett Packard Enterprise and Intel Alliance Delivers New Centers of Excellence for HPC

Intel and Hewlett Packard Enterprise (HPE) have recently created two new Centers of Excellence (CoE) to help customers gain hands-on experience with High Performance Computing (HPC). This, along with collaboration with customers on implementing the latest technology solutions, is among the highlights being celebrated by the two companies on the one-year anniversary of their alliance.

Intel Scalable System Framework Facilitates Deep Learning Performance

In this special guest feature, Rob Farber from TechEnablement writes that the Intel Scalable System Framework is pushing the boundaries of machine learning performance. “Machine learning and other data-intensive HPC workloads cannot scale unless the storage filesystem can scale to meet the increased demands for data.”

Raj Hazra Presents: Driving to Exascale

Raj Hazra presented this talk at ISC 2016. As part of the company’s launch of the Intel Xeon Phi processor, Hazra describes how cognitive computing and HPC are going to work together. “Intel will introduce and showcase a range of new technologies helping to fuel the path to deeper insight and HPC’s next frontier. Among this year’s new products is the Intel Xeon Phi processor. Intel’s first bootable host processor is specifically designed for highly parallel workloads. It is also the first to integrate both memory and fabric technologies. A bootable x86 CPU, the Intel Xeon Phi processor offers greater scalability and is capable of handling a wider variety of workloads and configurations than accelerator products.”

Faster Route to Insights with Hardware and Visualization Advances from Intel

An eye-popping visualization of two black holes colliding demonstrates 3D Adaptive Mesh Refinement volume rendering on next-generation Intel® Xeon Phi™ processors. “It simplifies things when you can run on a single processor and not have to offload the visualization work,” says Juha Jäykkä, system manager of the COSMOS supercomputer. Dr. Jäykkä holds a doctorate in theoretical physics and also serves as a scientific consultant to the system’s users. “Programming is easier. The Intel Xeon Phi processor architecture is the next step for getting more performance and more power efficiency, and it is refreshingly convenient to use.”

Intel Furthers Machine Learning Capabilities

“Intel provided a wealth of machine learning announcements following the Intel Xeon Phi processor (formerly known as Knights Landing) announcement at ISC’16. Building upon the various technologies in Intel Scalable System Framework, the machine learning community can expect up to 38% better scaling over GPU-accelerated machine learning and an up to 50x speedup when using 128 Intel Xeon Phi nodes compared to a single Intel Xeon Phi node. The company also announced an up to 30x improvement in inference performance (also known as scoring or prediction) on the Intel Xeon E5 product family due to an optimized Intel Caffe plus Intel Math Kernel Library (Intel® MKL) package.”

With the Help of Moore’s Law, Intel’s Mark Seager is Changing the Scientific Method

Our in-depth series on Intel architects continues with this profile of Mark Seager, a key driver in the company’s mission to achieve Exascale performance on real applications. “Creating and incentivizing an exascale program is huge. Yet more important, in Mark’s view, the NSCI has inspired agencies to work together to spread the value from predictive simulation. In the widely publicized Cancer Moonshot sponsored by Vice President Biden, the Department of Energy is sharing codes with the National Institutes of Health to simulate the chemical expression pathway of genetic mutations in cancer cells with exascale systems.”