Altair HPC Cloud Challenge Shows Customers a New Way Forward

Today Altair announced that eleven international customers participated in the company’s recent HPC Cloud Challenge. The contest was set up to demonstrate the benefits of leveraging the cloud for large-scale design exploration in computer-aided engineering. Organizations of all sizes from manufacturing and academia took part, using Altair technologies in structural, CFD, and design studies, and expressed great satisfaction with the program overall.

XSEDE Awards 324 Million CPU Hours to NSF Research Projects

The Extreme Science and Engineering Discovery Environment (XSEDE), a five-year project supported by the US National Science Foundation, has awarded 324 million CPU hours, valued at $16.2 million, to 150 research projects throughout the US.
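A quick check of the arithmetic behind those totals: the stated valuation implies a rate of five cents per CPU hour, and an average of about 2.16 million hours per funded project. The per-hour rate and per-project average are inferred from the article's figures, not stated in it:

```python
# Figures from the XSEDE announcement above.
cpu_hours = 324_000_000        # CPU hours awarded
total_value_usd = 16_200_000   # stated dollar valuation
projects = 150                 # funded research projects

# Implied rate and average award size (inferred from the totals above).
rate_per_hour = total_value_usd / cpu_hours      # dollars per CPU hour
avg_hours_per_project = cpu_hours / projects     # CPU hours per project

print(f"${rate_per_hour:.2f}/CPU-hour, {avg_hours_per_project:,.0f} hours/project")
```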

Video: Europe’s Fastest Supercomputer and the World Around It

Michael Resch from HLRS gave this rousing talk at the HPC User Forum. “HLRS supports national and European researchers from science and industry by providing high-performance computing platforms and technologies, services and support. Supercomputer Hazel Hen, a Cray XC40 system, is at the heart of the HPC system infrastructure of the HLRS. With a peak performance of 7.42 Petaflops (quadrillion floating point operations per second), Hazel Hen is one of the most powerful HPC systems in the world (position 8 of TOP500, 11/2015) and is the fastest supercomputer in the European Union. The HLRS supercomputer, which entered operation in October 2015, is based on the Intel Haswell processor and the Cray Aries network and is designed for sustained application performance and high scalability.”
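The quoted peak figure follows from the usual cores × clock × FLOPs-per-cycle formula. As a back-of-the-envelope sketch, the inputs below (roughly 185,000 cores at 2.5 GHz, with Haswell's 16 double-precision FLOPs per core per cycle from two AVX2 FMA units) are assumptions drawn from public TOP500-style specs, not details given in the talk:

```python
# Rough peak-performance estimate for a Haswell-based Cray XC40.
# All inputs are assumed from public specs, not from the talk itself.
cores = 185_088          # total compute cores (assumed)
clock_hz = 2.5e9         # 2.5 GHz Haswell base clock (assumed)
flops_per_cycle = 16     # 2 AVX2 FMA units x 4 doubles x 2 ops/FMA

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e15:.2f} PFLOPS")   # ~7.40 PFLOPS, close to the cited 7.42
```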

Slidecast: Advantages of Offloading Architectures for HPC

In this slidecast, Gilad Shainer from Mellanox describes the advantages of InfiniBand and the company’s offloading network architecture for HPC. “The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”

ORiGAMI – Oak Ridge Graph Analytics for Medical Innovation

Rangan Sukumar from ORNL presented this talk at the HPC User Forum in Tucson. “ORiGAMI is a tool for discovering and evaluating potentially interesting associations and creating novel hypotheses in medicine. ORiGAMI will help you “connect the dots” across 70 million knowledge nuggets published in 23 million papers in the medical literature. The tool works on a ‘Knowledge Graph’ derived from Semantic MEDLINE, published by the National Library of Medicine, integrated with scalable software that enables term-based, path-based, meta-pattern and analogy-based reasoning principles.”
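The “connect the dots” idea behind path-based reasoning can be sketched on a toy knowledge graph: given associations mined from the literature, find a chain linking two terms that never co-occur directly. The graph below is an invented miniature (loosely echoing Swanson's classic fish-oil/Raynaud's discovery), not ORiGAMI's actual data or API:

```python
from collections import deque

# Toy knowledge graph built from invented association pairs -- a stand-in
# for Semantic MEDLINE-style triples, not real ORiGAMI data.
edges = [
    ("fish oil", "omega-3"),
    ("omega-3", "inflammation"),
    ("inflammation", "raynaud's syndrome"),
    ("omega-3", "triglycerides"),
]
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def connect(start, goal):
    """Breadth-first search: shortest chain of associations, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(connect("fish oil", "raynaud's syndrome"))
# -> ['fish oil', 'omega-3', 'inflammation', "raynaud's syndrome"]
```

At the scale the talk describes, the interesting work is in scoring and ranking such paths, not merely finding them; this sketch shows only the connectivity step.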

Intersect360 Publishes New Report on the Hyperscale Market

Today Intersect360 Research published a new research report on the Hyperscale market. “This report provides definitions, segmentations, and dynamics of the hyperscale market and describes its scope, the end-user applications it touches, and the market drivers and dampers for future growth. It is the foundational report for the Intersect360 Research hyperscale market advisory service.”

Jetstream – Adding Cloud-based Computing to the National Cyberinfrastructure

Matt Vaughn from TACC presented this talk at the HPC User Forum. “Jetstream is the first user-friendly, scalable cloud environment for XSEDE. The system serves researchers working at the “long tail of science” and enables the creation of truly customized virtual machines and computing architectures. It has a web-based user interface integrated with XSEDE via Globus Auth. The architecture is derived from the team’s collective experience with CyVerse Atmosphere, Chameleon and Quarry. The system also fosters reproducible, sharable computing with geographically isolated clouds located at Indiana University and TACC.”

Exxact to Distribute NVIDIA DGX-1 Deep Learning System

The NVIDIA DGX-1 features up to 170 teraflops of half-precision (FP16) peak performance, eight Tesla P100 GPU accelerators with 16GB of memory per GPU, a 7TB SSD deep learning cache, and an NVLink hybrid cube mesh interconnect. Packaged with fully integrated hardware and easily deployed software, it is the world’s first system built specifically for deep learning, built around NVIDIA’s Pascal-powered Tesla P100 accelerators interconnected with NVLink. NVIDIA designed the DGX-1 to meet the ever-growing computing demands of artificial intelligence and claims it can deliver the throughput of 250 CPU-based servers in a single box.
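The 170-teraflop figure is simply the eight P100s' FP16 peaks added together. The per-GPU number used below (~21.2 TFLOPS FP16 for the NVLink/SXM2 P100) is an assumption drawn from NVIDIA's published specs, not from this announcement:

```python
# DGX-1 half-precision peak as a sum over its GPUs.
gpus = 8                      # Tesla P100 accelerators in a DGX-1
fp16_tflops_per_gpu = 21.2    # P100 (NVLink/SXM2) FP16 peak -- assumed from specs

system_peak = gpus * fp16_tflops_per_gpu
print(f"{system_peak:.0f} TFLOPS FP16")   # ~170 TFLOPS, the quoted figure
```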

Who Is Using HPC (and Why)?

In today’s highly competitive world, High Performance Computing (HPC) is a game changer. Though not as splashy as many other computing trends, the HPC market has continued to show steady growth and success over the last several decades. Market forecaster IDC expects the overall HPC market to hit $31 billion by 2019 while riding an 8.3% CAGR. The HPC market cuts across many sectors including academic, government, and industry. Learn which industries are using HPC and why.
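The $31 billion forecast and the 8.3% CAGR jointly pin down an implied base-year market size: roughly $21 billion five years before 2019. The five-year horizon below is an assumption for illustration; the article does not state IDC's forecast period:

```python
# Back out the implied base-year market size from the forecast and CAGR.
forecast_usd_b = 31.0    # IDC's 2019 HPC market forecast, in $B
cagr = 0.083             # 8.3% compound annual growth rate
years = 5                # assumed forecast horizon (e.g. 2014 -> 2019)

base = forecast_usd_b / (1 + cagr) ** years
print(f"Implied base-year market: ${base:.1f}B")
```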

Video: How to Build a Neural Net in 4 Minutes

In this video, Siraj Raval from Twilio presents a quick tutorial on how to build a neural net in four minutes. Siraj describes himself as the Bill Nye of Computer Science.
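In the same spirit as the video, a small network can be trained with plain NumPy in a few lines. This is a generic sketch of the approach (one hidden layer, sigmoid activations, backpropagation on the XOR toy problem), not the exact code from the video:

```python
import numpy as np

# Minimal feed-forward net: 2 inputs -> 4 hidden units -> 1 output,
# trained by backpropagation on XOR (a standard toy problem).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    return h, sigmoid(h @ W2 + b2)    # network output

_, out0 = forward(X)
mse_start = float(((out0 - y) ** 2).mean())

lr = 0.5
for _ in range(20_000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)    # error gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagated into the hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

_, out = forward(X)
mse_end = float(((out - y) ** 2).mean())
print(out.round().ravel(), f"mse {mse_start:.3f} -> {mse_end:.4f}")
```

Full-batch gradient descent on four examples is nowhere near production practice, but it makes every step of the forward and backward pass visible, which is the point of a four-minute tutorial.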