NERSC Paper on Burst Buffers Recognized at Cray User Group

A new paper outlining NERSC’s Burst Buffer Early User Program and the center’s pioneering efforts in recent months to test-drive the technology using real science applications on Cori Phase 1 has won the Best Paper award at this year’s Cray User Group (CUG) meeting.

Cowboy Supercomputer Powers Research at Oklahoma State

In this video, Oklahoma State Director of HPC Dana Brunson describes how the Cowboy supercomputer powers research. “High performance computing is often used for simulations that may be too big, too small, too fast, too slow, too dangerous or too costly. Another thing it’s used for involves data. You may remember the Human Genome Project: it took nearly a decade and cost a billion dollars; these sorts of things can now be done over the weekend for under a thousand dollars. Our current supercomputer is named Cowboy. It was funded by a 2011 National Science Foundation grant, and it has been serving us very well.”

Building Efficient HPC Clouds with MVAPICH2 and OpenStack over SR-IOV Enabled InfiniBand Clusters

“Single Root I/O Virtualization (SR-IOV) technology has been steadily gaining momentum for high-performance interconnects such as InfiniBand. SR-IOV can deliver near-native performance but lacks locality-aware communication support. This talk presents an efficient approach to build HPC clouds based on MVAPICH2 over OpenStack with SR-IOV. We discuss the high-performance design of the virtual machine-aware MVAPICH2 library over OpenStack-based HPC Clouds with SR-IOV. A comprehensive performance evaluation with micro-benchmarks and HPC applications has been conducted on an experimental OpenStack-based HPC cloud and Amazon EC2. The evaluation results show that our design can deliver near bare-metal performance.”
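The micro-benchmarks referenced in such evaluations typically measure point-to-point latency and bandwidth between virtual machines. As a rough illustration only (not the benchmark suite used in the talk), here is a minimal MPI ping-pong latency sketch in C; the message size and iteration count are assumptions:

/* ping_pong.c: minimal MPI ping-pong latency sketch (illustrative only).
 * Build: mpicc -O2 ping_pong.c -o ping_pong
 * Run:   mpirun -np 2 ./ping_pong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;   /* assumed iteration count */
    const int msg_size = 8;   /* 8-byte messages, latency regime */
    char *buf = malloc(msg_size);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average one-way latency: %.2f us\n", (t1 - t0) * 1e6 / (2.0 * iters));

    free(buf);
    MPI_Finalize();
    return 0;
}

Running the same binary on bare metal and inside SR-IOV-attached virtual machines gives a quick sense of how close the virtualized path comes to native latency.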

Understanding Your HPC Application Needs

Many HPC applications began as single-processor (single-core) programs. If these applications take too long on a single core or need more memory than is available, they must be modified so they can run on scalable systems. Fortunately, many of the important (and most used) HPC applications are already available for scalable systems. Some applications deliver effective performance on a modest number of cores, while others are highly scalable. Here is how to better understand your HPC application needs.
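As a toy illustration of what modifying a single-core program for a scalable system can mean in practice, here is a minimal sketch in C with MPI that splits a simple summation loop across ranks; the loop and problem size are hypothetical stand-ins for real application work:

/* sum_mpi.c: toy example of distributing a serial loop across MPI ranks.
 * Build: mpicc -O2 sum_mpi.c -o sum_mpi
 * Run:   mpirun -np 4 ./sum_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 100000000;  /* hypothetical problem size */
    double local = 0.0;

    /* Each rank sums its own slice of the index range. */
    for (long i = rank; i < n; i += size)
        local += 1.0 / (double)(i + 1);

    /* Combine the partial sums on rank 0. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %ld terms: %.6f\n", n, total);

    MPI_Finalize();
    return 0;
}

Applications whose work divides this cleanly tend to scale well; those dominated by communication or by serial sections may gain little from additional cores, which is exactly the distinction to keep in mind when sizing a system for your workload.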

Mellanox Powers OpenStack Cloud at University of Cambridge

Today Mellanox announced that the University of Cambridge has selected the Mellanox end-to-end Ethernet interconnect solution, including Spectrum SN2700 Ethernet switches, ConnectX-4 Lx NICs and LinkX cables, for its OpenStack-based scientific research cloud. The win expands Mellanox’s existing InfiniBand footprint at the university and empowers the University of Cambridge to realize its vision of HPC and cloud convergence through high-speed cloud networks at 25/50/100Gb/s throughput.

Video: HPC Trends from the Trenches at Bio-IT World

In this video, Chris Dagdigian from Bioteam delivers his annual assessment of the best, the worthwhile, and the most overhyped information technologies for life sciences at the 2016 Bio-IT World Conference & Expo in Boston. “The presentation tries to recap the prior year by discussing what has changed (or not) around infrastructure, storage, computing, and networks. This presentation will help scientists, leadership and IT professionals understand the basic topics involved in supporting data intensive science.”

Job of the Week: HPC Application Performance Engineer at Mellanox

Mellanox is seeking an HPC Application Performance Engineer in our Job of the Week. “Mellanox Technologies is looking for a talented engineer to lead datacenter application performance optimization and benchmarking over Mellanox networking products. This individual will primarily work with marketing and engineering to execute low-level and application-level benchmarks focused on High Performance Computing (HPC) open source and ISV applications, in addition to providing software and hardware optimization recommendations. This individual will also work closely with hardware and software partners and customers to benchmark Mellanox products under different system configurations and workloads.”

Ohio Supercomputer Center Names New Cluster after Jesse Owens

The Ohio Supercomputer Center has named its newest HPC cluster after Olympic champion Jesse Owens. The new Owens Cluster will be powered by Dell PowerEdge servers featuring the new Intel Xeon processor E5-2600 v4 product family, include storage components manufactured by DDN, and utilize interconnects provided by Mellanox. “Our newest supercomputer system is the most powerful that the Center has ever run,” ODHE Chancellor John Carey said in a recent letter to Owens’ daughters. “As such, I thought it fitting to name it for your father, who symbolizes speed, integrity and, most significantly for me, compassion as embodied by his tireless work to help youths overcome obstacles to their future success. As a first-generation college graduate, I can relate personally to the value of mentors in the lives of those students.”

Call for Papers: 2016 Hot Interconnects Conference

The 2016 Hot Interconnects Conference has issued its Call for Papers. The event takes place August 24-26 at Huawei in Santa Clara, California. “Hot Interconnects is the premier international forum for researchers and developers of state-of-the-art hardware and software architectures and implementations for interconnection networks of all scales, ranging from multi-core on-chip interconnects to those within systems, clusters, datacenters and Clouds.”

Slidecast: Advantages of Offloading Architectures for HPC

In this slidecast, Gilad Shainer from Mellanox describes the advantages of InfiniBand and the company’s offloading network architecture for HPC. “The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
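One way to picture what offloading buys is the overlap of communication and computation: with non-blocking collectives, a reduction can progress while the CPU keeps working, and an intelligent, offloading interconnect carries out that progression in hardware. The sketch below, in C with MPI, shows the overlap pattern only; it is an assumed illustration, not Mellanox’s implementation:

/* overlap.c: sketch of overlapping a collective with computation,
 * the pattern that offloading network hardware aims to accelerate.
 * Build: mpicc -O2 overlap.c -o overlap
 * Run:   mpirun -np 4 ./overlap
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local[4]  = { rank + 1.0, rank + 2.0, rank + 3.0, rank + 4.0 };
    double global[4] = { 0.0, 0.0, 0.0, 0.0 };

    /* Start a non-blocking allreduce; on an offloading network it can
     * progress without consuming host CPU cycles. */
    MPI_Request req;
    MPI_Iallreduce(local, global, 4, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    /* Useful computation overlapped with the in-flight collective. */
    double busy = 0.0;
    for (int i = 0; i < 1000000; i++)
        busy += i * 1e-9;

    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("reduced[0] = %.1f (overlapped work = %.3f)\n", global[0], busy);

    MPI_Finalize();
    return 0;
}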