HPC Advisory Council Announces 4th Annual RDMA Programming Competition

Today the HPC Advisory Council announced its Fourth Annual RDMA Programming Competition in China. Designed to support undergraduate curricula and talent development, this unique hands-on competition furthers students’ study, experience, and mastery of RDMA programming.
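
For context on what entrants actually write, here is a minimal sketch of verbs-level RDMA code using the standard libibverbs API (an illustration under assumed defaults, not official competition material): it opens a device, allocates a protection domain, and registers a buffer so the adapter can access memory directly without involving the remote host's CPU. Queue pair setup, completion queues, and peer exchange are omitted, and error handling is abbreviated.

```c
/* Minimal libibverbs sketch: open an RDMA device, allocate a protection
 * domain, and register a buffer for RDMA access. A real program would
 * also create completion queues and queue pairs and exchange connection
 * details (rkey, buffer address, QP number) with the remote peer. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    /* Open the first device and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the HCA can read and write it directly,
     * bypassing the remote CPU (the essence of RDMA). */
    size_t len = 4096;
    void *buf = malloc(len);
    memset(buf, 0, len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* Clean up in reverse order of creation. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```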

Cowboy Supercomputer Powers Research at Oklahoma State

In this video, Oklahoma State Director of HPC Dana Brunson describes how the Cowboy supercomputer powers research. “High performance computing is often used for simulations that may be too big, too small, too fast, too slow, too dangerous or too costly. Another thing it’s used for involves data. You may remember the Human Genome Project: it took nearly a decade and cost a billion dollars, and these sorts of things can now be done over a weekend for under a thousand dollars. Our current supercomputer is named Cowboy; it was funded by a 2011 National Science Foundation grant and it has been serving us very well.”

Mellanox Powers OpenStack Cloud at University of Cambridge

Today Mellanox announced that the University of Cambridge has selected the Mellanox end-to-end Ethernet interconnect solution, including Spectrum SN2700 Ethernet switches, ConnectX-4 Lx NICs and LinkX cables, for its OpenStack-based scientific research cloud. This new win expands Mellanox’s existing footprint of InfiniBand solutions and empowers the university to realize its vision of HPC and cloud convergence through high-speed cloud networks at 25/50/100Gb/s throughput.

Job of the Week: HPC Application Performance Engineer at Mellanox

Mellanox is seeking an HPC Application Performance Engineer in our Job of the Week. “Mellanox Technologies is looking for a talented engineer to lead datacenter application performance optimization and benchmarking over Mellanox networking products. This individual will primarily work with marketing and engineering to execute low-level and application-level benchmarks focused on High Performance Computing (HPC) open source and ISV applications, in addition to providing software and hardware optimization recommendations. This individual will also work closely with hardware and software partners and customers to benchmark Mellanox products under different system configurations and workloads.”

Ohio Supercomputer Center Names New Cluster after Jesse Owens

The Ohio Supercomputer Center has named its newest HPC cluster after Olympic champion Jesse Owens. The new Owens Cluster will be powered by Dell PowerEdge servers featuring the new Intel Xeon processor E5-2600 v4 product family, will include storage components manufactured by DDN, and will utilize interconnects provided by Mellanox. “Our newest supercomputer system is the most powerful that the Center has ever run,” ODHE Chancellor John Carey said in a recent letter to Owens’ daughters. “As such, I thought it fitting to name it for your father, who symbolizes speed, integrity and, most significantly for me, compassion as embodied by his tireless work to help youths overcome obstacles to their future success. As a first-generation college graduate, I can relate personally to the value of mentors in the lives of those students.”

Slidecast: Advantages of Offloading Architectures for HPC

In this slidecast, Gilad Shainer from Mellanox describes the advantages of InfiniBand and the company’s offloading network architecture for HPC. “The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.”
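
To make the offload benefit concrete, the sketch below (a minimal example, not taken from the slidecast) starts a non-blocking allreduce and keeps computing while it completes. On an interconnect with collective offload of the kind Shainer describes, the reduction can progress largely in the network hardware, so the overlap genuinely frees the CPU rather than just deferring the work.

```c
/* Overlap a reduction with local computation using MPI_Iallreduce.
 * With network collective offload, the reduction progresses in the
 * switch/adapter while the host CPU continues computing. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local_sum = rank + 1.0, global_sum = 0.0;
    MPI_Request req;

    /* Start the reduction without waiting for it. */
    MPI_Iallreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                   MPI_SUM, MPI_COMM_WORLD, &req);

    /* Useful local work proceeds while the network handles the collective. */
    double busy = 0.0;
    for (int i = 0; i < 1000000; i++)
        busy += i * 1e-9;

    /* Complete the collective before using its result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    if (rank == 0)
        printf("global sum = %f (overlapped work = %f)\n", global_sum, busy);

    MPI_Finalize();
    return 0;
}
```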

Video: InfiniBand as Core Network in an Exchange Application

“Deutsche Boerse Group is a global financial services organization covering the entire value chain from trading, market data, clearing and settlement to custody. While reliability has been a fundamental requirement for exchanges since the introduction of electronic trading systems in the 1990s, for roughly the last 10 years low and predictable latency of the entire system has also become a major design objective. Both issues were important architecture considerations when Deutsche Boerse started to develop an entirely new derivatives trading system, T7, for its options market in the US (ISE) in 2008. A combination of InfiniBand with IBM WebSphere MQ Low Latency Messaging (WLLM) as the messaging solution was determined to be the best fit at the time. Since then the same system has been adopted for EUREX, one of the largest derivatives exchanges in the world, and is now also being extended to cover cash markets. The session will present the design of the application and its interdependence with the combination of InfiniBand and WLLM. Practical experiences with InfiniBand over the last couple of years will also be discussed.”

Interview: Cavium to Move ARM Forward for HPC at ISC 2016

“Cavium ThunderX has significant differentiation in the 64-bit ARM market, as Cavium is the first ARMv8 vendor to deliver dual-socket support with a full ARMv8.1 implementation and a significant advantage in CPU cores, with 48 cores per socket. In addition, ThunderX supports large memory capacity (512GB per socket, 1TB in a 2S system) with excellent memory bandwidth and low memory latency. ThunderX also includes multiple 10GbE/40GbE network interfaces delivering excellent I/O throughput. These features enable ThunderX to deliver the core performance and scale-out capability that the HPC market requires.”

University of New Orleans Wins Silicon Mechanics Research Cluster Grant

Today Silicon Mechanics announced that the University of New Orleans is a recipient of the company’s fifth annual Research Cluster Grant (RCG). Each grant awardee will receive a high-performance computing cluster, valued at over $100,000, featuring the latest high-performance processing and GPU technologies for use in demonstrated research. This is the second year that Silicon Mechanics has made the award to two institutions.

Mellanox Rolls Out EDR InfiniBand Routers

Today Mellanox announced a new line of InfiniBand router systems. The new EDR 100Gb/s InfiniBand routers enable a new level of scalability critical for the next generation of mega data center deployments, as well as expanded capabilities for data center isolation between different users and applications. The new routers deliver a consistent, high-performance, low-latency routing solution that is mission-critical for high performance computing, cloud, Web 2.0, machine learning, and enterprise applications.