Slidecast: Announcing Mellanox ConnectX-5 100G InfiniBand Adapter

“Today, scalable compute and storage systems suffer from data bottlenecks that limit research and product development and constrain application services. ConnectX-5 will help unleash business potential with faster, more effective, real-time data processing and analytics. With its smart offloading, ConnectX-5 will enable dramatic increases in CPU, GPU and FPGA performance that will enhance effectiveness and maximize the return on data centers’ investment.”

Paul Messina on the New ECP Exascale Computing Project

Argonne Distinguished Fellow Paul Messina has been tapped to lead the Exascale Computing Project, heading a team with representation from the six major participating DOE national laboratories: Argonne, Los Alamos, Lawrence Berkeley, Lawrence Livermore, Oak Ridge and Sandia. The project will focus its efforts on four areas: Applications, Software, Hardware, and Exascale Systems.

Mellanox Rolls Out ConnectX-4 25 Gb/s Ethernet Adapters

“The performance expectations placed on data centers today are unprecedented and require organizations to make infrastructure decisions that meet today’s demands while anticipating future growth needs,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “Mellanox’s 25GbE adapters are designed to serve as a direct replacement for commonly deployed 10 Gigabit Ethernet adapters. For example, just one port of 25GbE delivers 2.5 times the bandwidth per link, consumes less power, and requires half the cabling and half the switch ports versus multiple 10GbE ports, and thus enables application acceleration at a lower cost.”
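The arithmetic behind that claim can be sketched in a few lines. This is an illustrative back-of-the-envelope check using only the quoted link speeds, not Mellanox measurements; the target bandwidth figure is assumed for the example:

```python
import math

def ports_needed(target_gbps: float, link_gbps: float) -> int:
    """Ports (and cables) required to reach a target aggregate bandwidth."""
    return math.ceil(target_gbps / link_gbps)

# One 25GbE link carries 2.5x the bandwidth of one 10GbE link:
print(25 / 10)  # 2.5

# To aggregate 50 Gb/s (an assumed example target): two 25GbE ports
# versus five 10GbE ports, i.e. fewer cables and switch ports
# per unit of bandwidth.
print(ports_needed(50, 25))  # 2
print(ports_needed(50, 10))  # 5
```

The "half the cabling" comparison follows the same logic: each 25GbE cable replaces more than two 10GbE cables' worth of bandwidth.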

Specification Released for NVM Express over Fabrics

“Storage technologies are quickly innovating to reduce latency, providing a significant performance improvement for today’s cutting-edge applications. NVM Express (NVMe) is a significant step forward in high-performance, low-latency storage I/O and reduction of I/O stack overheads. NVMe over Fabrics is an essential technology to extend NVMe storage connectivity such that NVMe-enabled hosts can access NVMe-enabled storage anywhere in the datacenter, ensuring that the performance of today’s and tomorrow’s solid state storage technologies is fully unlocked, and that the network itself is not a bottleneck.”

Call for Participation: Women in IT Networking at SC16

“The Women in IT Networking at SC (WINS) program was developed as a means of addressing the prevalent gender gap that exists in Information Technology, particularly in the fields of network engineering and high performance computing. The 2015 program enabled five talented early- to mid-career women from diverse regions of the U.S. research and education IT community to participate in the ground-up construction of SCinet, one of the fastest and most advanced computer networks in the world.”

Building Bridges to the Future

“The Pittsburgh Supercomputing Center recently added Bridges to its lineup of world-class supercomputers. Bridges is designed for uniquely flexible, interoperating capabilities to empower research communities that previously have not used HPC and enable new data-driven insights. It also provides exceptional performance to traditional HPC users. It converges the best of High Performance Computing (HPC), High Performance Data Analytics (HPDA), machine learning, visualization, Web services, and community gateways in a single architecture.”

With the Help of Dijkstra’s Law, Intel’s Mark Seager is Changing the Scientific Method

Our in-depth series on Intel architects continues with this profile of Mark Seager, a key driver in the company’s mission to achieve Exascale performance on real applications. “Creating and incentivizing an exascale program is huge. Yet more important, in Mark’s view, the NSCI has inspired agencies to work together to spread the value from predictive simulation. In the widely publicized Cancer Moonshot sponsored by Vice President Biden, the Department of Energy is sharing codes with the National Institutes of Health to simulate the chemical expression pathway of genetic mutations in cancer cells with exascale systems.”

BeeGFS Certified on Intel Omni-Path at 12GB/s per server

Today ThinkParQ of Germany announced certification of BeeGFS over Intel Omni-Path Architecture (OPA). “Without a doubt, Intel has made a big leap in performance with the new 100Gbps OPA technology compared to previous interconnect generations,” said Sven Breuner, CEO of ThinkParQ. “The fact that we didn’t need to modify even a single line of the BeeGFS source code to deliver this new level of throughput confirms that the BeeGFS internal design is really future-proof.”
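For context on the 12 GB/s figure, a quick sanity check shows it sits close to the link's theoretical ceiling. This is a rough estimate assuming 8 bits per byte and ignoring protocol and encoding overhead, not a figure from the certification itself:

```python
# Back-of-the-envelope check: how close is 12 GB/s per server to the
# theoretical peak of a single 100 Gb/s Omni-Path link?
link_gbps = 100            # link speed in gigabits per second
peak_gbs = link_gbps / 8   # gigabytes per second (8 bits per byte)
measured_gbs = 12          # certified BeeGFS throughput per server

print(peak_gbs)                 # 12.5
print(measured_gbs / peak_gbs)  # 0.96, i.e. ~96% of the raw line rate
```

Real-world efficiency would be somewhat higher than this naive ratio suggests, since the raw line rate is never fully available to payload data.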

HPE and Mellanox: Advanced Technology Solutions for HPC

In this special guest feature, Scot Schultz from Mellanox and Terry Myers from HPE write that the two companies are collaborating to push the boundaries of high performance computing. “So while every company must weigh the cost and commitment of upgrading its data center or HPC cluster to EDR, the benefits of such an upgrade go well beyond the increase in bandwidth. Only HPE solutions that include Mellanox end-to-end 100Gb/s EDR deliver efficiency, scalability, and overall system performance that results in maximum performance per TCO dollar.”

Mellanox Introduces BlueField SoC Programmable Processors

Today Mellanox announced the BlueField family of programmable processors for networking and storage applications. “As a networking offload co-processor, BlueField will complement the host processor by performing wire-speed packet processing in-line with the network I/O, freeing the host processor to deliver more virtual networking functions (VNFs),” said Linley Gwennap, principal analyst at the Linley Group. “Network offload results in better rack density, lower overall power consumption, and deterministic networking performance.”