
NVIDIA Receives Approval to Proceed with Mellanox Acquisition

Today NVIDIA announced that it has received approval from all necessary authorities to proceed with its planned acquisition of Mellanox, as announced in March 2019. “This exciting transaction would unite two HPC industry leaders and strengthen the combined company’s ability to create data-centric system architectures for the convergence of the HPC and hyperscale markets around AI and other HPDA tasks,” said Steve Conway from Hyperion Research.

SDSC Expanse Supercomputer from Dell Technologies to Serve 50,000 Users

In this special guest feature, Janet Morss at Dell Technologies writes that the company will soon deploy a new flagship supercomputer at SDSC. “Expanse will deliver the power of 728 dual-socket Dell EMC PowerEdge C6525 servers with 2nd Gen AMD EPYC processors connected with Mellanox HDR InfiniBand. The system will have 93,000 compute cores and is projected to have a peak speed of 5 petaflops. That will almost double the performance of SDSC’s current Comet supercomputer, also from Dell Technologies.”
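The quoted core count can be sanity-checked with simple arithmetic. A minimal sketch, assuming 64-core 2nd Gen AMD EPYC parts (the article does not name the exact processor SKU, so that figure is an assumption):

```python
# Rough sanity check of the Expanse configuration quoted above.
# cores_per_socket = 64 is an assumption (e.g. AMD EPYC 7742);
# the article does not state the exact SKU.
servers = 728
sockets_per_server = 2
cores_per_socket = 64  # assumed

total_cores = servers * sockets_per_server * cores_per_socket
print(total_cores)  # 93184, consistent with the ~93,000 cores quoted
```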

Newly Named Ethernet Technology Consortium Announces 800 Gigabit Ethernet Specification

The 25 Gigabit Ethernet Consortium, originally established to develop 25, 50 and 100 Gbps Ethernet specifications, announced today it has changed its name to the Ethernet Technology Consortium in order to reflect a new focus on higher-speed Ethernet technologies. “Ethernet is evolving very quickly and as a group, we felt that having 25G in the name was too constraining for the scope of the consortium,” said Brad Booth, chair of the Ethernet Technology Consortium. “We wanted to open that up so that the industry could have an organization that could enhance Ethernet specifications for new and developing markets.”

Student Teams Encouraged to Join the 3rd APAC HPC-AI Competition

Student teams are encouraged to apply for the 2020 APAC HPC-AI Competition. Building on the success of previous competitions, student teams will square off against international rivals to produce solutions and applications in the High-Performance Computing and Artificial Intelligence domains. “We hope that the HPC-AI training established among our young aspiring programmers can help us tackle global threats such as COVID-19 and accelerate an improved response to future pandemics.”

AiMOS Supercomputer at Rensselaer to Battle COVID-19

Rensselaer Polytechnic Institute is enlisting AiMOS, one of the most powerful supercomputers in the world, in the battle against the COVID-19 pandemic. Rensselaer is reaching out to the research community, including government entities, universities, and industry, to offer access to AiMOS in support of research related to the new coronavirus disease. “This effort requires expertise, collaboration, and the ability to process incredible amounts of data, and Rensselaer is offering all three at this critical time. In particular, the ability to model at very large scales requires the unique capabilities of AiMOS.”

12.8 Tbps Mellanox Spectrum-3 Ethernet Switches Optimized for Cloud, Storage, and AI

Today Mellanox announced it has commenced shipments of SN4000 Ethernet switches. The SN4000 family is powered by Mellanox Spectrum-3 – the world’s best performing, most scalable, and most flexible 12.8 Tbps Ethernet switch ASIC, which is optimized for Cloud, Ethernet Storage Fabric, and AI interconnect applications. SN4000 platforms come in flexible form-factors supporting a combination of up to 32 ports of 400GbE, 64 ports of 200GbE and 128 ports of 100/50/25/10GbE. The SN4000 platforms complement the 200/400GbE SN3000 leaf switches to form an efficient and high bandwidth leaf/spine network.
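Each of those port configurations saturates the same ASIC switching capacity; a quick back-of-the-envelope check (an illustrative sketch, not vendor data):

```python
# Verify that each SN4000 port configuration totals 12.8 Tbps,
# matching the Spectrum-3 ASIC's stated switching capacity.
configs = {
    "32 x 400GbE": 32 * 400,
    "64 x 200GbE": 64 * 200,
    "128 x 100GbE": 128 * 100,
}
for name, total_gbps in configs.items():
    print(f"{name}: {total_gbps / 1000} Tbps")  # 12.8 Tbps each
```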

Mellanox to Acquire Titan IC for Security and Data Analytics

Today Mellanox announced that it has reached a definitive agreement to acquire privately held Titan IC, the leading developer of network intelligence (NI) and security technology to accelerate search and big data analytics across a broad range of applications in data centers worldwide. The acquisition will further strengthen Mellanox’s network intelligence capabilities delivered through the […]

Gilad Shainer on the GPCNeT Benchmark

In this special guest feature, Gilad Shainer from Mellanox Technologies writes that the new GPCNeT benchmark is actually a measure of relative performance under load rather than a measure of absolute performance. “When it comes to evaluating high-performance computing systems or interconnects, there are much better benchmarks available for use. Moreover, the ability to benchmark real workloads is obviously a better approach for determining system or interconnect performance and capabilities. The drawbacks of GPCNeT benchmarks can be much more than its benefits.”

UK to Establish Northern Intensive Computing Environment (NICE)

The N8 Centre of Excellence in Computationally Intensive Research (N8 CIR) has been awarded £3.1m from the Engineering and Physical Sciences Research Council to establish a new Tier 2 computing facility in the north of England. This investment will be matched by £5.3m from the eight universities in the N8 Research Partnership, which will fund operational costs and dedicated research software engineering support. “The new facility, known as the Northern Intensive Computing Environment or NICE, will be housed at Durham University and co-located with the existing STFC DiRAC Memory Intensive National Supercomputing Facility. NICE will be based on the same technology that is used in current world-leading supercomputers and will extend the capability of accelerated computing. The technology has been chosen to combine experimental, modelling and machine learning approaches and to bring these specialist communities together to address new research challenges.”

Predictions for HPC in 2020

In this special guest feature from Scientific Computing World, Laurence Horrocks-Barlow from OCF predicts that containerization, cloud, and GPU-based workloads are all going to dominate the HPC environment in 2020. “Over the last year, we’ve seen a strong shift towards the use of cloud in HPC, particularly in the case of storage. Many research institutions are working towards a ‘cloud first’ policy, looking for cost savings in using the cloud rather than expanding their data centres with overheads, such as cooling, data and cluster management and certification requirements.”