
Diablo Technologies to Resume Shipments of Memory Channel Storage after Court Ruling


This week Diablo Technologies announced that the company will resume shipments of its MCS-based chipsets. The news comes on the heels of a ruling by a federal jury in favor of Diablo in a lawsuit brought by Netlist, Inc.

Video: Multipath RDMA


The main motivation for Multipath RDMA is to support three types of features: failover and high-availability support, bandwidth aggregation, and L3 datacenter support.

Lenovo Steps Up to HPC


In this special guest feature, Tom Wilkie from Scientific Computing World reports on how yesterday's launch of Lenovo's HPC innovation centre in Stuttgart demonstrates the company's commitment to HPC.

High-Throughput Data Acquisition at the CMS Experiment at CERN


“The CMS detector at the Large Hadron Collider at CERN underwent a replacement of its data acquisition network to be able to process the increased data rate expected in the coming years. We will present the architecture of the system and discuss the design of its layers, which are based on InfiniBand as well as 10 and 40 Gbit/s Ethernet.”

Supercomputing the Mass Difference of Neutrons and Protons


Researchers using the JUQUEEN supercomputer have computed the small mass difference between protons and neutrons. The fact that the neutron is slightly more massive than the proton is the reason why atomic nuclei have exactly those properties that make our world and ultimately our existence possible.

HPC Market Update from Intersect360 Research


“The drive toward exascale computing, a renewed emphasis on data-centric processing, energy efficiency concerns, and the limitations of memory and I/O performance are all working to reshape High Performance Computing platforms. Many-core accelerators, flash storage, 3D memory, integrated networking, and optical interconnects are just some of the technologies propelling these future architectures. In concert with those developments, the HPC vendor landscape has been churning in response to broader market forces, and these events are going to drive some interesting changes in the coming year.”

Optimizing Navier-Stokes Equations


Solving the Navier-Stokes equations is popular because they describe the physics of fluid flow in a number of areas of interest to scientists and engineers. By solving these equations, the flow velocity can be calculated, and other quantities of interest, such as pressure or temperature, may then be determined.
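This solve-for-velocity-then-derive workflow can be sketched with a toy problem. The following uses the 1D viscous Burgers' equation as a simplified stand-in for the full Navier-Stokes system; the grid size, viscosity, time step, and initial profile are illustrative choices, not taken from the article.

```python
import numpy as np

# 1D viscous Burgers' equation, a simplified relative of Navier-Stokes:
#   du/dt + u * du/dx = nu * d2u/dx2
# Explicit finite-difference sketch with illustrative parameters.
nx, nu, dt = 101, 0.07, 0.001
dx = 2.0 / (nx - 1)

u = np.ones(nx)
u[int(0.5 / dx):int(1.0 / dx) + 1] = 2.0  # initial "hat" velocity profile

for _ in range(500):
    un = u.copy()
    # upwind convection + central-difference diffusion on interior points
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))

# With the velocity field in hand, derived quantities follow, e.g. a
# dynamic-pressure-like quantity q = 0.5 * rho * u^2 (rho assumed 1.0):
q = 0.5 * 1.0 * u**2
print(u.max(), q.max())
```

Production CFD codes solve the full 2D/3D system with pressure coupling and far more sophisticated discretizations, but the overall pattern, advance the velocity field, then compute secondary quantities from it, is the same.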

Going from the Lab to the Data Center

Genomic Sequencing

In the late 1980s, genomic sequencing began to shift from wet lab work to a computationally intensive science; by end of the 1990s this trend was in full swing. The application of computer science and high performance computing (HPC) to these biological problems became the normal mode of operation for many molecular biologists.

Video: Slim Fly – A Cost Effective Low-Diameter Network Topology


“We introduce a high-performance cost-effective network topology called Slim Fly that approaches the theoretically optimal network diameter. Slim Fly is based on graphs that approximate the solution to the degree-diameter problem. We analyze Slim Fly and compare it to both traditional and state-of-the-art networks. Our analysis shows that Slim Fly has significant advantages over other topologies in latency, bandwidth, resiliency, cost, and power consumption.”
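The diameter the abstract refers to is the longest shortest path, in hops, between any two routers in the topology, and it can be computed with repeated breadth-first search. The sketch below is a generic illustration of that metric on a made-up 5-node ring, not an actual Slim Fly graph.

```python
from collections import deque

def diameter(adj):
    """Longest shortest path (in hops) over all vertex pairs."""
    def eccentricity(src):
        # BFS from src; return the distance to the farthest vertex.
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)

# Made-up example: a 5-node ring (each router linked to two neighbors).
ring5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(diameter(ring5))  # a 5-node ring has diameter 2
```

The degree-diameter problem asks how many nodes a graph can reach given a fixed per-node link count and a maximum diameter; Slim Fly uses graphs that approach that bound, which is why it achieves low latency with relatively few links.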

How Supercomputers Give Universities a Competitive Edge

As one of the fastest supercomputers in academia, Clemson's Palmetto2 cluster is an HP system with 12,080 compute cores and a peak performance of 739 Teraflops.

In an NSF-funded study, a Clemson University team found that universities with locally available supercomputers were more efficient in producing research in critical fields than universities that lacked supercomputers.