Achieving better scalability and performance at Exascale will require full data reach: the ability to analyze data wherever it resides. Without this capability, onload architectures force all data to move to the CPU before any analysis can begin. When data can be analyzed everywhere, every active component in the cluster contributes computing capability and boosts performance. In effect, the interconnect becomes its own “CPU” and provides in-network computing capabilities.
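The payoff of in-network computing can be sketched with a toy model. The snippet below is illustrative only (it is not a real switch or SHARP API): it compares how many bytes converge on a root node when a reduction is done entirely at the endpoints versus partially summed inside the switch fabric on the way up a reduction tree. All function names and the radix value are assumptions made for the sketch.

```python
# Toy model: bytes arriving at a root node when reduction happens at
# endpoints (onload) vs. inside the switches (in-network computing).

def host_based_reduce(node_vectors):
    """Onload-style: every node ships its full vector to the root,
    which does all the arithmetic itself."""
    bytes_received = sum(len(v) for v in node_vectors) * 8  # 8 bytes/float
    result = [sum(col) for col in zip(*node_vectors)]
    return result, bytes_received

def in_network_reduce(node_vectors, radix=4):
    """Offload-style: each switch level partially sums the vectors of its
    children, so only one already-reduced vector reaches the root."""
    level = node_vectors
    while len(level) > 1:
        level = [
            [sum(col) for col in zip(*level[i:i + radix])]
            for i in range(0, len(level), radix)
        ]
    bytes_received = len(level[0]) * 8  # a single vector arrives at the root
    return level[0], bytes_received

vectors = [[float(i)] * 1024 for i in range(64)]  # 64 nodes, 1024 floats each
r1, b1 = host_based_reduce(vectors)
r2, b2 = in_network_reduce(vectors)
assert r1 == r2   # same mathematical result
assert b2 < b1    # far less data converges on the CPU
```

The in-network path delivers the same answer while moving a small, constant amount of data to the root, which is the intuition behind letting the fabric itself contribute compute.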
Hailing from Norway, big-memory appliance maker Numascale has been a fixture at the ISC conference since the company’s formation in 2008. At ISC 2016, Numascale was noticeably absent from the show, and the word on the street was that the company was retooling its NumaConnect™ technology around NVMe. To learn more, we caught up with Einar Rustad, Numascale’s CTO.
With the Intel Scalable System Framework Architecture Specification and Reference Designs, the company is making it easier to accelerate time to discovery through high-performance computing. The Reference Architectures (RAs) and Reference Designs take the Intel Scalable System Framework to the next step: deploying it in ways that allow users to confidently run their workloads and let system builders innovate and differentiate their designs.
The move to network offloading is the first step in co-designed systems. Servicing the enormous number of packets generated at modern data rates imposes substantial CPU overhead, which can significantly reduce network performance. Offloading network processing to the network interface card helped solve this bottleneck, along with several others.
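A rough back-of-the-envelope model shows why per-packet overhead matters at modern line rates. The numbers below (cycles per packet, link speed, clock rate) are assumed for illustration, not measurements from any specific NIC or stack:

```python
# Illustrative model: what fraction of a CPU core is consumed just
# servicing packets at line rate? All constants are assumptions.

LINK_GBPS = 100       # modern data rate
PACKET_BYTES = 1500   # standard Ethernet MTU
CPU_HZ = 3.0e9        # one 3 GHz core

def core_utilization(cycles_per_packet):
    """Fraction of a single core needed to handle every packet."""
    packets_per_sec = LINK_GBPS * 1e9 / 8 / PACKET_BYTES
    return packets_per_sec * cycles_per_packet / CPU_HZ

# Hypothetical per-packet costs: full protocol processing on the CPU
# (onload) vs. only handling a completion event after the NIC has done
# checksums, segmentation, and transport processing (offload).
onload = core_utilization(cycles_per_packet=2000)
offload = core_utilization(cycles_per_packet=200)

print(f"onload : {onload:.0%} of a core")
print(f"offload: {offload:.0%} of a core")
```

Under these assumed costs, the onload path needs more than five full cores just to keep up with the wire, while the offload path fits comfortably within one, which is exactly the bottleneck that moving protocol processing onto the NIC relieves.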
Today Mellanox announced that SysEleven in Germany used the company’s 25/50/100GbE Open Ethernet solutions to build a new SSD-based, fully automated cloud data center. “We chose the Mellanox suite of products because it allows us to fully automate our state-of-the-art cloud data center,” said Harald Wagener, CTO, SysEleven. “Mellanox solutions are highly scalable and cost effective, allowing us to leverage the company’s best-in-class Ethernet technology that features the industry’s best bandwidth with the flexibility of the OpenStack open architecture.”
Seven women who work in IT departments at research institutions around the country have been selected to help build and operate the high-performance SCinet conference network at SC16. The announcement came from the Women in IT Networking at SC program, also known as WINS.
Coming in the second half of 2016: The HPE Apollo 6500 System provides the tools and the confidence to deliver high performance computing (HPC) innovation. The system consists of three key elements: the HPE ProLiant XL270 Gen9 Server tray, the HPE Apollo 6500 Chassis, and the HPE Apollo 6000 Power Shelf. Although final configurations and performance are not yet available, the system appears capable of delivering over 40 teraflop/s in double precision, and significantly more in single- or half-precision modes.
Over at the SC16 Blog, JP Vetters writes that planning for the SCinet high-bandwidth conference network is a multiyear process. “The success of any large conference depends on the often unseen hard work of many. During the last quarter century, the SCinet team has strived to perfect its routine so that conference-goers can experience a smoothly run show.”
“The Simons Foundation is beginning a new computational science organization called the Flatiron Institute. Flatiron will seek to explore challenging science problems in astrophysics, biology and chemistry. Computational science techniques involve processing and simulation activities and large-scale data analysis. This position is intended to help manage and fully exploit the data and storage resources at Flatiron to further the scientific mission.”
In this video from the 4th Annual MVAPICH User Group, DK Panda from Ohio State University presents: Overview of the MVAPICH Project and Future Roadmap. “This talk will provide an overview of the MVAPICH project (past, present and future). Future roadmap and features for upcoming releases of the MVAPICH2 software family (including MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-Virt, MVAPICH2-EA and MVAPICH2-MIC) will be presented. Current status and future plans for OSU INAM, OEMT and OMB will also be presented.”