The University of Tokyo has chosen SGI to provide advanced data analysis and simulation within its Information Technology Center. The center is one of Japan’s major research and educational institutions for building, applying, and utilizing large computer systems. The new SGI system will begin operation July 1, 2016. “The SGI integrated supercomputer system for data analysis and simulation will support the needs of scientists in new fields such as genome analysis and deep learning, in addition to scientists in traditional areas of computational science,” said Professor Hiroshi Nakamura, director of the Information Technology Center at the University of Tokyo. “The new system will further ongoing research and contribute to the development of new academic fields that combine data analysis and computational science.”
Today Mellanox announced that the company’s interconnect technology accelerates the world’s fastest supercomputer, housed at the supercomputing center in Wuxi, China. The new number-one supercomputer delivers 93 Petaflops (three times the performance of the previous top system), connecting nearly 41 thousand nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is key to its world-leading performance, scalability, and efficiency, connecting the highest number of nodes and CPU cores within a single supercomputer.
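For a rough sense of scale, the quoted figures can be turned into per-node and per-core numbers. This minimal Python sketch uses the published node and core counts for the Wuxi system (40,960 nodes; 10,649,600 cores), which the paragraph above rounds off; the script is a back-of-envelope illustration, not vendor data.

# Back-of-envelope arithmetic from the figures quoted above.
nodes = 40_960            # "nearly 41 thousand nodes"
cores = 10_649_600        # "more than ten million CPU cores"
linpack_pflops = 93       # reported Linpack performance

print(f"cores per node:   {cores // nodes}")                                  # 260
print(f"Tflops per node:  {linpack_pflops * 1e15 / nodes / 1e12:.2f}")        # ~2.27
print(f"Gflops per core:  {linpack_pflops * 1e15 / cores / 1e9:.2f}")         # ~8.73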
FrostByte is a complete solution that integrates Penguin Computing’s new Scyld FrostByte software with an optimized high-performance storage platform. FrostByte will support multiple open storage technologies including Lustre, Ceph, GlusterFS, and Swift, and will first be available with Intel Enterprise Edition for Lustre. The entry-level FrostByte is a single rack with 500TB of highly available storage that can deliver up to 18GB/s and 500K metadata ops/s over Intel Omni-Path, Mellanox EDR InfiniBand, or Penguin Arctica 100GbE network solutions. A single FrostByte “Scalable Unit” can deliver up to 15PB and greater than 500GB/s in 5 racks. Multiple Scalable Units can be combined to scale up to 100s of petabytes and 10s of terabytes/sec of aggregate storage bandwidth.
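The aggregate claims follow directly from the per-unit figures. The short Python sketch below tallies capacity and bandwidth for a given number of Scalable Units, using only the numbers quoted above; the helper function is illustrative and not part of any Penguin Computing tooling.

# Aggregate capacity/bandwidth per the per-unit figures quoted above.
SCALABLE_UNIT_PB = 15      # capacity per Scalable Unit
SCALABLE_UNIT_GBPS = 500   # aggregate bandwidth per Scalable Unit
RACKS_PER_UNIT = 5

def frostbyte_aggregate(units: int) -> tuple[int, int, int]:
    """Return (capacity in PB, bandwidth in GB/s, racks) for `units` Scalable Units."""
    return (units * SCALABLE_UNIT_PB,
            units * SCALABLE_UNIT_GBPS,
            units * RACKS_PER_UNIT)

# e.g. 20 units reach the "100s of petabytes, 10s of TB/s" scale quoted above:
pb, gbps, racks = frostbyte_aggregate(20)
print(f"{pb} PB, {gbps / 1000:.1f} TB/s across {racks} racks")  # 300 PB, 10.0 TB/s, 100 racks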
“Today’s acquisition of QLogic is highly complementary and strategic to Cavium, and it creates a diversified pure-play infrastructure semiconductor leader,” stated Syed Ali, President and Chief Executive Officer of Cavium. “QLogic’s industry-leading products extend our market position in the data center, cloud, and storage markets, and further diversify our revenue and customer base. In addition to the compelling strategic benefits, the manufacturing, sales, and operating synergies will create significant value for our shareholders.”
“Today, scalable compute and storage systems suffer from data bottlenecks that limit research and product development and constrain application services. ConnectX-5 will help unleash business potential with faster, more effective, real-time data processing and analytics. With its smart offloading, ConnectX-5 will enable dramatic increases in CPU, GPU, and FPGA performance, enhancing effectiveness and maximizing the return on data center investment.”
In this special guest feature, Scot Schultz from Mellanox and Terry Myers from HPE write that the two companies are collaborating to push the boundaries of high performance computing. “So while every company must weigh the cost and commitment of upgrading its data center or HPC cluster to EDR, the benefits of such an upgrade go well beyond the increase in bandwidth. Only HPE solutions that include Mellanox end-to-end 100Gb/s EDR deliver efficiency, scalability, and overall system performance that results in maximum performance per TCO dollar.”
Today Mellanox announced the BlueField family of programmable processors for networking and storage applications. “As a networking offload co-processor, BlueField will complement the host processor by performing wire-speed packet processing in-line with the network I/O, freeing the host processor to deliver more virtual networking functions (VNFs),” said Linley Gwennap, principal analyst at the Linley Group. “Network offload results in better rack density, lower overall power consumption, and deterministic networking performance.”
In this video, Oklahoma State Director of HPC Dana Brunson describes how the Cowboy supercomputer powers research. “High performance computing is often used for simulations that may be too big, too small, too fast, too slow, too dangerous, or too costly; it is also used for data-intensive work. You may remember the Human Genome Project: it took nearly a decade and cost a billion dollars, and that kind of analysis can now be done over a weekend for under a thousand dollars. Our current supercomputer is named Cowboy; it was funded by a 2011 National Science Foundation grant and has been serving us very well.”
Today Mellanox announced that the University of Cambridge has selected Mellanox’s end-to-end Ethernet interconnect solution, including Spectrum SN2700 Ethernet switches, ConnectX-4 Lx NICs, and LinkX cables, for its OpenStack-based scientific research cloud. The win expands Mellanox’s existing InfiniBand footprint at the university and empowers it to realize its vision of HPC and cloud convergence through high-speed cloud networks at 25/50/100Gb/s throughput.
Mellanox is seeking an HPC Application Performance Engineer in our Job of the Week. “Mellanox Technologies is looking for a talented engineer to lead datacenter application performance optimization and benchmarking over Mellanox networking products. This individual will primarily work with marketing and engineering to execute low-level and application-level benchmarks focused on High Performance Computing (HPC) open source and ISV applications, in addition to providing software and hardware optimization recommendations. This individual will also work closely with hardware and software partners and customers to benchmark Mellanox products under different system configurations and workloads.”