The new TOP500 list is out, and Radio Free HPC is here podcasting the scoop in their own special way. With two new systems in the TOP10, there are many different perspectives to share. “The Cori supercomputer, a Cray XC40 system installed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), slipped into the number 5 slot with a Linpack rating of 14.0 petaflops. Right behind it at number 6 is the new Oakforest-PACS supercomputer, a Fujitsu PRIMERGY CX1640 M1 cluster, which recorded a Linpack mark of 13.6 petaflops.”
Welcome to the Mobile Edition of the Print ‘n Fly Guide to SC16 in Salt Lake City. Inside this guide you will find technical features on supercomputing, HPC interconnects, and the latest developments on the road to exascale. It also has great recommendations on food, entertainment, and transportation in SLC.
“We go to the show for the technology, the engineering, the science, and the math. It’s HPCMatters and STEM. The vendors are showcasing their technology and the science their technology has enabled. The research exhibits are showing how they are contributing to the scientific process with the largest supercomputers that have cool names. That’s what’s so great about SC: It brings together many of the brilliant minds behind these technologies.”
“Real-time analytics and Big Data environments are extremely demanding, and the network is critical in linking together the high-performance IBM POWER-based servers and Tencent Cloud’s massive amounts of data,” said Amir Prescher, Sr. Vice President, Business Development, at Mellanox Technologies. “Tencent Cloud developed an optimized hardware/software platform to achieve new computing records, showing that Mellanox’s 100Gb/s Ethernet technology can deliver total infrastructure efficiency and improve application performance, making it ideal for Big Data applications.”
Intel Omni-Path Architecture (Intel OPA) volume shipments started a mere nine months ago in February of this year, but Intel’s high-speed, low-latency fabric for HPC has covered significant ground around the globe, including integration in HPC deployments that made the Top500 list for June 2016. Intel’s fabric makes up 48 percent of installations running 100 Gbps fabrics on the June Top500 list, and the company expects a significant increase in Top500 deployments, including one that could land among the top ten machines on the list.
In this slidecast, Gilad Shainer from Mellanox announces the world’s first HDR 200Gb/s data center interconnect solutions. “These 200Gb/s HDR InfiniBand solutions maintain Mellanox’s generation-ahead leadership while enabling customers and users to leverage an open, standards-based technology that maximizes application performance and scalability while minimizing overall data center total cost of ownership. Mellanox 200Gb/s HDR solutions will become generally available in 2017.”
Today ECI announced plans to demonstrate a 400G backbone on SCinet, the world’s largest and fastest high-performance network, at SC16 in Salt Lake City. “ECI was delighted to receive the invitation to participate in this exciting demonstration for the high performance computing sector. ECI is no stranger to this sector. We provide services to many research and education networks worldwide. Some of our wins include DFN (Germany), SWITCH (Switzerland), GRNET (Greece), and most recently an exciting win at DeIC (Denmark), details of which will be disclosed in the near future,” said Tony Gomez, VP of Business Development for ECI in North America.
Attendees of SC16 who are interested in open source data management will have plenty of opportunities to learn about the integrated Rule-Oriented Data System (iRODS) and the new iRODS 4.2, which will be released just in time for the conference.
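The announcement itself includes no code, but for readers curious what open source data management with iRODS looks like in practice, here is a minimal sketch using the python-irodsclient package; the host, zone, credentials, and paths are placeholders for illustration, not details from the release.

```python
# Minimal sketch with python-irodsclient (pip install python-irodsclient).
# Host, credentials, zone, and paths are placeholders, not real deployment values.
from irods.session import iRODSSession

with iRODSSession(host="irods.example.org", port=1247,
                  user="alice", password="secret", zone="exampleZone") as session:
    # Upload a local file into an iRODS collection as a data object.
    session.data_objects.put("results.dat", "/exampleZone/home/alice/results.dat")

    # Attach descriptive metadata, which iRODS rules and queries can later act on.
    obj = session.data_objects.get("/exampleZone/home/alice/results.dat")
    obj.metadata.add("experiment", "sc16-demo")
```

The rule engine that gives iRODS its name then lets administrators write policies that trigger on events like this upload, which is the "rule-oriented" part of the system highlighted in the 4.2 release.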
NERSC is working with Cray to explore new ways to more efficiently move data in and out of Cori, a powerful supercomputer being constructed in California. “We need to take advantage of a network guru’s design for moving data for a specific experiment but have SDN do all of the bookkeeping for which compute nodes need to be connected to what networks,” said Brent Draney, group lead for the Networking, Servers and Security Group at NERSC. “I would rather see our network engineers analyze the data flow and how to meet the need instead of having to manually reconfigure the network for the demands of each job.”
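The article does not describe NERSC’s or Cray’s actual tooling, but the bookkeeping Draney describes, programmatically connecting specific compute nodes to the networks an experiment needs, is typically done by pushing rules to an SDN controller over a REST interface. A minimal, hypothetical sketch follows; the controller URL, payload schema, node addresses, and VLAN ID are all illustrative assumptions, not NERSC’s interfaces.

```python
# Hypothetical sketch of SDN "bookkeeping": ask a controller to place the compute
# nodes assigned to a job on the network segment a given experiment needs.
# The endpoint, payload schema, node list, and VLAN ID are illustrative only.
import requests

CONTROLLER = "https://sdn-controller.example.org/api/v1/flows"

def connect_nodes_to_network(job_id, node_ips, vlan_id):
    """Request that each compute node for a job be attached to the experiment's
    network segment, leaving path selection and cleanup to the controller."""
    for ip in node_ips:
        rule = {
            "job": job_id,
            "match": {"src_ip": ip},
            "action": {"set_vlan": vlan_id},
        }
        resp = requests.post(CONTROLLER, json=rule, timeout=10)
        resp.raise_for_status()

# Example: attach three nodes reserved for job 42 to VLAN 301 for a data transfer.
connect_nodes_to_network(42, ["10.0.1.11", "10.0.1.12", "10.0.1.13"], 301)
```

The point of the sketch is the division of labor Draney describes: an engineer decides which nodes belong on which network for a given data flow, and the controller handles the per-job reconfiguration that would otherwise be done by hand.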