In this video from SC12, Solarflare CEO Russell Stern describes the company’s new “bump in the wire” ApplicationOnload Engine (AOE). By enabling applications to be processed on the fly right on the server’s network adapter, the company is opening up a new paradigm of computation, transforming the way networks process data and overcoming performance obstacles that cannot be solved by simply adding more processors.
“Leveraging our high-performance 28-nm Stratix V FPGA, Solarflare has created a comprehensive firmware development kit that provides a straightforward integrated application development environment,” said Jeff Waters, senior vice president and general manager of the Military, Industrial and Computing Division of Altera. “With its ApplicationOnload Engine, Solarflare is delivering an integrated application on-load solution that enables application processing to be moved directly to the network adapter for lower latency, CPU offload or compliance.”
In this video from SC12, Fred Homewood from Gnodal describes the company’s high-bandwidth, low-latency 40 Gigabit Ethernet switch solutions for HPC and the Enterprise.
“Today’s world is seeing the ever-increasing growth of the data center, with 10 GbE becoming the dominant standard. This convergence of requirements has led to the High Performance Data Center, with demands for high bandwidth, low latency and scalability. This is where Gnodal leads the way, with our products offering outstanding performance. With the GS-Series, ease of deployment and management through a scalable switch fabric solution is now achievable.”
On the product side, the Connect-IB dual-port 56Gb/s FDR InfiniBand adapter recently achieved world-record throughput of more than 100Gb/s utilizing PCI Express 3.0 x16 and over 135 million messages per second, 4.5X higher than previous or competing solutions.
Over at Datacenter Knowledge, Rich Miller writes that CERN’s new data center in Budapest is set to be one of the first beneficiaries of a new terabit network created by GÉANT, a European data network for researchers and scientists.
GÉANT’s migration to the latest transmission and switching technology is designed to support up to 2Tbps (terabits per second) capacity across the core network. 500Gbps capacity will be available across the core network from first implementation, delivering circuits across Europe that will allow individual users to transfer data at speeds of up to 100Gbps, or multiples thereof, thereby enabling faster collaboration on critical projects and meeting the rapidly increasing demand for data transfer.
Talk about Big Data–the CERN Large Hadron Collider generates over 100 petabytes of data per year at its home near Geneva, Switzerland. Read the Full Story.
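For a sense of scale, here is a rough back-of-envelope sketch (illustrative only, assuming a fully saturated link with no protocol overhead) of how long it would take to move LHC-scale data over a single 100Gbps GÉANT circuit:

```python
# Back-of-envelope: how long does it take to move CERN-scale data over a
# single 100 Gbps circuit? Illustrative only; assumes the link is fully
# saturated with no protocol overhead.

PETABYTE_BITS = 8 * 10**15   # 1 PB = 10^15 bytes = 8 * 10^15 bits
LINK_BPS = 100 * 10**9       # 100 Gbps circuit

def transfer_time_hours(petabytes: float) -> float:
    """Hours needed to move the given volume at full line rate."""
    return petabytes * PETABYTE_BITS / LINK_BPS / 3600

print(f"1 PB at 100 Gbps: {transfer_time_hours(1):.1f} hours")                    # ~22 hours
print(f"100 PB (a year of LHC data): {transfer_time_hours(100) / 24:.0f} days")   # ~93 days
```

Even at 100Gbps, a year’s worth of LHC data keeps a circuit busy for roughly three months, which is why multi-hundred-gigabit core capacity matters for this kind of collaboration.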
In this video, Bill Lee from IBTA and Rupert Dance from the Open Fabrics Alliance discuss the latest developments in high performance networking at the SC12 conference in Salt Lake City.
This week the OpenFabrics Alliance (OFA) announced that 224 TOP500 supercomputers are using OpenFabrics Software (OFS) in their high performance computing (HPC) clusters, including two of the top 10. Clusters using OFA’s OFS driver stacks and application libraries achieve the highest performance of all clusters using standard interconnects.
According to the recently published list, OFS is present in the following:
224 clusters, 45% of the TOP500 list
All 10 of the standards-based Petascale systems
86% of the accelerator-based systems
“The results from the TOP500 are a clear indication that OFS adoption continues to grow and that OFS is the leading open source software stack for running applications over InfiniBand, iWARP and RoCE,” said Jim Ryan, chairman, OFA. “The success of OFS is due to the hard work of OFA members and users whose mission is to identify and implement the highest-performing interconnect in the industry.”
Mellanox has introduced the MetroX series of long-distance InfiniBand switch solutions. Currently, InfiniBand solutions are being deployed within the data center to effectively connect servers to each other and to storage. MetroX enables native InfiniBand and RDMA connectivity between data centers across multiple geographically distributed sites. MetroX can transfer data over distances of up to 10km today, and up to 100km in the future. Running six long-haul ports at 40Gb/s and six downlink FDR 56Gb/s InfiniBand ports enables star-like campus deployments and provides clear capital expense reduction versus single port-to-port long-haul solutions in the market today.
“The MetroX series extends InfiniBand beyond a single data center network location to help deliver higher performance to local, campus and even metro applications,” said Gilad Shainer, vice president of market development at Mellanox Technologies. “Mellanox’s MetroX is the perfect cost-effective, low power, easily managed solution that enables today’s data centers to run over local and distributed InfiniBand fabrics, with management under a single unified network infrastructure.”
The MetroX TX6100, supporting distances of up to 10km at 10Gb/s, will be available in the first quarter of 2013. The MetroX TX6200, supporting distances of up to 10km at 40Gb/s, will be available in the third quarter of 2013.
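For context on what “long haul” means for latency, the sketch below estimates the one-way propagation delay over MetroX-class distances. It assumes light travels at roughly two-thirds of c in optical fiber and ignores switch and adapter latency, so it is a lower bound rather than a measured figure.

```python
# Rough one-way propagation delay over long-haul fiber, ignoring switch
# and adapter latency. Assumes light travels at roughly 2e8 m/s in fiber
# (about 5 microseconds per kilometer).

C_FIBER_M_PER_S = 2.0e8

def one_way_delay_us(distance_km: float) -> float:
    return distance_km * 1000 / C_FIBER_M_PER_S * 1e6

for km in (10, 100):
    print(f"{km:>3} km: ~{one_way_delay_us(km):.0f} us one-way")
# 10 km -> ~50 us, 100 km -> ~500 us
```

In other words, the fiber itself adds tens to hundreds of microseconds over metro distances, which still leaves RDMA between sites far faster than most storage or application stacks.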
Mellanox announced availability of L3 software features in MLNX-OS, the company’s switch software stack. The addition of these Ethernet routing protocols enables end-users to build large Ethernet networks that leverage Mellanox’s SwitchX-based 10/40GbE switch systems for superior throughput, latency and power efficiency. NVGRE, VXLAN and similar technologies run on top of Mellanox L2/L3 Ethernet switches to enable IP-based scalable, fully virtualized networks.
“Our new routing capabilities help us provide a great solution for the growing need for 40GbE, non-blocking storage networks, typically built in a CLOS-3 topology, that leverage SwitchX’s 4Tb/s switching/routing capacity,” said Michael Kagan, CTO at Mellanox Technologies. “Providing IP-based scalable 10/40GbE and even 56GbE networks increases Mellanox’s ability to deliver solutions for the increasing needs of bandwidth-sensitive applications.”
Today Mellanox announced that the company’s Connect-IB networking technology for scalable computing demonstrates record performance with more than 135 million messages per second and data throughput higher than 100Gb/s. Based on 56Gb/s FDR InfiniBand, the Connect-IB dual-port adapter achieves the industry’s highest throughput, 4.5X higher than competing solutions. These performance capabilities are critical to High-Performance Computing (HPC), Web 2.0, Cloud, Big Data and financial applications, which require a high rate of message communication in order to deliver faster results and provide a competitive advantage to their users.
“Mellanox Connect-IB delivers the next-generation interconnect architecture and performance for the world’s leading HPC, Web 2.0, cloud and storage infrastructures,” said Michael Kagan, CTO at Mellanox Technologies. “By providing double the interconnect throughput and 4.5X the message rate, we enable applications to scale with greater performance and efficiency. Together with our server and storage partners, we pave the road to Exascale platforms.”
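As a rough sanity check on the headline number, the sketch below shows why a dual-port FDR adapter can exceed 100Gb/s. It assumes FDR’s 14.0625Gb/s per-lane signaling with 64b/66b encoding and a PCIe 3.0 x16 host interface; actual throughput also depends on message size and protocol overhead.

```python
# Back-of-envelope: why a dual-port FDR adapter can exceed 100 Gb/s.
# Assumes 4x FDR lanes per port at 14.0625 Gb/s with 64b/66b encoding and
# a PCIe 3.0 x16 host interface; real-world throughput also depends on
# message size, protocol headers and PCIe overhead.

FDR_LANE_GBPS = 14.0625
ENCODING = 64 / 66        # 64b/66b line coding
LANES_PER_PORT = 4
PORTS = 2

port_data_gbps = FDR_LANE_GBPS * ENCODING * LANES_PER_PORT  # ~54.5 Gb/s
adapter_gbps = port_data_gbps * PORTS                       # ~109 Gb/s
pcie3_x16_gbps = 8 * 16 * (128 / 130)                       # ~126 Gb/s raw

print(f"Per-port FDR data rate: {port_data_gbps:.1f} Gb/s")
print(f"Dual-port aggregate   : {adapter_gbps:.1f} Gb/s")
print(f"PCIe 3.0 x16 raw      : {pcie3_x16_gbps:.1f} Gb/s")
```

Two FDR ports together offer roughly 109Gb/s of data bandwidth, and PCIe 3.0 x16 is the first host interface with enough headroom to carry it, which is what makes the 100Gb/s-plus figure plausible.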
The InfiniBand Trade Association (IBTA) and the OpenFabrics Alliance (OFA) are sharing a booth at SC12 this year and co-hosting a panel session on future I/O architectures.
I/O is a significant factor in enabling performance and scalability for HPC and Big Data analysis. The panel session, “Exascale and Big Data I/O,” will be moderated by Bill Boas from System Fabric Works and will discuss Tier 1 OEM and end-customer/user requirements for future I/O architectures, standards and protocols, and whether they should be open or proprietary. Panelists will include top industry technologists such as Larry Kaplan, I/O architect at Cray; Sorin Faibish, chief scientist, Fast Data Group at EMC; Ronald Luijten, data motion architect at IBM Zurich Research; Michael Kagan, co-founder and chief technology officer at Mellanox Technologies; Manoj Wadekar, chief scientist at QLogic; and Peter Braam, storage software fellow at Xyratex.
The panel session will take place on Wednesday, Nov. 14 at 1:30 p.m. in Room 355-BC. Visit IBTA and OFA at SC12 booth #3630 for more information or read the Full Story.
The SC12 conference this week in Salt Lake City will be home to one of the fastest computer networks in the world.
Known as SCinet, the network is built each year to support the international conference for high performance computing, networking, storage and analysis. Over 100 engineers representing industry, academia and government institutions have volunteered their time over the past year to plan and build SCinet using nearly $28 million in donated equipment. The network will serve as the primary backbone supporting all 10,000+ SC conference attendees as they unveil their latest innovations in high performance computing applications.
“Unlike typical Internet traffic, scientific workflows tend to demand high-capacity network links for long-duration, large data flows,” said Linda Winkler, Senior Network Engineer at Argonne National Laboratory and chair of SCinet for SC12. “The SCinet infrastructure was architected to meet these demanding requirements.”
The SCinet Research Sandbox (SRS) will once again showcase the next generation of HPC applications this year at SC12 with seven innovative network research projects. As a key component of the conference’s SCinet infrastructure, the SRS will provide researchers with dedicated access to multiple 100 Gbps wide area network links as well as a 10 Gbps OpenFlow network testbed.
“In addition to supporting the extreme demands of the HPC-based demonstrations that have become the trademark of the conference, SCinet also seeks to foster and highlight developments in network research that will be necessary to support the next generation of science applications,” said Brian Tierney, SRS co-chair for SC12 and head of ESnet’s Advanced Network Technologies Group. “Both 100 Gbps networking and OpenFlow have become some of the most influential networking technologies of this decade. SRS allows the community to showcase innovations on these platforms while in their infancy to demonstrate the impact they may have on the entire HPC community in the future.”
Today Spectra Logic announced the introduction of support for 10 Gigabit Ethernet iSCSI connectivity as an interface option for the Spectra T-Series tape libraries. Offered in partnership with Bridgeworks, the solution enables simple integration of tape systems into 10GbE SANs.
“We’re excited to continue on the path of keeping tape storage systems easy to integrate in the modern data center. By supporting the Bridgeworks solution for 10GbE iSCSI connectivity to our T-Series tape libraries, our customers who are designing data center solutions based on 10GbE iSCSI no longer need to maintain a FC SAN just for their tape storage system,” said Molly Rector, executive vice president of product management and worldwide marketing, Spectra Logic.
The 10GbE iSCSI to FC bridge is available for purchase through Bridgeworks. Read the Full Story.
This week Mellanox announced the creation of Mellanox Federal Systems, a wholly owned subsidiary of Mellanox Technologies. Mellanox Federal Systems will be responsible for driving business development for all federal government agencies and the federal integrator market.
“Mellanox has supported the government’s IT server and storage interconnect needs for more than 10 years and has established itself as a trusted leader in delivering high-performance interconnect solutions,” said Eyal Waldman, chairman, president and CEO of Mellanox Technologies. “The formation of Mellanox Federal Systems is a natural evolution for us in expanding and enhancing our relationship with the U.S. government.”
Dale D’Alessio has been named CEO of Mellanox Federal Systems, which will be based in Vienna, Virginia. Mr. D’Alessio was previously co-founder and managing member of YottaStor, a professional service and product company specializing in big data storage solutions for the Intelligence and U.S. Department of Defense markets. Mellanox solutions are used for a variety of applications and are used by federal agencies in the areas of cloud computing, defense/intelligence, big-data analytics and HPC. Read the Full Story.
Over at EE Times, Rick Merritt writes that Andy Bechtolsheim’s recent keynote at the Linley Tech Processor Conference shed new light on the future of the high performance and networking market.
By 2013, half of all servers could include 10GE interfaces, rising to 80 percent in 2015, Bechtolsheim predicted. That’s about when Intel’s 22-nm server processors based on its Haswell design will emerge, starting a shift to 40GE on servers. The good news is large-scale data centers are creating a robust growth market for such systems. Bechtolsheim showed market research figures estimating the data center switch business alone will rise from $4-5 billion in 2010 to $15 billion in 2015. “This is basically tripling in five years–one of the few IT markets growing at this rate and it’s driven by the shifts from 1 to 10 and 40GE,” he said.
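Taking the midpoint of the $4-5 billion estimate for 2010, here is a quick, purely illustrative check of the “tripling in five years” figure:

```python
# Quick check on the "tripling in five years" figure, taking the midpoint
# of the $4-5 billion 2010 estimate. Purely illustrative arithmetic.

start, end, years = 4.5, 15.0, 5           # $B, 2010 -> 2015
growth = end / start                       # ~3.3x overall
cagr = growth ** (1 / years) - 1           # ~27% per year

print(f"Overall growth: {growth:.1f}x, implied CAGR: {cagr:.0%}")
```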
This week Cray announced it has strengthened its storage and data management team with the addition of key individuals from System Fabric Works (SFW), a recognized leader in storage interconnect solutions and software. Through an agreement with SFW, Cray has hired the majority of SFW’s employees as Cray continues its leadership in developing and deploying high-performance production parallel file system solutions for the HPC and Big Data marketplaces.
“Expanding our team with some of the top storage and InfiniBand engineering experts is an important step in continuing to drive Cray’s growth in the storage market,” said Barry Bolding, Cray’s vice president of storage and data management. “Our expanding base of HPC storage customers has demanding integration and deployment requirements and is looking for vendors like Cray that have experience in both storage and storage interconnects such as InfiniBand. At Cray, we are creating a storage team with the design expertise and best practices that are critical to the performance, management and scalability of parallel storage systems over time.”
Among the SFW employees now working at Cray are Robert (Bob) Pearson, the former CEO of SFW, and Bill Boas, previously the vice president of business development for SFW and a co-founder of the OpenFabrics Alliance. A number of highly skilled engineers from SFW have also joined Cray’s storage and data management team, significantly enhancing Cray’s expertise in Lustre-based storage solutions. SFW is continuing its operations with a new management team. Read the Full Story.
Update: Bill Boas called to let us know that SFW is still going strong with Kevin J. Moran as President & CEO. The company has a number of interesting projects ongoing in the HPC and Cloud space, including a new server based on the Calxeda EnergyCore platform. Boas will continue on at SFW on a part-time basis.