Production Trial Shows Global Science Possible with CAE-1 100Gbps link

In early November, A*CRC, ICM, and Zettar conducted a production trial over the newly built Collaboration Asia Europe-1 (CAE-1) 100Gbps link connecting Europe and Singapore.

“The project has established a historic first,” said Zettar CEO Chin Fang. “For the first time over the newly built CAE-1 link, with a production setup at the ICM end, it has been shown that moving data at great speed and scale between Poland (and thus Eastern Europe) and Singapore is a reality. Furthermore, although the project was initiated only in mid-October, all goals have been reached and new ground has been broken as well. It is also a true international collaboration.”

The CAE-1 link provides shorter, faster, and cheaper connectivity than the links routed via the North Atlantic Ocean, across North America, and across the Pacific Ocean that have carried much of the R&E traffic between Europe and the Asia Pacific region to date. Going forward, the Middle East region will be able to participate in globally distributed, data-intensive research and scientific endeavors with Europe, the Asia Pacific region, and beyond.

Data is the new “oil” of the modern digital age. Just as ready means of transporting oil have enabled the rapid progress the world has seen for more than a century, having a complete solution for transporting data over great distances at high speed will surely spur further progress in the digital age. That the project employed only existing equipment, a production setup, and GA-grade software shows that a complete solution is available and can be put together in a very short time. Its cost-effectiveness should also be evident.

Conclusions from the production trial:

  • More R&E regions are reachable. From now on, distributed, data-intensive science and engineering collaboration among Europe, the Middle East, and the Asia and Pacific regions is not only feasible but can also be efficient if the right data-moving solution is used.
  • More worldwide participation in distributed, data-intensive research collaboration is a reality. The achievement should encourage and motivate more parties along the data path and beyond to collaborate on the advancement of global science and engineering.
  • Data gravity is no longer a barrier to progress. Even with the tight preparation time, the attained transfer speed already shows it is possible to move 1PB in less than two days between any two points along the data path used by the project.
  • Modest hardware can produce world-class results if the resources are utilized intelligently.
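The "1PB in less than two days" claim can be checked with simple arithmetic. The sketch below assumes the ~60Gbps average rate reported in the highlights and a decimal petabyte (10^15 bytes); neither the exact data volume nor the sustained rate over a full transfer is given in the article, so this is an estimate, not a reported result.

```python
# Back-of-the-envelope check of the "1 PB in under two days" claim,
# assuming the ~60 Gbps average rate reported in the article.
PETABYTE_BITS = 1e15 * 8   # 1 PB (decimal) expressed in bits
RATE_BPS = 60e9            # ~60 Gbps average transfer rate (assumed sustained)

seconds = PETABYTE_BITS / RATE_BPS
days = seconds / 86_400    # 86,400 seconds per day

print(f"{seconds:,.0f} s ≈ {days:.2f} days")  # ≈ 1.54 days
```

At a sustained 60Gbps the transfer would take roughly a day and a half, consistent with the "less than two days" figure.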

According to Fang, this was a production trial, not a “for show” demo. For example, at ICM two production Lustre file systems are employed, each formed with 20 OSTs, and each OST has 4 x 7200RPM HDDs. Not a single SSD is used. There is only a single DTN at each end; both DTNs come from existing hardware inventory and are more than two years old.

Highlights:

  • Attained result is at the world’s top level (~60Gbps average)
  • Stock TCP is used. There is no need to use any proprietary protocol.
  • Vast distance: 19,800 km, 12,375 miles
  • Stunningly short preparation: 2 weeks total
  • InfiniBand (IB), typically used as the interconnect in the HPC space, is not amenable to interface bonding, unlike Ethernet; nevertheless, the two storage pools with IB interconnects are aggregated by the data mover software Zettar zx
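The combination of stock TCP, a 100Gbps link, and a 19,800 km path is notable because of the bandwidth-delay product (BDP): the amount of data that must be "in flight" to keep such a long, fat pipe full. A rough estimate is sketched below; the ~200,000 km/s propagation speed (about 2/3 c in optical fiber) and the assumption that the fiber path length equals the quoted great-circle distance are approximations, not figures from the article.

```python
# Rough bandwidth-delay product estimate for the CAE-1 path, assuming
# light propagates at ~2/3 c in fiber and the fiber path is ~19,800 km.
C_FIBER_KM_S = 2e5     # ~200,000 km/s propagation speed in fiber (assumed)
PATH_KM = 19_800       # quoted path length
LINK_BPS = 100e9       # 100 Gbps link capacity

rtt_s = 2 * PATH_KM / C_FIBER_KM_S   # round-trip time: out and back
bdp_bytes = LINK_BPS * rtt_s / 8     # bytes in flight to fill the pipe

print(f"RTT ≈ {rtt_s * 1000:.0f} ms, BDP ≈ {bdp_bytes / 1e9:.2f} GB")
# → RTT ≈ 198 ms, BDP ≈ 2.48 GB
```

An RTT near 200ms and a BDP of a few gigabytes mean that sustaining ~60Gbps with stock TCP requires very large buffers and careful tuning, which underlines why the result is non-trivial.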

Next steps

Singapore is an important hub of high-speed international connectivity. Australia is one of the main sites of the ambitious and demanding Square Kilometre Array (SKA) project, and AARNet is one of the six CAE-1 consortium members. The SKA project produces a huge amount of data that needs to be shared efficiently among international collaborating organizations. Thus, a likely next step is to engage a supercomputing center in Australia and conduct a similar project, although other, even more ambitious possibilities also exist.

“The current setup was prepared within a very tight timeline,” said Fang. “Further polish should improve overall efficiency, and higher transfer rates should be attainable.”

For further details, visit A*CRC at SC19 booth #2049.
