Cray ClusterStor Powers Joliot-Curie Supercomputer in France


Last week at ISC 2018, Cray announced that the new GENCI supercomputer in France is powered by Cray ClusterStor storage systems integrated by Atos. Named after the Nobel Prize-winning French couple, the 9 Petaflop Joliot-Curie system is made up of a BullSequana X1000 system from Atos, which includes Intel Xeon Scalable processors, Intel Xeon Phi processors, and the Cray ClusterStor storage system. The system delivers 300 gigabytes per second sustained throughput from five petabytes usable capacity through the Lustre file system in a footprint of just three racks.

The new Joliot-Curie system is operated by CEA at its TGCC facility. It will serve both PRACE and French national scientific needs.

“At GENCI we believe that supercomputers will be at the very heart of the accelerated scientific discovery process enabled by the convergence of high-performance computing (HPC) simulation/modeling with high-performance data analytics/big data and artificial intelligence,” said Philippe Lavocat, CEO of GENCI. “This convergence creates order-of-magnitude increases in input and output data in increasingly heterogeneous scientific workflows. With the Cray ClusterStor storage platform, we can cope with this data growth without the need to spend more and more on HPC storage at the expense of the ‘Computing’ in HPC. This allows us to deliver balanced supercomputers to the French and European scientific communities, within a reasonable and controlled budget.”


This supercomputer consists of two separate partitions:

Irene SKL:

  • 1,656 dual-processor nodes with Intel Skylake 8168 (2.7 GHz, 24 cores per processor) CPUs, for a total of 79,488 compute cores and a peak performance of 6.86 Petaflops
  • 192 GB of DDR4 memory / node
  • EDR InfiniBand interconnect

Irene KNL:

  • 666 many-core nodes with Intel Xeon Phi (KNL) 7250 processors (1.4 GHz, 68 cores per processor), for a total of 45,288 cores and a peak performance of 2 Petaflops
  • 96 GB of DDR4 memory + 16 GB of MCDRAM memory per node
  • Atos-Bull BXI interconnect
  • 20 visualization nodes with Nvidia P100 co-processors, plus 5 “large memory” nodes for pre/post-processing (3 TB of memory and an Nvidia P100 co-processor per node), round out the compute nodes
  • I/O bandwidth of 300 GB/s
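The core counts and peak-performance figures above can be sanity-checked with a little arithmetic. The sketch below assumes 32 double-precision FLOPs per cycle per core (AVX-512 with two FMA units, which applies to both Skylake 8168 and KNL 7250 at base clock); sustained performance in practice is lower.

```python
# Rough peak-performance check for the two Irene partitions.
# Assumes 32 DP FLOPs/cycle/core (AVX-512, 2 FMA units x 8 doubles x 2 ops).
FLOPS_PER_CYCLE = 32

def peak_petaflops(nodes, procs_per_node, cores_per_proc, ghz):
    """Return (total cores, theoretical peak in Petaflops)."""
    cores = nodes * procs_per_node * cores_per_proc
    return cores, cores * ghz * 1e9 * FLOPS_PER_CYCLE / 1e15

skl_cores, skl_pf = peak_petaflops(1656, 2, 24, 2.7)   # Irene SKL
knl_cores, knl_pf = peak_petaflops(666, 1, 68, 1.4)    # Irene KNL

print(skl_cores, round(skl_pf, 2))  # 79488 cores, ~6.87 PF
print(knl_cores, round(knl_pf, 2))  # 45288 cores, ~2.03 PF
```

Both results line up with the quoted figures of 79,488 cores / 6.86 Petaflops and 45,288 cores / 2 Petaflops.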

“It is critical for CEA to work with partners that understand the unique challenges of building solutions at the leading edge of supercomputing,” said Jacques-Charles Lafoucrière, department lead at CEA. “Cray ClusterStor excels in performance efficiency and, over the past few years, the ClusterStor platform has proven its stability in the most demanding HPC environments at CEA. Key environmental factors such as density and power requirements also weighed in favor of the ClusterStor solution.”
