

NeSI in New Zealand Installs Pair of Cray Supercomputers

NeSI’s Fabrice Cantos and Greg Hall at the Cray factory checking in on our XC50

The New Zealand eScience Infrastructure (NeSI) is commissioning a new HPC system, with the main computation and storage infrastructure at NIWA’s High Performance Computing Facility at Greta Point in Wellington and a secondary copy of all critical data held at the University of Auckland’s Tamaki Data Centre.

The new systems provide a step change in power over NeSI’s existing services. They include a Cray XC50 supercomputer and a Cray CS400 high performance computing cluster, both sharing the same high performance and offline storage systems.

Beyond these core components, the new systems will deliver new capabilities in data analytics and artificial intelligence (AI), virtual laboratory services that provide interactive access to data on the HPC filesystems, remote visualisation, and support for high performance end-to-end data transfers between institutions and NeSI. These features will be rolled out over time, alongside training to ensure ease of use.

The new supercomputing platform enables new capabilities:

  • Interactive analyses and exploratory visualisation, backed by high performance access to data, increase clarity and insight.
  • Pre- and post-processing on specialised large-memory nodes and GPUs, with a rich catalogue of software, enables more efficient workflows.
  • Offline storage and archiving of big data supports research teams working together across projects and enables the most data-intensive research workflows.
  • Advanced data analytics and artificial intelligence open the way to new insights and help resolve complex problems.
  • End-to-end integration supporting high performance data transfers across institutional boundaries lets researchers move big data to and from NeSI quickly and efficiently.
  • Virtual laboratories give research communities a customised, integrated and easy-to-use one-stop shop of domain-specific tools and data.
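The end-to-end transfer capability mentioned above is the kind of workflow typically driven by a managed transfer tool such as the Globus CLI. The commands below are a generic sketch only; the endpoint UUIDs and paths are placeholders, not NeSI’s actual configuration.

```shell
# Hypothetical high-performance transfer using the Globus CLI.
# Endpoint UUIDs and paths are placeholders, not real NeSI endpoints.
SRC="11111111-2222-3333-4444-555555555555"   # e.g. an institutional endpoint
DST="66666666-7777-8888-9999-000000000000"   # e.g. a NeSI-side endpoint

# Start an asynchronous, checksummed, recursive transfer of a dataset
globus transfer --recursive --sync-level checksum \
    "$SRC:/data/experiment01" "$DST:/home/user/experiment01" \
    --label "experiment01 upload"
```

The transfer runs server-side once submitted, so large datasets move between institutions without tying up the researcher’s workstation.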
Capability Supercomputer

Feature: Cray XC50 massively parallel capability supercomputer

Hardware:
  • 464 nodes (of which NeSI has access to 265)*: Intel Xeon “Skylake” 6148 processors, 2.4 GHz, 40 cores/node (18,560 cores total)
  • Interconnect: Aries Dragonfly
  • Memory: 50% of nodes with 96 GB/node, 50% with 192 GB/node (66.8 TB total)

Operating environment:
  • SUSE Linux, Spectrum Scale filesystem, Slurm scheduler
  • Cray Programming Environment, Cray compilers and tools, Intel compilers and tools
  • Allinea Forge (DDT & MAP) software development and debugging tools
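Since both new systems use the Slurm scheduler, work is submitted as batch jobs. The script below is an illustrative sketch sized for the XC50’s 40-core Skylake nodes; the job name, partition defaults, project code, module name, and application binary are all assumptions, not actual NeSI settings.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a 40-core-per-node XC50 system.
# Account, module, and binary names are placeholders, not NeSI values.
#SBATCH --job-name=my_simulation
#SBATCH --nodes=2                 # two Skylake nodes
#SBATCH --ntasks-per-node=40      # one MPI rank per core (40 cores/node)
#SBATCH --mem=90G                 # fits the smaller (96 GB) node type
#SBATCH --time=02:00:00
#SBATCH --account=project00000    # placeholder project code

module load craype                # Cray Programming Environment (example)
srun ./my_mpi_application         # srun launches the ranks under Slurm
```

A job like this would be submitted with `sbatch job.sl` and monitored with `squeue`; the `#SBATCH` directives shown are standard Slurm options.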

Pre- and Post-Processing and Virtual Laboratories

Hardware:
  • 28 nodes (of which NeSI has access to 16)*: Intel Xeon “Skylake” 6148 processors, 2.4 GHz, 40 cores/node (1,200 cores total)
  • Memory: 768 GB/node (23 TB total)
  • GPUs: 8 NVIDIA Pascal GPGPUs

Operating environment:
  • CentOS 7, Spectrum Scale filesystem
  • Intel Parallel Studio Cluster Edition
  • NICE DCV visualisation

*This HPC system has been procured in collaboration with NIWA. The XC50 Supercomputer is shared between the two organisations.

 

Capacity High Performance Computer

Feature: Cray CS400 capacity high performance computing cluster

Hardware:
  • 234 nodes: Intel Xeon E5-2695 v4 “Broadwell” processors, 2.1 GHz, 36 cores/node (8,424 cores total)
  • Memory: 128 GB/node (30 TB total)
  • Interconnect: FDR InfiniBand from the node to an EDR InfiniBand (100 Gb/s) backbone network

Operating environment:
  • CentOS 7, Spectrum Scale filesystem, Slurm scheduler
  • Cray Programming Environment, Cray compilers and tools, Intel compilers and tools
  • Allinea Forge (DDT & MAP) software development and debugging tools

Pre- and Post-Processing and Virtual Laboratories

Hardware:
  • 16 large-memory and virtual laboratory nodes: Intel Xeon E5-2695 v4 “Broadwell” processors, 36 cores/node (576 cores total)
  • Memory: 512 GB/node (8.2 TB total)
  • GPUs: 8 NVIDIA Pascal GPGPUs
  • 1 huge-memory node: 64 cores, 4 TB memory

Operating environment:
  • CentOS 7, Spectrum Scale filesystem
  • NICE DCV visualisation

 

High Performance and Offline Storage

High performance storage:
  • IBM Elastic Storage Server GS4s and GL6s: 6.9 PB (shared across the Capability Supercomputer and Capacity HPC)
  • Mellanox EDR InfiniBand (100 Gb/s) network
  • Total bandwidth: ~130 GB/s
  • Spectrum Scale filesystem (previously called GPFS)

Offline storage:
  • IBM TS3500 tape library, 12 × LTO-7 drives, 5.8 PB uncompressed (expandable to 30 PB uncompressed), replicated across two sites
