Over at Lawrence Livermore National Laboratory, Don Johnston writes that their new ground-breaking Catalyst system is now available to industry collaborators for testing big data technologies, architectures and applications. Developed through a partnership of Cray, Intel and Lawrence Livermore, Catalyst is a Cray CS300 high performance computing (HPC) cluster available for collaborative projects with industry through Livermore’s High Performance Computing Innovation Center (HPCIC).
“Over the next decade, global data volume is forecast to reach more than 35 zettabytes,” said Fred Streitz, director of the HPCIC (a zettabyte is a trillion gigabytes). “That enormous amount of unstructured data provides an opportunity. But how do we extract value and inform better decisions from that wealth of raw information?”
- 150 teraflops peak
- 12-core Intel Xeon E5-2695 v2 processors
- 324 nodes
- 7,776 cores
- 128 GB of DRAM per node
- 800 GB of NVRAM per compute node
- 3.2 TB of NVRAM per Lustre router node
- Dual-rail Quad Data Rate (QDR-80) Intel TrueScale fabric
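The 150-teraflop peak figure follows directly from the core count in the list above. As a hedged sanity check (the article does not state the clock rate or vector width, so the 2.4 GHz base clock of the Xeon E5-2695 v2 and 8 double-precision flops per cycle per core via AVX are assumptions here), a quick calculation reproduces it:

```python
# Back-of-the-envelope check of Catalyst's quoted 150 TF peak.
# Assumptions not stated in the article: 2.4 GHz base clock for the
# Xeon E5-2695 v2, and 8 double-precision flops/cycle/core via AVX.
nodes = 324
cores_per_node = 24          # two 12-core E5-2695 v2 sockets per node
clock_hz = 2.4e9             # assumed base clock
flops_per_cycle = 8          # assumed AVX: 4-wide DP multiply + add

total_cores = nodes * cores_per_node               # 7,776 cores, matching the spec list
peak_tflops = total_cores * clock_hz * flops_per_cycle / 1e12

print(f"{total_cores} cores, ~{peak_tflops:.0f} TF peak")  # ~149 TF
```

The ~149 TF result is within rounding of the quoted 150 TF peak, which suggests the headline number is a standard cores × clock × vector-width figure rather than a measured benchmark.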
Deployed in October 2013, the Catalyst architecture has already begun to provide insight into the kinds of technologies the ASC program will require over the next decade to meet its high performance simulation and big data computing mission needs. The system’s increased storage capacity (in both volatile and nonvolatile memory) is the major departure from the classic simulation-based computing architectures common at DOE laboratories, and it opens new opportunities for combining floating-point capability with data analysis in a single environment. The machine’s expanded DRAM and fast, persistent NVRAM are well suited to a broad range of big data problems, including bioinformatics, business analytics, machine learning and natural language processing.