LSU adds new cluster for large memory jobs


LSU announced this week that it has installed a new cluster for users of the university's HPC resources at the LSU Center for Computation & Technology.

Philip is a high-performance computing cluster that will support research requiring high-performance processing and very large memory resources.  The new system allows researchers to take advantage of shared memory programming techniques, and gives researchers the means to experiment with and take advantage of new computing models.

…Philip is a 37-node cluster with 3.5 teraflops peak performance of computing power, providing more memory per core than is available on previous LSU computing clusters. Each node contains two of the latest Intel Quad Core Nehalem Xeon 64-bit processors, making Philip capable of operating at higher core processing speeds than the University’s current high-performance computing systems.

That first statement about shared memory confused me a little when I read it in the original release a few days ago, so I sent an email to Honggao Liu, LSU's HPC director. I was curious whether they were using something like the RNA Networks memory appliance or ScaleMP's vSMP software to get shared memory across the cluster. Turns out, no. According to Dr. Liu, the large-memory aspect is strictly inside a node:

It is shared memory between the cores in a node (we have several nodes with 8 cores and 96GB memory per node).
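In other words, this is the classic within-node model: every core on a node sees the same physical memory, so workers can operate on one large in-memory dataset without message passing or copies. In HPC this is usually done with OpenMP or pthreads; as a minimal sketch of the same idea (not LSU's actual software stack), here is a Python version where two processes attach to one shared block and each fills half of it in place:

```python
# Sketch of shared-memory parallelism within a single node: two workers
# write disjoint slices of one shared buffer, with no data copied between them.
# This illustrates the model only; on a cluster like Philip you would more
# likely use OpenMP threads inside a node.
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory


def fill(name, start, stop):
    # Each worker attaches to the same named block and writes its slice.
    shm = SharedMemory(name=name)
    for i in range(start, stop):
        shm.buf[i] = i % 256
    shm.close()


if __name__ == "__main__":
    size = 1024
    shm = SharedMemory(create=True, size=size)
    mid = size // 2
    workers = [
        Process(target=fill, args=(shm.name, 0, mid)),
        Process(target=fill, args=(shm.name, mid, size)),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Both workers wrote into the same physical memory, so the parent
    # can read the combined result directly.
    total = sum(shm.buf)
    print(total)  # prints 130560
    shm.close()
    shm.unlink()
```

The same pattern scales up naturally on a fat node like Philip's: with 8 cores and 96 GB per node, each worker can hold a slice of a very large array in the one shared address space.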
