
Interview: Michigan Tech Uses “Superior” Supercomputer for Advanced Research

Any university involved in compute-intensive research would love to have a supercomputer at its disposal. Michigan Technological University is one of the fortunate ones to have a super-fast machine accessible by the entire research community on campus. The computer is known as “Superior” and we sat down with Gowtham S., Director of Research Computing at the University, to hear more about it.

insideHPC: The HPC cluster known as Superior resides at MTU. Where did the name come from and what is the machine generally used for?


Gowtham S.: Michigan Tech is located very close to Lake Superior. The lake is the biggest of all the Great Lakes, and we were building something bigger and more powerful than anything we had available in-house at the time, so we chose to call this HPC cluster Superior. We wanted the name to convey a sense of grandeur and power, and we wanted every researcher from just about every field at Michigan Tech to feel at home with it.

It is our central, shared computing resource, and it is used for a variety of projects by researchers from science and engineering departments.

insideHPC: How about the technical aspects of it? Can you share some of the specifications with us?

Gowtham S.: The initial design included 72 CPU compute nodes (Intel E5-2670 2.60 GHz with 16 cores and 64 GB RAM per node) and 5 GPU compute nodes (four NVIDIA Tesla M2090 GPUs per node). Since then, an additional 20 CPU compute nodes have been added by researchers and research institutes. The overall compute capacity as of now is 31 TFLOPS (CPU) and 13 TFLOPS (GPU).
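As a back-of-the-envelope check, the quoted capacity figures can be reproduced from the node specifications. This is a sketch, not the cluster's own accounting: it assumes 8 double-precision FLOPs per cycle per core for the Sandy Bridge-era E5-2670 (AVX add plus multiply) and NVIDIA's published 665 GFLOPS double-precision peak for the Tesla M2090.

```python
# Estimate Superior's theoretical peak from the specs in the interview.
# Assumptions (not stated in the article): 8 DP FLOPs/cycle/core (AVX),
# and 665 GFLOPS DP peak per Tesla M2090 (NVIDIA spec sheet).

cpu_nodes = 72 + 20          # initial nodes plus researcher-added nodes
cores_per_node = 16
clock_hz = 2.60e9            # Intel E5-2670 base clock
flops_per_cycle = 8          # assumed AVX double-precision rate

cpu_peak_tflops = cpu_nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12

gpu_count = 5 * 4            # five GPU nodes, four M2090s each
m2090_peak_tflops = 0.665    # double precision, per NVIDIA spec sheet

gpu_peak_tflops = gpu_count * m2090_peak_tflops

print(f"CPU peak ~ {cpu_peak_tflops:.0f} TFLOPS")  # ~31 TFLOPS
print(f"GPU peak ~ {gpu_peak_tflops:.0f} TFLOPS")  # ~13 TFLOPS
```

Under those assumptions the arithmetic lands on roughly 31 TFLOPS (CPU) and 13 TFLOPS (GPU), matching the figures quoted above.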

insideHPC: What were the initial goals of this supercomputer? Do you feel you have met these goals?

Gowtham S.: Keeping the usage pattern from the days before Superior in mind, our initial expectation was to grow usage steadily to about 50% in the first six months, and to about 80% by the end of year one. The most important initial goal was to ingrain the “greatest good for the greatest number” philosophy in our researchers. We wanted to help them make more time for teaching, research, and family by minimizing the time they would spend on system administration tasks.

Thanks to the unequivocal support from Michigan Tech’s executive committee for our policies and procedures, we believe we have exceeded our initial goals and expectations: Superior has consistently averaged 80% utilization (or higher) since the first month, and 95% or higher for the last six weeks. Our in-house algorithm rewards productive researchers by assigning a higher priority to their subsequent simulations. And no researcher has had to perform any system administration task in the last year (e.g., installing or compiling a software suite, or integrating it with the queuing system).

insideHPC: What did Michigan Tech researchers use before Superior came along?

Gowtham S.: We had eight computing clusters spread around campus, each comprising varying generations of hardware and configured and operated differently by various departments or research groups. Overall usage on any given day was about 20%. Many researchers didn’t have access to the resources they needed. It took us about three years to streamline the available research computing infrastructure and make a strong case for a shared, central resource.

insideHPC: The system’s installation just had its one-year anniversary. What are some of the current projects that are harnessing all of this power?

Gowtham S.: Modeling circulation and particle transport in the Great Lakes system, multiscale modeling of advanced materials and structures, nanostructured materials for electronics, biosensing and human health implications, and unsupervised learning in Big Data and social networks are some of the ongoing projects that use the power of Superior. Here is the complete listing of all 30 projects.

These projects have produced nearly two dozen publications as well, and several proposals are underway for even more projects. That makes us quite happy.

insideHPC: What about the future? What’s down the road for Superior?

Gowtham S.: We would certainly like to make it bigger and better, both in its computing capacity and in the end-user experience. In turn, we hope to attract world-class researchers to Michigan Tech and help them make it their new home. We would also like to support the computational endeavors of small- to mid-scale industries in our community.

We have a “$0.10 per CPU core per hour” model with an in-house algorithm to understand how well Superior is being used. It is fairly similar to Amazon Web Services and Google Compute Engine pricing. Our model indicates that over 95% of the initial cost has been recovered via usage in just one year, and over 60% of all simulations wait less than five minutes to start running. Please see our Usage Reports and Analytics.
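The accounting model described above is simple enough to sketch. Only the $0.10 rate and the 16-core node size come from the interview; the example job below (and the `job_charge` helper name) is purely illustrative.

```python
# Illustrative sketch of a "$0.10 per CPU core per hour" accounting model.
# The rate and 16-core node size are from the interview; the sample job
# and the helper function are assumptions for illustration only.

RATE_PER_CORE_HOUR = 0.10    # dollars, as stated in the interview

def job_charge(cores: int, hours: float) -> float:
    """Nominal cost attributed to one simulation: cores x hours x rate."""
    return cores * hours * RATE_PER_CORE_HOUR

# Example: a simulation using four full CPU nodes (16 cores each) for 24 hours.
charge = job_charge(cores=4 * 16, hours=24)
print(f"${charge:.2f}")      # prints $153.60
```

Summing such charges across all completed jobs, and comparing against the cluster's purchase price, is one straightforward way to arrive at a "percent of initial cost recovered" figure like the 95% quoted above.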

Funding agencies and our alumni, whose support is paramount in achieving these goals, can rest assured that their investment will be well spent in advancing the frontiers of arts, science, and technology.

