Ohio Supercomputer Center Launches ‘Ascend’ HPC GPU Cluster

COLUMBUS, Ohio — The Ohio Supercomputer Center (OSC) has officially launched Ascend, its new high performance computing (HPC) cluster for artificial intelligence (AI), data analytics and machine learning. Ascend comprises Dell PowerEdge servers with 48 AMD EPYC CPUs and 96 NVIDIA A100 80GB Tensor Core GPUs with NVIDIA NVLink, interconnected by the […]

Job of the Week: HPC Operations Manager at Ohio Supercomputer Center

“OSC is seeking an HPC Operations Manager, a new position inside the HPC Systems Team. The operations manager will manage a team of three systems administrators and two student employees. The person who fills this role will be responsible both for supervising operations staff and for providing technical and organizational expertise for all aspects of HPC operations.”

OSC Doubles Down on IBM HPC Storage

The Ohio Supercomputer Center is working with IBM to expand the center’s HPC storage capacity by 8.6 petabytes. Slated for completion in December, the new storage system will not only expand capacity for scratch and project storage but also allow OSC to offer data encryption and full file-system audit capabilities that can support secure storage of sensitive data, such as medical data or other personally identifiable information.

NSF funds second round for OSC’s Open OnDemand

The National Science Foundation (NSF) recently awarded funding to a team led by the Ohio Supercomputer Center (OSC) for further development of Open OnDemand, an open-source software platform supporting web-based access to high performance computing services. “The Open OnDemand 2.0 project will deliver an improved open-source platform for HPC, cloud and remote computing access,” said David Hudak, Ph.D., executive director of OSC. “Additionally, interaction with a growing user base has generated requests for new technical capabilities and more engagements with the science community to extend this platform and deepen its science impact.”

Time-Lapse Video: Building the Pitzer Cluster at the Ohio Supercomputer Center

In this video, Dell EMC specialists and CoolIT technicians build the Ohio Supercomputer Center’s newest, most efficient supercomputer system, the Pitzer Cluster. Named for Russell M. Pitzer, a co-founder of the center and emeritus professor of chemistry at The Ohio State University, the Pitzer Cluster is expected to be at full production status and available to clients in November. The new system will power a wide range of research, from understanding the human genome to mapping the global spread of viruses.

OSC to Deploy Pitzer Cluster built by Dell EMC

Today the Ohio Supercomputer Center announced plans to deploy the center’s newest, most efficient supercomputer system, the liquid-cooled, Dell EMC-built Pitzer Cluster. “Ohio continues to make significant investments in the Ohio Supercomputer Center to benefit higher education institutions and industry throughout the state by making additional high performance computing (HPC) services available,” said John Carey, chancellor of the Ohio Department of Higher Education. “This newest supercomputer system gives researchers yet another powerful tool to accelerate innovation.”

Call for Participation: OSC Statewide User Group Conference in October

The Ohio Supercomputer Center Statewide Users Group (SUG) has issued its Call for Participation. Featuring a talk on OSC’s upcoming Pitzer cluster, the event takes place Oct. 4 in Columbus, Ohio. The purposes of the SUG conference are to foster connections, update OSC’s user base on the center’s direction, highlight new scientific developments produced using OSC resources, and obtain constructive feedback on the future of OSC and its role in supporting science across Ohio. “We will have a flash talk and poster session to highlight the research and emerging ideas from OSC clients. Talks and posters will be selected from abstracts submitted via the registration form. We encourage posters by students just starting their work to show creative ideas on how high-performance computing will enhance their research.”

Ohio Supercomputer Center Hosts User Group Meeting

At the Ohio Supercomputer Center Statewide Users Group spring conference this week, OSC clients in fields spanning everything from astrophysics to linguistics gathered to share research highlights and hear updates about the center’s direction and role in supporting science across Ohio. “SUG is a great vehicle for us to not only communicate to our clients about what is going on from a policy perspective or hardware roadmaps and new services, but for us to hear back from the clients about what they are doing,” said Brian Guilfoos, HPC client services manager at OSC. “Our normal interaction with someone is very technical – ‘This is the thing I’m trying to do, what I’m having a problem with, etc.’ Here we get to take a broader view and look at the science, and it’s good for our staff to be reminded what is being done with our services and what we are enabling.”

Video: Scientel Runs Record-Breaking Calculation on Owens Cluster at OSC

In this video, Norman Kutemperor from Scientel describes how his company ran a record-setting big data problem on the Owens supercomputer at OSC.

“The Ohio Supercomputer Center recently displayed the power of its new Owens Cluster by running the single largest-scale calculation in the center’s history. Scientel IT Corp used 16,800 cores of the Owens Cluster on May 24 to test database software optimized to run on supercomputer systems. The seamless run created 1.25 terabytes of synthetic data.”

Video: How MVAPICH & MPI Power Scientific Research

Adam Moody from LLNL presented this talk at the MVAPICH User Group. “High-performance computing is being applied to solve the world’s most daunting problems, including researching climate change, studying fusion physics, and curing cancer. MPI is a key component in this work, and as such, the MVAPICH team plays a critical role in these efforts. In this talk, I will discuss recent science that MVAPICH has enabled and describe future research that is planned. I will detail how the MVAPICH team has responded to address past problems and list the requirements that future work will demand.”
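For readers unfamiliar with the programming model behind this work: MVAPICH is an implementation of the MPI standard, so applications drive it through the ordinary MPI API. The sketch below is not from the talk; it is a minimal, generic MPI program (rank reporting plus a reduction) that would typically be compiled with an MPI wrapper compiler such as mpicc from an MVAPICH2 installation and launched with mpirun or mpiexec.

/*
 * Minimal MPI sketch (illustrative only, not taken from the talk).
 * Each rank reports itself, then rank 0 collects the sum of all
 * rank numbers via MPI_Reduce.
 *
 * Build/run (typical, assuming an MVAPICH2 or other MPI install):
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    /* Each rank contributes its rank number; rank 0 receives the sum. */
    int local = rank, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}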