TACC to Host HPC for Managers Institute in Austin

Today TACC announced that the first High Performance Computing for Managers Institute will take place September 12-14 in Austin. “With course materials by TACC and the Numerical Algorithms Group, this three-day workshop is specifically tailored to managers and decision makers who are using, or considering using, HPC within their organizations. It is also applicable to those with a real opportunity to make this career step in the future.”

Supercomputing Cancer Data for Treatment Clues

In this video, researchers use TACC supercomputers in the war against cancer. “Next-generation sequencing technology allows us to observe genomes and their activity in unprecedented detail,” said one researcher. “It’s also making a lot of biomedical research increasingly computational, so it’s great to have a resource like TACC available to us.”

Avere Systems Powers BioTeam Test Lab at TACC

“In cooperation with vendors and TACC, BioTeam utilizes the lab to evaluate solutions for its clients by standing up, configuring and testing new infrastructure under conditions relevant to life sciences in order to deliver on its mission of providing objective, vendor-agnostic solutions to researchers. The life sciences community is producing increasingly large amounts of data from sources ranging from laboratory analytical devices to research to patient data, which is putting IT organizations under pressure to support these growing workloads.”

Supercomputing High Energy Cancer Treatments

Over at TACC, Aaron Dubrow writes that researchers are using TACC supercomputers to improve, plan, and understand the basic science of radiation therapy. “The science of calculating and assessing the radiation dose received by the human body is known as dosimetry – and here, as in many areas of science, advanced computing plays an important role.”
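To give a flavor of why dosimetry leans on advanced computing, here is a minimal, purely illustrative Monte Carlo sketch (all parameters hypothetical, not from the TACC work): production dosimetry codes scale calculations like this to billions of particle histories and realistic geometries.

```python
import random

def simulate_depth_doses(n_photons, mu=0.2, depth_bins=10, bin_cm=1.0, seed=0):
    """Tally photons absorbed in each depth bin of a uniform medium.

    mu is a hypothetical linear attenuation coefficient (1/cm); each
    photon's free path before absorption is drawn from the exponential
    distribution with rate mu, i.e. -ln(U)/mu.
    """
    rng = random.Random(seed)
    dose = [0] * depth_bins
    for _ in range(n_photons):
        depth = rng.expovariate(mu)   # distance travelled before absorption
        b = int(depth // bin_cm)      # which depth bin it deposits energy in
        if b < depth_bins:
            dose[b] += 1              # photons past the last bin escape
    return dose

# Shallow bins absorb more photons than deep ones, as expected for
# exponential attenuation.
profile = simulate_depth_doses(10_000)
```

The per-history independence is what makes such simulations embarrassingly parallel and a natural fit for supercomputers.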

Leveraging HPC for Real-Time Quantitative Magnetic Resonance Imaging

W. Joe Allen from TACC gave this talk at the HPC User Forum. “The Agave Platform brings the power of high-performance computing into the clinic,” said William (Joe) Allen, a life science researcher for TACC and lead author on the paper. “This gives radiologists and other clinical staff the means to provide real-time quality control, precision medicine, and overall better care to the patient.”

Podcast: PortHadoop Speeds Data Movement for Science

In this TACC Podcast, host Jorge Salazar interviews Xian-He Sun, Distinguished Professor of Computer Science at the Illinois Institute of Technology. Computer scientists working in his group are bridging the file system gap with a cross-platform Hadoop reader called PortHadoop, short for portable Hadoop. “We tested our PortHadoop-R strategy on Chameleon. In fact, the speedup is 15 times faster,” said Xian-He Sun. “It’s quite amazing.”

TACC’s Dan Stanzione on the Challenges Driving HPC

In this video from KAUST, Dan Stanzione, executive director of the Texas Advanced Computing Center, shares his insight on the future of high performance computing and the challenges faced by institutions as the demand for HPC, cloud and big data analysis grows. “Dr. Stanzione is the Executive Director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. A nationally recognized leader in high performance computing, Stanzione has served as deputy director since June 2009 and assumed the Executive Director post on July 1, 2014.”

Supercomputing Transportation System Data using TACC’s Rustler

Over at TACC, Faith Singer-Villalobos writes that researchers are using the Rustler supercomputer to tackle Big Data from self-driving connected vehicles (CVs). “The volume and complexity of CV data are tremendous and present a big data challenge for the transportation research community,” said Natalia Ruiz-Juri, a research associate with The University of Texas at Austin’s Center for Transportation Research. While there is uncertainty in the characteristics of the data that will eventually be available, the ability to efficiently explore existing datasets is paramount.

Memory Bandwidth and System Balance in HPC Systems

“This talk reviews the history of the changing balances between computation, memory latency, and memory bandwidth in deployed HPC systems, then discusses how the underlying technology changes led to these market shifts. Key metrics are the exponentially increasing relative performance cost of memory accesses and the massive increases in concurrency that are required to obtain increased memory throughput. New technologies (such as stacked DRAM) allow more pin bandwidth per package, but do not address the architectural issues that make high memory bandwidth expensive to support.”
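The concurrency point above follows from Little's Law: to sustain bandwidth B against memory latency L, a core must keep roughly B x L / line_size cache-line requests in flight. A small sketch with illustrative numbers (not figures from the talk):

```python
def required_concurrency(bandwidth_gbs, latency_ns, line_bytes=64):
    """Outstanding cache-line requests needed to sustain a given bandwidth.

    Little's Law: concurrency = throughput * latency. 1 GB/s is one byte
    per nanosecond, so the units cancel cleanly.
    """
    bytes_per_ns = bandwidth_gbs          # 1 GB/s == 1 byte/ns
    return bytes_per_ns * latency_ns / line_bytes

# Hypothetical example: 100 GB/s at 100 ns latency needs ~156 cache lines
# in flight per socket -- far more than a few cores' load buffers provide.
demand = required_concurrency(100, 100)
```

This is why raising pin bandwidth alone (e.g. with stacked DRAM) does not solve the problem: the architecture must also generate and track that much outstanding concurrency.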

Podcast: LLNL’s Lori Diachin Reviews the SC16 Technical Program

“I think the most important thing I’d like people to know about SC16 is that it is a great venue for bringing the entire community together, having these conversations about what we’re doing now, what the environment looks like now and what it’ll look like in five, ten, fifteen years. The fact that so many people come to this conference allows you to really see a lot of diversity in the technologies being pursued, in the kinds of applications that are being pursued – from both the U.S. environment and also the international environment. I think that’s the most exciting thing about supercomputing.”