TACC’s Dan Stanzione on the Challenges Driving HPC

In this video from KAUST, Dan Stanzione, executive director of the Texas Advanced Computing Center, shares his insight on the future of high performance computing and the challenges faced by institutions as the demand for HPC, cloud and big data analysis grows. “Dr. Stanzione is the Executive Director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. A nationally recognized leader in high performance computing, Stanzione has served as deputy director since June 2009 and assumed the Executive Director post on July 1, 2014.”

Supercomputing Transportation System Data using TACC’s Rustler

Over at TACC, Faith Singer-Villalobos writes that researchers are using the Rustler supercomputer to tackle Big Data from self-driving connected vehicles (CVs). “The volume and complexity of CV data are tremendous and present a big data challenge for the transportation research community,” said Natalia Ruiz-Juri, a research associate with The University of Texas at Austin’s Center for Transportation Research. While there is uncertainty in the characteristics of the data that will eventually be available, the ability to efficiently explore existing datasets is paramount.

Chameleon Testbed Blazes New Trails for Cloud HPC at TACC

“It’s often a challenge to test the scalability of system software components before a large deployment, particularly if you need low-level hardware access,” said Dan Stanzione, Executive Director at TACC and a Co-PI on the Chameleon project. “Chameleon was designed for just these sorts of cases – when your local test hardware is inadequate, and you are testing something that would be difficult to test in the commercial cloud – like replacing the available file system. Projects like Slash2 can use Chameleon to make tomorrow’s cloud systems better than today’s.”

Podcast: Solar-Powered Hikari Supercomputer at TACC Demonstrates HVDC Efficiencies

Engineers of the Hikari HVDC power feeding system predict it will save 15 percent compared to conventional systems. “The 380 volt design reduces the number of power conversions when compared to AC voltage systems,” said James Stark, director of Engineering and Construction at the Electronic Environments Corporation (EEC), a Division of NTT FACILITIES. “What’s interesting about that,” Stark added, “is the computers themselves – the supercomputer, the blade servers, cooling units, and lighting – are really all designed to run on DC voltage. By supplying 380 volts DC to Hikari instead of having an AC supply with conversion steps, it just makes a lot more sense. That’s really the largest technical innovation.”
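
The efficiency argument above – fewer conversion stages means less compounded loss – can be illustrated with a quick calculation. The per-stage efficiencies below are assumed round numbers for the sketch, not EEC’s measured figures:

```python
# Hypothetical illustration of why fewer power-conversion stages help.
# Stage efficiencies are assumed values, not EEC's published numbers.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a series of power-conversion stages."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

# Conventional AC distribution: e.g. AC/DC at the UPS, DC/AC back out,
# then AC/DC again in each server power supply (assumed efficiencies).
ac_chain = chain_efficiency([0.96, 0.96, 0.94])

# 380 V DC distribution: a single rectification stage (assumed).
dc_chain = chain_efficiency([0.96])

print(f"AC chain efficiency: {ac_chain:.1%}")
print(f"DC chain efficiency: {dc_chain:.1%}")
print(f"Relative gain:       {(dc_chain - ac_chain) / ac_chain:.1%}")
```

With these assumed numbers the DC chain comes out roughly ten percent ahead; the 15 percent figure quoted for Hikari would depend on the actual equipment in both chains.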

Video: Stampede II Supercomputer to Advance Computational Science at TACC

In this video, Dan Stanzione from TACC describes how the Stampede II supercomputer will drive computational science. “Announced in June, a $30 million NSF award to the Texas Advanced Computing Center will be used to acquire and deploy a new large scale supercomputing system, Stampede II, as a strategic national resource to provide high-performance computing capabilities for thousands of researchers across the U.S. This award builds on technology and expertise from the Stampede system first funded by NSF in 2011 and will deliver a peak performance of up to 18 Petaflops, over twice the overall system performance of the current Stampede system.”

Podcast: UT Chancellor William McRaven on What Makes TACC Successful

“It’s great to have these incredible servers and incredible processors, but if you don’t have the people to run them – if you don’t have the people that are passionate about supercomputing – we would never get there from here. Behind all of this magnificent technology are the fantastic faculty, researchers, interns, our corporate partners that are part of this, the National Science Foundation – there are people behind all of the success of TACC. I think that’s the point we can never forget.”

Stampede 2 Supercomputer at TACC to Sport 18 Petaflops

Over at the Dell HPC Community, Jim Ganthier writes that TACC is planning to deploy its 18 Petaflop Stampede 2 supercomputer based on Dell servers running Intel Knights Landing processors. “Stampede 2 will do more than just meet growing demand from those who run data-intensive research. Imagine the discoveries that will be made as a result of this award and the new system. Now more than ever is an exciting time to be in HPC.”

Podcast: TACC Powers Zika Hackathon to Fight Disease

In this TACC podcast, Ari Kahn from the Texas Advanced Computing Center and Eddie Garcia from Cloudera describe a recent Hackathon in Austin designed to tackle data challenges in the fight against the Zika virus. The Texas Advanced Computing Center provided time on the Wrangler data intensive supercomputer as a virtual workspace for the Zika hackers.

Podcast: Using Docker for Science at TACC

In this TACC podcast, Joe Stubbs from the Texas Advanced Computing Center describes the potential benefits of the open container platform Docker for scientists, including support for reproducibility and the NSF-funded Agave API. “As more scientists share not only their results but their data and code, Docker is helping them reproduce the computational analysis behind the results. What’s more, Docker is one of the main tools used in the Agave API platform, a platform-as-a-service solution for hybrid cloud computing developed at TACC and funded in part by the National Science Foundation.”
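
The reproducibility workflow described in the podcast can be sketched with a minimal container recipe. The base image, package versions, and script name below are hypothetical placeholders for illustration, not the Agave platform’s actual setup:

```dockerfile
# Hypothetical sketch: pin the OS image, interpreter, and library
# versions so collaborators re-running the analysis get the same
# environment. Names and versions are illustrative assumptions.
FROM python:3.10-slim

# Pinned dependency versions (illustrative).
RUN pip install --no-cache-dir numpy==1.24.0 pandas==1.5.3

# The analysis script is an assumed name for this sketch.
COPY analysis.py /work/analysis.py
WORKDIR /work
CMD ["python", "analysis.py"]
```

Anyone with Docker installed could then rebuild and rerun the same analysis from this recipe, which is the reproducibility property the podcast highlights.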

TACC’s Lonestar 5 Begins Full Production

Today the Texas Advanced Computing Center (TACC) announced that the Lonestar 5 supercomputer is in full production and is ready to contribute to advancing science across the state of Texas. Managed by TACC, the center’s second petaflop system is primed to be a leading computing resource for the engineering and science research community. “An analysis of strong-scaling on Lonestar 5 shows gains over other comparable resources,” said Scott Waibel, a graduate student in the Department of Geological Sciences at Portland State University. “Lonestar 5 provides the perfect high performance computing resource for our efforts.”
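
A strong-scaling analysis like the one quoted above compares wall-clock times for a fixed problem size as the core count grows. The timings below are invented for illustration; a real study would use measured Lonestar 5 runs:

```python
# A minimal sketch of a strong-scaling analysis: fixed problem size,
# increasing core counts. The timing numbers are invented examples.

def strong_scaling(timings):
    """Given {core_count: wall_time}, return {core_count: (speedup,
    parallel efficiency)} relative to the smallest core count."""
    base_cores = min(timings)
    base_time = timings[base_cores]
    results = {}
    for cores, t in sorted(timings.items()):
        speedup = base_time / t
        efficiency = speedup / (cores / base_cores)
        results[cores] = (speedup, efficiency)
    return results

# Invented wall-clock times (seconds) for a fixed-size problem.
timings = {24: 1000.0, 48: 520.0, 96: 275.0, 192: 150.0}
for cores, (s, e) in strong_scaling(timings).items():
    print(f"{cores:4d} cores: speedup {s:5.2f}x, efficiency {e:6.1%}")
```

Efficiency near 100% at high core counts is what “gains over other comparable resources” would mean in practice: the code keeps speeding up nearly in proportion to the cores added.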