From interviews with the people and companies making news in the HPC community, to in-depth audio features that examine pressing technological and social issues in supercomputing, this is exclusive content you’ll only hear at insideHPC.com.
In this podcast, the Radio Free HPC team regroups after SC12 to discuss an industry trend that was in evidence at the show: vendor consolidation.
Cray just acquired Appro
Intel acquired QLogic's TrueScale InfiniBand business, Whamcloud, and Cray's interconnect IP
IBM bought Platform Computing
Xyratex bought ClusterStor
Hitachi acquired BlueArc
NetApp bought Engenio storage
And so on…
The guys discuss how acquisitions need to be integrated the ‘right’ way and how it’s more than just slapping new logos on websites and combining slide decks. They also talk about some cautionary tales in the world of tech acquisitions along with some success stories. Dan offers a success story: IBM and Platform. Rich predicts which major vendor will next be swallowed whole; Henry predicts a challenge to Intel’s x86 dominance.
In “SC After Hours” chatter, Dan describes “My Dinner With Henry” in Salt Lake City, and Rich is accused of being the biggest Apple fanboi in all of HPC and, perhaps, the world. Watch for a special cameo appearance by someone who knows all about buyouts: Larry Ellison.
Over at inside-Cloud, ProfitBricks USA CEO Bob Rizika writes that the world has changed since the first generation of Clouds and that high performance computing today can be a viable Infrastructure as a Service.
What’s different now in Cloud computing? It’s no longer the difficult-to-use, costly, slow, and limiting first generation, but second-generation Infrastructure as a Service (IaaS). We now live in a world where instances can be connected at 80Gb/s, and instances can have 196GB of RAM and 48 cores. The real clouds are arriving. And despite what some might think, they are priced and packaged for the masses.
In this slidecast, Arnon Friedmann from Texas Instruments describes the company’s new multicore System-on-Chips (SoCs). Based on its award-winning KeyStone architecture, TI’s SoCs are designed to accelerate traditional x86 servers as well as to enable the building of purpose-built, energy-efficient devices for specific applications, powered by ARM processors and TI DSPs on the same package.
Targeted for applications such as networking, radar, imaging, high performance computing, gaming, and media processing, TI’s new KeyStone multicore processors offer developers more than twice the capacity and performance at half the power relative to existing solutions.
In this video, Bill Lee from IBTA and Rupert Dance from the Open Fabrics Alliance discuss the latest developments in high performance networking at the SC12 conference in Salt Lake City.
This week the OpenFabrics Alliance (OFA) announced that 224 TOP500 supercomputers are using OpenFabrics Software (OFS) in their high performance computing (HPC) clusters, including two of the top 10. Clusters using OFA’s OFS driver stacks and application libraries achieve the highest performance of all clusters using standard interconnects.
According to the recently published list, OFS is present in the following:
224 clusters, 45% of the TOP500 list
All 10 of the standards-based Petascale systems
86% of the accelerator-based systems
“The results from the TOP500 are a clear indication that OFS adoption continues to grow and that it is the leading open source software stack for running applications over InfiniBand, iWARP and RoCE,” said Jim Ryan, chairman, OFA. “The success of OFS is due to the hard work of OFA members and users whose mission is to identify and implement the highest-performing interconnect in the industry.”
In this video, Nvidia’s Sumit Gupta describes the Kepler K20X accelerator for HPC applications. Today Nvidia unveiled the Tesla K20 family of GPU accelerators as the technology powering Titan, the world’s fastest supercomputer according to the TOP500 list released this morning at the SC12 supercomputing conference.
“We are taking advantage of NVIDIA GPU architectures to significantly accelerate simulations in such diverse areas as climate and meteorology, seismology, astrophysics, fluid mechanics, materials science, and molecular biophysics,” said Dr. Thomas Schulthess, professor of computational physics at ETH Zurich and director of the Swiss National Supercomputing Center. “The K20 family of accelerators represents a leap forward in computing compared to NVIDIA’s prior Fermi architecture, enhancing productivity and enabling us potentially to achieve new insights that previously were impossible.”
Additional early customers include: Clemson University, Indiana University, Thomas Jefferson National Accelerator Facility (Jefferson Lab), King Abdullah University of Science and Technology (KAUST), National Center for Supercomputing Applications (NCSA), National Oceanic and Atmospheric Administration (NOAA), Oak Ridge National Laboratory (ORNL), University of Southern California (USC), and Shanghai Jiao Tong University (SJTU).
The K20X GPU is now shipping. According to Nvidia more than 30 petaflops of performance have already been delivered in the last 30 days. This is equivalent to the computational performance of last year’s 10 fastest supercomputers combined.
In this podcast, the Radio Free HPC team discusses what they’re expecting to see at this week’s SC12 conference in Salt Lake City. Get the scoop on what’s new, what’s old, and what’s just plain played out.
In this podcast, Steve Henn from NPR talks to Buddy Bland from ORNL and others about the new Titan supercomputer and how it’s powered by the same GPU technology that drives the video gaming market. Read the Full Story. Download the MP3.
In this video, Samplify CEO Alan Evans presents: APAX: Lowering the Cost of Big Science, Big Data, and Cloud Computing.
“Multi-core CPUs are hitting the memory wall,” said Al Wegener, CTO and founder of Samplify. “With each new process node, the number of processor cores on a die can double with Moore’s Law, but the throughput of memory, I/O, and storage fails to keep up with this growth. Hence, the performance of multi-core applications is increasingly memory, I/O, and storage bound. APAX is the only solution that accelerates the throughput of DDRx, SAS/SATA, SSD, PCIe, Ethernet, and InfiniBand by up to six times.”
Samplify will demonstrate the APAX profiler and hardware IP at the SC12 conference in booth #4151.
The Parallella Kickstarter project has surpassed its funding goal and will go forward as a product. As reported here, Parallella is Adapteva’s open-source development platform for low-power parallel computing. With a mission to “bring supercomputing to everyone,” the Parallella development platform will be offered for only $99.
In this podcast, the Radio Free HPC team takes a look at Glacier, Amazon’s cloud archive and backup offering.
Amazon is pitching Glacier as a solution for customers who don’t need frequent access to their data and can handle retrieval times of several hours. The big enticements are low, low cost — as little as a penny per gigabyte per month — and durability. Dan and Henry weed through each facet of Amazon’s marketing claims and — well — rip each one to shreds. Henry thinks this is aimed at the unsuspecting/unfortunate home or small business consumer, as anyone with technology expertise will run far, far away from Glacier. Dan compares it to the “Roach Motel” of storage: once you’re in, you can never get out. And don’t even get them started on the definition of “durable.”
Viewer tip: keep an eye on the “consecutive hours awake” timer at the bottom of the screen.
In this podcast, the Radio Free HPC team looks at tape storage. Is that parrot completely dead, or is it just resting? Is tape now legacy technology, or is it alive and well? And can’t our data just all go in the cloud anyway? In more ‘legacy’ talk, Dan posits that Henry whittled the first punch cards by hand, and Rich claims that Henry invented the chad. Henry retaliates by claiming to have more hair… a must-see.
In this video, Nvidia’s Will Ramey presents an introduction to the CUDA 5 programming environment.
“CUDA is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for GPU computing with CUDA.”
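The programming model described in the quote can be illustrated in a few lines. Below is a minimal vector-add sketch (a standard CUDA teaching example, not drawn from Ramey’s presentation): each GPU thread computes one output element, and the host manages memory transfers explicitly, as was typical of CUDA 5-era code.

```cuda
#include <cstdio>
#include <cstdlib>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];  // guard against overrun in the last block
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and explicit host-to-device copies
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Compile with `nvcc vecadd.cu -o vecadd`; the launch configuration `<<<blocks, threads>>>` is the core of the model: the same kernel scales across GPUs simply by covering the index space with more blocks.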
In this podcast, the Radio Free HPC team takes a look at the OSCON Open Source Convention, where Tim O’Reilly’s presentation on the “Clothesline Paradox” aptly illustrated the way developers create value. Since many large companies such as Comcast make a living on open-source software, Dan digresses into a string of complaints about his Comcast bill, but Henry and Rich reel in the discussion. Was the Internet created out of generosity, or enlightened self-interest? And we hear again from one of our sponsors: Glade ‘Data Center Edition’ air fresheners.
Power is a major challenge standing in the way of exascale computing. While the target is to consume 20 MW or less for an exascale machine, current technology trends will not take us there by 2018. In this podcast, the Radio Free HPC team discusses why this is such a tough challenge, where such a system might need to be hosted, and the types of infrastructure that will need to be considered. Along the way, you’ll hear scary “power” music and figure out how this all relates to Mad Max, lasers, unicorns, and Planet of the Apes.