From interviews with the people and companies making news in the HPC community, to in-depth audio features that examine pressing technological and social issues in supercomputing, this is exclusive content you’ll only hear at insideHPC.com.
In this podcast, the program co-chairs of the 2013 Hot Interconnects Conference discuss how the annual symposium covers cross-cutting issues spanning computer systems, networking technologies, and communication protocols for high-performance interconnection networks.
Madeleine Glick (APIC Corporation)
Torsten Hoefler (ETH Zurich)
Fabrizio Petrini (IBM TJ Watson)
Hot Interconnects takes place in San Jose, California, on August 21-23. The conference’s Call for Papers has been issued, with abstracts due April 26, 2013.
In this follow-up podcast to the GPU Technology Conference, the Radio Free HPC team mulls over a talk by Dustin Franklin, a GPU applications specialist at GE. Dustin’s topic was GPUDirect RDMA; was this a first look at real-world RDMA with GPU-to-GPU communications?
Follow along as the guys describe flow charts on technical slides not yet approved for viewing by the “great unwashed masses” – but make no mistake, they’re impressed by what they saw. Dan “knows a guy” who can divulge more, and offers to arrange an inquisition with Henry. Henry promises to “be nice,” whatever he means by that. Rich missed this GTC session and several others while “conducting interviews,” whatever he means by that. Dan offers another characterization. And this just in: there’s a great deal of information available on the Internet.
In this podcast, the Radio Free HPC team discusses the recent buzz surrounding FPGAs. After being sidelined by accelerators, they’re increasingly being used in appliances.
Big vendors are talking about FPGAs not only for appliances but for general-purpose systems as performance assists. Are we headed back to the future? The guys discuss the ins and outs of FPGAs and why, in some cases, they could be a huge win for the organizations that implement them. But is the architecture flexible enough? For enterprise and Big Data, perhaps it is. If you need to perform the same algorithms over and over again, FPGAs could be a perfect fit. As with all things tech, there are a few cautionary notes to be sounded. Amassing more and more appliances can lead down a tricky road. Will their use in workload-optimized systems lead to vendor lock-in? Can you really teach an old FPGA new tricks? And can they be weaponized?
Most importantly: how are servers like cattle? Tune in to find out…
The LUG 2013 Lustre User Group conference kicked off this morning in San Diego with a surprise announcement of a change in the management structure for its governing body, OpenSFS. Norman Morse, who has been the CEO of the organization since it was founded in 2010, has resigned.
In his opening address, Morse called for continued unity in the Lustre user community.
“The mission of OpenSFS and EOFS is still of critical importance to this community,” said Morse. “You, the Lustre community, can be very proud of what you’ve accomplished. Lustre, at one time, was a system in doubt. Its adoption increases every year now. It rose to become a feature-rich, stable, major file system. As we’ve seen recently, it’s no small feat to bring a terabyte per second to disk. So the “little file system that could” has become “the big file system that can and does.”
“Unfortunately, the success that you have created could attract selfish interests and political maneuvering from folks who want to help “manage” the Lustre success for their own personal gain. So I would say selfish interests and political maneuvering are enemies to the spirit and success of Lustre that has existed from the very beginning. I’m going to say that this community’s future is bright as long as you continue to work together.”
Following Morse’s talk, Galen Shipman from ORNL thanked Norm for his service and announced that OpenSFS will now move to a new “association management company” called VTM.
LUG 2013 continues through Thursday, April 18. Please stay tuned to insideHPC for more updates, interviews, and a full set of presentation videos.
Our Video Sunday feature continues with this time-lapse movie of the construction of NCSA’s Blue Waters supercomputer and the National Petascale Computing Facility. NCSA launched Blue Waters this week in an official dedication ceremony.
The 683,000-pound computer has a sustained speed of more than 1 petaflop (more than 1 quadrillion calculations per second). It is built from more than 235 Cray XE6 cabinets and more than 30 cabinets of the Cray XK7 supercomputer with NVIDIA Tesla GPU computing capability, all housed in the National Petascale Computing Facility off Oak Street in Champaign.
In this slidecast, Fritz Ferstl from Univa presents: Grid Engine State of the Union.
“Univa Grid Engine is the next-generation product that open source Grid Engine users have been waiting for. Our customers save time and money through increased uptime, and with our innovative feature and product evolution they can significantly reduce the total cost of ownership of running Grid Engine. We have improved several aspects of the product, with new features and functionality designed to speed up dispatching and increase throughput. The following features drive the performance of Grid Engine to new heights. They are only available from Univa.”
In this podcast from the Leonard Lopate Show, author Viktor Mayer-Schönberger explores how Big Data will affect the economy, science, and society at large.
“Big data” refers to our burgeoning ability to crunch vast collections of information, analyze it instantly, and draw sometimes profoundly surprising conclusions from it. Big Data: A Revolution That Will Transform How We Live, Work, and Think shows how this emerging science can translate myriad phenomena—from the price of airline tickets to the text of millions of books—into searchable form, and uses our increasing computing power to reach epiphanies that we never could have seen before.
In this slidecast, Narayan Venkat from Violin Memory describes how the company’s new alliance with Toshiba will help foster a whole new world of applications that perform at the speed of memory.
“Our new focus on PCIe cards will allow both companies to drive radical new economics that lead to the mass adoption of memory-based architectures,” said Don Basile, CEO of Violin Memory. “NAND memory is now a requirement at every level from the smart connected device to the core of the cloud and the enterprise data center. Violin’s combined portfolios continue our leadership across the evolving memory-based solution market.”
In a new RCE Podcast, Brock Palen and Jeff Squyres speak with Jeff Darcy of Red Hat about GlusterFS, a scalable, distributed, open source filesystem.
Jeff has been working on distributed storage since NFS version 2 at Encore in 1990. Most recently, he worked on Lustre at SiCortex and then started his own project, HekaFS, at Red Hat. Since Red Hat acquired Gluster, he has been an architect and ambassador for that project.
In this slidecast, Jeff Denworth from DDN describes the company’s new hScaler storage system — the World’s First Enterprise Apache Hadoop Appliance.
“DDN has developed a Hadoop solution that is all about time to value: it simplifies rollout so that enterprises can get up and running more quickly, provides typical DDN performance to accelerate data processing, and reduces the amount of time needed to maintain a Hadoop solution,” said Dave Vellante, Chief Research Officer, Wikibon.org. “For enterprises with a deluge of data but a limited IT budget, the DDN hScaler appliance should be on the short list of potential solutions.”
As Hadoop finds its way into more and more areas of data-intensive scientific computing, the lack of security in this platform is a continuing challenge. In this slidecast, Brian Christian from Zettaset presents: Examining Hadoop as a Big Data Risk in the Enterprise.
“While the open source framework has enabled Hadoop to logically grow and expand, business and government enterprise organizations face deployment and management challenges with Hadoop. Hadoop’s core specifications are still being developed by the Apache community and, thus far, do not adequately address enterprise requirements such as robust security and support for regulatory compliance mandates like HIPAA and SOX.”
In this slidecast, Josh Judd from Warp Mechanics describes the MicroPod HPC initiative, currently a Kickstarter project that aims to make parallel computing affordable at home.
“The MicroPod HPC is a parallel computer that you can afford to use at home. You can “stand up” a parallel computer using inexpensive commodity hardware, or even use the images as VMs to run a completely virtual development environment. The intent is to provide a turn-key framework for R&D of parallel software, and to use as a learning tool.”
In this slidecast, Jeff Squyres from Cisco presents: Ethernet Secrets of TCP.
“TCP? Who cares about TCP in HPC? More and more people, actually. With the commoditization of HPC, lots of newbie HPC users are intimidated by special, one-off, traditional HPC types of networks and opt for the simplicity and universality of Ethernet. And it turns out that TCP doesn’t suck nearly as much as most (HPC) people think, particularly on modern servers, Ethernet fabrics, and powerful Ethernet NICs. I’ll cut to the chase: I surprised myself by being able to get ~10us half-round-trip ping-pong MPI latency over TCP (using NetPIPE).”
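For readers who want to try a similar measurement themselves, here is a minimal MPI ping-pong latency sketch in C. It is in the spirit of the NetPIPE test Jeff mentions, not NetPIPE itself; the iteration counts and the Open MPI transport flag in the comments are illustrative assumptions, not anything from the talk.

/*
 * Minimal MPI ping-pong latency microbenchmark (a sketch, not NetPIPE).
 * Build: mpicc -O2 pingpong.c -o pingpong
 * Run:   mpirun -np 2 --mca btl tcp,self ./pingpong
 *        (--mca btl tcp,self forces TCP in Open MPI; other MPI
 *         implementations select transports differently)
 */
#include <mpi.h>
#include <stdio.h>

#define WARMUP 1000   /* un-timed round trips to absorb connection setup */
#define ITERS  10000  /* timed round trips */

/* One full round trip of a single byte between ranks 0 and 1. */
static void ping_pong(int rank, char *buf)
{
    if (rank == 0) {
        MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
}

int main(int argc, char **argv)
{
    int rank, size;
    char buf = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0)
            fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    for (int i = 0; i < WARMUP; i++)
        ping_pong(rank, &buf);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++)
        ping_pong(rank, &buf);
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Each iteration is a full round trip; halve it to get the
           half-round-trip latency figure quoted above. */
        printf("half-round-trip latency: %.2f us\n",
               (t1 - t0) / ITERS / 2.0 * 1e6);
    }

    MPI_Finalize();
    return 0;
}

Run with one rank on each of two Ethernet-connected hosts; halving the measured round-trip time gives the half-round-trip number Jeff cites, and results will vary widely with NIC, driver, and kernel tuning.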