From interviews with the people and companies making news in the HPC community, to in-depth audio features that examine pressing technological and social issues in supercomputing, this is exclusive content you’ll only hear at insideHPC.com.
In this slidecast, Scott Gnau from Teradata Labs presents: Teradata Intelligent Memory.
“The introduction of Teradata Intelligent Memory allows our customers to exploit the performance of memory within Teradata Platforms, which extends our leadership position as the best-performing data warehouse technology at the most competitive price,” said Scott Gnau, president, Teradata Labs. “Teradata Intelligent Memory technology is built into the data warehouse, and customers don’t have to buy a separate appliance. Additionally, Teradata enables its customers to buy and configure the exact amount of in-memory capability needed for critical workloads. It is unnecessary and impractical to keep all data in memory, because not all data have enough value to justify being placed in expensive memory.”
How does Intelligent Memory work? This animation video does a good job of making this advanced technology look simple.
In this slidecast, Ken Claffey from Xyratex describes the company’s new ClusterStor 1500 storage system. Designed for scale-out HPC storage solutions, the ClusterStor 1500 delivers HPC performance and efficiency with help from the Lustre file system.
“Departments within larger organizations or medium-sized enterprises today, especially in the commercial, academic and government sectors, represent an underserved market. They need high-performance and scalable storage solutions that are cost-efficient, easy to deploy and manage and reliable even under heavy workloads,” said Ken Claffey, senior vice president of the ClusterStor business at Xyratex. “Growth in this market segment is being driven by the increasing adoption of simulation applications in a wide range of industries from car and aircraft design to chemical interactions and financial modeling. Traditional enterprise storage systems are simply not designed to meet the performance needs of these applications, so we engineered and built the affordable and modular ClusterStor 1500 to bring the performance power of Lustre to this underserved and growing market in the way that only ClusterStor can.”
With the ability to scale performance from 1.25 GB/s to 110 GB/s and raw capacity from 42 TB to 7.3 PB, the ClusterStor 1500 is purpose-built to satisfy the needs of data-intensive, department-level compute clusters, and it is designed to provide best-in-class scale-out storage for mid-tier high performance computing environments. The ClusterStor 1500 solution features scale-out storage building blocks, the Lustre parallel file system, and a comprehensive management platform that eliminates the guesswork usually associated with building and optimizing your own HPC storage solution.
In this slidecast, the Radio Free HPC team interviews Fritz Ferstl, CTO of Univa. Topics include Big Data, HPC, and the continuing convergence of both.
While what we think of as traditional HPC may differ greatly from Big Data analytics, that seems to be changing. With a long history in high performance computing and customers in both worlds, Ferstl shares his unique perspective on where the two worlds overlap and where the potential is greatest for synergy in the future.
This has to be our best show yet, so be sure to check it out.
In this podcast, the program co-chairs of the 2013 Hot Interconnects Conference discuss how the annual symposium covers cross-cutting issues spanning computer systems, networking technologies, and communication protocols for high-performance interconnection networks.
Madeleine Glick (APIC Corporation)
Torsten Hoefler (ETH Zurich)
Fabrizio Petrini (IBM TJ Watson)
Hot Interconnects takes place in San Jose, California on August 21-23. The conference Call for Papers has been issued, with abstracts due April 26, 2013.
In this follow-up podcast to the GPU Technology Conference, the Radio Free HPC team mulls over a talk by GE’s Dustin Franklin, GPU app specialist. Dustin’s topic was GPUDirect RDMA; was this a first look at real-world RDMA with GPU-to-GPU communications?
Follow along as the guys describe flow charts on technical slides that are not yet approved for viewing by the “great unwashed masses” – but make no mistake, they’re impressed by what they saw. Dan “knows a guy” who can divulge more, and offers to arrange an inquisition with Henry. Henry promised to “be nice,” whatever he means by that. Rich missed this GTC session and several others while “conducting interviews,” whatever he means by that. Dan offers another characterization. And this just in: there’s a great deal of information available on the Internet.
In this podcast, the Radio Free HPC team discusses the recent buzz surrounding FPGAs. After being sidelined by accelerators, they’re increasingly being used in appliances.
Big vendors are talking about FPGAs not only for appliances but for general-purpose systems as performance assists. Are we headed back to the future? The guys discuss the ins and outs of FPGAs and why, in some cases, they could be a huge win for the organizations that implement them. But is the architecture flexible enough? For enterprise and Big Data, perhaps it is. If you need to perform the same algorithms over and over again, FPGAs could be a perfect fit. As with all things tech, there are a few cautionary notes to be sounded. Amassing more and more appliances can lead down a tricky road. Will their use in workload-optimized systems lead to vendor lock-in? Can you really teach an old FPGA new tricks? And can they be weaponized?
Most importantly: how are servers like cattle? Tune in to find out…
The LUG 2013 Lustre User Group conference kicked off this morning in San Diego with a surprise announcement of a change in the management structure for its governing body, OpenSFS. Norman Morse, who has been the CEO of the organization since it was founded in 2010, has resigned.
In his opening address, Morse called for continued unity in the Lustre user community.
“The mission of OpenSFS and EOFS is still of critical importance to this community,” said Morse. “You, the Lustre community, can be very proud of what you’ve accomplished. Lustre, at one time, was a system in doubt. Its adoption increases every year now. It rose to become a feature-rich, stable, major file system. As we’ve seen recently, it’s no small feat to bring a terabyte per second to disk. So the “little file system that could” has become “the big file system that can and does.”
“Unfortunately the success that you have created could attract selfish interests and political maneuvering from folks who want to help “manage” the Lustre success for their own personal gain. So I would say selfish interests and political maneuvering are enemies to the spirit and success of Lustre that has existed from the very beginning. I’m going to say that this community’s future is bright as long as you continue to work together.”
Following Morse’s talk, Galen Shipman from ORNL thanked Norm for his service and announced that OpenSFS will now move to a new “association management company” called VTM.
LUG 2013 continues through Thursday, April 18. Please stay tuned to insideHPC for more updates, interviews, and a full set of presentation videos.
Our Video Sunday feature continues with this time-lapse movie of the construction of NCSA’s Blue Waters supercomputer and the National Petascale Computing Facility. NCSA launched Blue Waters this week in an official dedication ceremony.
The 683,000-pound computer has a sustained speed of more than 1 petaflop, meaning it performs more than 1 quadrillion calculations per second. It is built with more than 235 Cray XE6 cabinets and more than 30 cabinets of the Cray XK7 supercomputer with NVIDIA Tesla GPU computing capability, all housed in the National Petascale Computing Facility off Oak Street in Champaign.
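As a quick unit sanity check (a sketch for illustration, not from the article): 1 petaflop is 10^15 floating-point operations per second, which is exactly one quadrillion, so the two figures above describe the same speed.

```python
# Unit check: 1 petaflop/s = 1e15 floating-point operations per second,
# and one quadrillion is also 1e15.
petaflops = 1.0                       # Blue Waters' sustained speed, per the article
ops_per_second = petaflops * 1e15     # convert petaflop/s to flop/s
quadrillion = 1e15

print(ops_per_second / quadrillion)   # prints 1.0 -> "1 quadrillion calculations per second"
```

This is why press descriptions of petascale systems typically pair the unit with the plain-English figure: they are two spellings of the same number.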
In this slidecast, Fritz Ferstl from Univa presents: Grid Engine State of the Union.
“Univa Grid Engine is the next generation product that open source Grid Engine users have been waiting for. Our customers save time and money through increased uptime, and with our innovative feature and product evolution they can significantly reduce the total cost of ownership of running Grid Engine. We have improved the speed of several aspects of the product with new features and functionality designed to improve the speed of dispatching and throughput. The following features drive performance of Grid Engine to new heights. They are only available from Univa.”
In this podcast from the Leonard Lopate Show, Author Viktor Mayer-Schönberger explores how Big Data will affect the economy, science, and society at large.
“Big data” refers to our burgeoning ability to crunch vast collections of information, analyze it instantly, and draw sometimes profoundly surprising conclusions from it. Big Data: A Revolution that Will Transform How We Live, Work, and Think shows how this emerging science can translate myriad phenomena—from the price of airline tickets to the text of millions of books—into searchable form, and uses our increasing computing power to reach epiphanies that we never could have seen before.
In this slidecast, Narayan Venkat from Violin Memory describes how the company’s new alliance with Toshiba will help foster a whole new world of applications that perform at the speed of memory.
“Our new focus on PCIe cards will allow both companies to drive radical new economics that lead to the mass adoption of memory-based architectures,” said Don Basile, CEO of Violin Memory. “NAND memory is now a requirement at every level from the smart connected device to the core of the cloud and the enterprise data center. Violin’s combined portfolios continue our leadership across the evolving memory-based solution market.”
In a new RCE Podcast, Brock Palen and Jeff Squyres speak with Jeff Darcy of Red Hat about GlusterFS, a scalable, distributed, open source filesystem.
Jeff has been working on distributed storage since NFS version 2 at Encore in 1990. Most recently he worked on Lustre at SiCortex, and then started his own project HekaFS at Red Hat. Since Red Hat acquired Gluster, he has been an architect and ambassador for that project.
In this slidecast, Jeff Denworth from DDN describes the company’s new hScaler storage system — the world’s first enterprise Apache Hadoop appliance.
“DDN has developed a Hadoop solution that is all about time to value: It simplifies rollout so that enterprises can get up and running more quickly, provides typical DDN performance to accelerate data processing, and reduces the amount of time needed to maintain a Hadoop solution,” said Dave Vellante, Chief Research Officer, Wikibon.org. “For enterprises with a deluge of data but a limited IT budget, the DDN hScaler appliance should be on the short list of potential solutions.”