In this slidecast, John Gustafson discusses how AMD is meeting customer challenges for energy-efficient computing. He also shares three common misconceptions about HPC.
“Driven by the demand for new datacenter services to support mobile and cloud computing, ARM will continue to make inroads into the datacenter server market because of the low-power, energy-efficient design of SoCs based on ARM’s technology,” said Karl Freund, VP Marketing at Calxeda. “As enterprises shift towards highly scalable solutions such as Calxeda, a key enabling technology is intelligent workload management – and we have partnered with Univa to provide our customers with a great solution.”
In this slidecast, Andrew Flint and Carolyn Hanley from Intel present: Intel Cache Acceleration Software.
“Intel CAS complements our SSD data center family by providing a total caching solution that delivers even more value and capability for our customers,” said Chuck Brown, product line manager for Intel’s Non-Volatile Memory Solutions Group. “Intel CAS delivers a multi-level cache across the SSD and DRAM for optimal performance. Compared to short-stroked hard-drive technology, we’ve seen up to 50 times the improvement in I/O throughput for read-intensive workloads by adding Intel CAS with the Intel SSD 910 series.”
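Intel hasn’t published CAS internals here, but the general shape of a multi-level read cache is straightforward to sketch. The toy Python class below (tier names and sizes are illustrative assumptions, not Intel’s design) keeps a small fast tier in front of a larger second tier, promotes hot blocks upward, and demotes evicted blocks downward:

```python
from collections import OrderedDict

class TwoLevelCache:
    """Toy two-tier read cache: a small fast tier (think DRAM) in front
    of a larger tier (think SSD), both in front of slow backing storage."""

    def __init__(self, backing_read, dram_slots=64, ssd_slots=1024):
        self.backing_read = backing_read   # function: key -> block
        self.dram = OrderedDict()          # fastest tier, LRU order
        self.ssd = OrderedDict()           # second tier, LRU order
        self.dram_slots = dram_slots
        self.ssd_slots = ssd_slots

    def read(self, key):
        if key in self.dram:               # DRAM hit
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.ssd:                # SSD hit: promote to DRAM
            block = self.ssd.pop(key)
        else:                              # miss: fetch from backing store
            block = self.backing_read(key)
        self._insert(self.dram, key, block, self.dram_slots)
        return block

    def _insert(self, tier, key, block, slots):
        tier[key] = block
        if len(tier) > slots:
            old_key, old_block = tier.popitem(last=False)  # evict LRU
            if tier is self.dram:          # demote DRAM victims to SSD
                self._insert(self.ssd, old_key, old_block, self.ssd_slots)

cache = TwoLevelCache(backing_read=lambda k: b"block-" + k.encode())
print(cache.read("42"))                    # cold read falls through to backing store
```

Real caching software layers write policies, persistence, and crash consistency on top of this skeleton; the sketch shows only the tiering logic.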
LUG is always about a real exchange of ideas. The LUG program committee invites members of the Lustre community to submit presentation abstracts for inclusion in this year’s agenda. If you’ve considered it before but put it off, we want to hear from you. We’ve made it easy; the first step requires only a one-page abstract of your proposed talk. We’re looking for deep dives, new information, and controversial topics in all areas of Lustre development, application, and best practices. The deadline to submit presentation abstracts is March 4, 2013.
Can Big Data analytics be used to predict which startup companies will succeed? In this video, Thomas Thurston from Growth Science discusses the new Ironstone Venture Capital Fund, which is using Business Model Simulation to choose disruptive startups.
“The human mind is good at some things but bad at others. So we use data science and technology to help our brains with the things they weren’t designed for. This marriage between technology and the brain has allowed us to predict business behavior in ways that weren’t possible even a decade ago. It’s the future of venture capital,” said Thomas Thurston from Growth Science. “This fund is unique. First, instead of mostly using intuition, like most VCs do, we’re using powerful, proven data science to identify disruptive companies. That’s revolutionary. Second, we’re interested in seed- and early-stage companies, which is much needed as our economy rebuilds itself. Finally, unlike a lot of VCs focused on exits and quickly ‘flipping’ startups, we have a long-term view and really want to partner with people growing strong, disruptive, meaningful businesses to make the world a better place.”
In this RCE podcast, Brock Palen and Jeff Squyres speak with James Browne, Leonardo Fialho, and Ashay Rane about PerfExpert, an easy-to-use performance diagnosis tool for HPC applications with suggestions for bottleneck remediation.
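PerfExpert’s value is in pointing at specific bottlenecks, often memory-access patterns, and suggesting fixes. As a hedged illustration of that class of problem (our example, not one from the podcast), the NumPy sketch below compares a strided, column-wise reduction against the unit-stride, row-wise version that a remediation like loop interchange would produce:

```python
import time
import numpy as np

a = np.random.rand(4096, 4096)   # row-major (C-order) by default

def col_major_sum(m):
    # Strided access: consecutive reads land 32 KB apart, defeating the cache.
    return sum(m[:, j].sum() for j in range(m.shape[1]))

def row_major_sum(m):
    # Unit-stride access: reads follow the in-memory layout.
    return sum(m[i, :].sum() for i in range(m.shape[0]))

for fn in (col_major_sum, row_major_sum):
    t0 = time.perf_counter()
    fn(a)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```

The arithmetic is identical in both versions; only the traversal order changes, which is exactly the kind of distinction a source-level diagnosis tool like PerfExpert surfaces.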
In this slidecast, Jason Stowe from Cycle Computing describes how the company spun up a 10,600-instance HPC cluster in 2 hours with CycleCloud on Amazon EC2. Managed by a single Chef 11 server and built with one purpose in mind, this on-the-fly cluster was used to accelerate life science research on a cancer target for a Big 10 pharmaceutical company.
“To tackle this problem, we decided to build software to create a CycleCloud utility supercomputer from 10,600 cloud instances, each of which was a multi-core machine! This makes this cluster the largest server-count cloud HPC environment that we know about, or that has been made public to date (the former utility supercomputing leader was our 6,732-instance cluster for Schrödinger from 2012). If this cluster were a physical environment, analysts said it would occupy 12,000 sq ft of data center space, costing $44 million. Instead, we created this in 2 hours, with these 10,600 hosts, used it for 9 more hours, at a peak cost of $549.72 per hour, and turned it off for a total cost of $4,362. Wow!”
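CycleCloud’s orchestration layer is proprietary, but the underlying batch-provisioning idea can be sketched with the AWS SDK. In the Python sketch below, the AMI ID, instance type, region, and batch size are all placeholders, and we assume an image that registers itself with the Chef server at boot:

```python
import boto3

AMI_ID = "ami-00000000"      # placeholder: image that phones home to Chef on boot
INSTANCE_TYPE = "c1.xlarge"  # placeholder sizing
TARGET = 10600
BATCH = 500                  # request in batches to stay under per-call limits

ec2 = boto3.client("ec2", region_name="us-east-1")
launched = 0
while launched < TARGET:
    count = min(BATCH, TARGET - launched)
    resp = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType=INSTANCE_TYPE,
        MinCount=count,
        MaxCount=count,
    )
    launched += len(resp["Instances"])
    print(f"requested {launched}/{TARGET} instances")
```

The hard part in practice is everything this sketch omits: retrying failed launches, spreading across availability zones, and tearing the whole fleet down the moment the work finishes.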
In this podcast, Merle Giles from NCSA and I discuss the new Two-Day Industry Track on “HPC for Small and Medium Enterprises” at the upcoming ISC’13 conference. As part of the ISC Distinguished Speaker Series, Giles will present on the common needs of engineering and scientific research in regard to HPC.
The goal of the Industry Track is to help attendees from industry, who often have different computing requirements than those at scientific institutions, make informed decisions about acquiring and operating HPC systems. This newly established track will focus on engineering and manufacturing, especially on helping industry improve product design and time-to-market through the use of HPC. The talks are also aimed at spurring a dialogue between users, technology companies, hardware vendors, software vendors, and service providers. Small and Medium Enterprises (SMEs) will be strongly represented.
ISC’13 will take place June 16-20 in Leipzig, Germany, a new host city for the show.
In this slidecast, Floyd Christofferson from SGI describes how the combination of the company’s Infinite Storage platform and Scality Ring technology provides a new, unified scale-out storage system. The solution is designed to deliver both extreme scale and high performance, allowing customers to manage massive stores of unstructured data.
“Scale-out object-based solutions are designed to address this particular set of problems by minimizing manual intervention for storage expansions, migrations, and recoveries from storage system failure,” said Ashish Nadkarni, research director, Storage Systems at IDC. “Such a dispersed, fault-tolerant architecture enables IT organizations to more efficiently absorb data growth in a manner that is predictable for the long term.”
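Scality’s implementation isn’t shown here, but ring-based object stores are generally built on consistent hashing, which is what keeps expansions and failure recovery local: only the keys between a node and its ring neighbor ever move. A minimal sketch of the general technique (not Scality’s code):

```python
import bisect
import hashlib

def h(key: str) -> int:
    """Map a string onto a fixed integer keyspace."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Toy consistent-hash ring: each object belongs to the first node
    clockwise from its hash; virtual nodes smooth out the load."""

    def __init__(self, nodes, vnodes=64):
        self.ring = sorted((h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def node_for(self, obj_key: str) -> str:
        i = bisect.bisect(self.keys, h(obj_key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("photo-123.jpg"))   # deterministic placement, no central index
```

Because placement is computed rather than looked up, there is no central metadata bottleneck to outgrow, which is the property behind absorbing data growth “in a manner that is predictable for the long term.”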
In this episode of Radio Free HPC, the topic of Exascale comes under the hosts’ scrutiny once again as they discuss some interesting stories from The Exascale Report featuring opinions from Bill Gropp of NCSA and Bill Harrod of DOE.
Rule #1: You do not talk about Exascale. (Kind of like rule #1 of Fight Club, except the guys keep breaking it.) Why not? Because too many of the people talking about Exascale are having the wrong conversation about it.
What should the conversation be? Should it be about the systems themselves, or about the work that can be done only with those systems — the science that we can’t yet do? Spoiler alert: Dan and Henry disagree on this. But a peaceful vibe reigns once again as they discuss what The Exascale Report calls “The Three Noble Truths” of Exascale, which sounds kind of Zen and cool — as if it were coined by an Exascale Samurai.
And finally… is it time to talk about Zetta-scale?
In this slidecast, Eric Barton, Lead Architect for Intel’s High Performance Data Division, presents a progress update on the Fast Forward I/O & Storage program.
Back in July 2012, Whamcloud was awarded the Storage and I/O Research & Development subcontract for the Department of Energy’s FastForward program. Shortly afterward, the company was acquired by Intel. The two-year contract scope includes key R&D necessary for a new object storage paradigm for HPC exascale computing, and the developed technology will also address next-generation storage mechanisms required by the Big Data market.
The subcontract incorporates application I/O expertise from the HDF Group, system I/O and I/O aggregation expertise from EMC Corporation, object storage expertise from DDN, and scale testing facilities from Cray, teamed with file system, architecture, and project management skills from Whamcloud. All components developed in the project will be open sourced and benefit the entire Lustre community.
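As a concrete taste of the application-level I/O expertise the HDF Group contributes, the short h5py sketch below (our example, not the Fast Forward prototype) writes a chunked, compressed dataset with attached metadata, then reads back a sub-region without loading the whole array:

```python
import h5py
import numpy as np

# Write a checkpoint-style dataset with self-describing metadata.
with h5py.File("checkpoint.h5", "w") as f:
    dset = f.create_dataset(
        "temperature",
        data=np.random.rand(1024, 1024),
        chunks=(128, 128),        # chunked layout enables partial reads
        compression="gzip",
    )
    dset.attrs["timestep"] = 42

# Read back one tile; only the chunks covering it are fetched from disk.
with h5py.File("checkpoint.h5", "r") as f:
    tile = f["temperature"][0:128, 0:128]
    print(f["temperature"].attrs["timestep"], tile.shape)
```

Structured, self-describing I/O like this is the application-facing layer the program pairs with the new exascale object storage underneath.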
This is a fascinating presentation for those interested in how an Exascale system might handle data, and the prototype that comes out of it may well represent the roadmap to the future of supercomputing.
In this podcast, the Radio Free HPC team looks at success factors for technology startups. Prompted by a recent interview with Sun Microsystems co-founder Andy Bechtolsheim, the discussion centers on lessons learned from Sun’s decline and eventual acquisition by Oracle.
Our lucky winner of the $100 AMEX card was webinar attendee Dr. J. Fredin. Congratulations!
As a follow-up to the event, X-ISS has published a briefing document entitled: The Business of HPC – The new business metrics HPC Pros need to consider.
Businesses that are new to HPC often require different performance metrics than traditional users in an industry born out of the research community. Many of these organizations need a more granular, as well as a more holistic, view of how HPC impacts the business and specific projects, including historical data. Some of these metrics can be broken out as follows, with a toy cost-rollup sketch after the list:
- Project Costs
- User Costs
- Deadline & Delay Costs
- Custom Business KPIs
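A toy rollup of these metrics from scheduler accounting data might look like the sketch below; the record fields, the blended rate, and the delay penalty are all assumptions for illustration, not X-ISS’s schema:

```python
from collections import defaultdict

# Hypothetical accounting records; field names are illustrative only.
jobs = [
    {"project": "crash-sim", "user": "alice", "core_hours": 5200.0, "late": False},
    {"project": "crash-sim", "user": "bob",   "core_hours": 1800.0, "late": True},
    {"project": "cfd",       "user": "alice", "core_hours": 900.0,  "late": False},
]

COST_PER_CORE_HOUR = 0.05    # assumed blended $/core-hour
DELAY_PENALTY = 250.0        # assumed cost per missed deadline

project_cost = defaultdict(float)
user_cost = defaultdict(float)
delay_cost = defaultdict(float)

for j in jobs:
    cost = j["core_hours"] * COST_PER_CORE_HOUR
    project_cost[j["project"]] += cost
    user_cost[j["user"]] += cost
    if j["late"]:
        delay_cost[j["project"]] += DELAY_PENALTY

for p in project_cost:
    print(f"{p}: ${project_cost[p]:,.2f} compute, ${delay_cost[p]:,.2f} delay cost")
```

Feeding in real numbers means pulling job records from the scheduler’s accounting log and keeping the history around, which is exactly the granular-plus-holistic view the briefing argues for.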
In this slidecast, CEO Bill Bain from ScaleOut Software presents: In-Memory Data Grids Enable Real-Time Analysis.
“ScaleOut Software is a pioneer and leader in data grid software. Since our first products shipped in January 2005, we have consistently developed leading-edge technologies that help our customers solve scalability and performance challenges and gain competitive advantages for their businesses.”
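ScaleOut’s own API isn’t excerpted here, but the core in-memory data grid idea is to partition data across the RAM of many hosts and push the analysis to each partition in parallel, merging only the small per-partition results. A single-machine Python sketch of that pattern, with multiprocessing standing in for a real grid:

```python
from multiprocessing import Pool

# Toy "grid": records partitioned the way an IMDG spreads them across hosts.
orders = [{"customer": f"c{i % 100}", "total": float(i % 37)} for i in range(100_000)]
PARTITIONS = 8
partitions = [orders[p::PARTITIONS] for p in range(PARTITIONS)]

def analyze(partition):
    """Run the analysis where the data lives: revenue per customer."""
    local = {}
    for o in partition:
        local[o["customer"]] = local.get(o["customer"], 0.0) + o["total"]
    return local

if __name__ == "__main__":
    with Pool(PARTITIONS) as pool:
        merged = {}
        for local in pool.map(analyze, partitions):   # parallel map step
            for cust, rev in local.items():           # cheap merge step
                merged[cust] = merged.get(cust, 0.0) + rev
        print(max(merged.items(), key=lambda kv: kv[1]))
```

Because each worker touches only its own in-memory partition, adding hosts scales storage and analysis throughput together, which is what makes near-real-time results feasible.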