In this video from the Lustre User Group 2013 conference, Brent Gorda from Intel presents an overview of Lustre Activities at the Intel High Performance Data Division.
The DOE Exascale Mathematics Working Group (EMWG) has issued a Call for Position Papers. Selected contributors may have the opportunity to participate in the Workshop on Applied Mathematics Research for Exascale Computing, currently planned for August 21-22, 2013 in Washington, D.C.
The EMWG was formed to identify mathematics and algorithms research opportunities that will enable scientific applications to harness the potential of exascale computing.
While opening up the possibility of conducting ground-breaking science, computing at the exascale will introduce difficult challenges such as extreme concurrency, memory and data motion limitations, energy control, and resilience. Substantial applied mathematics research is required to realize the full benefits of computing at the exascale.
Submissions are due April 30.
In this video from Moabcon 2013, Dick Bland and Jérôme Labat from HP present: The New Style of IT: HP Update for Moabcon 2013.
Cloud, Mobility, Security, and Big Data are transforming what the business expects from IT, resulting in a “New Style of IT.” The result of alternative thinking from a proven industry leader, HP Moonshot is the world’s first software-defined server, one that will accelerate innovation while delivering breakthrough efficiency and scale.
While the first spin of Moonshot is not targeted at HPC, Bland said that HP will be able to spin up new modules for the platform that could include FPGAs and ARM-based nodes more suited to high performance computing.
Nine months after SuperMUC's inauguration, an agreement was sealed for a planned system expansion to be completed by the end of 2014 or early 2015. The upgrade of the LRZ supercomputer, which currently delivers a peak performance of 3.185 petaflops and holds position 6 on the TOP500 list, will roughly double the system's performance, making it capable of 6.4 petaflops.
The contract for SuperMUC Phase II was signed by representatives of all parties involved: Arndt Bode of the Leibniz Supercomputing Centre (LRZ), Karl-Heinz Hoffmann (chair of Bayerische Akademie der Wissenschaften), Martina Koederitz (general manager of IBM Germany), and Andreas Pflieger (IBM) in the presence of Wolfgang Heubisch and Georg Antretter representing the Bavarian State Ministry of Sciences, Research and the Arts.
The agreement states that 74,302 Intel Xeon processor cores will be added to the existing 155,656 processor cores of SuperMUC. Its main memory will be expanded from 340 to 538 terabytes, and 9 petabytes of intermediate storage will complement the system's existing capacity of 10 petabytes.
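For readers keeping score, the quoted figures are easy to check with a trivial calculation using only the numbers above (a back-of-envelope sketch, nothing more):

/* Quick arithmetic on the SuperMUC Phase II figures quoted above. */
#include <stdio.h>

int main(void)
{
    int existing_cores = 155656, added_cores = 74302;
    double pflops_phase1 = 3.185, pflops_phase2 = 6.4;

    printf("cores after upgrade: %d\n", existing_cores + added_cores);     /* 229,958 */
    printf("performance factor:  %.2fx\n", pflops_phase2 / pflops_phase1); /* about 2x */
    return 0;
}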
The LRZ HPC system has been designed for exceptionally versatile deployment. The more than 150 different applications running on SuperMUC in an average year range from problems in physics and fluid dynamics to a wealth of other scientific fields, including aerospace and automotive engineering, medicine and bioinformatics, and astrophysics and geophysics.
Professor Bode is confident that SuperMUC Phase II will run as stably and reliably as the current system has done from day one – and that it will scale to the much larger number of cores.
‘Only shortly after starting operation, SuperMUC was working at full capacity. Already there are applications that use practically the entire system, and they do this in a very efficient way. Especially in biology and the life sciences, we expect significantly higher demand for system performance in the foreseeable future. SuperMUC Phase II will be in an excellent position to meet these requirements,’ said Bode.
If IBM actually has a plan that gets it into hyperscale data centers – possibly with ARM, Atom, and Power microservers, possibly deploying some of the Power A2 and torus interconnect technology from the BlueGene/Q supercomputers – and if IBM will use at least some of the funds from a Lenovo deal to do the engineering to make modern servers, then dumping System x might be worth it. It would be quite interesting, in fact, to see IBM become an ARM licensee and offer both ARM and Power alternatives. But IBM is probably more inclined to think it can push Power into an x86-dominated data center, despite all the hype and real engineering going into ARM processors for servers.
As to what such a deal means to the HPC market, I think this vendor chart from the November 2012 TOP500 is very telling. IBM clearly has the largest share of the TOP500, and even though this represents a mix of Blue Gene, Power, and x86 systems, a sale to Lenovo could result in a Chinese multinational becoming the number one vendor on the TOP500.
As you’ll recall, IBM sold off its unprofitable PC business to Lenovo back in December 2004. According to reports, IBM will not be including its new Flex System servers in this deal.
Read the Full Story.
As an active blogger and HPC community member, Andrew Jones from NAG is a fixture at many HPC conferences worldwide. With ISC’13 coming up in Leipzig on June 16-20, I caught up with Andrew to get his perspectives on the conference, HPC trends, and an update on his 2013 predictions.
insideHPC: In your blog and various talks I’ve seen, it is obvious that you are very passionate about the topics of hardware and software in the HPC space. What are the issues that resonate with you in these areas?
Andrew Jones: Yes, as anyone who has encountered me at conferences or read my blogs (hpcnotes.com and blog.nag.com) will know, I am a passionate advocate of HPC as a tool for science and economic impact – and equally passionate about ensuring that HPC is seen as a complete ecosystem of hardware, software, people, processes, etc., and not merely the hardware that is so often the default focus.

Clearly the hardware matters – a supercomputer offers the promise of a big performance increase over smaller computers. But the supercomputer on its own is just a device for converting money into waste heat (via some floating point units and an oversized electricity bill). The hardware needs software (applications) to turn that potential performance into a real science tool or engineering capability, and in turn those applications need supporting infrastructure (middleware) to use the resources efficiently. Underpinning all of this software and hardware is the requirement for people – to design, deliver, program, etc., this complex ecosystem, which can be such a powerful tool. All parts of the ecosystem need attention (and investment) in order to achieve the maximum rewards of HPC.

I am lucky that I am not merely evangelizing this “software & people deliver performance” message on faith. At NAG we have built up a significant body of success stories (from over 50 projects) demonstrating that HPC expertise applied to application innovation really does deliver increased science and engineering output – much more so than investing the same effort or money in more hardware.
insideHPC: You attend many of the same HPC events around the world as I do. The other day, you mentioned at dinner that any HPC event is really not so much about the technical program as everything else around it, such as the networking opportunities, the exhibition, etc. Can you elaborate on that?
Andrew Jones: I believe the greatest potential value for most attendees is informally meeting a diverse range of fellow HPC professionals and users. Perhaps I could illustrate this by looking at the extreme – much of the obvious content of the technical program could be acquired by reading the published papers or watching recordings of the conference talks. However, attending the conference itself allows the possibility of a conversation with the author, or perhaps with another audience member inspired by the paper. To me, it is that discussion inspired by the talks that is the real opportunity of HPC events.

In smaller events the technical program is critical, because that is where most of the attendees will spend most of their time and thus where opportunities for networking are sparked. In the bigger events (e.g., SC or ISC) only a small proportion of the attendees will spend significant time in the main technical program, the rest being spent in the exhibition or surrounding side-meetings. Indeed, it is difficult to create a program of sufficient quality in every topic required to attract the breadth of attendees at such large events. At these events, the knowledge on offer also comes from a comprehensive exhibition (an often undervalued aspect of the bigger HPC events), which can put a much broader set of ideas, products, and research in front of attendees than a technical program could in a sensible timeframe.

In my experience, catching up with existing contacts, discussing experiences with industry practitioners and experts, and creating new relationships are the key activities at HPC events that are likely to lead to beneficial collaborations.
insideHPC: We’re well into the year 2013. How are those HPC predictions you blogged about coming to fruition?
Andrew Jones: I said Big Data will gradually be overtaken as the buzzword of choice for the HPC community. No sign of that yet! I predicted that some new buzz-themes (needing catchy buzzwords) would emerge, specifically energy-efficient computing and ease-of-use in HPC. There are some tentative signs of this happening, especially energy-efficient computing, but I think there is still more to come this year.
I said there would be continuing discussion of GPU vs. Phi as the accelerator of choice – especially at ISC’13. I think this one is pretty much true so far, but let’s see in Leipzig!
I also predicted that the HPC community would see a strong focus on industrial HPC this year, especially engagement between centers of HPC expertise and industry users. [Note that I say “centers of HPC expertise” – it is critical that this does not mean only supercomputer centers. There is a lot of real expertise in HPC outside the supercomputer centers – e.g., within the main HPC vendors, at specialist HPC expertise providers such as NAG, or in some cases within the industrial end users themselves.] I think this prediction has already come true, with more on the way. I hear of companies increasingly seeing the potential of HPC within their business; those who have previously invested are increasing and broadening their investments; and companies are seeking interactions with centers of HPC expertise to get a step ahead of their competitors. At least in the UK, politicians are very keen to get industry using HPC, and investments are increasingly being predicated on that.
insideHPC: What will NAG be showcasing at their ISC’13 exhibit?
Andrew Jones: As always, NAG will send several staff to ISC’13. We will be available to discuss how our team of HPC software engineers can enhance customer application codes, improving scalability, implementing new algorithms, or adding other innovations to get more performance and solve more complex problems. We can also help with advice on HPC strategy and procurement, and with planning application development to exploit future hardware technologies.
As well as the HPC services and consulting side of our business, NAG will be showcasing the latest in our library products. In particular, this year we have a new release of the NAG Library (Mark 24), which adds new routines in optimization, FFTs, wavelets, and data fitting to the existing NAG chapters, for well over 1,000 routines in total. We’ll also be demonstrating NAG routines on the Intel Xeon Phi coprocessor and other parallel computing technologies.
insideHPC: What is the NAG Library for SMP & Multicore?
Andrew Jones: The NAG Library for SMP & Multicore is a full implementation of the NAG Library in which a large number of the routines have been enhanced for parallel processing using OpenMP. This means they can run significantly faster on multi-socket and multicore systems, processing larger amounts of data, etc. This offers customers an easy way to achieve the performance advantage of multicore processors – simply link to the multicore version of the NAG Library instead of the serial version.
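To make that concrete, here is a minimal generic sketch of the pattern Jones describes: a routine whose internals are parallelized with OpenMP behind an interface the caller never sees change. This is illustrative C, not actual NAG code, and the routine name is hypothetical.

/* lib_dot: a stand-in for a library routine whose loop is parallelized
   with OpenMP. Compiled without OpenMP support the pragma is ignored
   and the routine runs serially; compiled with it (e.g., gcc -fopenmp)
   the same call uses all available cores. */
#include <stdio.h>
#include <stdlib.h>

double lib_dot(const double *x, const double *y, long n)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += x[i] * y[i];
    return sum;
}

int main(void)
{
    long n = 10000000;
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    for (long i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

    /* Caller code is identical for the serial and parallel builds. */
    printf("dot = %f\n", lib_dot(x, y, n));

    free(x);
    free(y);
    return 0;
}

Because the interface is unchanged, switching from the serial library to the multicore one is purely a build and link decision, which is the convenience being described.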
insideHPC: Why do you continue to attend and exhibit at ISC year after year? What makes this event special?
Andrew Jones: It is an HPC event that combines the best of everything. It has scale – over a thousand attendees – while somehow managing to retain the engaging small-conference atmosphere of its origins. It has one of the better technical programs of the larger conferences, thanks to the organizers’ hard work in balancing well-chosen invited talks, discussion panels, and peer-reviewed papers. Most importantly, the agenda, the exhibition, and the surrounding social events are all planned with excellent opportunities for networking.
At a local level, Germany is an important market for us in both the commercial and academic sectors (e.g., we have a number of large academic site licenses for our libraries), so ISC is a good opportunity to meet some of our end users.
Overall, for NAG, ISC’13 is a great place to meet new people, to learn from them, and to understand how NAG can help them with their HPC and numerical computing challenges.
In this video from the Lustre User Group 2013 conference, Peter Jones from Intel presents: Lustre Releases.
The good folks at the PlanetHPC project have published an update to their report: “A Strategy for Research and Innovation through High Performance Computing.” The report makes the case for investment in HPC at the European level, and suggests a strategy for HPC research, development, and innovation.
High Performance Computing (in common with the computing domain in general) is at a crossroads; technological challenges threaten to disrupt three decades of continuous exponential growth in the computational power of HPC systems. Europe must act to counter this threat. HPC is a proven technology for delivering economic and societal benefits, and many developed and emerging economies outside the European Union are investing heavily in it. Many countries have recognised that to out-compute is to out-compete.
In this follow-up podcast to the GPU Technology Conference, the Radio Free HPC team mulls over a talk by GE’s Dustin Franklin, GPU app specialist. Dustin’s topic was GPUDirect RDMA; was this a first look at real-world GPU-to-GPU communications over RDMA?
Follow along as the guys describe flow charts on technical slides that are not yet approved for viewing by the “great unwashed masses” – but make no mistake, they’re impressed by what they saw. Dan “knows a guy” who can divulge more, and offers to arrange an inquisition with Henry. Henry promised to “be nice,” whatever he means by that. Rich missed this GTC session and several others while “conducting interviews,” whatever he means by that. Dan offers another characterization. And this just in: there’s a great deal of information available on the Internet.
In retrospect, Roadrunner could be viewed as something of a design cul-de-sac, created by the artificial goal of the petaflop milestone. But it’s notable that even in the contrived race to a quadrillion flops, something of worth endured. Although the PowerXCell 8i was a commercial dead end, x86/accelerator combo servers took off and are now sold by every HPC system vendor, IBM included. For the time being, accelerators offer the only commodity-based technology that delivers multi-petaflops of supercomputing in reasonable power envelopes, not to mention tiny systems with multi-teraflops capability. The energy efficiency of these accelerators, compared to standard processors, is driving the technology into mainstream HPC and stretching the number of FLOPS that can be squeezed into a datacenter or a deskside cluster.
Read the Full Story.
What Adapteva has done is create a credit-card-sized parallel-processing board. It comes with a dual-core ARM A9 processor and a 64-core Epiphany Multicore Accelerator chip, along with 1GB of RAM, a microSD card, two USB 2.0 ports, 10/100/1000 Ethernet, and an HDMI connection. If all goes well, this board by itself should deliver about 90 GFLOPS of performance, or, in terms PC users understand, roughly the horsepower of a 45GHz CPU. The board will run Ubuntu Linux 12.04 as its operating system, and to put all this to work, the platform reference design and drivers are now available.
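That “45GHz CPU” comparison can be reproduced with a back-of-envelope calculation. The per-core figures below are illustrative assumptions (roughly 700 MHz per core and two single-precision floating-point operations per cycle), not numbers from Adapteva’s spec sheet:

/* Rough check of the Parallella performance claim. The clock rate and
   FLOPs-per-cycle values are assumptions for illustration only. */
#include <stdio.h>

int main(void)
{
    double cores = 64.0, clock_ghz = 0.7, flops_per_cycle = 2.0;
    double gflops = cores * clock_ghz * flops_per_cycle; /* about 89.6 */

    printf("estimated peak: %.1f GFLOPS\n", gflops);
    printf("single-core clock equivalent at 2 FLOPs/cycle: %.1f GHz\n",
           gflops / flops_per_cycle);                    /* about 44.8 */
    return 0;
}

Under those assumptions the 64-core chip lands at roughly 90 GFLOPS, and dividing by two operations per cycle yields the 45GHz single-core equivalence the comparison implies.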
Read the Full Story.
This week Univa announced the findings of its 2013 Open Source Software Use survey. Conducted online by uSAMP, the survey finds that free and Open Source software (FOSS) is prominent within businesses today, with 76% of companies using FOSS, while 75% have experienced a problem with it. Businesses rely heavily on unsupported Open Source solutions, and 64% say they would pay for supported software if it solved their problems.
“We have always said that users are willing to pay for quality when it comes to Open Source software, and the results of the survey have confirmed as much,” said Gary Tyreman, Univa CEO. “A large number of organizations use Open Source Grid Engine as a key ingredient in product development, but as the company grows they can’t afford to rely on unsupported Open Source Grid Engine. That is when they can turn to us for the peace of mind, scalability, and reliability provided by our team and the proven Univa Grid Engine.”
According to the survey report, a lack of enterprise-grade support is the largest problem FOSS users experience in their companies, with 27% of respondents raising it as their top concern. Other troublesome issues include usability (24%), maintenance (20%), crashes (19%), bugs (18%), downtime (16%), loss of productivity (16%), and interoperability (16%).
Indeed, FOSS’s importance today means that 64% are willing to pay for better quality, with the following listed as reasons to do so:
- Stability (25%)
- Enterprise-grade support (22%)
- Ease of use (20%)
- Extra functionality (18%)
- Bug reports/fixes (15%)
- Integrated solution (13%)
- Product upgrades (13%)
- Predictable lifecycles (13%)
The key product development departments of a business, where most mission-critical software resides – engineering and R&D – rely most heavily on FOSS (32%). These trump executive (5%), legal (1%), finance (6%), sales (8%), HR (3%), and marketing (6%) combined. One in ten businesses uses FOSS across the board in every department, an indication of how heavily FOSS is depended upon as the backbone of a company.
Read the Full Story.