Supermicro was first out of the gate with announcements targeted at next week’s GTC Conference in San Jose. The company said it will showcase its latest GPU-powered X9 server and workstation solutions with support for Intel Sandy Bridge processors.
“Supermicro is transforming the high performance computing landscape with our advanced, high-density GPU server and workstation platforms,” said Charles Liang, President and CEO of Supermicro. “At GTC, we are showcasing our new generation X9 SuperServer, SuperBlade and latest NVIDIA Maximus certified SuperWorkstation systems which deliver groundbreaking performance, reliability, scalability and efficiency. Our expanding lines of GPU-based computing solutions empower scientists, engineers, designers and many other professionals with the most cost-effective access to supercomputing performance.”
Supermicro will exhibit at GTC in the San Jose McEnery Convention Center, May 14-17 in Booth #75. Read the Full Story.
In this video, Minesh B. Amain from MBA Sciences presents on new developments in SPM.Python.
Software developers may use SPM.Python to augment new or existing (Python) serial scripts for scalability across parallel hardware. Alternatively, SPM.Python may be used to better manage the execution of stand-alone (non-Python x86 and GPU) applications across compute resources in a fault-tolerant manner, taking hard deadlines into account.
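SPM.Python’s own API isn’t shown in the talk summary, but the pattern it describes, farming stand-alone applications out across workers with a hard deadline and tolerance for failures, can be sketched with nothing but the Python standard library. All names below are illustrative for this sketch; they are not SPM.Python calls.

```python
# Illustrative sketch (NOT the SPM.Python API): run stand-alone commands
# across a pool of workers, enforcing a per-task deadline and recording
# failures instead of letting one bad task kill the whole run.
import concurrent.futures
import subprocess

def run_task(cmd, timeout):
    """Run one stand-alone command; report failure instead of raising."""
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout)
        return (tuple(cmd), "ok" if proc.returncode == 0 else "failed")
    except (subprocess.TimeoutExpired, OSError):
        return (tuple(cmd), "failed")

def run_all(commands, deadline_s=10.0, workers=4):
    """Fan the commands out, collect whatever finishes, tolerate the rest."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_task, c, deadline_s) for c in commands]
        for fut in concurrent.futures.as_completed(futures, timeout=deadline_s * 2):
            cmd, status = fut.result()
            results[cmd] = status
    return results
```

The point of the design is that a hung or crashing external program shows up as a `"failed"` entry in the results rather than stalling the whole campaign, which is the fault-tolerance property the talk abstract emphasizes.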
If you’d like to learn more, Minesh will be presenting at the upcoming Stanford HPC Advisory Council Workshop on December 6, 2011.
In this webinar, Satnam Singh, Professor of Reconfigurable Computing, School of Computer Science, University of Birmingham (UK), will demonstrate data-parallel programming with Microsoft’s Accelerator system. The system provides a language-neutral library for expressing whole-array computations, which can be dynamically compiled into code for execution on GPUs, as well as code running on multiple processor cores using SSE instructions. Professor Singh will introduce the Accelerator API and data structures, which are exposed as a domain-specific library through the use of overloaded operations, with specific examples in C++ and F#.
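Accelerator exposes this style through C++ and .NET libraries; as a rough illustration of the deferred, whole-array idea, overloaded operators build an expression tree that is later “compiled” and run in one pass, here is a toy Python sketch. The class and method names below are invented for the sketch and are not the Accelerator API.

```python
# Toy sketch of deferred whole-array computation (names are illustrative,
# not the Accelerator API): arithmetic builds an expression tree; nothing
# executes until evaluate() walks the tree in one whole-array pass.
class ParallelArray:
    def __init__(self, data=None, op=None, args=()):
        self.data, self.op, self.args = data, op, args

    def __add__(self, other):
        return ParallelArray(op="add", args=(self, other))

    def __mul__(self, other):
        return ParallelArray(op="mul", args=(self, other))

    def evaluate(self):
        """Recursively evaluate the expression tree into a result array."""
        if self.op is None:
            return self.data
        a, b = (arg.evaluate() for arg in self.args)
        if self.op == "add":
            return [x + y for x, y in zip(a, b)]
        return [x * y for x, y in zip(a, b)]

x = ParallelArray([1.0, 2.0, 3.0])
y = ParallelArray([4.0, 5.0, 6.0])
z = (x + y) * y          # no work happens here; a tree is built
print(z.evaluate())      # → [20.0, 35.0, 54.0]
```

In the real system, that deferred tree is what gets dynamically compiled into GPU code or SSE code, which is why the same user program can target either back end.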
Alain Tiquet writes that some 350 attendees at last week’s GPU Technology Summit in Tel Aviv heard about the progress that’s being made within Israel’s thriving parallel-computing community.
“Two dozen speakers from NVIDIA and its partners covered topics such as GPU computing on clusters, computer vision on GPUs and image processing applications using CUDA. One of the purposes of the event – the third in a series of conferences around the world focused on start-ups working with GPUs – was to showcase how up-and-comers in Israel’s thriving startup community are tackling challenging new computing problems with the help of GPGPU. Similar events have taken place in recent weeks in Singapore and Taipei.”
Read the Full Story.
Today Nvidia announced that it’s launching multiple GPU Technology Conferences in Asia and other regions, while moving its North American flagship event from October 2011 to May 2012. The move reflects growing interest and momentum in GPU computing as an engine for scientific discovery.
With more than 2,000 attendees from more than 40 countries, GTC 2010 was the second-largest supercomputing event of the year. Building on this success, NVIDIA is adding multiple regional GTC events across the globe, including events in the following locations:
- Singapore – May 12, 2011
- Taipei – May 19, 2011
- Tel Aviv – May 30, 2011
- Tokyo – July 22, 2011
- Beijing – December 15-16, 2011
I think there are a couple of things at work here. First, a plethora of regional GPU user group meetings have been popping up on Meetup.com. In places like Taipei, an Nvidia spokesperson told me that as many as 500 people have already expressed interest in a local GPU conference. With the way visas work these days, there’s no way for many of these people to attend a conference in the States.
Secondly, I think Nvidia saw the amount of GPU-related content coming to the SC11 Conference in November and decided to make that its Fall conference. In this light, moving GTC to Spring makes perfect sense. It will also give the busy folks at Los Alamos some breathing room between shows as they prepare for the LANL Accelerated HPC Symposium, which runs in conjunction with GTC.
A Tip of the Hat goes to Nvidia on this one. It’s quite amazing that a conference heading into its third year is now the second-largest HPC symposium. By moving to Springtime, GTC won’t look or feel like it’s competing with SC for attention, hearts, and minds.
It looks like the sponsors agree, as Adobe, AMAX, Appro, Bull, CAPS, Dell, GE Intelligent Platforms, HP, Lenovo, Los Alamos National Labs, Microsoft, NextIO, PNY, Supermicro, Synnex, and SGI have all signed on for GTC 2012.
The GPU Technology Conference 2011 has issued its Call for Submissions. The conference will take place Oct. 11-14 in San Jose, CA.
“If you’re interested in sharing your work, we want to hear from you! Entering its third year, GTC advances awareness of high performance computing. The event connects scientists, engineers, researchers, and developers who use GPUs to tackle enormous computational challenges across a broad range of industries. We are looking for submissions from folks in industry and academia who have topics for 25- or 50-minute talks or posters. You can find more details and instructions for submitting here.”
When Nvidia kicked off the GPU conference a couple of weeks ago, one of the first things they announced was that MATLAB now supports GPUs. This is a key milestone, as it essentially means that anyone who knows math can now speed up their calculations with parallel processing on GPUs.
I sat down with Loren Dean, Director of Engineering at MathWorks, to talk about MATLAB and what GPU support means for the user base of this popular software.
insideHPC: So what did MathWorks announce at the GPU conference?
Loren Dean: We announced that our users can now take advantage of NVIDIA GPUs from within the MATLAB environment. We’ve added the GPU support to our Parallel Computing Toolbox, and that lets you take advantage of GPUs without low-level C or Fortran programming.
We’ve also added GPU support to our Distributed Computing Server. That product is for scaling up. So if you want to move off of the desktop and onto a cluster, a grid, or the cloud, Distributed Computing Server does that. It requires Parallel Computing Toolbox to scale to the server, but it is just a matter of saying, instead of using my local resources, I want to run on the remote resources.
insideHPC: How will this GPU support help your user base?
Loren Dean: What we’ve really tried to pay attention to is enabling productivity. Our user base cares a lot about productivity and being able to interact with their data and with their environment as well.
If you look at what happens traditionally in the HPC community, it’s all about batch. So you submit something and a couple of days later you get your results back. I was looking recently at a project we have going with Cornell right now. They have experimental resources on the TeraGrid for MATLAB. It’s fully loaded with 512 cores, and I think it’s running with a 2-3 day queue time to get your stuff in.
So what we’re trying to do with our products is bring the interactive world to this space. When we look at what typical users want, essentially it’s about interactive use of a cluster. They start out on the desktop and then move to a cluster, and eventually they may move to batch. But interactivity is key. What we’ve done with our GPU offering is we’ve extended the capabilities so they can interact with a GPU seamlessly, much the same way they do with our Parallel Computing Toolbox. With very few code changes, the user simply has to define which data is going to run on the GPU. So you create an array in MATLAB and say, OK, I’m going to run my FFT or whatever it is on the GPU, and MATLAB just does it.
insideHPC: So you don’t have to manage memory and get explicit about that kind of stuff?
Loren Dean: They can, but they don’t have to. A lot of our typical users, they hear about GPUs, they hear about speedups, and they want to try it out. So we’ve made it really easy for them to get access to it so they can hopefully enjoy the benefit of what GPUs offer.
insideHPC: Is this GPU support a new product or an extension of what you already offered?
Loren Dean: These are just additional capabilities for the two products we already have. So if you’re already a licensed user of Parallel Computing Toolbox or the Distributed Computing Server, it’s an additional capability that’s already in there and you’re ready to scale.
So if you want to go from doing something on the desktop talking to one GPU, I can show you how to do it with four GPUs or on the cluster, and there is no code change. That gets back to what we designed our products for; we care a lot about the engineer who doesn’t want to get into the details here. They’re the person who says, I want to get my work done, and I know how to program MATLAB. So we spend a lot of time separating the algorithms from the infrastructure.
The model I typically use to describe this is that of a printer. If you think about printers 20 years ago, you had to know how to program Ghostscript, and if you had a bug in the printer driver, you could actually go into the file and change it. You got down to that level if you had to. But now you just find a printer on the network, the device driver is installed, and if you send it to a color printer, it prints in color. It just does it.
That’s really the model we’re trying to follow; we have this idea that you have pre-defined configurations to submit work to. So with the Parallel Computing Toolbox, you have the local configuration that gives you local workers. And then when you want to scale to something like Windows HPC Server, you can set up a configuration for that. So, what we’ve done from a user perspective is that, once the IT administrators set up a config, the user doesn’t care. They just say, I want the work to go to that particular resource, and the code just works.
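As a rough sketch of that configuration idea in Python (the names below are illustrative; this is not MathWorks’ API), the algorithm stays fixed while a named configuration picks where the work runs:

```python
# Illustrative sketch of "pre-defined configurations" (not MathWorks' API):
# the user's algorithm never changes; a configuration name, set up once by
# an administrator, selects the execution resources.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

# An administrator registers configurations once...
CONFIGURATIONS = {
    "local":   lambda: ThreadPoolExecutor(max_workers=2),
    # Stand-in for a remote scheduler in this sketch:
    "cluster": lambda: ProcessPoolExecutor(max_workers=4),
}

def square(n):
    return n * n

def run_jobs(config_name, inputs):
    """Same algorithm; only the configuration name selects the resources."""
    with CONFIGURATIONS[config_name]() as pool:
        return list(pool.map(square, inputs))

print(run_jobs("local", [1, 2, 3]))    # → [1, 4, 9]
```

Switching `"local"` to `"cluster"` changes where `square` runs without touching the algorithm, which is the separation of algorithms from infrastructure Dean describes.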
insideHPC: Is this GPU parallel programming capability shipping or downloadable today?
Loren Dean: It’s available to anyone with a license to our standard software. The release came out around September 2, so it’s been out for a few weeks. We just haven’t talked about it until this show.
insideHPC: There are a lot of GPUs out there. What do you have on your site that would help somebody see how easy this is and get themselves started?
Loren Dean: The primary place we’d like to send people to is mathworks.com/discovery/matlab-gpu.html. It’s got videos, benchmarking examples, and documentation on using GPUs.
insideHPC: Is this the first time MATLAB has supported an architecture other than x86?
Loren Dean: It’s the first time in recent memory. Actually there were really old versions of MATLAB that ran on the Cray vector architecture.
insideHPC: So GPU accelerators have been around for a long time now. Was the market there not compelling enough to do the port until now?
Loren Dean: So in the GTC opening keynote, Jen-Hsun Huang asked how many people in the room used MATLAB, and all of the hands went up. So we’ve known about the interest.
Three primary things have held us back from providing GPU support. These are all really important from the MATLAB user perspective. First is double precision. MATLAB’s base data type is double, so by default data is double precision. So having a single-precision GPU was not going to appeal to our user base, because it would mean changing their code and getting different answers. So double precision was critical to us.
The second thing is IEEE compliance. Our reputation is based very strongly on getting correct answers. Historically, for GPUs doing graphics rendering, if you were off by a little bit, OK, nobody is going to notice that. But in technical computing, it’s really important, and the earlier versions of the libraries were not IEEE compliant.
The final thing is cross-platform support: Windows and Linux. We need to support all those platforms for the MATLAB user base.
So those three things have all come together within the past couple of months for GPUs. And while we’ve been really interested and we’ve known that there’s been demand, we didn’t want to put something out there that we couldn’t stand behind. We want to be confident that we’re getting the right answer and providing something to our broad customer base which is using double precision.
insideHPC: Does your product work with anything other than Nvidia GPUs?
Loren Dean: It does not, but there is a good reason for that; there’s no library support anywhere else. You need libraries, you need FFT, you need BLAS, etc. Nvidia has them, and they’re part of the CUDA ecosystem. That’s not available in OpenCL. It’s just nonexistent. We’ve architected our software to be able to support OpenCL if and when it comes, but today it’s not there.
insideHPC: From a business standpoint, do you think this will help you sell more software licenses?
Loren Dean: Yes, we think it will. During my GTC talk, I asked the room how many of them were using Parallel Computing Toolbox. About half the hands went up. So yes, we will see growth. A lot of people here are doing parallel computing already, but this makes it a lot more accessible.
I managed to sneak in a question towards the end of the press conference at the GPU Conference last week, and I have to admit that I wasn’t prepared for what Nvidia CEO Jen-Hsun Huang had to say in response.
insideHPC: This question is for Andy Keane. Andy, in your recent opinion piece for the Wall Street Journal, you stated that America’s competitiveness is at risk. What prompted you to write that and what kind of reaction have you gotten?
Andy Keane: Interesting reactions…
What prompted me to write is that we see all kinds of companies, countries, and institutions that have taken an attitude and a philosophy that they are very aggressively going to completely exploit new technology. So you have two ways of thinking about it. One is an opportunity and one is a threat. One makes you fall back and one makes you lean forward.
So, we have some great institutions, some great universities and some supercomputing centers that have leaned forward into adopting a new technology. We’ve seen some great results and some success in this business that has been around since 2007. We went from zero installed base to the second largest supercomputer in the world in three years. But it wasn’t in the US. We’re going to have a full series of petaflop computers. Soon there will be more petaflop class computers outside the US than there are inside the US. And so for me that was a worrisome trend because here you have a clear technology advantage that provides a lot of benefit. Mainstream researchers are using this for great advantage and we see them accelerating that adoption.
And so, I wanted to make sure people are aware that outside the US there is this very strong adoption today that will lead to results, good results on the product side.
Jen-Hsun Huang: Can I ask a question?
I really don’t care who cures cancer. I really don’t care who cures Alzheimer’s. It doesn’t matter what country does it. Just please do it.
I don’t really care which country discovers for the first time the ability to predict weather outside of the 12-week or 12-day window. I don’t care which country discovers a new way to create cars so that we can reduce our carbon footprint. I really don’t care. Just do it.
That is not really the crux of the issue. Here is the crux of the issue; it is the case that near supercomputing centers and centers where there are extraordinary amounts of high performance computing capabilities, there are clusters of smart thinkers, scientists and researchers. What if all the supercomputers were to leave one country? And what, in the case of Andy’s article, what if that country was the USA?
My question is: Why would those students come to the United States to do the research?
They will sit wherever that Supercomputer is to do the research. And they might stay because there are wonderful places everywhere. You don’t have to live in Silicon Valley. You don’t have to live in Boston. There are wonderful places all over the world. Before you know it, piece by piece by piece, competitiveness and intellectual property in this country slowly dissipates.
Also the last point is, actually I don’t really care where the smartest people are.
For a thousand dollars you can fly anywhere on the planet. I live out of my backpack, and we have offices all over the world. We have them in China. We have them in India. We have them in Japan. We have them in Russia. We actually don’t care where those great people exist, just so long as they exist.
Here’s one thing we do know: competition. Competition among companies brings out the best in us. Competition among countries, and in this particular case the competition for the fundamental, intellectual creation of knowledge, is something to reckon with. Holy Cow.
If you want to have one competition, let’s have that one. Instead of competing to create wealth, let’s have a competition of knowledge. The competition is the creation of knowledge; let’s have that.
If we start it, fine. Who cares who starts the fight? But somebody please start a fight. Somebody start a race. Somebody start a competition. I think in the end we all benefit.
Editor’s note: Then the room went quiet. It was such an amazing moment that I turned to my friend and just said, “Wow.” I don’t know about you, but this reporter thinks it’s time for this country to put up our dukes.
I think the most significant announcement at this year’s GPU Technology Conference was the one that didn’t get a press release. You have to forgive IBM, as they had a lot of Deep Things going on, I guess, but this is a big deal; Tesla M2070 GPUs are coming to BladeCenter.
“A Fermi blade offering for IBM’s BladeCenter opens up a vast new market for these products, given the large number of BladeCenter chassis in IBM’s customer base,” said Dan Olds, HPC Editor at The Register. “To me, this move by IBM, along with moves in the same direction from Dell and HP, paves the way towards wider enterprise adoption of GPU computing.”
IBM has been very smart about not changing the form factor of the BladeCenter too much since its introduction in 2002. So there are thousands of these chassis out there in the world. And now you can slide a Tesla into just about any one of them and go to town.
Big Blue has offered Nvidia GPUs on their iDataPlex platform for a while now. In fact, GPUs helped Mississippi State University attain the number one spot for x86 systems on the Green 500 list. They’ll be featured in this webinar on Wednesday, September 29, so check it out. Deep Thoughts don’t have to be Deep Secrets.
In this video, the irreverent Dan Olds from the Register interviews Andy Keane, GM of the Tesla Business for Nvidia.
In this video, Nvidia VP Rob Csongor shows us some of the highlights from Day 2 of the GPU Technology Conference.
Click here to see the Day 2 keynote on the Computational Microscope by Dr. Klaus Schulten of the University of Illinois at Urbana-Champaign. Dr. Schulten uses GPUs to increase accuracy, speed up simulations, and open new doors of discovery by exploring previously intractable computational biology problems. Great stuff!
In this video, T-Platforms’ Alexey Nechuyatov shows us the new TB2-TL blade system. Announced at the 2010 GPU Technology Conference, the TB2-TL has attracted a lot of attention at the show with its industry-leading performance per watt.
The GPU Technology Conference started up Tuesday with a rousing keynote by Nvidia CEO Jen-Hsun Huang. The room was largely filled with developers, and Huang’s message that MATLAB, Ansys, and Amber HPC applications are all now running on the Tesla GPU platform was well received.
With demos that spanned from real-time rendering to automotive design to robotic heart surgery, Huang did a great job of focusing his talk not on technology, but on how people are using GPUs to “change the world.” There are literally hundreds of millions of GPUs out there in the wild, and I think the democratization of parallel HPC computing is here thanks to CUDA and some very smart technology bets made by Nvidia.
“Don’t kid yourself. GPUs are a game-changer,” said Frank Chambers, a GTC conference attendee shopping for GPUs for his finite element analysis work. “What we are seeing here is like going from propellers to jet engines. That made transcontinental flights routine. Wide access to this kind of computing power is making things like artificial retinas possible, and that wasn’t predicted to happen until 2060.”
This conference has been a pleasant surprise for me in a number of ways. It’s only in its second year, yet the remarkable growth and energy here are notable:
- More than 140 press and industry analysts
- Four times the response to call for talks
- Twice the number of talks, up to nearly 300 hours
- Twice the number of products and technology being demo’ed
- Several thousand attendees from 50+ countries
- Researchers/scientists from 200+ universities, national labs and govt agencies
- Nearly 100 CEO/CTOs
I think we are witnessing the birth of a new computing ecosystem here. The exhibitors are incredibly enthusiastic and have great demos to show. Today’s news may not seem like a big deal at first look, but, as one speaker said today, the porting of MATLAB opens up GPU computing to anyone who knows math. To me, that adds up to a rising groundswell.