In this video from the 2013 HPC User Forum, John Hengeveld from Intel presents: Big Data Use Cases – The Size of the Data does not define Big Data.
In this guest feature, Intel's John Hengeveld reviews the past year and looks ahead to the industry challenges HPC is facing in 2013.
Happy New Year, everybody! For me, 2012 was very exciting and very stressful. On the one hand, I had family engagements, graduations, the launch of Intel® Xeon® E5 processors, the launch of the Intel® Xeon Phi™ brand and first products, and strong competitive moves in the industry. On the other hand, I dealt with my illness, my brother-in-law's accidental death, and the aforementioned launches and new products.
I started 2012 by predicting that it would be the year of “Practical Petascale” and expected 20 petascale-class machines – I under-called by 3 – and that they would be working on real applications (they are). I predicted we would start to see the technology gnomes cranking on the dawn of the exascale era. We saw Intel, Nvidia and IBM all make statements about what the next step toward exascale would look like. Intel made some key acquisitions and delivered the Intel® Xeon Phi™ products. I am excited that Intel announced these coprocessors reached general availability on 1/28. So now, pretty much anybody can get one from his or her favorite OEM.

I mentioned my four challenges to exascale – Programmability, Reliability, Efficiency, and System Scalability (PRESS) – and we made very visible headway on all but Reliability. The OpenMP standard moved forward on a solid standard for attached co-processing. According to the top500 list, the industry has substantially improved its performance per watt. System scaling solutions are starting to coalesce. On the Reliability front, there have been a few items of interest, but I haven’t seen as much as I think we need.
2013 is shaping up to be a corker in technical computing, with more new products from Intel and others, and major new system deployments globally. There will be 50+ petascale systems – maybe more.
The biggest challenges to come this year:
- The industry has been going at a breakneck pace for the past couple of years. I expect this to continue through 2013; I am worried that the software industry is falling behind in capabilities and services.
- I expect that this year will see much greater convergence and intersection between the role of the workstation in visualization and design and the role of HPC in simulation and modeling. This fact alone should expand the technical computing markets, but we still need to converge on means for cloud access and standards for how clusters relate to workstations.
- I think that industrial investment will pick up substantially. Competition requires computation. And Big Data Analytics will grow beyond the initial Hadoop models into something much more powerful in the long term. Defining that standard will be a big challenge as well.
- We had better see more traction on the system reliability front.
Quite a year last year – an amazing year ahead this year. I love this industry. I really do.
The Intel booth at SC12 includes an amazing replica of the bridge of the Starship Enterprise. In this special guest feature, Intel’s John Hengeveld writes that being part of the launch of the Xeon Phi coprocessor this week at SC12 was like something right out of the movies.
We finally launched our Intel Xeon Phi products with a wonderful presentation by our Sr. VP Diane Bryant, done from the bridge set of the Star Trek Enterprise.
The Intel team has been working closely with end customers to get the first Xeon Phi systems up and running, including an innovative effort by Glenn Brook of NICS to build an extremely efficient system using Intel Xeon Phi 5110P coprocessors while also carefully managing Xeon power. The result was the most power-efficient system on the Green500 list (2.44 GF/watt), ahead of Nvidia, AMD, and IBM’s BlueGene.
But to quote a line from Alice’s Restaurant: “That’s not what I came here to tell you about…came to talk about the draft.” Marianne Jackson, whom you last saw with a massive beer in her hands in Hamburg, is a friend of mine at Intel who has played a very important role in Xeon Phi’s development. She wrote something for the internal team that I thought was reflective of what we as an industry aspire to and dream of. Thought I’d share it:
As I sit on the Star Trek Enterprise Bridge set (yes, our event team leased the set for SC12), I wonder how much of what was imagined on this television series is going to be real in my lifetime. I have my wish list of favorite Star Trek musings that I hope to see produced by the high performance computing community. I do know that even if my contributions may be small, this product line will lead to amazing discoveries by those visionaries. There is an army of people who have worked on this product, but before it gets pushed out the door, I am part of the marketing and execution teams that have the privilege to put the pretty bow on it and say: “Here it is, world. Create something wonderful!”
The HPC industry is driving fast down a familiar road, and SC12 represents a sharp turn. We are going to see an inflection point in Salt Lake City, in a few dimensions. I am beyond excited by the dynamics of this industry today.
Let me share what I am looking for:
In the past year, Big Data has emerged as a premier investment in business and academia. The use of HPC in the analysis of Big Data and how Big Data technology is going to evolve beyond Hadoop is going to be a major topic of discussion in the sessions and in the industry. How will storage change? How will compute change? How will this increased data bandwidth requirement be reflected in emerging interconnect models? I expect to find answers to these questions at SC12.
The top 10 supercomputers will be very interesting this time around. There has been relatively little change in the top 10 over the past two lists; it will be fascinating to see whether that holds. How high up will the Titan monster go? What efficiency will it achieve? What other new systems will there be in the top 10? One very well informed person said to me in Hamburg, “This top500 list is the last gasp of the dying BlueGene architecture…” Is he right? Will BlueGene resurge? Or will hybrid architectures begin to retake a leadership role?
In the past year, two competing groups have emerged, each developing solutions for programming highly parallel compute devices. NVidia’s OpenACC camp has split off an approach to address GPU computing and is trying to establish a competing standard to OpenMP. OpenMP last week announced its draft approach for target directives that support CPUs, GPUs, and highly parallel CPUs like Intel® Xeon Phi™ coprocessors. Very serious people are supporting each approach; Intel is supporting OpenMP 4.0, of course. Some people are trying to support both. It will be interesting to see how heavily NVidia hawks their approach.
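To make the two styles concrete, here is a minimal sketch of the same loop written both ways. The directive spellings follow OpenACC and the draft OpenMP 4.0 target syntax as I understand them; treat the details as illustrative, not definitive.

```c
#include <stdio.h>

#define N 1000000

/* SAXPY offloaded two ways: OpenACC (the NVidia-backed path) and
   the draft OpenMP 4.0 target directives (the path Intel backs).
   Without an offload-capable compiler, both pragmas are simply
   ignored and the loops run on the host. */

void saxpy_acc(int n, float a, const float *x, float *y) {
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

void saxpy_omp(int n, float a, const float *x, float *y) {
    #pragma omp target map(to: x[0:n]) map(tofrom: y[0:n])
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy_omp(N, 2.0f, x, y);
    printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
    return 0;
}
```

Note how similar the two spellings are for a simple loop; the debate is less about this kind of kernel than about which model scales to real applications.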
What will the industry say about Intel® Xeon Phi products? In June, Intel announced this branding for products for the now famous Intel® Many Integrated Core (MIC) architecture. Is the industry moving these products into reality? What is the timetable? What are the products? How many people are taking this architecture approach seriously?
We have seen some announcements about new SGI and Cray architectures leading up to SC12. What more will we hear about them? How will the major OEMs respond? Cray just went live with their press release on Cascade (XC30). Looking forward to the event that announces it… and of course the Cray party.
What is happening in the storage world in support of HPC and Big Data? What about any new technologies to help improve IO bandwidth?
- Are there any new approaches for the missing middle? Lots of hype so far – where are the proof points and examples?
The industry is going a thousand miles an hour towards exascale and deep petascale, but the road is a bit cloudy, and just how the path will change ahead is unclear. In about a week, I think we’ll know where we’re going.
Editor’s note: insideHPC would like to send out condolences to John Hengeveld and his family. His wife Jen’s brother died in a tragic car accident this week. If you see her with John at SC12 next week, he says to be sure to give her a hug.
In this video from the Discovery Channel, Intel’s John Hengeveld describes how computers are replacing experimentation as a way to proceed through the scientific process of trial and error. Hengeveld wrote here recently about his experiences with a rare form of cancer and how researchers at Berkeley are using Big Data to save lives with the Cancer Genome Atlas.
Our thoughts go out to John, a very brave man indeed.
John Hengeveld is the HPC Segment Marketing Director for Intel’s Technical Computing Group. His Intel Developer Forum session titled “Big Data Meets High Performance Computing” will take place at 3:30 p.m. Wednesday in Room 2002 of Moscone West, San Francisco.
I’ve been hearing a lot of buzz about “Big Data” … people talking in terms of mining Facebook posts for marketing data. I didn’t take all the talk seriously at first, but I do now. … Let me tell you how Big Data might just save my life.
In March, I had a major appendix attack. And it turns out that within my appendix was a material called appendiceal mucinous neoplasm, which is a very rare type of cancer. There is no cure for my cancer—not yet, anyway. I’m just hanging on and crossing my fingers and hoping things work out.
Now, the first time my doctor went over the pathology report, she told me I had a 30-60 percent chance of having less than seven years to live. But then I got some good news from my doctors. After a lot of study and analysis, they offered a more encouraging assessment. They reasoned that I had a better-than-average prognosis after all, given that I didn’t appear to have very much of the material or to have had a lengthy exposure to it. So I went back to work.
But it turns out there is a high likelihood that in the relatively near future Big Data and high-performance computing (HPC) might work together to unravel the mysteries of rare cancers like mine—and offer new hope to people like me.
I like to think of Big Data as an oil field with a lot of breadth and a lot of depth. To get value out of the field, you need a powerful pump, and that’s HPC. The HPC pump allows you to draw insights from the Big Data. Today, researchers are doing just this across a broad spectrum of fields. For me, the research being done in the field of genomics hits closest to home, because this research could eventually lead to a world of personalized therapies based on a genomic analysis of a patient’s cancer.
This is one of the topics we will dive into during a session I will lead Wednesday at the Intel Developer Forum. That session—titled “Big Data Meets High Performance Computing”—will include an appearance by Professor Michael Franklin, a computer scientist who directs the AMPLab at UC Berkeley, one of the leading teams working on applications of Big Data to a new generation of problems.
Professor Franklin will explore some of the latest innovations in five applications that combine Big Data with HPC. These applications range from genomics research to crowd-sourcing to increase battery life on your cell phone (yes, it works—I’ve done it). I, of course, will have a special interest in the discussion of the role that Big Data and HPC can play in helping researchers understand the genetics in cancers and formulate appropriate therapies.
Already, people at Berkeley are using HPC to study the public data on cancer genomes. They have accessed what’s called The Cancer Genome Atlas, which shows the genomics of tumors and their hosts. The study is focused on finding the mutations that derived the cancers from their hosts, and then using that knowledge to understand the nature of the mutations that are occurring and how they might be blocked or eliminated.
This kind of research is good news—not just for me but for many other cancer patients to come. In this sense, Big Data and HPC provide hope for the future.
From my perspective, Big Data is not about sifting through massive numbers of Facebook posts and seeing who “likes” what. It’s really about generating insights to solve hard problems and improve the lives of people.
My wife Jennifer is a late riser. She goes to bed late after whatever fun or work she had the night before. She snoozes the morning away, and awakes noon-ish to me either making her breakfast (on the weekends) or calling her to wake her up (the rest of the time). She assumes that the gnomes of morning have made ready many good things while she was in dreamland. She wakes ready to take advantage of that bequest in her new day. There is an analogy in there… someplace.
Happy New Year! For all of its hits and misses, 2011 was an amazing year for the HPC industry; in my last post, on SC11 and disruptive innovation, I covered the highlights of the last big event of 2011. Looking at what’s ahead, I am expecting 2012 to be the year of Applications and Gnomes.
Roadrunner, the first and only IBM system to reach petascale on the top500 list, was hard to use and hard to program. That’s fine for a one-of-a-kind box. But, I expect by the end of 2012 there will be 20+ petascale systems and they will be doing real work, real science.
The “Practical Petascale” era dawned at SC11, and 2012 will see a great proliferation of petaflop machines. Two years ago, a petaflop machine was over 10,000 nodes and an expensive beast. Now, an Intel Xeon E5 based cluster will achieve a petaflop with roughly 3,000 two-socket nodes. These systems are programmable with standard tools and techniques and can be rapidly applied to a broader range of applications.
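As a back-of-envelope check on that node count, here is a minimal sketch; the clock speed and per-core flop rate below are my illustrative assumptions, not official product specs.

```c
#include <stdio.h>

/* Rough peak-flops arithmetic for a two-socket Xeon E5 cluster.
   Assumed (illustrative) figures: 8 cores per socket, 2.6 GHz,
   8 double-precision flops per cycle per core with AVX. */
int main(void) {
    double ghz = 2.6;
    double cores_per_socket = 8;
    double flops_per_cycle = 8;  /* 256-bit AVX: one add + one mul per cycle */
    double gf_socket = ghz * cores_per_socket * flops_per_cycle; /* ~166 GF */
    double gf_node = 2 * gf_socket;                              /* ~333 GF */
    int nodes = 3000;
    printf("cluster peak: %.2f PF\n", nodes * gf_node / 1e6);    /* ~1.0 PF */
    return 0;
}
```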
Everybody will want one. Who knows, soon it will be a measure of the Rich and Famous… I can see it now: “… and darling, in this room we keep the Van Goghs, and over there… is our petaflop cluster; it’s being used to support famine relief and protecting endangered species in New Guinea.”
Many nations and institutions will put together something like that to solve their toughest problems. The tools are in place to make scaling applications easier. With this in mind, I am focusing the next few months on understanding practical petascale applications. What are these new systems doing? How are they contributing to science? How are they contributing to national competency?
Over the past 4-5 years, a tremendous amount of technology has been developed and put in place to create this era of HPC innovation and application. Many technologies take 4-6 years to go from the first inklings to commercial deployment. If 2018-2020 is the arising of exascale (Intel has committed to an effort for an exaflop in 20MW by 2018 – Kirk Skaugen’s ISC11 talk), then 2012 is 5 AM for the Exascale Gnomes – it’s dawn. Time to get to work.
Practical Exascale will need solutions to the canonical “exascale problems” such as “PRESS” – Programmability, Reliability, Efficiency, System Scalability.* Each of those has to have Gnomes at the ready in 2012.
The more I look at the research into exascale applications like CFD, weather modeling, and molecular simulation, the more exascale problems don’t look like bigger versions of their petascale brothers. Data will be less organized and less monolithic. Simulations will model both the macro and micro levels, and interactions between the two will drive complexity, with millions of threads running, coordinating, and communicating with each other. Programming all of these and keeping all of it working across a wide variety of data will be a significant problem. Gnomes will need to continue work on the optimization of the Seven Dwarves as well (ouch, I didn’t foresee that one). Perhaps programming and system scaling people have another year or two to get their acts together, but not more than that.
Reliability Gnomes also have to begin serious work. Historically, power efficiency and reliability have been competing interests. It’s physics as much as anything: smaller signal swings mean less energy to store information. Eventually errors will show up, and the bigger the system, the more combinations of errors will affect system performance. Detection and recovery are expensive from a silicon perspective and weigh against the power budget as well. Of the exascale issues, this one scares me the most. If the Programming Gnomes have two years to crack their problem, Reliability Gnomes really have about one… their results need to feed into process research and design.
Work on Efficiency is really work on efficient performance. In exascale, I don’t think we can discuss one without holding the other relatively constant. There are lots of wasteful parts of the exascale system: power delivery, cooling, storage, and interconnect. These all have to be power streamlined. Gnomes can already be heard singing a happy work song on all of these. The use of dedicated highly parallel architectures like MIC and radically different interconnect approaches are at least asking the right questions, if not yet getting answers.
So looking forward to 2012, I expect real movement on these four key questions. So when the rest of the world awakens in, oh 2016, or so… they will find their metaphorical Exascale Breakfast plates full of what I made Jen last weekend.
*- I love memory aids and acronyms… witness “Intel® Many Integrated Core (MIC) architecture.”
In this special guest feature, Intel’s John Hengeveld reflects on the amazing week that was SC11.
There is so much to cover from SC11. It was a thrilling week of meetings, technical sessions, and new technology. I learned a lot, and appreciate what a great and exciting ride we are in for in the years to come. The key things I was looking for in my pre-SC11 column are covered here.
- New CPUs and the Top500: Interlagos — the AMD Opteron 6200 launched on Monday with a focus on core count and power efficiency per core. Intel made a press announcement of the performance levels of the future Intel Xeon E5 family as shown on the top500, and further announced that Xeon E5 will support PCIe 3.0. The top500 list had listings from each of these new CPUs, including Cray systems with AMD processors, and HP, Bull and Appro systems with Intel processors. At the end of the day, with banner products from both vendors, the industry is set up for a fresh push forward.
- New Big Systems and new systems across the Globe: While the #1 system surged to over 10PF, the top10 remained unchanged. I heard about some systems in development that will be coming soon (GENCI Curie, LRZ, Titan), but it was the absence of any new top10 systems that surprised me. More interesting, this occurred while the bottom of the top500 moved up aggressively, from 40.187 TF to 50.94 TF. Hopefully this pause in the top10 is a breather while we wait for new systems.
- GPU vs. Intel MIC part 4: Kepler – Was I the only one disappointed by Jen-Hsun Huang’s keynote? We heard nothing further on Kepler. He struggled to avoid saying Intel and ARM (at one point smoothly saying NVidia when he meant Intel). He made a case that exascale in 2020 at 20MW is a key goal and that lower power solutions would be required to get there. But the substance of his talk jumped off of Clayton Christensen’s keynote from last year and talked about the “Innovator’s Dilemma” on the path to exascale.
I taught corporate strategy at Portland State University for many years and have often taught the key insights in “The Innovator’s Dilemma” and “The Innovator’s Solution.” So I was emotionally connected when Huang started out there. The key principles of how a “low end disruptor” captures the mainstream of a market with a lower cost “good enough” solution are valuable insights for the technology world in general and HPC in particular.
The point of referencing Christensen was to argue that GPUs represent a disruptive innovation for the mainstream of HPC. While he very clearly made the case that NVidia graphics accelerators were once a new-market disruptor in gaming and a low-end disruptor in workstations, he went off target in trying to stretch that into the HPC space.
At 31 minutes into the keynote, Huang says: “If I can just figure out how to program it, if I can describe all my problems as a triangle, I could solve the world’s problems.”
In this comment, Huang admits that adoption of GPU technology is predicated on adopting an isomorphism. The programming model of the GPU is the transformation of a problem into manipulations of triangles… and thus the debate now underway in the industry.
The issue in HPC is not “does the industry need the density of performance at lower power” (we do), but rather “must we adopt the isomorphism of thinking of the world as triangles to use the system with highly efficient performance levels”. This is the core of the GPU vs. Intel MIC architecture debate.
The substance of the keynote was demonstrations of the impact of increased compute density on gaming examples (BF3, Assassin’s Creed, etc.) and a plug for his Maximus WS product, which, while fun for the gamer in all of us, left us feeling a bit… hollow.
- MIC: 1 TF per socket. I got excited when I found out the Knights Corner silicon would be powered on by SC11, and deeply hoped the gnomes working on it would be able to run Linpack or DGEMM on it by that time. My friend and boss Joseph Curley pressed the team to complete the demo on time. We made it – you can see Joe with Raj Hazra (the GM of Intel’s Technical Computing Group) proudly holding up one of the first Knights Corner parts in the picture below. Joe is looking stern there, no doubt from exhaustion. He’s been a busy man of late.
The more interesting element of Raj’s talk was the discussion that an Intel MIC product appears to applications as a fully functional compute node, able to run its own open source operating system. This means that many applications will port to MIC with a simple recompile. Robert Harrison stood up and presented results from porting “tens of millions of lines of code” to the Intel MIC software development vehicle.
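In code terms, the claim is that an ordinary cluster program like the sketch below – my own illustration, not code from the talk – targets the coprocessor with a recompile (Intel’s compiler uses the -mmic flag for native MIC builds) rather than a GPU-style rewrite.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* A plain MPI + OpenMP program: the kind of existing cluster code
   that, per the talk, runs natively on MIC after a recompile
   (e.g. icc -mmic) with no structural changes. */
int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    #pragma omp parallel
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());
    MPI_Finalize();
    return 0;
}
```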
So the GPU vs. MIC debate is engaged in full force. NVidia and Intel are now mostly publicly aligned on the goal: an exaflop in 20MW this decade. The debate on performance is over; the debate on programming has begun.
- PCIe 3: Mellanox announced in their quarterly earnings release some news on their InfiniBand solutions. A few of the new systems in the top500 had Mellanox IB solutions. Intel announced that their future Xeon E5 processor integrates PCIe 3.0 on die. No word from AMD on this, and no announcements from other graphics card or interconnect suppliers. The beginning of the PCIe 3.0 transition is here. Interconnect bandwidth is going to be a key element in delivering performance in some cluster architectures and in some key workloads. Again, I expected more from other IB manufacturers. This transition will accelerate with greater force in the first half of 2012.
** Footnote: A sustaining innovation is the opposite of disruption. It is the normal progress of a maturing technology becoming accessible to a larger portion of an overall market space. Unmet needs are now met, such that customers see higher value in the product they are using and either pay more for it or buy more of it. Adding a feature to a product, like the iPhone 4S speech recognition feature, is an example.
In this video, Intel’s Raj Hazra showcases the company’s new Knights Corner HPC platform, which is capable of 1 Teraflop of performance on a single chip.
Intel gave a three-part story yesterday in their press luncheon: two parts Intel MIC, one part Xeon Sandy Bridge. Part 1: Intel reached an interesting milestone by demonstrating the first silicon of Knights Corner running a teraflop of DGEMM. This is news because it establishes that Intel, like Nvidia, will have a highly parallel optimized architecture with solid performance for supercomputing applications. Unlike GPUs, however, Intel promised to preserve the programming model of traditional clusters when MIC rolls out. Intel’s Robert Harrison presented the MIC programming model, and then Jeff Nichols of ORNL and Robert Glenn Brook from the University of Tennessee discussed their experience porting tens of millions of lines of code to MIC over a few months.
To round things out, Hazra showed the performance of Xeon E5 processors (Sandy Bridge) on the Top500 list, and interestingly compared it against the Top500 performance of the AMD Opteron 6200 (Interlagos).
“Knights Corner, in its first weeks, is showing the kind of performance potential we hoped for when we introduced our Intel® MIC strategy at ISC10,” said John Hengeveld, HPC Segment Marketing Director. “The notion that these products are fully functional compute nodes that support OpenMP, MPI, and standard languages has been proving out well at ORNL, TACC, and Sandia, among others… and we are excited to see our promises from last year turn into reality.”
Read the Full Story.
In this special guest feature, Intel’s John Hengeveld primes us for a great week at SC11.
Tomorrow morning, I leave for SC11 in Seattle. I will dive into an immersive experience of a long string of press events, parties, customer meetings, speeches, technical talks, birds of a feather sessions, demos and marketing pitches.
As everyone knows, Seattle is a suburb of Portland… So, if you want to know what to do in Seattle, just ask me. You can visit Paul Allen’s EMP and Science Fiction museums near the Space Needle, or walk down Pike Place Market and watch the fishmongers throw fish. The aquarium is fun as well. The food in this city is fantastic! It’s almost as good as Portland’s.
That being said, the Conference is going to be a thrilling experience for me. I am excited for what Intel has in store, and I am eager to hear what AMD will show.
Here are the top five things I want to learn about at the conference:
- New CPUs. AMD Interlagos and Intel Sandy Bridge are nearly here. I am personally eager to see real products shipping to real customers. Both Intel and AMD announced that we are shipping to HPC customers. The Top500 list should tell us a lot about what performance and performance efficiency will be for these new players. Will the performance be as expected? What more will we learn? I dream of the not too distant future when I will experience the subtle joy of no longer adding “formerly known as Sandy Bridge” to “future Intel® Xeon® Processor E5 family”.
- New Big Systems and new systems across the Globe. Last time, we got an announcement of the new Japanese supercomputer, just recently proven over 10PF. Will we see any new systems in Europe and the US surging towards that same bar? It’s a global game, and everybody is starting to ask what their supercomputing strategy is and should be. Next year, I expect a wall of machines from most industrialized nations. When will South America jump onto the list? Africa? This year, I will walk the show floor looking at the cornucopia to be.
- GPU vs Intel MIC part 4. In June 2010, Intel announced its “Many Integrated Core” architecture, and in June 2011 announced its exascale commitment. In June 2010, NVidia announced Fermi, and the debate ensued… What is the optimum path forward for highly parallel applications? Will we hear anything on Kepler? Will Jen-Hsun Huang give us guidance as to where accelerators are going? (Shameless plug: Huang went to the same high school as my kids – 2010 Oregon football champion Aloha High School.) What is the status of Intel’s Knights Corner product?
- PCIe3? I was surprised at ISC11 because I thought I would see more on PCIe Gen3 solutions in interconnect. Mellanox announced in their quarterly earnings release some news on their InfiniBand solutions. What more will we see about the future plans of players here?
- Technical Sessions: Exascale Applications. What new papers will be written on the computer science of exascale? How will complex models like weather, CFD, etc. get broken down and simulated on the systems coming in the future? For the exascale labs in Europe and the US that are deeply engaged in the computer science of exascale, what more will we learn here that can help us anticipate architectural requirements and inflections?
I have friends from many companies across the industry. I look forward to seeing them all, having a drink with them, and seeing where things are headed. I’ll get back to you after SC11 to see what information I come up with.
I had a historic moment of learning at ISC10, the night Kirk Skaugen announced the Intel® MIC program’s commitment. We were perhaps a little celebratory in Hamburg as I went down the street with my wife Jennifer and a collection of colleagues. Along with us was Hugo Saleh, one of Intel’s HPC gurus, and we discussed the marketing concept of want and pricing. To Hugo, it seemed that the more apparent you made your need to make a deal, the more you wound up paying for it. It occurred to me that this was a neat concept that applies broadly.
Around Intel, we use “the Hugo Rule” with the following canonical statement: “the more you seem to want, the less you are going to get, the more you will have to pay for it”.
It’s a concept well grounded in economics: increased demand, increased price. You see it in buying a house or a car. If you show that you are excited, the price goes up. It’s also well grounded in negotiating theory. Hit the silk market in Beijing, and you will see it in action. The famous Star Wars cantina scene applies here: “that’s the real trick isn’t it – then it’s going to cost extra – ten thousand, all in advance…” The thought is premised on the notion that in one-time negotiated agreements, where a long term relationship of value creation isn’t being built, open communication about your interests hurts your interests.
The Hugo Rule is deeply imbued in Western culture. (Ironic side note: it’s my experience in Japan that it feels impolite to talk until you show that you understand the interests of the group so deeply that you don’t need to speak about them anymore.)
We are in the Near Exascale Era as we go from petaflop to deca-petaflop and on. China made a statement on this progression last week with their “Blue Light” announcement. If we treat this time like we are in negotiations for advantage, the Hugo Rule will apply. The more we seek, the less we get.
How much better will all of this go if we work closely together? We share our insights; we collaborate with research partners. We study the very challenging exascale problems, not as negotiators but as partners. The processing power of the world’s largest supercomputer circa 1997 nears the point of fitting on one chip – a lag of roughly 14 years between the #1 system and the single chip. Apply that same lag to exascale systems arriving around 2020, and this will happen with exascale too: in 2034, exascale on a chip.
I am less interested in the first exascale system than I am about how Exascale systems do real work. To do that, we need not one exascale system but hundreds and eventually, as Moore’s law pounds on, millions.
Exascale must not become a one-time negotiation. Otherwise, we will all get very little and pay a lot.
In this special guest feature, Intel’s John Hengeveld looks ahead to Data Intensive Science and other coming attractions at SC11.
In a few weeks, Super Computing 2011 (SC11) will be in Seattle. I live in Portland, Oregon, so this is basically next door. I love Seattle. I love the flying fish, I love the Mariners (yeah, I know… my life is happy and I need a little pain for balance), but I especially love the Museum of Flight at Boeing Field. I love to be there among old Air Force Ones, a Blackbird spy plane, and vintage aircraft of all sorts. My son used to think the Museum of Flight was the finest place in the world. Now he thinks that’s a sound studio in Southern California, but I digress…
There is a 2-axis flight simulator there. My son went in, got twisted, turned around and upside down as he flew around a simulated Seattle in his simulated Jet. Riding with him, I was thrilled (and maybe I was a teensy bit nauseated).
These things are great fun but it doesn’t take a lot of compute power to create THAT immersive 3D experience. It takes a lot more to drive a 3D world, or render a 3D movie.
When I was on the SC09 committee I got hooked on the technical content of the conference quite deeply. Our conference had a thrust on supercomputing’s role in 3D. I observed first-hand how the conference committee does a great job of bringing forward papers with real meat to them. No marketing fluff – real innovation.
This year’s SC11 looks at Data Intensive Science (DIS) as the primary thrust, and I anticipate some great papers from it. DIS is one of the areas that strains supercomputing architecture as we look forward to the exascale era. Massive amounts of data exist in health and bioscience that can be brought to bear to see new patterns and new connections. My favorite example is the work (shown at IDF) by David Patterson (Berkeley) and David Haussler (UCSC) on the study of cancer genome mutations.
The cool thing about DIS is that it hits cloud, new compute architectures, and new storage architectures in one shot – that’s three hot topics in one. A broad thrust in the scientific community will twist and turn the HPC area almost as much as my son and I twisted and turned in that simulator.
Cloud as a means of data storage and search creates a great opportunity to bring together large quantities of public data. Innovative new compute architectures (like Intel’s MIC architecture or GPUs) facilitate understanding of this data at high bandwidth. New storage architectures make finding relevant data efficient, so analysis can proceed efficiently. DIS taxes compute bandwidth, memory bandwidth, and IO bandwidth, but does so in a balanced way. There are so many examples of potential applications that recoding for each will be a problem. It will be interesting to see how the different potential architectures position their solutions for the space. You can find more at the Nature.com blog on Data Intensive Science.
The last step of the DIS world is the democratization of access. Until a broad range of researchers can use the data that is publicly available, the rate of breakthroughs will be slow. This is another example of why standards that simplify HPC access are needed.
My son loves the physical sensation of flying an “Immelmann” – you climb, pull a half loop, then flip yourself over, and you are flying in the same plane in a different direction but at a much higher altitude. DIS is kind of like that. I’m not sure, but some folks might get queasy.
In this video, Intel’s John Hengeveld presents: Accelerating the Pace of Discovery. His talk is followed by UC Berkeley’s David Patterson presenting: Big Data, HPC, and Cancer. These talks were recorded at IDF 2011 in San Francisco. Download the PDF.
A separate video shows the entire session (just the speakers, no slides) including a talk by Rob Neely of LLNL. A tip of the hat goes to John Hengeveld for providing us with the slides for these two talks so that we could edit it all together.
In this special guest feature, Intel’s John Hengeveld ponders the fate of our most powerful supercomputers.
Watching a Hollywood film the other day about the world’s fastest computer got me wondering: what happens to these colossal machines when they reach their end of life? The movie I saw was called WarGames: The Dead Code. I will save you the trouble: the world is saved from the evil 2008 supercomputer RIPLEY when Joshua, an AI personality originally from the 1983 “WOPR” supercomputer by a dam site (and yes, I spelled that correctly), is transferred to the hero’s laptop by… email. My wife yelled at me to turn it off because it was awful. I told her I was researching my blog.
In the original Wargames movie, WOPR was specifically designed to run simulations of “global thermonuclear war” around the clock. It was programmed with heuristic learning algorithms (no doubt in LISP), which ultimately gained sentience and became Joshua. “Would you like to play a game?”
This got me to thinking. It took a lot of work, and years of research to get WOPR (or any other purpose built machine) to do its tasks. Do these systems wind up living a long and healthy life and smoothly transfer off their science to later systems, or do they wind up off by a dam site?
A good friend and Intel colleague of mine, Dr. Mark Neidengard, was once the operator of the CalTech Touchstone Delta system. For a time (1991), this system was #1 in the world at 13 GF; the 1993 top500 list shows it at #8. It was in service until 1998, when it was decommissioned. It was the prototype for Intel’s Paragon architecture. The total memory and total processing power of this system are exceeded by almost all Core i5 based laptops. This proves quite well that if Joshua ran on WOPR in 1983, it could probably run in your smartphone, let alone a Mac.
More recently, Sandia’s ASCI Red was designed by my colleague Dr. Wheat. The first teraflop machine, it debuted at #1 in 1997. Upgraded midlife to double its performance, it managed to run for 8 years before Moore’s law rendered it obsolete. As this link describes, it was programmable and usable to the end. It passed the “bang for the operating buck” limit, and work was transferred off to the next generation of supercomputers.
So, where will systems like Roadrunner wind up? Does architectural uniqueness limit utility or its life? How does the assurance of a future generation’s compatibility impact the life expectancy of such systems?
I’d love to hear from folks on the decommissioning of once proud and unique machines. How did the process of transitioning applications go? How long did the old machine have to hang around because a forklift upgrade couldn’t work? Ideally, you should be able to drop in a new machine the hour, minute, and second the operational ROI makes sense. If the SW transition isn’t that simple, it costs money and slows the development of science. Perhaps someday the world will be saved by the AI in my smartphone. The only way to lose is not to play… More on that next time.
In this special guest feature, Intel’s John Hengeveld follows up on his recent preview of ISC’11 with some reflections on one of the biggest weeks in HPC.
Well, ISC is over. I renewed some friendships, made some new friends, and even found a really good tapas restaurant. For me, ISC was fascinating. The industry, much like my colleagues Marianne Jackson and Elana Lian (marketing managers at Intel) and Dr. Marie-Christine Sawley (Director of the Intel Paris Exascale Lab), is taking on some very big things, as you can see in this photo.
There were relatively few big surprises (RIKEN, Intel’s exascale declaration), and some things I expected (relatively little movement in the top10). There was a lot of discussion of HPC in the Cloud and the middle of the HPC market. There was more turnover in the top500 list than I expected, mostly from the middle-sized systems. I view this as an extremely healthy sign for the industry. The top10 of the list had relatively little movement (probably due to the focus on systems for the November list) – AMD’s Interlagos isn’t out, nor is Intel’s Sandy Bridge, though Intel did have a demo of Sandy Bridge in its public booth. There was a great deal at this show on the role of storage. Xyratex made some news here, as did a few others.
But I looked for five things this year, and promised to provide my assessment of them. Hopefully others will look at the same things and reply as well.
1) It’s the Workload, Stupid: How are architectural innovators performing, and how is their innovation being accepted to serve distinct classes of workloads? Do major OEMs continue to deliver new design points that target distinct HPC workloads?
We saw a few major shifts in architectural approach at this ISC. I saw some notable fat-node clusters demonstrated by Bull and SGI. SGI’s Dr. Eng Lim Goh gave a talk about how architectural innovation will be required to get to exascale, and his products continue to demonstrate his different brand of thinking. HP continues to have a wide range of solutions (blade and rack, hybrid CPU/GPU, et al). Intel’s new Xeon E7 processor made its first appearance on this top500 list. There were technology demonstrations galore, but it seems there weren’t any radical shifts threatened here on the scale we have seen in the past couple of years.
On the software side, Microsoft was notably absent. Last year at ISC, they were all over Windows Server for HPC; this year it seemed far less advocated. After several years of engagement and growth, what is happening here? It’s hard to figure out.
2) Alternate Architecture Acceptance: How have attached coprocessors like Intel’s MIC products and Nvidia and ATI GPGPUs been accepted as tools for delivering the performance that leads up to an exascale era?
The top500 showed a somewhat slow increase in accelerators on the list (17 to 19), most of that coming from Nvidia going from 9 to 12 systems after much larger jumps in prior listings. Nvidia’s presence was relatively muted compared to prior shows. Intel demonstrated quite a few examples of customers using its new MIC architecture development platform.
3) Will HPC get its head into the clouds?
There was a lot of discussion of HPC in the Cloud. With great interest, I listened to a panel discussion that included my friend Christian Tanasescu from SGI. The panel enumerated considerable barriers to HPC in the Cloud. Christian articulated his view on the importance of rethinking the business model for software in HPC if we are to get anywhere in the cloud, and I think he’s right. He said shifting from annual licenses to on-demand is a risky step… I think it’s one we must take on at some point – maybe not now, but soon.
On Sunday, I presented a vision and a set of requirements that addressed the middle market of HPC, with the goal of stimulating conversation and a movement towards solutions. I feel the distinction between generic cloud technology and the requirements of HPC in the Cloud comes down to abstracting away distinctions versus expressing differentiation. My colleague Dr. Wheat refers to a group of customers who underutilize HPC as “the missing middle.” I described that a key segment of those customers are people who do technical computing on workstation platforms, but face economic, social, and skill barriers to using richer, higher-resolution models with HPC. Those customers have a wide range of distinct workloads with different optimization points.
A key factor in HPC is the economic benefit of raw performance. Performance and performance density drive ROI and differentiation, which is the key to profitability in this sector. One size doesn’t fit all – hence we get fat nodes, clusters, blades, accelerators, GPGPUs, and MICs, all aimed at driving more performance and performance density for a range of workloads. An HPC cloud abstraction must express that differentiation.
4) Is FABRIC ripping at the seams? What technologies are going to change the game in interconnect?
I spent some time with Qlogic and Mellanox discussing the future of fabric. Both had a really strong case as to why their products are making a difference today, but I got a strong sense that we are in an era of raging incrementalism.
5) Is Efficiency the Hobgoblin? Will the top500 list show any improvement in efficiency?
I think this gets a resounding yes. Last November the top system on the top500.org list was a hybrid of Intel Xeon 5600 series processors and Nvidia GPUs; its ratio of Rmax to Rpeak was 54%. RIKEN’s K computer, the new #1 system, was a very admirable 93%. In addition, the last two publications have shown tangible improvements in GF/W for the top10 and the top50. I truly hope this trend continues. Today’s #1 system delivers about 0.825 GF/W. Kirk Skaugen, Intel’s VP for the Data Center Group, made a declaration to work with research partners and industry collaborators to reach an exaflop in 20MW, which is 50 GF/W. We have a long way to go here. Architectural innovation will be required to reach this kind of objective.
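To spell out the arithmetic behind those figures, here is a quick sketch; the K computer numbers below are the published June 2011 Top500 values as I recall them, so treat them as assumptions.

```c
#include <stdio.h>

/* Power-efficiency gap between today's #1 system and the
   exascale target of an exaflop in 20 MW. */
int main(void) {
    double k_rmax_gf = 8.162e6;   /* K computer Rmax: 8.162 PF, in GF */
    double k_power_w = 9.89e6;    /* ~9.89 MW */
    double exa_gf    = 1.0e9;     /* one exaflop, in GF */
    double exa_w     = 20.0e6;    /* 20 MW */

    double k_eff   = k_rmax_gf / k_power_w;   /* ~0.825 GF/W */
    double tgt_eff = exa_gf / exa_w;          /* 50 GF/W */
    printf("K computer: %.3f GF/W\n", k_eff);
    printf("Exascale target: %.0f GF/W (~%.0fx better)\n",
           tgt_eff, tgt_eff / k_eff);         /* roughly 60x */
    return 0;
}
```

A roughly 60x improvement in delivered flops per watt is the size of the hill the Efficiency Gnomes have to climb.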
The old adage “don’t eat anything bigger than your own head” doesn’t apply to beer, I suppose… And so, Marianne takes on the challenge step by step. At Hamburg this year, the HPC industry could be said to be in much the same place.