HEARD AT THE 26TH ANNUAL INTERNATIONAL SUPERCOMPUTING CONFERENCE
Responding to Intel’s Announcement on June 20, 2011
The Exascale Report spoke with more than two dozen ISC attendees
following an Intel briefing and their declaration of intended exascale
leadership that took place on June 20th. The reactions to Intel’s
statements were mixed, as expected; the majority opinion, however,
expressed concern over the lack of substance and credibility in
connecting the MIC architecture and Knights Ferry to exascale, when
so much remains to be proven along this technology path. There is an
old saying that “any publicity is good publicity” – but I suspect
that right now Intel would argue with that.
After returning home, I conferred with a number of colleagues. It seems
very few people in the community are sold on Intel’s declaration of
exascale leadership. Intel’s press briefing at ISC raised a lot of
eyebrows and resulted in mostly negative reactions from a broad audience
– from system architects to U.S. government funding agency
representatives. It seems like some key messages were completely
missed, and some statements were sadly misinterpreted.
I was at the Intel briefing at ISC and I understand why some of the
reactions you will find in this article are so negative.
Step back for a minute and look at the big picture. Undoubtedly, a
handful of companies, including the driving forces such as Intel, IBM,
Cray, and NVIDIA, will be at the heart of many co-design initiatives. In
order for any progress to be made, co-design will require a new, perhaps
unprecedented level of trust and cooperation. It will require a certain
level of mutual respect for each other’s positions.
This is where I think things are falling apart. Competition among HPC
companies is going down the path of election year political posturing.
The vendors and even some research organizations are failing to deliver
controlled and consistent communications.
I clearly heard Intel vice president Kirk Skaugen talk about
collaboration with partners and end users. We saw the evidence of this
on the stage at the Intel briefing. I also heard him say, “We don’t
have all the answers yet.” But, by the closing day of ISC, I was
hearing mostly negative comments about Intel’s power play and their
intent to lock everyone in to the Intel architecture.
I personally think Intel has made great strides as a collaborator over
the past few years, but they continue to struggle with market relations
and credible communications in HPC – a challenge that clearly needs to
be addressed.
Intel’s strategy as presented under NDA to community stakeholders is
one thing. And I can tell you there are some credibility challenges
there. But another part of the problem lies in ‘how’ things are
being said, and the lack of consistent messaging.
As I listened to various audio tapes from ISC11, I was surprised by this
particular one. One of The Exascale Report stringers[1] stopped by the
Intel booth at ISC and asked the first person who greeted her, “So
what’s new? What does Intel have to talk about here at the
conference?” The response was, “Haven’t you heard about our big
announcement? We’re going to build an exascale machine by 2018.” To
which she replied, “Really? Well not at 20MW certainly.” The
response was, “Yes, at 20MW – that’s what we committed to.”
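The stringer’s skepticism comes down to simple arithmetic. An exaflop in a 20MW envelope demands roughly 50 gigaflops per watt, while the most power-efficient system on the June 2011 Green500 list delivered on the order of 2 gigaflops per watt. A quick back-of-the-envelope sketch (the Green500 figure is approximate):

```python
# Back-of-the-envelope: efficiency needed for an exaflop in 20 MW.
EXAFLOP = 1e18          # floating-point operations per second
POWER_BUDGET_W = 20e6   # the 20 MW envelope Intel committed to

required = EXAFLOP / POWER_BUDGET_W / 1e9   # in GFLOPS per watt
print(f"Required efficiency: {required:.0f} GFLOPS/W")

# For comparison, the most efficient system on the June 2011 Green500
# list managed roughly 2 GFLOPS/W -- about a 24x gap to close by 2018.
GREEN500_2011 = 2.1
print(f"Improvement needed: ~{required / GREEN500_2011:.0f}x")
```

A 24-fold efficiency improvement in seven years is why “Yes, at 20MW” raised eyebrows on the show floor.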
As one community luminary commented, “I see the Intel arrogance
surfacing again – and that makes me very nervous. Working with Intel
is a ‘damned if you do – damned if you don’t’ proposition.
Right now we feel like Intel is going to tell us what they will build
– because that’s what they can sell in volume – and it’s up to
us to live with it – to figure out a way to adapt it for exascale. But
I don’t understand their posturing around exascale. If anything,
Intel should be the voice of reason for this community.”
Who’s going to pay for those microprocessor fabs?
In all fairness to Intel, the company is in business to make a profit.
They need to keep the company moving forward with innovative
microprocessor designs that can be shipped in volume. There is no
incentive for a company like Intel to invest hundreds of millions of
dollars, dozens of highly-paid engineers, and countless hours of
research time to possibly build a one-off machine. That’s not a
formula for keeping a company profitable – and that’s not a path
that will keep their shareholders happy. Without serious government
funding, the HPC community can’t expect Intel … or IBM … or anyone
else to carry this burden.
It’s not me, it’s you!
To this reporter, it seems like Intel has an overflowing pot of messages
right now – product, research, roadmap, and collaboration messages all
running together into one pile of sludge. It’s no wonder
so many people in the community are genuinely confused – and even
frustrated – and starting to turn on Intel. There is definitely a
tone that comes across as arrogance – perhaps stemming from the
company’s own internal frustration at a lack of consensus on how to
approach exascale.
One of my publishing colleagues, not associated with The Exascale
Report, made this point when we met on the last day of the conference
over a pint of crisp German beer. “When talking with Intel –
specific to discussions of HPC, Cloud and even Exascale, you get the
feeling of them being defensive and wanting to beat you down with their
messages and make you submit to seeing things their way.”
And, this anonymous comment is worth consideration. “Will Intel be a
leader in the exascale effort? Clearly yes. Whether measured by HPC
installations, number of people in the company working on HPC issues,
technology R&D, or effective HPC community engagement, Intel stacks up.
So there is every reason to expect Intel to be a leading player in the
path to exascale. But is Intel’s linking of democratization of HPC and
the path to exascale appropriate? Not so convinced. What exactly is
democratization of HPC anyway? Trying to reconcile ‘non-proprietary
HPC for the masses’ and the next pioneering summit of supercomputing
achievement in the same marketing message is just firing a shotgun at
several topical issues. But this announcement is encouraging. Why?
Because it is louder, more confident and more certain than government
commitment to Exascale at this time of funding uncertainty.”
I can offer a few words of wisdom here:
“The Devil is in the Details.”
“Perception is Reality.”
I know. Sometimes I even amaze myself.
Several people at Intel have told me the company is being
“misunderstood” and even “misquoted” by its critics. In some
cases, I agree. But is that the only problem? Could it be that the
claims of leadership in the race to exascale by several Intel
spokespersons come across as overblown and arrogant given the fact that
the company doesn’t have a product at this point to back up their
claims?
I think there is no question that Intel will be one of the enablers of
exascale computing. But until it sorts out its own internal conflicts
– including head-to-head battles between engineering and the
marketeers with their fast track product roadmaps – and until the
company has tangible products to show, it is way too early to claim
leadership – especially of something that still has no rock-solid
definition – only goals. At this stage, as one Texan said to me,
Intel is all hat and no cattle.
With that, I present you with The Exascale Report’s ‘Community
Opinion’ section. Enjoy.
[1] Stringer: (definition from Wikipedia) In journalism, a stringer is a
type of freelance journalist or photographer who contributes reports or
photos to a news organization on an ongoing basis but is paid
individually for each piece of published or broadcast work. As
freelancers, stringers do not receive a regular salary and the amount
and type of work is typically voluntary. However, stringers often have
an ongoing relationship with one or more news organizations, to which
they provide content on particular topics or locations when the
opportunities arise. The term is typically confined to news industry
jargon, and in print or in broadcast terms, stringers are sometimes
referred to as correspondents or contributors. At other times, they may
not receive any public recognition for the work they have contributed.
June 20, 2011
Kirk Skaugen
Intel Vice President and Data Center Group General Manager
Excerpts:
“To get to an exascale, we have to fundamentally change the cost of
computing. Today we are announcing our declaration that we (Intel) will
be a leader – if not ‘the’ leader in driving the world to
exascale.”
“We want to democratize highly parallel computing and we’re
encouraging the industry to avoid costly detours down proprietary
paths.”
“The other thing we’ve done in the last year is we’ve maintained
our commitment to Moore’s Law. We announced for the first time in
decades a new 3D transistor […] we’ve now figured out how to deliver
a transistor, not just in 2D, but in 3D.”
Community Opinion – Responding to Intel’s Announcement on June 20, 2011
Anonymous conference participant [1]
“Intel has yet to demonstrate a deep and credible plan to get to
exascale. This [announcement] is about claiming ‘turf’ and is more
of a strategic move for Intel – in an attempt to keep any of their
major customers and partners from defecting to AMD or NVIDIA in search
of longer-term exascale capabilities. Intel still has not been able to
adequately address the interconnect issue – and that’s become a
rather sensitive topic for them.”
Anonymous conference participant [2]
Intel’s declaration is fascinating. It shows their determination to
invest in HPC, which is encouraging, especially since they’ve abandoned
it once in the past. Today, IBM is the only US vendor that designs HPC
systems from the silicon up to the supercomputer. The declaration also
shows Intel’s concern (or fear) about what they call “proprietary
paths,” which we can read as accelerators in general and GPUs in
particular.
Democratizing HPC requires a consistent, portable programming strategy
across a wide range of performance envelopes at a reasonable cost. The
reasonable cost can only be achieved with high volume parts, and that
means either selling HPC to everyone, or using commodity chips. Intel
microprocessors certainly are commodity, though it remains unclear
whether the Knights family will benefit from that advantage.
I also find it interesting (a) how much press Intel is putting out on a
product (Knights Corner) that is still a year away, based on an
architecture they are keeping very close to the vest, and (b) how much
effort Intel is putting into parallel programming.
For point a, one could argue that Intel is worried about the attention
being grabbed by GPU computing, NVIDIA and CUDA in particular,
especially since their own Larrabee was pulled at the last minute. They
might not want to lose mindshare before their own response is ready;
their Knights marketing bandwagon started at ISC’10 and continues
relentlessly today. The advantage of being able to run a program on a
‘manycore’ using more or less standard languages and tools is a big
deal, as Intel marketing points out (repeatedly), especially compared to
the effort that has been expended porting programs to GPUs. However,
effective parallel programming is not simple or easy, regardless of your
programming model. You still have to pay attention to all performance
aspects, specifically locality and synchronization. If you get these
wrong, you’ll spend all your parallelism just keeping up, not getting
ahead. Locality involves the memory hierarchy, and it remains to be seen
how Intel will package the next Knights processor and its memory (there
are at least three options), and how much of this hierarchy must
be exposed to and managed by the programmer to get high performance.
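To make the locality point concrete, here is a minimal sketch (plain Python, nothing Intel-specific): the same reduction over a 2D array, traversed in two orders. The row-order walk touches memory contiguously; the column-order walk strides across rows, which on real hardware defeats the cache and can cost far more than the arithmetic itself.

```python
import time

# Same arithmetic, two traversal orders over an N x N array.
N = 1500
matrix = [[1.0] * N for _ in range(N)]

# Row order: inner index j walks each row's contiguous storage.
t0 = time.perf_counter()
row_sum = sum(matrix[i][j] for i in range(N) for j in range(N))
t_row = time.perf_counter() - t0

# Column order: inner index i jumps between rows on every access.
t0 = time.perf_counter()
col_sum = sum(matrix[i][j] for j in range(N) for i in range(N))
t_col = time.perf_counter() - t0

# Identical result; only the memory access pattern differs.
assert row_sum == col_sum == float(N * N)
print(f"row-order: {t_row:.3f}s  column-order: {t_col:.3f}s")
```

In C or Fortran the gap between the two orders is typically several-fold, and no programming model saves you if the access pattern is wrong – which is precisely the contributor’s point about locality.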
For point b, Intel is taking the spaghetti approach to parallel
programming: throw it all up against the wall, hoping some of it will
stick. Look at all the parallel programming methods supported by,
acquired by, developed by or being explored by Intel: OpenMP (an
industry standard), distributed OpenMP (now apparently defunct), ISPC
(the Intel SPMD Program Compiler), Threading Building Blocks (TBB, now
available as open source), Array Building Blocks (ArBB, using technology
acquired from RapidMind), Cilk Plus (using technology acquired from the
MIT spinoff), loop vectorization, SSE intrinsics, OpenCL, Concurrent
Collections, Software Transactional Memory (STM), and probably more.
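As a rough analogy (in Python rather than Intel’s C/C++ toolchain), here is the same reduction expressed under three different programming models – exactly the kind of proliferation the contributor is describing:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import operator

data = list(range(1, 1001))

# 1. Plain serial loop.
total_serial = 0
for x in data:
    total_serial += x

# 2. Functional reduction (in the spirit of ArBB or Cilk reducers).
total_reduce = reduce(operator.add, data, 0)

# 3. Fork-join over chunks (in the spirit of TBB tasks or OpenMP).
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]
with ThreadPoolExecutor() as pool:
    total_forkjoin = sum(pool.map(sum, chunks))

# Three models, one answer -- and a developer must still pick one.
assert total_serial == total_reduce == total_forkjoin == 500500
```

Every approach computes the same sum; none is obviously “the” answer. That is the wall the spaghetti is being thrown at.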
Clearly Intel realizes its future profitability depends on customers
wanting to upgrade their systems for a better user experience. In the
past, this better experience came from faster processors running more
capable software. In the future, this will come from multicore
processors running more capable parallel software, and that parallel
software has to come from developers, and those developers need a
programming strategy for parallelism. So Intel is providing or
supporting many such strategies, and no doubt will focus on and enhance
any that gain traction.
Anonymous conference participant [3]
“Intel will undoubtedly drive much of the industry direction, however,
at this point, we see no evidence that MIC will work at exascale.
It’s interesting – and might in fact be effective in the 100PF range
– but exascale is still a huge unknown.”
Anonymous conference participant [4]
“Intel has already made the investment in MIC and Knights Corner, so
we might as well face it – that’s what we’re getting. It’s not
surprising that they are trying to sell it as the basis for exascale –
they need everyone to believe in it. I think they are nervous – and
they don’t have a clue as to how they are going to get to exascale.
But they can’t admit that – or say it – for fear of losing
customers and government funding.”
“The barons revolt against the king” – illustration from John
Cassell’s illustrated history of England, 1864.
Anonymous conference participant [5]
“Intel goes through high periods of amazing leadership and low periods
of floundering aimlessly. I’ve been a champion of Intel during their
good times, but right now, I think they are their own worst enemy. Their
credibility as an HPC leader is slipping. There’s a lot more to
leadership in this community than the Top 500.”
Dr. David Kirk, NVIDIA Fellow [6]
“It was particularly interesting to see Intel advise the HPC community
to “avoid costly detours down proprietary paths…” – do they
include x86 in this? Last time I checked, it was closed. Both Thomas
Sterling and I concluded at a debate at ISC where MIC was
announced, again, that x86 is not a part of HPC’s future. It’s a
power hog, and while using lots of CPU cores with SSE extensions will
lead to flops, it will be at a very high power cost. This approach would
be better if it used very low power CPUs like ARM with Neon SIMD
extensions – ARM, a licensable architecture, is more power efficient,
more pervasive and wide open.”
Anonymous Conference Participant [7]
“The thing that worries me most is that Intel made a number of commitments to exascale that were a bit silly – and one was that you would not have to change the programming model – and that is just complete and utter bollocks.”
For related stories, visit The Exascale Report Archives.