House subcommittee hearing on university research infrastructure, advanced computing

On Tuesday the House Committee on Science and Technology’s Subcommittee on Research and Science Education held a hearing in the Rayburn House Office Building to gather information and perspectives on the research infrastructure needs of universities and colleges as part of the push to renew the America COMPETES Act.

From the committee press release:

“Our focus on this legislation is a direct acknowledgement of the fact that America’s science and technology enterprise underpins the long-term economic competitiveness of our country. The partnership between the federal government and our nation’s colleges and universities has been highly successful and has led to a great number of societal and economic benefits,” stated Subcommittee Chairman Daniel Lipinski (D-IL).

…The physical infrastructure for research includes not only bricks-and-mortar buildings, but also research instrumentation and a robust cyberinfrastructure. Cyberinfrastructure, which consists of computing systems, data storage systems, data repositories, and advanced instruments, has become increasingly important to all science and engineering disciplines. The Office of Cyberinfrastructure at NSF requested a budget of $228 million in FY 2011, a 6.4 percent increase from FY 2010.

Witnesses at the hearing included Dr. Leslie Tolbert (University of Arizona), Mr. Albert Horvath (Pennsylvania State University), Dr. John R. Raymond (Medical University of South Carolina), and Dr. Thom Dunning (National Center for Supercomputing Applications). You can see all of their prepared statements, as well as Chairman Lipinski’s opening remarks, at the hearing web page.

Dunning focused much of his testimony on the high performance computing portion of the national research infrastructure, as might be expected. His written statement is (long but) interesting. I was particularly interested in his views on the state and impact of the NSF’s cyberinfrastructure efforts (the NSF falls under the House Committee on Science and Technology).

NSF has been successful in deploying new computing systems that are delivering extraordinary value for the U.S. research community — the first system delivered to TACC exceeded the total computing capacity of NSF’s TeraGrid by a factor of more than 5. However, the focus of these acquisitions was on the delivery of raw computing cycles and the funding available to provide support for the users of these new high performance computer systems was limited. This is unfortunate because this approach favors those scientists and engineers who are already using supercomputers and need little assistance, while our experience at NCSA and that at many other centers indicates there is a growing need for high performance computing resources in almost all fields of science and engineering.

Without adequate user support, it will be difficult for these new researchers to make effective use of the available resources. High quality support staff is one of the most valuable resources in NSF’s supercomputing centers and a fully funded user support program is needed.

This goes back to what is becoming a regular criticism of the NSF’s computing efforts (see recent comments by Smarr and Karin): most of the funding goes to capital acquisition, with not nearly enough support for operations and sustainment. Dunning also adds to the chorus of complaints about the way the NSF runs the cyberinfrastructure program:

It should also be noted that the prospect of continual competitions has a corrosive effect on the staff at the centers — it is not only difficult to hire quality staff with funding that only lasts for 4‐5 years, but enormous amounts of staff time have to be dedicated to preparing for the competitions, rather than assisting researchers. The advantages of competition must be carefully balanced against those of stability in NSF’s supercomputing centers program.

Dunning’s statement also discusses the need for an increased emphasis on software, and identifies “the dwindling number of supercomputer vendors in the U.S.” as an area of concern.

Of the remaining companies, only IBM and Cray are actively involved in research and development on supercomputing. Although you would have to talk with these companies to better understand the issues surrounding this situation, it is clear that the supercomputing industry in the U.S. is not as healthy as it was just a few years ago.

I’d have to give it more thought, but I think Dunning is significantly shortening the list of supercomputing vendors. First of all, Convey has to be included, I think, and what about the Tier 1/2 cluster vendors out there serving a significant market? Dell? Penguin? What about the supercomputing infrastructure R&D that HP is heading up? What about Intel’s exascale efforts? And you can’t ignore NVIDIA.

I think what Dunning is really observing is that the US HPC industry is experiencing radical change — what my friend Joe Landman calls “creative destruction” — and I don’t think we can say at this point that the change won’t be for the better. I’ve not heard anyone make a case that the upheaval of the mid 1990s resulted in less effective computing.

I won’t argue that the Tier 1 majors of yesteryear are less healthy than they once were, but the vendors are in part to blame. Technology shifts aside, as a group the old line HPC vendors continue to chase marquee business at zero or low margin, a practice that depresses prices throughout the community for all vendors. One possible response to customers who want the top of the Top500 list to keep growing at roughly 5 times the growth of the average purchase price of their systems? Don’t bid.

As I finish up this post, I want to point out that I’ve highlighted a few sections of Dr. Dunning’s testimony to which I felt I had something to add, but I don’t want to suggest that I disagree with most of what he had to say. It’s far easier to edit than to create, and as they say everyone’s a critic. I appreciate the leadership and visibility Dunning and the rest of the witnesses are bringing to these issues.