Today Xyratex announced that the company is now a strategic supplier for AMD and its SeaMicro solutions for Big Data.
AMD will use the Xyratex OneStor Modular Enclosure as one of the building blocks for its big data and storage-intensive solutions, and has optimized the SeaMicro SM15000 server to provide more than five petabytes of storage capacity in two racks for big data applications such as Hadoop and Object Storage.
“The SeaMicro SM15000 server with the Freedom Fabric Storage solution is known in the market for its superior computing efficiency and storage density, as well as the lowest total cost of ownership,” said Dhiraj Mallick, Corporate Vice President and General Manager of Data Center Server Solutions at AMD. “With the combination of the SM15000 and the Xyratex OneStor data storage product, we have a winning solution that is unmatched in storage density and capacity.”
The combination of Xyratex and AMD products delivers an ultra-dense, high performance platform that eliminates excess hardware costs and cabling while simplifying installation and minimizing footprint requirements.
In this video from the 2013 National HPCC Conference, Rich Brueckner from inside-BigData moderates a panel discussion on How to Talk to Your CFO about HPC and Big Data.
John C. Morris – Pfizer
Dr. George Ball – Raytheon
Henry Tufo – University of Colorado, Boulder
Dr. Flavio Villanustre – LexisNexis
As members of the HPC community, we spend a good share of our time sharing our work and best practices with our colleagues. But how do we communicate the business value of high performance computing and Big Data analytics to CFOs who have little affinity for discussions of things like cores, Hadoop, and MPI? In this panel discussion, experts in Big Data and HPC will come together to share best practices and communication strategies that have proven effective when talking to CFOs and other C-level executives.
Addison Snell will present some of the top insights from recent market intelligence studies from Intersect360 Research, including forward-looking views of the vertical markets, new applications, and technologies with the best prospects for growth in 2012 and beyond. The view from Intersect360 Research will include applications in both High Performance Technical Computing (HPTC) and High Performance Business Computing (HPBC), with an emphasis on the opportunities for HPC technologies in emerging Big Data applications. The evolving industry dynamics around accelerators, file systems, and InfiniBand will also be discussed.
How will enormous data sets and an endless stream of ever-more granular variables drive supercomputing in the coming years? Will it be like a dust storm that buries us, or flood waters we can redirect and manage? How will it alter the evolution of architecture and subsystems? How will it change computer science education, development tools and job descriptions? And will gargantuan data form a barrier to our evolution to Exascale and beyond by sapping the shrinking resources for funding and creativity?
IDC has released its Top 10 HPC Market Predictions for 2013. The big takeaways? While the outlook is good for business, the industry goal of reaching Exascale by 2018 looks to be slipping by at least two years.
1. The Worldwide HPC Market Will Leave the Global Economic Recession Behind and Will Be in a Healthy Growth Mode. In 2009, the low point of the global economic downturn, worldwide HPC server revenue fell 13% year over year, from $9.8 billion to $8.6 billion — although the supercomputer segment for systems sold for $500,000 and above grew 35% and revenue for systems priced at $3 million and up jumped a whopping 65%. But 2010, 2011, and the first three quarters of 2012 have demonstrated sustained, record-setting growth in the global HPC market. Unlike many market segments, HPC has exited the recession.
In related news, IDC’s HPC User Forum will take place in Tucson, Arizona on April 29 – May 1. Session topics include processors, coprocessors, and accelerators, plus advanced visualization and potentially disruptive technologies, as well as a set of talks on Big Data in HPC.
At insideHPC, we want to know how we can do better to serve our readers. To gain a better understanding of the trends, issues and usage of High Performance Computing across disciplines, we have teamed up with Univa for a new Reader Survey.
Please take a minute to complete the survey and tell us more about yourself and your work. We’ll share the results with you, and every entry will be eligible for prizes. Thanks!
This week IDC reports that worldwide factory revenue for the HPC technical server market increased by 7.7% year over year in 2012 to a record $11.1 billion, up from $10.3 billion in 2011. In their Worldwide High-Performance Technical Server QView, IDC notes that the 2012 results exceeded their forecast of 7.1% year-over-year revenue growth.
“2012 was an exceptionally strong revenue year for the high-end Supercomputers segment, which grew 29.3% year over year,” said Steve Conway, IDC Research Vice President for Technical Computing. “IDC does not expect the high end of the market to continue growing at a pace this swift.”
Other highlights from the report include:
IBM led all vendors with a 32.0% share of overall factory revenue, closely followed by HP with a 30.8% share.
Dell once again was a strong third-place finisher, capturing 13.5% of worldwide technical server revenue.
Unit shipments in 2012 declined 6.8% year over year as average selling prices grew, reflecting the continued shift to large system sales.
In this video, CUDA book author Rob Farber discusses the recent Nvidia keynote at the 2013 GPU Technology Conference. As a technologist, Rob thinks some of the things that weren’t said by Nvidia CEO Jen-Hsun Huang during the talk are very significant in terms of high performance computing and the business of accelerated computing.
Fans of the old Sun Microsystems may be wondering how the server business is doing at Oracle some three years after the acquisition. Over at GigaOm, Barb Darrow writes that Oracle’s gamble on hardware just isn’t paying off.
Here’s the problem: since it entered the hardware business, Oracle hasn’t sold enough engineered systems to make up for lost sales of lower-end machines, according to third-party researchers. Its hardware revenue and unit share are headed south. For the fourth calendar quarter of 2012, Oracle server revenue was down 18 percent year over year, according to both Gartner and IDC. Meanwhile, as GigaOM’s Jordan Novet reported last week, the “other” server vendors — companies like Quanta and Wistron — saw their aggregate revenue rise nearly 22 percent in the fourth quarter compared to the year-ago period.
Can government-sponsored HPC successfully foster Small and Medium businesses? Over at ZD Net, Liau Yun Qing writes that ERS Industries is using HPC resources at Singapore’s A*Star Institute to slash its product development time.
Modeling and simulation helped the company cut development time and prototyping costs. Cheong said it previously took about 1 to 2 years to come up with a new product, as prototypes had to be built and tested. Through HPC-enabled simulation and modeling, that time was cut to 3 to 4 months, he said.
The A*STAR Computational Resource Centre (ACRC) recently doubled its capacity with a 100 Teraflop IBM Blue Gene/Q supercomputer. Read the Full Story.
Over at The Register, Timothy Prickett Morgan writes that SGI has rejiggered its credit facility with Wells Fargo Capital Finance ahead of a possible sale of intellectual property or other assets.
SGI CEO Jorge Titinger, who came on board a year ago, has made no secret that the company has been doing a comprehensive review of its assets and intellectual property to gauge their worth and identify items that it might sell. The company did a slew of low-margin deals, many of them in Europe, that saw Mark Barrenechea leave SGI in December 2011. Two months later, after SGI did the math and saw how little these deals contributed to the bottom line, the company did yet another restructuring in its long history of such maneuvers, and a few weeks later Titinger took over and tried to limit the damage that the nine deals, worth $87m, did to the company’s bottom line.
Morgan goes on to speculate that SGI might be looking to license its NUMAlink technology to companies like Nvidia or Cisco. Read the Full Story.
With the rise of Big Data and increasing emphasis on data-intensive computing, you may be wondering how the tape vendors are doing. Well, very well, thank you, as evidenced by Spectra Logic’s announcement that the company installed more than half an exabyte of tape storage capacity in the past six months alone. With revenue up 9 percent year-over-year, the company has a very bullish outlook indeed.
“Our financial strength combined with strong growth in the file archive market is enabling us to invest significantly in R&D, which will be reflected in a strong slate of new products and technologies this coming year. In fact, over the next 12 months we plan to launch a large number of new data storage products – more than were released in the past five years combined,” said Nathan Thompson, Spectra Logic’s chief executive officer. “The value of tape technology and its essential role in today’s modern data centers is indisputable for both traditional backup as well as the tape-based file archive market. This value is recognized and embraced by leading organizations worldwide – and is reflected in Spectra Logic’s consistent growth and momentum.”
Today OpenSFS released a new Request for Proposals (RFP) covering Lustre feature development, parallel file system tools, addressing Lustre technical debt, and new efforts in parallel file system development (incubators). In 2012, OpenSFS invested over $2 million in open source scalable file system technologies, including significant investments in maintaining the canonical Lustre tree, new feature development, and testing and development infrastructure.
Goals for this investment are to:
Further the Lustre roadmap to meet the highest priority requirements defined by the community
Develop production quality tools to ease administration and use of open source scalable file systems
Address Lustre technical debt to improve the code base and documentation thereof
Encourage new efforts in open source scalable file systems for high performance and data intensive computing to broaden the set of solutions available to the community
Interested parties are advised to review the RFP documentation at the OpenSFS website.
For those who choose not to respond to the RFP but are interested in making this effort a success, OpenSFS encourages joining its technical working group.
Can Big Data analytics be used to predict which Startup companies will succeed? In this video, Thomas Thurston from Growth Science discusses the new Ironstone Venture Capital Fund, which is using Business Model Simulation to choose disruptive Startups.
The human mind is good at some things but bad at others. So we use data science and technology to help our brains with the things they weren’t designed for. This marriage between technology and the brain has allowed us to predict business behavior in ways that weren’t possible even a decade ago. It’s the future of venture capital,” said Thomas Thurston from Growth Science. “This fund is unique. First, instead of mostly using intuition, like most VCs do, we’re using powerful, proven data science to identify disruptive companies. That’s revolutionary. Second, we’re interested in seed- and early-stage companies, which is much needed as our economy rebuilds itself. Finally, unlike a lot of VCs focused on exits and quickly ‘flipping’ startups, we have a long-term view and really want to partner with people growing strong, disruptive, meaningful businesses to make the world a better place.”
HP’s Marc Hamilton writes that Nvidia’s Kepler GPU is off to a great start in the HPC marketplace.
Before we shipped USC their 264-node cluster, HP’s High Performance Computing benchmarking team had a few days to use the system in our factory to get a couple of Linpack runs completed. While I won’t share the exact numbers, the system would easily rank in the top 50 on the current Top500 list. That is a pretty amazing testament to the power of Nvidia’s Kepler technology: with only 264 servers, you can build one of the 50 fastest supercomputers in the world.