New SGI CEO Builds Firewall Around Unprofitable Sales

By Timothy Prickett Morgan

SGI tapped a new CEO, Jorge Titinger, back at the end of February to get the company back on an even keel, and in the wake of SGI’s reporting its financial results for its third quarter of fiscal 2012, Titinger conceded that he has his work cut out for him because SGI, like many other server makers from time to time, has been focusing a little too much on revenue growth and not enough on the bottom line.

Revenues in the quarter ended in March were up 38.8 per cent, to $199.4m, but that was less than expected, and rising costs on all fronts, plus another $19m in restructuring charges, pushed SGI to a loss of $1.2m (still better than the $1.7m loss a year ago, but clearly not what SGI had been planning for).

In the quarter, hardware and software products together accounted for $150.2m of sales (up 43.2 per cent), with compute products bringing in $130.6m and storage driving $19.6m; services accounted for the remaining $49.2m. SGI had two customers that each accounted for more than 10 per cent of its revenues, but declined to name them.
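For what it's worth, the reported segments do add up. A quick sanity check in Python, using only the figures quoted above:

```python
# SGI fiscal Q3 2012 revenue breakdown, in millions of dollars,
# exactly as reported in the paragraph above.
products = 150.2   # hardware and software combined
compute = 130.6    # compute products (part of the products line)
storage = 19.6     # storage products (the rest of the products line)
services = 49.2    # services revenue
total = 199.4      # total quarterly revenue

assert abs((compute + storage) - products) < 0.05   # 130.6 + 19.6 = 150.2
assert abs((products + services) - total) < 0.05    # 150.2 + 49.2 = 199.4
print(f"products share of revenue: {products / total:.1%}")   # ~75.3%
```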

Across all revenue, the public sector drove 61 per cent of sales in fiscal Q3, compared to 19 per cent for hyperscale cloud operators (Amazon is SGI’s biggest customer in this area) and 8 per cent for manufacturers. Cloud revenues were up 15 per cent in the quarter thanks to a big order, Titinger said on a call with Wall Street analysts going over the numbers.

Titinger and Jim Wheat, SGI's outgoing CFO, have identified nine deals, worth a combined $87m and booking in calendar 2012, that carry margins so low the company simply has to ride them out; most of the deals are for SGI's x86-based ICE X clusters, Titinger said, and they have single-digit margins, which he was obviously not happy about.

“In the past, we focused on the top line and growing key customers, which has led to price pressure in some instances,” explained Titinger. “The side effect has been an adverse effect on our margins, which will be in effect until the end of the calendar year. Therefore we are implementing a more rigorous deal review process.”

The other thing that SGI will be looking at is what it can do to get paid a little quicker, Titinger said, because on some big deals it can take three or four quarters to get machinery qualified and accepted by customers.

SGI is in the middle of restructuring its European operations as well, a process that will continue until the end of the calendar year but which should make SGI’s EMEA operations profitable by the end of fiscal 2013 as it eliminates around $7.5m in costs from the region.

Wheat said on the call that SGI, under his successor as CFO, Bob Nikl (hired at the end of April), would be giving quarterly guidance from here on out, instead of the annual guidance SGI has offered for the past couple of years. To that end, SGI warned Wall Street that while it will begin shipping its next-generation UV2 massively parallel supercomputers in the current quarter – presumably based on Intel's forthcoming Xeon E5-4600 processors (which are expected soon) and a rev of SGI's own NUMAlink 5 interconnect – sales of the first-generation UV machines took a hit because everyone is waiting to see what the new machines pack in terms of punch.

Moreover, Intel’s Xeon E5-2600 processors, which were announced in March, were a few months later than expected (something on the order of four to six, depending on who you ask) and that had an adverse impact on the ICE and Rackable system sales in fiscal Q3. All of this affected fiscal 2012 revenues and profits.

And thus SGI says that it anticipates sales of between $177m and $197m in the fourth quarter, with a loss per share of 56 cents to 71 cents. For the full year, SGI is guiding to between $750m and $770m in revenues, down $20m at the low end and $30m at the high end from its previous guidance for fiscal 2012, in part because a large deal expected in Q4 fiscal 2012 has slipped into Q1 fiscal 2013.
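Working backwards from those trims, the prior guidance would have been $770m to $800m; a trivial check, assuming the $20m and $30m cuts apply to the low and high ends respectively:

```python
# Back out SGI's previous fiscal 2012 revenue guidance from the new
# range and the stated cuts (all figures in millions of dollars).
new_low, new_high = 750, 770
cut_low, cut_high = 20, 30

prev_low, prev_high = new_low + cut_low, new_high + cut_high
print(f"previous guidance: ${prev_low}m to ${prev_high}m")   # $770m to $800m
```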

The company now expects to lose somewhere between 75 cents and 90 cents a share for the full year, which is a lot deeper than the 15 cents to 30 cents loss Wall Street was expecting based on the previous guidance.

SGI has over 600 patents and its own supercomputing interconnect; you can bet Titinger is looking to peddle the former on the open market, and the company may feel compelled to sell the latter, too, as Cray has done.

The plan is for SGI to be profitable on a non-GAAP basis in fiscal 2013, Titinger was squeezed a little to say on the Wall Street call, but he did not provide revenue or profit guidance beyond that. He expects to have his company review done soon and a battle plan ready for the call in August going over the Q4 fiscal 2012 numbers.

In a separate but no doubt related announcement, the US Department of Defense is shelling out $27.8m to upgrade the supercomputing facilities of the Air Force Research Lab as part of its High Performance Computing Modernization Program. The deal involves the Air Force installing a 32-rack ICE X cluster with 2,304 half-width, double-stuffed “Gemini” IP-115 system boards.

These blade servers are designed to mount one on top of the other – it's more snuggling back to belly than missionary, so you get the right idea – packing a total of 9,216 of Intel's Xeon E5-2600 processors running at 2.6GHz, for 73,728 cores. This box will pack a 1.5 petaflops peak theoretical performance punch. The Air Force is also getting InfiniteStorage arrays crammed with 6.72PB of capacity to feed this beast.
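Those numbers hang together if each half-width, double-stuffed blade carries two two-socket nodes. Here's a back-of-the-envelope reconstruction; the eight double-precision flops per clock per core (the AVX rate on Intel's Sandy Bridge Xeon E5s) is the one figure not quoted above:

```python
# Reconstruct the Air Force ICE X cluster's peak rating from its parts list.
boards = 2304              # "Gemini" IP-115 blades
sockets_per_board = 4      # double-stuffed: two two-socket nodes per blade
cores_per_socket = 8       # Xeon E5-2600 running at 2.6GHz
clock_hz = 2.6e9
flops_per_clock = 8        # double-precision AVX rate on Sandy Bridge

sockets = boards * sockets_per_board        # 9,216 processors
cores = sockets * cores_per_socket          # 73,728 cores
peak_flops = cores * clock_hz * flops_per_clock
print(f"{sockets} sockets, {cores} cores, {peak_flops / 1e15:.2f} petaflops peak")
# 9216 sockets, 73728 cores, 1.53 petaflops peak
```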

The Air Force Research Lab currently has a fairly new Cray XE6 super, the “Raptor” cluster, with 2,732 nodes and 43,712 Opteron cores, which weighs in at 410 teraflops. The lab also has an Altix 4700 Itanium-NUMAlink 4 cluster, called “Hawk,” rated at 59 teraflops. This machine is a bit long in the tooth, and it is interesting that the Air Force has not upgraded it to UV1; maybe it will get a UV2 machine, maybe not.

The Air Force also has a 27-teraflops cluster based on Appro International's Xtreme-X blade server design, likewise using Opteron processors from Advanced Micro Devices, which is called, in a very un-Air Force way, the Utility Server. ®

This article originally appeared in The Register. It appears here in its entirety as part of a cross-publishing agreement.

By Richard Chirgwin

Looking at the fundamental properties of matter can take some serious computing grunt.

Take the calculation needed to help understand kaon decay – a subatomic particle interaction that helps explain why the universe is made of matter rather than anti-matter: it soaked up 54 million processor-hours on Argonne National Laboratory's BlueGene/P supercomputer near Chicago, along with time on Columbia University's QCDOC machine, the USQCD (US Lattice Quantum Chromodynamics) collaboration's Ds cluster at Fermi National Lab, and, in the UK, the Iridis cluster at the University of Southampton and the DIRAC facility.
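To give those 54 million processor-hours some scale: Argonne's BlueGene/P, “Intrepid”, has 163,840 cores, so even if the entire machine had been handed over to the job – an assumption purely for illustration – it would have churned away for roughly two weeks:

```python
# Convert the quoted processor-hours into wall-clock time, assuming
# (for illustration only) the full 163,840-core BlueGene/P ran the job.
processor_hours = 54e6
cores = 163_840

wall_hours = processor_hours / cores
print(f"~{wall_hours:.0f} hours, about {wall_hours / 24:.0f} days on the full machine")
# ~330 hours, about 14 days on the full machine
```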

The reason so much iron was needed: the kaon decay spans 18 orders of magnitude, a range a Physorg article describes as akin to the size difference between “a single bacterium and the size of our entire solar system”. At the smallest scale, the decay involves distances of just 1/1000th of a femtometer.
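The arithmetic behind that scale claim is easy to check: a femtometer is 1e-15 meters, so a thousandth of a femtometer is 1e-18 meters, set against the everyday meter scale at which the decay is observed in the lab:

```python
import math

shortest = 1e-3 * 1e-15   # one thousandth of a femtometer, in meters
lab_scale = 1.0           # the everyday scale of meters

orders = math.log10(lab_scale / shortest)
print(f"{orders:.0f} orders of magnitude")   # 18
```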

“The actual kaon decay described by the calculation spans distance scales of nearly 18 orders of magnitude, from the shortest distances of one thousandth of a femtometer — far below the size of an atom, within which one type of quark decays into another — to the everyday scale of meters over which the decay is observed in the lab,” Brookhaven explains in its late March release.

Back in 1964, a Nobel-winning Brookhaven experiment observed CP (charge parity) violation, setting up a long-running mystery in physics that remains unsolved.

“The present calculation is a major step forward in a new kind of stringent checking of the Standard Model of particle physics — the theory that describes the fundamental particles of matter and their interactions — and how it relates to the problem of matter/antimatter asymmetry, one of the most profound questions in science today,” said Taku Izubuchi of the RIKEN BNL Research Center and BNL, a member of the research team that published their findings in Physical Review Letters.

The research is seeking to quantify how much the kaon decay process departs from Standard Model predictions. This “unknown quantity” will then be hunted in calculations on the next generation of IBM supercomputers, the BlueGene/Q. ®

This article originally appeared in The Register. It appears here in its entirety as part of a cross-publishing agreement.