In this video, Wei Lin Guay from Oracle presents: Prototyping Live Migration with SR-IOV-Supported InfiniBand.
Recorded at the Open Fabrics Workshop on March 27, 2012 in Monterey, CA.
In this video, Galen Shipman from ORNL conducts a panel discussion entitled: Big Data – Shared vs. Unified, Local vs. Remote.
Recorded at the Open Fabrics Workshop on March 27, 2012 in Monterey, CA.
Chris Dagdigian over at BioIT World writes that Grid Engine is alive and well, with a number of variants and support options.
So where are we in 2012? In a pretty good position, actually. Grid Engine users now have two sources for commercially-licensed and commercially supported products—both Oracle and Univa supply this. Free software fans and other related open-source projects that depend upon access to an unrestricted resource manager also have two different projects from which to choose. Even better, a new company called Scalable Logic has announced its intent to provide commercial support and consulting services for one of the free Grid Engine variants. The ability to buy a support contract or even per-incident assistance for a free version of Grid Engine checks off the last item on my personal “must-have” feature wishlist.
Read the Full Story.
Analysis If you want to get into the server processor racket, here’s some advice: Don’t bring a knife to a gun fight. And when you whip out your guns, you better have a piece stashed in each of your boots, maybe another high-caliber rifle on your back, and a few knives while you are at it for price-cutting when the bullets run out.
With Intel getting ready to launch its “Sandy Bridge” Xeon E5 processors in March and revving up its 22 nanometer processes to eventually field “Ivy Bridge” kickers, Advanced Micro Devices is going to have to engineer some pretty impressive new Opteron server chips. It’ll have to cook up those chips pretty sharpish, in conjunction with its wafer-baking partners, if it hopes to gain ground in the ongoing x86 server chip war – much less hold the hard-fought ground it has attained in high performance computing and server virtualization.
Everybody loves an underdog and most people like to see a bully take one on the chin and go down to his knees. So a lot of companies were rooting for AMD as it was designing the Opteron processors and trying to build an ecosystem of server vendors who would peddle machines based on them in the early and middle 2000s.
Back in the early 2000s, Intel was trying to protect its high-end 64-bit Itanium server business and push its Xeon processors down into the 32-bit volume server space, and AMD brilliantly shot the gap between the Xeon and Itanium to create the 64-bit Opterons, eventually pushing its server market share as high as 25 per cent.
But it has been a long time since x86 server chip juggernaut Intel was hammered – SledgeHammered, to be specific – by longtime rival AMD with its 64-bit, low-power, multicore Opteron processors. Intel shifted to the Core microarchitecture, added 64-bit memory addressing and processing, and a slew of key features such as the QuickPath Interconnect to its Xeon processors and hit back hard against the Opteron upstart. The “Nehalem” Xeon architecture announced in 2009 had everything that Opterons had, and when the Great Recession hit just in the wake of yet another Opteron delay, server makers put most of their effort into building Xeon war machines, not Opteron battlewagons, and AMD has been losing ground ever since.
Because server chip profits help pay the bills at Intel, AMD, IBM, Oracle, and Fujitsu, the loss of market share by AMD is one of the key reasons why CEO Dirk Meyer resigned in January 2011. In hindsight, we can also see that Meyer and the bulk of the management team that handles chip development and manufacturing have been replaced since new CEO Rory Read came aboard last July. AMD has a new CTO – Mark Papermaster, formerly of IBM, Apple, and Cisco Systems – and has replaced its former marketing, products, and operations bosses, and has tapped ex-Intel engineer Rajan Naik as senior vice president and chief strategy officer.
So, AMD is no doubt drawing up new war plans for the x86 server battlefield, but the company has not said much to date about its plans. Perhaps it will enlighten us during its Analyst Day this week. But we can conjecture about what AMD might do by looking at what Intel is about to do in the x86 racket.
While Intel never publicly promised that the “Sandy Bridge-EP” Xeon E5 processors would launch last fall for shipments in the fourth quarter, the circumstantial evidence – and comments from motherboard and server makers like Super Micro – indicate that this was indeed the plan. But with AMD having its own issues shipping its “Interlagos” Opteron 6200 processors for two-socket and four-socket servers and its “Valencia” Opteron 4200s for single-socket and dual-socket machines, Intel did not have to rush to market. (The speculation is that a SAS controller bug similar to the one in the C200 chipset that delayed the launch of “Sandy Bridge-DT” E3 processors and various PC chips of similar design has been found in the “Patsburg” C600 chipset for the Xeon E5s. Intel has not confirmed this.) Frankly, with Intel turning in the best fourth quarter and fiscal year in its history, in terms of profits and revenues, as 2011 came to a close, despite a PC slowdown and whatever issues stalled the Xeon E5s, it is hard to argue that Intel made the wrong call.
Intel is just starting to talk to press and analysts under embargo this week about the forthcoming Xeon E5s, and it is no coincidence that it is doing so just ahead of AMD’s Analyst Day. (El Reg is reporting this to you from coach on a Delta flight to Portland, Oregon, ahead of a briefing by Intel from its Beaverton chip and server development labs.)
As El Reg exclusively disclosed last May, the plan with the Xeon E5s is to take what would have normally been a chip for general-purpose two-socket workhorses and bifurcate the line into multiple processor and chipset variants to address very precise market segments. This is, of course, what AMD did two years ago when it created two different two-socket server families: the Opteron 4100s – which could also scale down to single socket machines aimed at small, power-sensitive workloads – and the Opteron 6100s, which could scale up to four processor sockets.
Anything AMD can do, Intel can do. (The market decides if Intel can do it better, or at least well enough to allow IT managers to fall back on the “nobody ever got fired for buying Intel” insurance policy.)
Intel is actually cutting its server market into eight pieces with the Xeon E5 launch. That’s Itanium 9300s and Xeon 7500s and E7s at the high-end (and eventually the “Sandy Bridge-EX” E8s). That’s two segments of the market that share chipsets and memory cards, but that have different motherboards and sockets. At least until Intel finally delivers, as it is rumored to be in the works, the long-promised common Xeon-Itanium socket. That could happen with the E8s, but it is far more likely to happen with the “Ivy Bridge-EX” Xeon E9s years hence. At the low-end, there’s the single-socket Xeon E3 and Atom processors, depending on how wimpy or brawny your workload is. That’s four addressable server segments in total.
The Xeon E5s will also span four different server types and will cover the middle and overlap with the high and low ends. The Xeon E5-2600, as the first of the “Romley” server platforms are expected to be called, will use the “EP” variant of the Xeon E5 chip that plugs into the new “Socket R” CPU socket. This socket is not compatible with the current Xeon 5500 and 5600 processors, but has all sorts of goodies, including two QPI links between the processors, support for unregistered, registered, and load-reduced (LR) DDR3 main memory, and integrated PCI-Express 3.0 controllers on the processor. This is the chip that Intel has presumably been shipping under NDA to selected supercomputer and hyperscale data center customers since last fall. This chip is clearly aimed at two-socket Opteron 6200 machines.
For two-socket machines that don’t need all of these capabilities, Intel is expected to roll out its “Sandy Bridge-EN” chips, rumored to be called the Xeon E5-2400s. These chips will plug into the new “Socket B2” socket and will sport only one QPI link between processors as well as fewer memory channels, fewer DIMMs per core, and fewer PCI-Express 3.0 slots. This chip is fired directly at two-socket Opteron 4200 iron.
If the rumors are right, then Intel will also ship a variant of the Sandy Bridge-EP chip that will be able to span four processor sockets in a single system image. This chip is expected to be called the Xeon E5-4600 and is obviously targeting the four-socket Opteron 6200.
And finally, Intel will field a Xeon E5-1600 chip, aimed at single-socket servers and workstations and based on the Sandy Bridge-EN chip that will zero in on single-socket Opteron 4200 servers and whatever plans AMD has to revive its single-socket server biz with the Opteron 3000 series, which it said it was working on back in November. The first Opteron 3000 chip, code-named “Zurich” and presumably to be named the Opteron 3200 to be consistent with the 2012 series of Opteron processors, is basically a cut-down Opteron 4200 with six or eight cores that will plug into an AM3+ socket instead of a C32 socket.
In any event, Intel appears to be looking to chase the microserver segment with the Xeon E5-1600, just as AMD is looking to do with the Opteron 4200 and 3200 chips. The word on the street is that the Xeon E5-1600 will plug into the Socket R socket, but it would make more sense for it to use the lower-cost Socket B2 socket.
Should all of this come to pass in 2012, it is safe to say that Intel has a weapon to match everything that AMD can throw at it – and then some. AMD only has one flavor of four socket machine, and Intel has three if you count Itanium. AMD has only two kinds of single-socket boxes it can bring into the field, Intel has three if you count Atom. AMD has two two-socket boxes, but Intel has four if you count Itanium.
It must have been such fun to run AMD when Intel’s server and PC chips were misaligned with the market needs. It must be daunting to come into work every day at AMD and see the lead in process technology, cash, clout, and chip and market coverage that Intel currently has not just over AMD, but over anyone who is making processors for anything larger than a smartphone or tablet.
AMD has been clever in a lot of ways to survive the Intel onslaught despite being behind in process technology. With the Opteron 4100s and 6100s, the company had to do its own full platforms – chipsets and processors – for the first time, which is a lot of change to manage all at once. Moreover, with the Opteron 6100s, AMD took its eight-way server architecture, beefed it up with more and faster HyperTransport links across the CPU sockets, and then double-stuffed six-core processors into a single socket and convinced the software vendors of the world that this was indeed a four-socket, rather than an eight-socket, machine. For systems and application software that is licensed per socket, this little maneuver cuts software fees in half.
AMD has also been winning the core count skirmish against Intel and positioning its two-core “Bulldozer” module used in the Opteron 4200s and 6200s as two strong physical threads against Intel’s weaker HyperThreaded cores. However, with a shared scheduler, on workloads that make heavy use of 256-bit floating point instructions, half of the 16 cores in an Opteron 6200 will often sit idle and the net effect is that the performance should be about the same as the forthcoming Xeon E5 with eight cores running 256-bit floating point. AMD has two stronger cores, but only if you want to do 128-bit math or integer work.
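To put rough numbers on that argument, here is a minimal back-of-envelope sketch. The unit counts follow from the description above (two cores per Bulldozer module sharing one 256-bit floating point unit, versus one 256-bit AVX unit per Xeon E5 core); they are illustrative assumptions, not vendor-published benchmarks.

# Back-of-envelope sketch of the 256-bit floating point argument above.
# Unit counts are illustrative assumptions, not vendor-published specs.

opteron_cores = 16                   # Opteron 6200 "Interlagos"
opteron_fp_256 = opteron_cores // 2  # one shared 256-bit FPU per two-core module

xeon_cores = 8                       # forthcoming Xeon E5
xeon_fp_256 = xeon_cores             # one 256-bit AVX unit per core

print(f"256-bit FP pipes -- Opteron 6200: {opteron_fp_256}, Xeon E5: {xeon_fp_256}")

# For 128-bit math or integer work, every Bulldozer core gets its own
# pipe, so all 16 Opteron cores stay busy -- the caveat in the text.
print(f"128-bit/integer pipes -- Opteron 6200: {opteron_cores}, Xeon E5: {xeon_cores}")

Run it and the 256-bit pipe counts come out dead even at eight apiece, which is the article's point: the core count lead evaporates exactly where the new vector instructions matter most.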
So what is AMD to do?
Go back to the drawing board and exploit whatever weaknesses it can find in Intel’s armor, just as always. Or, start a fight on a new battlefield where Intel is not going to be so strong.
Back in November 2010, two months before the management shakeup at AMD, the company said that its plan for this year was to bring out replacements for the C32 socket used for Opteron 4100 and 4200 processors and the G34 socket used with Opteron 6100 and 6200 processors.
The plan calls for the high-end Opterons, code-named “Terramar” and presumably called the Opteron 6300, to have 20 Bulldozer cores based on a next-generation core, code-named “Piledriver”. The low-end will get the “Sepang” Opteron 4300, a ten-core chip that is essentially what gets double-stuffed into a socket to make the Terramar chip package. Rumor has it that AMD will boost memory capacity with these forthcoming Opterons as well as support PCI-Express 3.0 peripherals. The Terramar and Sepang chips will be etched in the 32 nanometer processes used by GlobalFoundries, AMD’s spun-out former chip manufacturing operations.
Presumably there is a process shrink to 28 nanometers to boost clock speed and therefore single-threaded application performance of these Opteron 4300 and 6300 chips in the works, but AMD has not said yet and will no doubt lay out its plans at Analyst Day this week.
As was the case during the Great Recession, now would be a particularly bad time for AMD to force a socket transition onto its smaller band of server customers, and the new management at AMD must be looking pretty hard at that roadmap, wondering if they can change as little as possible now to buy time to do a lot more radical engineering for the future.
If I were running AMD, I would be looking very hard at that “Bobcat” core that is the alternative to Intel’s Atom and start thinking about servers, and also go back and look at the “Trinity” low-power Fusion chip, which is based on the Bulldozer cores.
When AMD was kicking Intel in the chips in the mid-2000s, Chipzilla relatively quickly (okay, it took years) shifted over to the Core laptop chip architecture for its PCs and servers and not only saved its chip business, but blunted the AMD attack. Intel has copied most of the ideas that made the Opteron better or different and is now using its wafer-baking process technology and its ability to set market prices to force AMD to compete mostly on lower price for roughly equivalent performance and features.
This is not an enviable position to be in for AMD, obviously. But there’s always the ARM option, and AMD could do something radical like buy Applied Micro or Calxeda and turn the x86 chip war into a two-front war for Intel to have to fight. ®
The high-performance networking market just got a whole lot more interesting, with Intel shelling out $125m to acquire the InfiniBand switch and adapter product lines from upstart QLogic.
Intel has made no secret that it wants to bolster its Data Center and Connected Systems business by getting network equipment providers to use Xeon processors inside of their networking gear – that Intel division posted $10.1bn in revenues in 2011, and the company wants to break $20bn in the next five years.
The plan is to kill off mainframes and RISC machines, and to get Xeons inside of storage and network gear – but it also includes Intel being a major supplier of chips used in high speed switches.
Last July, Intel paid an undisclosed amount to get its hands on Fulcrum Microsystems, a maker of the FocalPoint family of ASICs for Ethernet switches and routers that run at 10GbE and 40GbE speeds. Fulcrum’s most famous customer was Arista Networks, the low-latency networking switch-maker founded by Sun Microsystems cofounder Andy Bechtolsheim. Intel never said what it paid for Fulcrum, but the company had raised $102m in venture capital since it was founded, and the price was very likely a multiple of that figure.
Despite the improvements in 10GbE and 40GbE switch chips over the past several years, InfiniBand still has important niches where even lower latency and still higher bandwidth are crucial – the supercomputing racket, for instance, or in database clustering. Just ask Oracle, which uses InfiniBand silicon from Mellanox Technologies in its Exadata database clusters and Exalogic web application server clusters, and which took a 10.2 per cent stake in the chip and switch-maker back in October 2010.
At the time, Mellanox assured Wall Street that Oracle had no intention of taking over the chipmaker, but with QLogic’s upstart InfiniBand biz snapped up by Intel, some systems or networking companies might now be tempted to take a run at Mellanox. But if Oracle or IBM or Cisco Systems are tempted to eat Mellanox, all that will do is eventually drive everyone into the loving arms of Intel, with its own Ethernet or InfiniBand ASICs. So, in a funny way, Intel is probably praying that someone does eat Mellanox.
And the funniest thing of all would be if AMD actually woke up, smelled the systems biz, and ate Mellanox itself. By doing so, AMD would have the SwitchX two-timing Ethernet and InfiniBand ASICs and the ConnectX-3 switch-hitting server adapters, and could start integrating these deeper into its chipsets and eventually onto its chips.
InfiniBand has its roots in the Next Generation I/O project supported by Intel, Sun Microsystems, and Microsoft, along with the Future I/O alternative supported by IBM, Compaq, and HP. These specs were merged back in 1999, with Intel and IBM largely steering the process.
The idea was to provide a single switched fabric that would link computers and storage to each other from the desktop to the data center, and be an alternative to Ethernet networks for server-to-server and PC-to-server links, and to PCI-Express and Fibre Channel for linking peripherals.
Academically, InfiniBand was probably the right answer for a unified switch fabric – but markets don’t study in schools, they live on the mean streets and give and take hard knocks. And thus, InfiniBand has been relegated to a niche and, more importantly, the key technologies that made InfiniBand better, stronger, and faster than Ethernet have been borged onto Ethernet, closing the gap.
For now, Intel is saying that its acquisition of the InfiniBand chip, adapter, and switch business from QLogic is all about HPC, but it may be looking further down the road, when PCI-Express runs out of gas.
“At the International Supercomputing Conference 2011, Intel unveiled a bold vision to redefine HPC performance and break the exascale barrier by 2018,” Kirk Skaugen, the outgoing general manager of Intel’s Data Center and Connected System Group, said in a statement. “The technology and expertise from QLogic provide important assets to provide the scalable system fabric needed to execute on this vision. Adding QLogic’s InfiniBand product line to our networking portfolio will bring increased options and exceptional value to our datacenter customers.”
Last week, Skaugen – who has been pushing Intel’s expansion into switching and storage chippery for the past several years – was tapped to run Chipzilla’s PC Client Group. Diane Bryant, who has worked for Skaugen in the past and who was most recently Intel’s CIO, has replaced Skaugen and will be driving Intel’s server, storage, and networking strategies.
By selling its InfiniBand biz to Intel, QLogic will be able to double down on its Fibre Channel and Ethernet switches and adapters. QLogic has had some success with its InfiniBand gear, landing the 2,000-node “Sierra” cluster with Dell at Lawrence Livermore National Labs and also being the switch supplier for the 20,000-node procurement awarded to Appro International last June by the US Department of Energy’s Tri-Labs: Lawrence Livermore, Los Alamos, and Sandia National Laboratories.
“The sale of these InfiniBand assets will benefit our shareholders by enabling us to provide better focus and greater investment in growth opportunities for the data center with our converged networking, enterprise Ethernet, and storage area networking products,” said QLogic’s president and CEO, Simon Biddiscombe, in his statement. “After the sale, our cash position will be further strengthened and we expect the impact on earnings per share to be neutral. In addition, the sale of these assets to a leading technology innovator and recognized HPC leader will provide a greater investment stream in high performance fabrics for InfiniBand partners and customers.”
Speaking to El Reg about the InfiniBand racket two weeks ago, apropos of nothing, QLogic’s head of global alliances and solutions marketing for HPC, Joe Yaworski, said that the reason why QLogic was winning more InfiniBand deals is that its TrueScale chips offer better performance running at Quad Data Rate (QDR) 40Gb/sec speeds than do Mellanox’s SwitchX products running at Fourteen Data Rate (FDR) 56Gb/sec speeds.
The big reason for this, said Yaworski, was that QLogic bought compiler-maker PathScale in early 2006, and it has a networking stack that was designed to handle millions of messages per second. (PathScale was sold to SiCortex in 2007; when SiCortex went bust, Cray picked up the PathScale pieces in 2009, and an open source PathScale has since emerged from the ashes with a license from Cray.) The combination of the TrueScale InfiniBand ASICs and the PathScale messaging stack and compilers is what gave QLogic the idea it could take on Mellanox and win.
Yaworski told El Reg that QLogic was “taking a hard look at whether or not we will ship FDR InfiniBand,” although with Intel picking up the company, there will be more funds to do whatever might seem appropriate. The company was thinking that in the second half of 2013 or the first half of 2014 it might jump straight to Enhanced Data Rate (EDR) speeds, which run InfiniBand lanes at 25Gb/sec.
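Those speed grades are raw signaling rates, incidentally, and QDR and FDR use different line encodings, so the delivered gap is smaller than the headline numbers suggest. A quick sketch using the standard InfiniBand lane rates and encoding overheads (the usual published figures, lightly rounded):

# Effective InfiniBand data rates after line encoding overhead.
# Lane rates and encodings here are the standard published figures.

links = {
    # name: (signaling rate in Gb/s per lane, encoding efficiency)
    "QDR": (10.0, 8 / 10),        # 8b/10b encoding
    "FDR": (14.0625, 64 / 66),    # 64b/66b encoding
    "EDR": (25.78125, 64 / 66),
}

LANES = 4  # the common 4x link width

for name, (rate, eff) in links.items():
    data = rate * eff
    print(f"{name}: {data:5.2f} Gb/s per lane, {data * LANES:6.1f} Gb/s over a 4x link")

# QDR's 8b/10b overhead means a "40Gb/sec" link moves about 32Gb/sec of
# payload, while FDR's "56Gb/sec" delivers roughly 54.5Gb/sec.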
That would be a long time to wait between products and to live on QDR, and a gap that Intel is probably not likely to tolerate. But it all depends on what Intel’s plans are, and the company isn’t saying anything right now. If QLogic weren’t a public company, both would have probably said less.
Intel expects the QLogic InfiniBand deal to close by the end of March, and added that a “significant number” of the employees associated with the business were expected to accept job offers from Chipzilla. ®
In this video, Luke Kanies of Puppet Labs describes the company’s datacenter automation and configuration management tool.
Puppet Labs’ IT automation software enables system administrators to deliver the operational agility and efficiency of cloud computing at enterprise-class service levels, scaling from handfuls of nodes on-premise to tens of thousands in the cloud. Puppet powers thousands of companies, including Twitter, Yelp, eBay, Zynga, JP Morgan Chase, Bank of America, Google, Disney, Citrix, Oracle, and Viacom.
The new year in IT always begins around now, when the IEEE puts out the advance program for the International Solid State Circuits Conference, which takes place in San Francisco in February. This time around, it runs from February 19 through 23, and while there are not a large number of server-class processors coming out, there are some very interesting system-on-chip and memory technologies that chip makers will be showing off at the upcoming 2012 event.
First out of the gate will be Intel with a preview of the “Ivy Bridge” processors for PCs, which are made in its new 22 nanometer Tri-Gate process and which will cram a multicore CPU and a GPU onto the same sliver of silicon. Intel will also be showing off a new dual-core Atom processor implemented in its current and well-established 32 nanometer processes sporting on-chip Wi-Fi networking. This is presumably the “Cedar Trail” family of Atom processors that were originally expected around September, then November, and now sometime next year, according to the rumor mill. Intel will also be showing off a 32-bit x86 chip that has an operating range of 280 millivolts to 1.2 volts and that is implemented in its 32 nanometer processes.
Oracle will be on hand to talk about the eight-core Sparc T4 processor that was announced back at the end of September and that just started shipping in systems back in November. Oracle might slip a bit and talk about the future Sparc T5 processor, which will be socket-compatible with the Sparc T4 processor and which will ship by late 2012. Then again, Oracle doesn’t want to screw up Sparc T4 system sales, so maybe it won’t say anything. Especially considering that the Sparc T5 will have 16 cores running at around 3GHz or so and scale up to eight sockets in a single system – yielding about 2.5 times the aggregate oomph on thread-happy workloads like databases and middleware.
IBM is not saying anything about its future Power7+ or Power8 processors for its Unix and proprietary systems. But Big Blue will be showing off a prototype 3D system-on-chip design that will use through silicon via (TSV) technology that it perfected with Micron Technology for Hybrid Memory Cube (HMC) memory. IBM will be demonstrating that the techniques that can be used to stack up DRAM chips and lash them together into a parallel memory cluster (well, that is what HMC memory is, more or less) can be used to link embedded DRAM to processor cores. Such technology will be needed to make more powerful and energy-efficient parallel systems.
Researchers at the Georgia Institute of Technology, Korea Advanced Institute of Science and Technology, and Amkor Technology will be showing off a similar stacked chip called 3D-MAPS, which is a massively parallel processor with stacked memory. In this case, the chip in question has 64 cores running at a mere 277MHz and 256KB of SRAM memory mated to it. This is a tiny chip in terms of raw performance, but it delivers 64GB/sec of memory bandwidth and only consumes 5 watts of juice, and on memory-intensive workloads with a certain degree of parallelism, 3D-MAPS could scream. The next generation 3D-MAPS chip will have two logic tiers with a total of 128 cores and three DRAM tiers instead of one SRAM tier for memory.
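A bit of quick division on those quoted figures shows why it could scream; nothing below is new data, just the stated numbers worked out:

# Quick arithmetic on the 3D-MAPS figures quoted above.
cores = 64
freq_hz = 277e6          # 277MHz clock
bandwidth_bytes = 64e9   # 64GB/sec of memory bandwidth
power_w = 5

print(f"Bandwidth per core: {bandwidth_bytes / cores / 1e9:.1f} GB/sec")    # 1.0
print(f"Bandwidth per watt: {bandwidth_bytes / power_w / 1e9:.1f} GB/sec")  # 12.8
print(f"Bytes per core per clock: {bandwidth_bytes / (cores * freq_hz):.1f}")  # ~3.6

# Roughly a gigabyte per second per core, at well under a third of a
# gigahertz -- exactly the balance a memory-bound parallel code wants.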
The University of Michigan will be stacking up chips, too, with its Centip3De project, which will put 64 ARM Cortex-M3 embedded processors into a cube. The Wolverines have been talking about (PDF) a seven-layer 3D chip that has 128 Cortex-M3 cores and 256MB of stacked DRAM all glued together, so this appears to be a chip off the old block.
Advanced Micro Devices will be showing off a “resonant clock design” for a 64-bit x86 processor towards the end of the day, and clearly there will be a need for some coffee during that one. Fudan University of China will be showing off a 16-core, 320 milliwatt, 800MHz processor with message passing and shared-memory inter-core communications – all cooked up in an ancient and cheap 65 nanometer process. Cavium will be showing off its latest 32-core MIPS-based processors, which sport network accelerators and which are sold under the Octeon II brand. Fujitsu will be there to show off its current K massively parallel supercomputer, powered by the eight-core Sparc64-VIIIfx processor and currently the most powerful super in the world.
Hynix Semiconductor and Samsung Electronics will be showing off their respective 2Gbit and 4Gbit DDR4 SDRAM memory chips, which will eventually make their way into PCs and servers. ®
In this video, IBM’s Keith Olsen describes the company’s BladeCenter H systems for HPC. Over 10 percent of the TOP500 systems are based on the BladeCenter H platform.
With the November 2011 TOP500 list, IBM is once again #1 with:
- Most installed aggregate throughput, with over 20,234 out of 74,064 total Teraflops, taking the lead for 25 lists in a row (see the quick arithmetic after this list)
- Most systems in the TOP500 with 223 (HP had 142. Oracle had 10, a decrease from June when they had 12.)
- Most energy-efficient system with the IBM Blue Gene/Q
- The five most energy-efficient systems
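For scale, the quick arithmetic promised above, using nothing but the figures in the list:

# IBM's share of the November 2011 TOP500, from the list above.
ibm_tflops, total_tflops = 20_234, 74_064
ibm_systems, total_systems = 223, 500

print(f"Aggregate performance share: {ibm_tflops / total_tflops:.1%}")   # ~27.3%
print(f"System count share: {ibm_systems / total_systems:.1%}")          # 44.6%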
Recorded at SC11 in Seattle. Read the Full Story.
In the enterprise world, few things are as competitive as TPC (Transaction Processing Performance Council) benchmarks. When the fastest machine wins the deal, the rules, procedures, and setup of individual benchmarks translate into truly high stakes.
In this video, Meikel Poess, co-chair of TPCTC 2011, presents: The Future of Benchmarking - Three Novel Concepts Presented at the Transaction Processing Performance Council’s Technical Conference (TPCTC 2011).
The ever-evolving technological landscape is challenging industry experts and researchers alike to develop innovative and increasingly powerful compute systems. Rapid improvements in transistor density, disk capacity and performance enable new innovations in system designs, system architectures and algorithms that can manage and query large amounts of data very efficiently.
As new fields of research emerge, adapting existing benchmarks (or creating entirely new benchmarks) becomes necessary. This paper examines three novel concepts for expanding the Transaction Processing Performance Council’s existing set of benchmarks (TPC-C, TPC-E and TPC-H), as presented at the TPC’s 2011 Technical Conference.
The ideas proposed within this paper are actively influencing the TPC’s direction, in terms of future benchmark development, and include:
- Extending TPC-E to Measure Availability in Database Systems
- Introducing Skew into the TPC-H Benchmark
- Metrics for Measuring the Performance of the Mixed Workload CH-benCHmark
Researchers and industry experts who would like to offer feedback, or propose new ideas for TPC benchmark development, are encouraged to contact the author, Meikel Poess, at (meikel [dot] poess [at] oracle.com).
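The second of those ideas is the easiest to picture: classic TPC-H data is uniformly distributed, while real-world data is usually skewed. The sketch below contrasts the two, using a Zipf-style weighting as a stand-in for whatever skew model the actual proposal specifies; it is purely illustrative and is not the TPC's code.

# Contrast uniform keys (classic TPC-H dbgen) with skewed keys.
# The Zipf-style weighting is a stand-in, purely for illustration.
import random
from collections import Counter

random.seed(42)
N_KEYS, N_ROWS = 100, 10_000

uniform = [random.randrange(N_KEYS) for _ in range(N_ROWS)]

# Skewed draw: key k is chosen with weight proportional to 1/(k+1).
weights = [1 / (k + 1) for k in range(N_KEYS)]
skewed = random.choices(range(N_KEYS), weights=weights, k=N_ROWS)

for name, data in (("uniform", uniform), ("skewed", skewed)):
    key, count = Counter(data).most_common(1)[0]
    print(f"{name:8s} hottest key {key} covers {count / N_ROWS:.1%} of rows")

# Under skew a handful of keys dominate the table, which stresses a
# query optimizer's cardinality estimates in ways uniform data cannot.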
If the number of news releases coming out this week before SC11 is any indication, the HPC ecosystem has never been more vibrant. Here’s the SC11 News with Snark for Saturday, Nov 11, 2011.
Scientific Computing has posted an excellent profile of Happy Sithole, Director of the Center for High Performance Computing in South Africa. Back when I worked at Sun, CHPC was one of our showcase HPC customers, with the first TOP500 system listing for Africa.
Built from scratch upon a great vision, the CHPC has become a prestige center in South Africa, providing large-scale compute resources to hundreds of researchers, institutes and industrial partners. One of the major influences on Happy’s objectives is his participation at a global level in key industry events, such as the upcoming SC11 conference in Seattle. Through Happy’s deep participation in technical forums like the SC conferences, CHPC has built a critical connection to the global research community, a connection that has been just as important in supporting his own plans around large-scale HPC, networking infrastructure and data storage.
It’s me again–Dr. Lewey Anton. I’ve been commissioned by insideHPC to get the scoop on who’s jumping ship and moving on up in high performance computing.
New Updates from our Readers:
Have you moved or know of HPC folks in new positions? Let us know by sending an email to: [email protected] In the meantime, keep up with the HPC community’s movers and shakers by subscribing to insideHPC today.
The good folks at Univa have been very busy since acquiring the Grid Engine development team from Oracle last year. Now you can learn what’s new with this popular open source resource broker.
Join us for a webinar to learn about what Univa has done with Grid Engine. Hear about the more than 200 differences between Sun Grid Engine 6.2U5 – the last open source version of Grid Engine – and Univa Grid Engine 8.0.1, our latest release. That’s nearly one improvement per day since Univa hired the core Grid Engine engineering team. We will also describe the exciting new features and capabilities we are actively developing to improve uptime and throughput in Grid Engine clusters.
Two sessions of the webinar are scheduled, so be sure to register.
While I’d like to give this obviously nascent port the benefit of the doubt, its current state is frankly embarrassing. It’s very clear now why Oracle wasn’t demonstrating this at OpenWorld last week: it doesn’t stand up to the mildest level of scrutiny. It’s fine that Oracle has embarked on a port of DTrace to the so-called unbreakable kernel, but this is months away from being usable. Announcing a product of this low quality and value calls into question Oracle’s credibility as a technology provider. Further, this was entirely avoidable; there were other DTrace ports to Linux that Oracle could have used as a starting point to produce something much closer to functional.
Read the Full Story.
While you may have to wait for DTrace on Linux for a while longer, Brendan Gregg has just done a post on how you can use DTrace today to speed up your Mac.