Did you know that 3Q2012 was the biggest quarter of revenue in the history of HPC? In this video from SC12, Earl Joseph from IDC presents an HPC Market Update. Topics include: Top Trends in HPC, Vendor Revenue, and HPC Forecasts as well as an overview of a new IDC Study on Creating an Economic Model for HPC and ROI. View the slides (PDF) or check out the Full Story at The Register.
In this video, Dan Olds from Gabriel Consulting sits down with Jack Dongarra (ORNL/University of Tennessee) and Dona Crawford (Associate Director, LLNL) at SC12 to discuss the challenges facing HPC on the road to exascale. Along the way, they describe their TOP500-list-topping systems: Titan and Sequoia.
With Moab HPC Suite — Remote Visualization Edition, you can improve the productivity, collaboration, and security of the design and research process by transferring only pixels, rather than data, to the users running simulations and analysis. This enables a wider range of users to collaborate and be more productive at any time, on the same data, from anywhere, without data-transfer lags or security issues. Users also gain immediate access to the specialty applications and resources, like GPUs, they might need for a project, so they are no longer limited by personal workstation constraints or tied to a single working location.
Download the whitepaper on Technical Visualization Workload Optimization (PDF).
Ceph is well-positioned to capture greenfield distributed storage opportunities through its object storage approach. In my experience, it’s this greenfield characteristic that should be a great catalyst for Inktank, as it gives Inktank the chance to grow under the radar of the big, incumbent vendors. Since Ceph will be winning greenfield sales opportunities that never actually hit the radar of the proprietary vendors’ respective sales teams, those vendors won’t know they’re bleeding until Ceph’s momentum is difficult, if not impossible, to stop.
Read the Full Story or check out our interview below with Neil Levine from Inktank at SC12.
The latest version of Moab was designed to recognize and work with the new Intel Xeon Phi coprocessors, based on the Intel Many Integrated Cores (MIC) technology. This ability to automatically detect Intel Xeon Phi coprocessors — and determine their location and availability — improves processor utilization by scheduling jobs more intelligently and removes the need for extensive reprogramming to integrate Intel Xeon Phi coprocessors into existing systems. It also allows for policy-based scheduling, optimizing the choice of accelerators and coprocessors. As Intel Xeon Phi coprocessors are introduced into existing systems, this keeps costs and management efforts at a minimum while maximizing utilization to ensure the most efficient job processing — by utilizing metrics including the number of cores and hardware threads, physical memory available (total and free), maximum frequency, architecture, and load.
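As a rough illustration of how coprocessors of this kind can be exposed to a scheduler like Moab (the exact directives depend on the Moab version and site setup; the node names and counts below are made up), each card can be declared as a node-level generic resource so the scheduler can track availability and apply policy:

```
# moab.cfg — hypothetical node entries, for illustration only.
# Each Xeon Phi card is declared as a generic resource (GRES) named "mic",
# letting Moab track how many are free on each node when placing jobs.
NODECFG[node01]  GRES=mic:2
NODECFG[node02]  GRES=mic:1
```

A job would then request the `mic` resource at submission time, and the scheduler matches the request against the free counts it tracks per node — the same mechanism commonly used for licenses or other countable resources.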
Read the Full Story.
In this video from the Adaptive Computing booth at SC12, Jenett Tillotson from Indiana University presents: Configuring Moab to Fairly Share a Supercomputer while Preventing Starvation in a University Setting.
As a reminder, the Swiss Supercomputing Centre will host the HPC Advisory Council Switzerland Conference 2013 in Lugano, Switzerland March 13-15, 2013.
Adaptive Computing, a cloud management and high performance computing outfit in Utah, needed something really cool to bring to their trade shows. Something that makes order out of chaos and demonstrates their attention to detail in the midst of miles of wiring. They decided building the largest non-commercial LED cube would be a good project, and thus the 16x16x16 All Spark Cube was born. The All Spark Cube was constructed using 10 mm RGB LEDs wired together with three-foot lengths of 16-gauge pre-tinned copper wire. In this video, [Kevin] shows off the process of constructing a single row; first the LEDs are placed in a jig, the leads are bent down, and a bus wire is soldered to 16 individual anodes per row.
The IBM Blue Gene/Q pushes the edge of technology by providing a leadership-class supercomputer that has a homogeneous multi-core architecture and relatively low power consumption. On the June 2012 TOP500® list of supercomputers, four of the top ten systems were Blue Gene/Qs. Since February 2012, TotalView users at Lawrence Livermore National Laboratory (LLNL), whose Sequoia system was named the top supercomputer on that list, have been using a pre-release version of the TotalView debugger to port codes to take advantage of the new system. TotalView has a precedent of being the code and memory debugger of choice among users of IBM Blue Gene supercomputers, including JuQueen, the Blue Gene/Q at Forschungszentrum Jülich.
Read the Full Story.
In this video from the Adaptive Computing booth at SC12, Andrew Howard from Purdue discusses how the community cluster program has moved forward with the help of Moab software at the Rosen Center for Advanced Computing.
In this video from SC12, Nick Ihli from Adaptive Computing demonstrates how the company’s Torque resource manager works with Intel Xeon Phi. By relaying Intel Xeon Phi instrumentation such as memory availability to the company’s Moab workload manager, the system is able to schedule coprocessor resources efficiently.
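As a sketch of how this looks from the administrator and user side (assuming a Torque build with Xeon Phi support; the node names, counts, and script name below are illustrative, not from the video), nodes advertise their coprocessors and jobs request them much like GPUs:

```
# server_priv/nodes — hypothetical entries; each line declares a host,
# and "mics=N" advertises the number of Xeon Phi coprocessors that
# Torque reports up to Moab for scheduling decisions.
node01 np=16 mics=2
node02 np=16 mics=1

# Submitting a job that asks for one node with 8 cores and 1 coprocessor:
#   qsub -l nodes=1:ppn=8:mics=1 run_phi_job.sh
```

With the coprocessor counts and instrumentation (such as memory availability) reported this way, Moab can place jobs only on nodes with a free card, which is the efficient-scheduling behavior the demo describes.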
“With the amazing capabilities of the latest supercomputing coprocessors, such as the Intel Xeon Phi coprocessor, it’s vital to make it as simple as possible to integrate them into existing supercomputers,” noted Robert Clyde, CEO of Adaptive Computing. “The latest iteration of Moab was designed to maximize the investment being made by today’s HPC providers.”
In this video from SC12, Sha Chaoqun from Sugon describes the company’s products for high performance computing. As a leading Chinese server vendor, Sugon helped sponsor the SC12 Student Cluster Challenge.