“Our customers operate technical computing environments where infrastructure software like Univa Grid Engine is a key component. This partnership allows us to support our customers on all levels, giving them more options to use their compute clusters in the most efficient manner,” says Gerd-Lothar Leonhart, CEO of s+c. “Additionally, the possibility to integrate Univa Grid Engine with Hadoop systems opens up new opportunities to optimize the usage of Big Data installations.”
Over at Tom’s Hardware, Niels Broekhuijsen writes that new information has surfaced regarding Intel’s upcoming Xeon Phi coprocessors.
Intel’s product database has been updated to list five new Xeon Phi coprocessors, follow-ups to the original Xeon Phi 5110P, SE10P, and SE10X models: two lighter Xeon Phi 3100 parts, a mid-range 5120D, and two premium 7100 series parts. The main differences between the new coprocessors and the previous ones are the chips aboard, as well as the cooling solutions. Models with a “P” suffix ship with a passive heatsink, the “D” model ships without a cooler, and the remaining parts use an active drum cooler.
If rumors hold, the new Xeon Phi coprocessors may hit the market this month. Read the Full Story.
Today DataDirect Networks announced that University College London has selected DDN technology to provide up to 3,000 researchers with a safe and resilient storage solution for sharing, reusing and preserving project-based research data.
In an effort to better support researchers, UCL sought to remove the burden of storing and preserving research data from individual users. They selected the combination of DDN’s distributed WOS and GRIDScaler technology to provide the desired scalability, performance, reliability, portability and management simplicity.
“DDN is empowering us to deliver performance and cost savings through a dramatically simplified approach. Add in the fact that DDN’s resilient, extensible storage technology provided evidence for seamless expansion from a half-petabyte to 100PBs, and we found exactly the foundation we were looking for.”
Read the Full Story.
You can check out more OFA videos in our Open Fabrics Workshop Video Gallery.
A new whitepaper from Intel looks at Truescale InfiniBand performance for HPC applications.
There are two types of InfiniBand architecture available in the marketplace today. The first is the traditional InfiniBand design, created as a channel interconnect for the data center. The latest InfiniBand architecture was built with HPC in mind: this enhanced HPC fabric is optimized for key interconnect performance factors, including MPI message rate, end-to-end latency, and collective performance, resulting in increased HPC application performance. The enhanced Intel True Scale Fabric architecture offers 3x to 17x the MPI (Message Passing Interface) message throughput of the other InfiniBand architecture. For many MPI applications, small-message rate is an important factor that contributes to overall performance and scalability.
Intel tested a number of MPI applications and found that they performed up to 11 percent better on a cluster based on Intel True Scale Fabric QDR-40 (dual-channel) than on the traditional InfiniBand-based architecture running at FDR (56 Gbps). Download the whitepaper (PDF).
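To make the message-rate metric concrete, here is a minimal sketch of a windowed small-message benchmark in the spirit of tools like the OSU Micro-Benchmarks’ osu_mbw_mr. It is a simplified illustration, not the benchmark Intel used, and the constants (message size, window depth, iteration count) are arbitrary choices:

```cpp
// Minimal small-message rate sketch (illustrative; not Intel's benchmark).
// Rank 0 posts a window of non-blocking sends to rank 1, which posts
// matching receives; messages/second = iterations * window / elapsed time.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {                       // sketch assumes exactly two ranks
        if (rank == 0) std::fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int MSG_SIZE = 8;                // bytes per message (small messages)
    const int WINDOW   = 64;               // messages in flight per iteration
    const int ITERS    = 10000;            // timed iterations

    std::vector<char> buf(MSG_SIZE * WINDOW);
    std::vector<MPI_Request> reqs(WINDOW);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; ++i) {
        for (int w = 0; w < WINDOW; ++w) { // each slot gets its own buffer region
            char* p = buf.data() + w * MSG_SIZE;
            if (rank == 0)
                MPI_Isend(p, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &reqs[w]);
            else
                MPI_Irecv(p, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &reqs[w]);
        }
        MPI_Waitall(WINDOW, reqs.data(), MPI_STATUSES_IGNORE);
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        std::printf("small-message rate: %.0f msgs/sec\n",
                    static_cast<double>(ITERS) * WINDOW / elapsed);
    MPI_Finalize();
    return 0;
}
```

Compiled with an MPI wrapper (e.g., mpic++) and run with the two ranks pinned to different nodes, the reported rate reflects the fabric’s small-message throughput, which is exactly the metric on which the two InfiniBand architectures differ.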
While we may not get to Exascale by 2020, ground-breaking compute technologies for the SKA telescope are already under development (without involvement of the U.S. Government, by the way). In this video from the 2013 HPC User Forum, Ronald P. Luijten from IBM Research presents: The IBM-DOME Microserver Demonstrator.
“The computational and storage demands for the future Square Kilometer Array (SKA) radio telescope are significant. Building on the experience gained with the collaboration between ASTRON and IBM with the Blue Gene based LOFAR correlator, ASTRON and IBM have now embarked on a public-private exascale computing research project aimed at solving the SKA computing challenges. This project, called DOME, investigates novel approaches to exascale computing, with a focus on energy efficiency, streaming data processing, exascale storage, and nano-photonics. DOME will not only benefit the SKA, but will also make the knowledge gained available to interested third parties via a Users Platform. The intention of the DOME project is to evolve into the global center of excellence for transporting, processing, storing and analyzing large amounts of data for minimal energy cost.”
The Colorado School of Mines has announced plans to install a new 155 teraflop hybrid IBM supercomputer dubbed “BlueM” to run large simulations in support of energy research. The new machine will be housed at NCAR’s Mesa Lab in Boulder and operate on the Mines computing network.
The first supercomputer of its kind, BlueM features a dual-architecture system combining the IBM Blue Gene/Q and IBM iDataPlex platforms in a single installation.
BlueM’s predecessor, RA, has been hugely successful but Mines has outgrown its 23 teraflops. BlueM will provide a greater number of flops dedicated to Mines faculty and students than are available at most other institutions with high performance machines. Researchers will be able to run higher fidelity simulations than in the past, get more time on the machine and break new ground in terms of algorithm development.
Read the Full Story.
In this video from the 2013 HPC User Forum, Scott Schultz from Mellanox presents an overview of Mellanox and HPC.
In this video from the 2013 HPC User Forum, Stephen Wheat from Intel presents: Future Directions for IA … and more.
You can check out more presentations at the HPC User Forum Video Gallery.
Today Mellanox announced plans to acquire photonics leader Kotura, Inc. for approximately $82 million. The acquisition is expected to expand Mellanox’s ability to deliver cost-effective, high-speed networks with next generation optical connectivity, allowing data center customers to meet the growing demands of high-performance, Web 2.0, cloud, data center, database, financial services and storage applications. Mellanox believes that the Kotura acquisition will enhance its ability to provide leading technologies for high speed, scalable and efficient end-to-end interconnect solutions.
“Operating networks at 100 Gigabit per second rates and higher requires careful integration between all parts of the network. We believe that silicon photonics is an important component in the development of 100 Gigabit InfiniBand and Ethernet solutions, and that owning and controlling the technology will allow us to develop the best, most reliable solution for our customers,” said Eyal Waldman, president, CEO and chairman of Mellanox Technologies. “We expect that the proposed acquisition of Kotura’s technology and the additional development team will better position us to produce 100Gb/s and faster interconnect solutions with higher-density optical connectivity at a lower cost. We welcome the great talent from Kotura and look forward to their contribution to Mellanox’s continued growth.”
Read the Full Story.
Over at the Xcelerit Blog, Jörg Lotze and Hicham Lahlou write that code portability is the key to success in a hybrid computing world with so many available processing architectures.
Therefore, compromises are often made: typically easy maintenance is favoured and performance is sacrificed. That is, the code is not optimised for any particular platform and is developed for a standard CPU, since maintaining separate code bases for different accelerator processors is a difficult task and the benefit is either not known beforehand or does not justify the effort. The best solution, however, would be a single code base that is easy to maintain, written in such a way that it can run on a wide variety of hardware platforms – for example, using the Xcelerit SDK. This makes it possible to exploit hybrid hardware configurations to best advantage while remaining portable to future platforms.
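The post does not show the Xcelerit SDK API itself, so here is a hypothetical C++ sketch of the single-source pattern described above. The kernel (an invented Saxpy functor) is written once against a backend-agnostic interface, and the execution backend is swapped without touching the algorithm:

```cpp
// Hypothetical single-source portability sketch (not the Xcelerit SDK API).
// The kernel is written once; the backends decide how it runs.
#include <cstddef>
#include <cstdio>
#include <vector>

// The algorithm: written once, with no knowledge of the target hardware.
struct Saxpy {
    float a;
    void operator()(float& y, float x) const { y += a * x; }
};

// Backend 1: plain serial CPU loop.
template <typename Kernel>
void apply_serial(Kernel k, std::vector<float>& y, const std::vector<float>& x) {
    for (std::size_t i = 0; i < y.size(); ++i) k(y[i], x[i]);
}

// Backend 2: multicore CPU via OpenMP. An accelerator backend (CUDA, OpenCL)
// could be added behind the same interface without changing Saxpy at all.
template <typename Kernel>
void apply_openmp(Kernel k, std::vector<float>& y, const std::vector<float>& x) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(y.size()); ++i) k(y[i], x[i]);
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    apply_openmp(Saxpy{3.0f}, y, x);  // backend choice lives here, not in the kernel
    std::printf("y[0] = %f\n", y[0]); // 2 + 3*1 = 5
    return 0;
}
```

This is the essence of the trade-off the authors describe: the maintenance cost of a single code base, with the freedom to retarget it as new hardware platforms appear.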
Read the Full Story.
In this video from the Lustre User Group 2013, Giuseppe Bruno from the Bank of Italy presents: Performance & Functionality Testbed for Clustered Filesystems.
In this video from the Lustre User Group 2013, Makia Minich from Xyratex presents: Managing and Monitoring a Scalable Lustre Infrastructure. Download the slides (PDF) or check out our LUG 2013 Video Gallery.
During his talk, Makia mentions an excellent presentation from John West entitled What’s Missing from HPC.