The Colorado School of Mines has announced plans to install a new 155 teraflop hybrid IBM supercomputer dubbed “BlueM” to run large simulations in support of energy research. The new machine will be housed at NCAR’s Mesa Lab in Boulder and operate on Mines’ computing network.
The first supercomputer of its kind, BlueM features a dual-architecture system combining the IBM Blue Gene/Q and IBM iDataPlex platforms – the first time this configuration has been installed together.
BlueM’s predecessor, RA, has been hugely successful, but Mines has outgrown its 23 teraflops. BlueM will provide more flops dedicated to Mines faculty and students than are available at most other institutions with high performance machines. Researchers will be able to run higher-fidelity simulations than in the past, get more time on the machine, and break new ground in algorithm development.
Read the Full Story.
In this video from the 2013 HPC User Forum, Stephen Wheat from Intel presents: Future Directions for IA … and more.
You can check out more presentations at the HPC User Forum Video Gallery.
Over at the Xcelerit Blog, Jörg Lotze and Hicham Lahlou write that code portability is the key to success in a hybrid computing world with so many available processing architectures.
As a result, compromises are often made: easy maintenance is typically favoured and performance is sacrificed. That is, the code is developed for a standard CPU and not optimised for any particular platform, because maintaining separate code bases for different accelerator processors is difficult and the benefit is either unknown beforehand or does not justify the effort. The better solution is a single, easy-to-maintain code base written so that it can run on a wide variety of hardware platforms – for example using the Xcelerit SDK. This makes it possible to exploit hybrid hardware configurations to best advantage while remaining portable to future platforms.
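To illustrate the single-source idea in general terms – this is a plain C++/OpenMP sketch, not the Xcelerit SDK API – the kernel below is written once as an ordinary functor; the same source can run on a multicore CPU today and be handed to an offload-capable compiler later without rewriting the algorithm.

```cpp
// Generic single-source sketch (hypothetical example, not Xcelerit SDK code):
// the SAXPY kernel is expressed once and parallelised with OpenMP; the same
// loop body could be retargeted to an accelerator by an offloading compiler.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Saxpy {
    float a;
    void operator()(const float* x, const float* y, float* out, std::size_t n) const {
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(n); ++i)
            out[i] = a * x[i] + y[i];
    }
};

int main() {
    const std::size_t n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f), out(n);
    Saxpy{3.0f}(x.data(), y.data(), out.data(), n);
    std::printf("out[0] = %.1f\n", out[0]);  // expect 5.0
    return 0;
}
```

Compiled with g++ -fopenmp, the kernel runs across CPU cores; the point is simply that the algorithm itself is written only once.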
Read the Full Story.
Over at the Intel Datastack Blog, Winston Saunders writes that, considering the rapidly expanding efficiency and performance of supercomputing systems, it may be time to upgrade for the electricity savings alone.
You can see system-level annualized energy costs in the figure. From there it is fairly straightforward to calculate a payback time for replacing inefficient servers. It’s interesting that the return-on-investment times show up as vertical lines, and astounding that they are so short – in several cases, less than a year!
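As a rough back-of-the-envelope version of that payback calculation – every number below is a made-up placeholder, not a figure from Saunders’ chart – the arithmetic looks something like this:

```cpp
// Hypothetical payback estimate: retire several older servers in favour of
// fewer, more efficient ones delivering the same throughput. All values here
// are assumed placeholders, not data from the article.
#include <cstdio>

int main() {
    const int    old_servers      = 10;     // assumed: servers being retired
    const int    new_servers      = 2;      // assumed: replacements with equal total throughput
    const double old_power_kw     = 0.40;   // assumed per-server draw (old), kW
    const double new_power_kw     = 0.30;   // assumed per-server draw (new), kW
    const double pue              = 1.8;    // assumed datacenter overhead factor
    const double price_per_kwh    = 0.10;   // assumed electricity price, USD
    const double hours_per_year   = 8760.0;
    const double replacement_cost = 8000.0; // assumed total purchase price, USD

    const double saved_kw       = old_servers * old_power_kw - new_servers * new_power_kw;
    const double annual_savings = saved_kw * pue * hours_per_year * price_per_kwh;
    const double payback_years  = replacement_cost / annual_savings;

    std::printf("Annual energy savings: $%.0f\n", annual_savings);
    std::printf("Payback time: %.1f years\n", payback_years);
    return 0;
}
```

How quickly the replacement pays for itself depends almost entirely on the consolidation ratio and the local energy price, which is what makes the short payback times in the post so striking.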
Read the Full Story.
A high-performance server cluster is enabling researchers at the Institute for Computational Cosmology (ICC), based at Durham University, and throughout the wider UK astrophysics community to better understand the universe by allowing them to model phenomena ranging from solar flares to the formation of galaxies.
The cluster is part of the DiRAC (Distributed Research using Advanced Computing) national facility. As such, members of the UKMHD consortium, ICC members and their national and international collaborators also use the cluster. In total, the cluster is used by researchers at universities in the UK including Leeds, Liverpool, Manchester, St Andrews, Sussex and Warwick, and from abroad by people in Australia, China, Germany and the Netherlands.
The cluster is known as The Cosmology Machine (Cosma) and is a combination of Cosma5, a new IBM and DDN technology infrastructure integrated with Durham University’s existing cluster, Cosma4 (originally installed in January 2011).
Boosted by the new infrastructure, Cosma now has 9,856 CPU cores and 4,096 GPU cores, 71,000 gigabytes (GB) of RAM, and a peak performance of 182 teraflops. It also provides 3.5 petabytes of storage for the data produced by cosmology applications.
The server cluster and storage were designed, built and installed by OCF, Durham University’s data processing, data management and storage partner, which will also support the system.
Today Cray introduced the Cray XC30-AC supercomputer as an air-cooled addition to its series of Cray XC30 (Cascade) systems. Shipping now, the new Cray XC30-AC supercomputer includes all of the advanced HPC technologies offered in the Cray XC30 system, and features aggressive price points intended to attract a new class of HPC users – the technical enterprise.
“Innovation is not limited to Fortune 100 companies. There are many Fortune 1000 companies, and even departments within Fortune 100 companies, with a growing need for a supercomputing system that provides a critical tool for performing complex simulations,” said Peg Williams, Cray’s senior vice president of high performance computing systems. “With all of the features and functionality of our high-end Cray XC30 systems, our new Cray XC30-AC supercomputer is perfectly suited for technical enterprise customers, giving them the ability to leverage all of the world-class computational resources of a Cray supercomputer at much lower starting price points.”
In case you’re wondering, the Cray XC30-AC does not incorporate Appro technology. Cray acquired Appro late last year, and that company was known for its innovative system cooling.
With prices starting at $500,000, the Cray XC30-AC features the same key traits as the Cray XC30 system – the Aries system interconnect and the Cray Linux Environment. The system has the ability to handle a wide variety of processor types, including Intel Xeon processors, Intel Xeon Phi coprocessors, and NVIDIA Tesla GPU accelerators.
“It’s hard to understate the importance of Gordon Bell to supercomputing as we know it today. While he was known as an architect and as an entrepreneur, for me personally his great charm and greatest contribution has been his ability to understand and then communicate in a very pithy, often funny and understandable manner very deep or complex trends in computing – for example, comments attributed to him include ‘the network becomes the system’ or ‘the most reliable components are the ones you leave out,’ which often popped into my head this past year as we struggled with integrating a 20PF system,” said Michel McCoy, head of LLNL’s Advanced Simulation and Computing Program. “He has also been a part of the Lab’s history in supercomputing, showing us today that his passion for supercomputers and his belief in their importance in advancing human civilization is undiminished.”
In a guest lecture, Bell used his own “Bell’s Law of Computer Classes,” the subject of a 1972 article he authored, as the framework for discussing the evolution of supercomputing since the 1960s. The emergence in the 60s of a new, lower cost computer class based on microprocessors formed the basis of Moore’s Law. Bell posited that advances in semiconductor, storage and network technologies brought about a new class of computers every decade to fulfill a new need. Classes include: mainframes (1960s), minicomputers (1970s), networked workstations and personal computers (1980s), browser-web-server structure (1990s), palm computing (1995), web services (2000s), convergence of cell phones and computers (2003), and Wireless Sensor Networks aka motes (2004).
Read the Full Story.
The inauguration of the computer in the town of Kajaani this week brought together representatives of the European HPC community, which is hoping that the machine will provide researchers with extremely high performance computing capability and pave their way towards scientific innovations.
Sisu will offer researchers resources to investigate such subjects as nanotechnology, fusion energy and climate change. At the second stage of the installation, in 2014, Sisu’s computing power will reach the petaflop class – capable of one quadrillion floating point operations per second.
“As a part of Datacenter CSC Kajaani, the new supercomputer supports the Ministry’s goal of Finland being in the vanguard of knowledge by the year 2020. Finnish researchers will have access to a state-of-the-art research infrastructure that will also support the internationalisation of research,” said Riitta Maijala from the Finnish Ministry of Education and Culture.
CSC’s new supercomputer Sisu is the first Cray XC30 server in production in Europe. The processors are provided by Intel.
In this video from the 2013 Open Fabrics Developer Workshop, Mark Seager from Intel presents: Criteria for a Scalable Architecture.
Today Indiana University unveiled the Big Red II supercomputer, a hybrid petascale Cray system.
“There are other universities that hold legal title to computers as fast or faster than Big Red II, but IU is the first in the world to have its own one petaFLOPS supercomputer as a dedicated university resource,” said Craig Stewart, IU Pervasive Technology Institute executive director and associate dean of research technologies. “Big Red II will be used by IU, for IU, to support IU’s activities in the arts, humanities and sciences, and to support the economic development of Indiana, without any constraints from an outside funding agency.”
The new system is a next-generation Cray XK supercomputer, specifically crafted for IU’s needs. Housed in the university’s state-of-the-art Data Center, Big Red II has more than 21,000 computer processor cores (compared to Big Red’s 4,100). Big Red II will support big data applications in computational research. To further advance Big Data research, IU is also implementing a new disk storage system called the Data Capacitor II (DCII), a five petabyte, high speed/high bandwidth storage system.
Read the Full Story.
Nine months after SuperMUC’s inauguration, an agreement has been sealed for a planned system expansion to be completed by the end of 2014 or early 2015. The upgrade of the LRZ supercomputer, which currently delivers a peak performance of 3.185 petaflops and holds position 6 on the Top500 list, will boost the system’s performance by a factor of about two, making it capable of 6.4 petaflops.
The contract for SuperMUC Phase II was signed by representatives of all parties involved: Arndt Bode of the Leibniz Supercomputing Centre (LRZ), Karl-Heinz Hoffmann (chair of Bayerische Akademie der Wissenschaften), Martina Koederitz (general manager of IBM Germany), and Andreas Pflieger (IBM) in the presence of Wolfgang Heubisch and Georg Antretter representing the Bavarian State Ministry of Sciences, Research and the Arts.
The agreement states that 74,302 Intel-Xeon processor cores will be added to the existing 155,656 processor cores of SuperMUC. Its main memory will be expanded from 340 to 538 terabytes and 9 petabytes of intermediate storage will complement the system’s existing capacity of 10 petabytes.
The LRZ HPC system has been designed for exceptionally versatile deployment. The more than 150 different applications running on SuperMUC on average per year range from solving problems in physics and fluid dynamics to a wealth of other scientific fields, such as aerospace and automotive engineering, medicine and bioinformatics, astrophysics and geophysics amongst others.
Professor Bode is confident that SuperMUC Phase II will be running as stably and reliably as the current system has done from day one – and that it will scale to the large number of cores.
“Only shortly after starting operation, SuperMUC was working to full capacity. Already there are applications that use practically the entire system, and they do this in a very efficient way. Especially in the realm of biology and the life sciences, we expect significantly higher demand for system performance in the foreseeable future. SuperMUC Phase II will be in an excellent position to meet these requirements,” said Bode.
In retrospect, Roadrunner could be viewed as something of a design cul-de-sac, created by the artificial goal of the petaflop milestone. But it’s notable that even in the contrived race to a quadrillion flops, something of worth endured. Although the PowerXCell 8i was a commercial dead end, x86/accelerator combo servers took off and are now sold by every HPC system vendor, IBM included. For the time being, accelerators offer the only commodity-based technology that delivers multiple petaflops of supercomputing in reasonable power envelopes, not to mention tiny systems with multi-teraflops capability. The energy efficiency of these accelerators, compared with standard processors, is driving the technology into mainstream HPC and is stretching the number of FLOPS that can be squeezed into a datacenter or a deskside cluster.
Read the Full Story.
In this video from Moabcon 2013, Bill Kramer from NCSA presents: Blue Waters and Resource Management – Now and in the Future.
Looking for a computer that can really take some punishment? Then look no farther than GE’s line of ruggedized systems that are designed to handle temperature extremes and shocks up to 40G.