This week Mellanox announced that its end-to-end FDR InfiniBand technology is powering the Stampede supercomputer at the Texas Advanced Computing Center (TACC). As the most powerful supercomputing system in the NSF XSEDE program, the 10 Petaflop Stampede system integrates thousands of Dell servers and Intel Xeon Phi coprocessors with Mellanox FDR 56Gb/s InfiniBand SwitchX-based switches and ConnectX-3 adapter cards.
“The InfiniBand network was easy to deploy and delivers incredible application performance on a consistent basis,” said Tommy Minyard, director of Advanced Computing Systems at TACC. “Utilizing Mellanox FDR 56Gb/s InfiniBand provides us with extremely scalable, high performance — a critical element as Stampede is designed to support hundreds of computationally- and data-intensive science applications from around the United States and the world.”
Stampede supports national scientific research into weather forecasting, climate modeling, drug discovery and energy exploration and production. Read the Full Story.
Our Video Sunday feature continues with this time-lapse movie of the construction of NCSA’s Blue Waters supercomputer and the National Petascale Computing Facility. NCSA launched Blue Waters this week in an official dedication ceremony.
The 683,000-pound computer has a sustained speed of more than 1 petaflop, or more than 1 quadrillion calculations per second. It is built from more than 235 Cray XE6 cabinets and more than 30 cabinets of the Cray XK6 supercomputer with NVIDIA Tesla GPU computing capability, all housed in the National Petascale Computing Facility off Oak Street in Champaign.
The new book on Intel Xeon Phi coprocessor programming by Jim Jeffers and James Reinders benefits software engineers, scientific researchers, and supercomputing developers in need of high-performance computing resources by:
Providing a guide to exploiting the parallel power of the Intel Xeon Phi coprocessor for high-performance computing
Presenting best practices for portable, high-performance computing and a familiar and proven threaded, scalar-vector programming model
Including simple but informative code examples that explain the unique aspects of this new highly parallel, high-performance computational product (a rough sketch in that spirit appears after this list)
Covering wide vectors, many cores, many threads, and high-bandwidth cache/memory architecture
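The programming model the authors describe is essentially standard C/C++ or Fortran plus threads and vectorization, so the same code can run on a Xeon host or natively on the coprocessor. As a rough illustration of that style (this sketch is mine, not taken from the book, and the compile lines are only indicative), a simple SAXPY kernel might look like this:

/* saxpy_phi.c -- illustrative sketch only, not code from the book.
 * Possible builds (flags indicative, not exact):
 *   icc -O3 -openmp -mmic saxpy_phi.c    (native Xeon Phi binary)
 *   gcc -O3 -fopenmp saxpy_phi.c         (ordinary host binary)
 * Requires an OpenMP 4.0-capable compiler for the "simd" clause.
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

/* y = a*x + y: OpenMP threads split the iteration space across the
 * many cores, and the simd clause asks the compiler to vectorize each
 * thread's chunk across the wide vector units. */
static void saxpy(float a, const float *x, float *y, size_t n)
{
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    size_t n = 1 << 24;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (size_t i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    double t0 = omp_get_wtime();
    saxpy(2.0f, x, y, n);
    double t1 = omp_get_wtime();

    printf("threads=%d  time=%.4fs  y[0]=%.1f\n",
           omp_get_max_threads(), t1 - t0, y[0]);
    free(x);
    free(y);
    return 0;
}

The same kind of loop can also be run in Intel's offload model by annotating it with offload pragmas, one of the approaches the book covers.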
I got my hands on a preliminary copy of the book back in November at SC12, and I can tell you that Jim and James did a great job.
The book release coincides with the formal dedication of the Stampede supercomputer at the Texas Advanced Computing Center in Austin. Stampede is currently ranked number seven on the TOP500 list, with over 6,400 Intel Xeon Phi coprocessors. Jeffers and Reinders have committed several hundred copies of the book to support TACC’s training efforts for Stampede.
In this time-lapse video, the Mira supercomputer is assembled at Argonne National Laboratory. With an amazing 1,024 nodes per rack, the 10 Petaflop IBM Blue Gene/Q system was reportedly brought up and running production codes in record time.
This week SGI announced that Total has selected SGI ICE X technology for its new 2.3 Petaflop Pangea supercomputer. Described as the largest commercial HPC system in the world, Pangea will give Total’s in-house engineers and geologists an extremely powerful tool for applying the analytical and numerical models behind three-dimensional visualizations of underground geological formations, which are key to identifying potential oil and gas deposits and determining optimal extraction methods.
“Total is committed to leveraging technological innovation and high performance computing to provide the best response to growing global energy demand,” said Philippe Malzac, CIO of Exploration and Production at Total. “The efficiency of the SGI ICE X system, which represents high computational power using a minimal amount of energy, gives Total the smallest footprint and lowest TCO possible. This was a key factor in our selection of SGI ICE X for the Pangea system.”
To maximize energy efficiency, Total selected an innovative water-cooled SGI ICE X solution based on its M-Cell design. M-Cells utilize closed-loop airflow and warm-water cooling to create embedded hot-aisle containment, thereby lowering overall cooling requirements and significantly reducing energy consumption compared to traditional HPC designs. The 2.3 Petaflop system is based on the Intel Xeon E5-2670 processor, spanning 110,592 cores and 442 terabytes of memory. The data management solution for seven petabytes of storage includes SGI InfiniteStorage 17000 disk arrays, SGI DMF tiered storage virtualization, and a Lustre file system integrated by SGI professional services.
The Navy DoD Supercomputing Resource Center (DSRC) recently added three new IBM iDataPlex supercomputers to its operations. Located at the John C. Stennis Space Center in Mississippi, all three machines are named after NASA astronauts who have served in the Navy. At a dedication ceremony in February, one of those computers was dedicated in honor of naval aviator and Apollo 13 astronaut Fred Haise, who attended the ceremony. The other two IBM systems are named for retired Navy Cmdr. Susan Still Kilrain, a naval aviator and space shuttle pilot, and retired Navy Capt. Eugene Cernan, a naval aviator and the last man to set foot on the moon.
Installation of the new systems expanded the installed supercomputing capability of the Navy DSRC, which now peaks at 866 trillion floating-point operations per second (866 teraFLOPS). Future upgrades are expected to further increase that capacity to 5,200 teraFLOPS by 2016.
“The Navy DSRC provides unique value within our supercomputing system,” observed John West, HPCMP director. “In addition to serving the users from the research, development, test and evaluation communities of the department served by all of our centers, the Navy DSRC has a unique mission to assist the Navy in delivering wind, wave and other oceanographic forecasts to the fleet on a 24/7 basis. We are proud of the work of our partners, and the men and women of the Navy DSRC, that have brought this added capability online for the department.”
The Navy DSRC is one of five supercomputing centers in the Department of Defense High Performance Computing Modernization Program (HPCMP). John West, the current HPCMP director, is the founder and former owner of insideHPC. Read the Full Story.
Today ClusterVision announced the installation of a 200 Teraflop supercomputer at the University of Paderborn. With 614 compute nodes and 10,000 cores, the hybrid system will run a wide range of commercial and open source HPC applications in technology and science. As a hybrid system, it also includes 32 NVIDIA K20 GPUs and 8 Intel Xeon Phi coprocessors, providing an additional 40 Teraflops of compute power.
“This system is a powerful compute resource for all researchers in the region of East Westphalia and Lippe, and our partners in Germany and Europe,” said Prof. Dr. Holger Karl, head of the PC2 board.
With a system interconnect powered by Mellanox QDR InfiniBand, the Paderborn cluster uses Dell PowerVault MD3200 storage components running the FraunhoferFS (FhGFS) parallel file system. Read the Full Story.
Today Fujitsu announced the start of operations of the purpose-built Atacama Compact Array (ACA) Correlator supercomputer system, which will process data from the Atacama Large Millimeter/submillimeter Array (ALMA) project, a Chile-based radio telescope featuring unprecedented sensitivity and resolution.
“With the observations from ALMA, we hope to gain insights into such mysteries as how galaxies have formed and evolved, how planetary systems orbiting around a Sun-like star are formed, and whether the origin of life is to be found in the universe. The data processing performed by the ACA Correlator system is essential for these types of radio astronomy research. I am confident that ALMA will open new horizons for astronomy.”
Fujitsu and the National Astronomical Observatory of Japan (NAOJ) worked together to develop the ACA Correlator, a purpose-built supercomputer responsible for processing data from the Atacama Compact Array, which can make high sensitivity observations.
Set at 5,000 meters above sea level in the Chilean Andes, ALMA is a massive radio telescope developed through a partnership among East Asia (led by NAOJ), North America and Europe. The telescope is capable of producing astronomical radio wave images with the world’s highest resolution. The facility consists of 66 antennas arranged in an 18.5 km-diameter array, equivalent to the span of the Yamanote railway loop encircling the central part of Tokyo. By processing the millimeter/submillimeter wave signals from each antenna, the antennas can act as a single, giant telescope that generates radio wave images with the same resolution as those produced by a massive 18.5 km-diameter parabolic antenna. This makes it possible to see the dark regions of the universe that cannot be observed at optical wavelengths, such as galaxies that were formed shortly after the beginning of the universe, the birth of stars, planetary systems like our solar system, and matter related to the origin of life, such as organic molecules.
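At the heart of that combining step is correlation: the correlator multiplies and accumulates the digitized signals from every pair of antennas at a range of relative delays, and those correlation products are later turned into images. The following toy sketch is purely illustrative, with made-up sample counts and no relation to the ACA Correlator’s actual algorithms or data formats; it shows the basic operation for a single antenna pair:

/* Toy cross-correlator for one antenna pair -- illustrative only,
 * not the ACA Correlator's actual algorithm or data format. */
#include <stdio.h>

#define NSAMP 1024   /* samples per integration (hypothetical) */
#define NLAG  8      /* number of lags to compute (hypothetical) */

/* Correlate signals a and b at lags 0..NLAG-1, averaging over the
 * overlapping samples. Real correlators do this for every antenna
 * pair, typically in the frequency domain. */
void correlate(const float *a, const float *b, float *out)
{
    for (int lag = 0; lag < NLAG; lag++) {
        double sum = 0.0;
        for (int i = 0; i < NSAMP - lag; i++)
            sum += a[i] * b[i + lag];
        out[lag] = (float)(sum / (NSAMP - lag));
    }
}

int main(void)
{
    float a[NSAMP], b[NSAMP], out[NLAG];

    /* Fake signals: b is a copy of a delayed by 3 samples, so the
     * correlation should peak at lag 3. */
    for (int i = 0; i < NSAMP; i++)
        a[i] = (float)(i % 7) - 3.0f;
    for (int i = 0; i < NSAMP; i++)
        b[i] = (i >= 3) ? a[i - 3] : 0.0f;

    correlate(a, b, out);
    for (int lag = 0; lag < NLAG; lag++)
        printf("lag %d: %f\n", lag, out[lag]);
    return 0;
}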
A ceremony was held in Chile to commemorate the inauguration of ALMA on March 13. Read the Full Story.
Stampede is one of the largest computing systems in the world for open science research. Stampede system components are connected via a fat-tree, FDR InfiniBand interconnect. One hundred and sixty compute racks house compute nodes with dual, eight-core sockets, and feature the new Intel Xeon Phi coprocessors. Additional racks house login, I/O, big-memory, and general hardware management nodes. Each compute node is provisioned with local storage. A high-speed Lustre file system is backed by 76 I/O servers.
One of the missions of the National Renewable Energy Laboratory (NREL) is to advance renewable energy research. So when it came time to build their new HPC datacenter, they decided to “walk the talk” and push the limits of energy-efficient supercomputing.
Well, so far, so good. With the first petascale system to use warm-water liquid cooling and reach an annualized average power usage effectiveness (PUE) rating of 1.06 or better, the new HPC data center ranks among the most efficient in the world.
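For context, PUE is simply the total energy drawn by the facility divided by the energy delivered to the IT equipment, so a 1.06 rating means only about 6% overhead for cooling, power distribution and the like. A back-of-the-envelope sketch with invented meter readings (not NREL’s actual data):

/* Back-of-the-envelope PUE calculation -- the readings below are
 * invented for illustration, not NREL's actual metering data. */
#include <stdio.h>

int main(void)
{
    /* Annualized energy use in MWh (hypothetical numbers). */
    double it_energy_mwh      = 8000.0;  /* servers, storage, network */
    double cooling_energy_mwh =  300.0;  /* pumps, fans, heat rejection */
    double facility_other_mwh =  180.0;  /* distribution losses, lighting */

    double total = it_energy_mwh + cooling_energy_mwh + facility_other_mwh;
    double pue   = total / it_energy_mwh;

    printf("PUE = %.2f\n", pue);  /* 8480 / 8000 = 1.06 */
    return 0;
}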
“We took an integrated approach to the HPC system, the data center, and the building as part of the ESIF project,” said Steve Hammond, NREL’s Computational Science Center Director. “First, we wanted an energy-efficient HPC system appropriate for our workload. This is being supplied by HP and Intel. A new component-level liquid cooling system, developed by HP, will be used to keep computer components within safe operating range, reducing the number of fans in the backs of the racks.”
The first phase of the HPC installation began in November 2012, and the system will reach full capacity in the summer of 2013. Read the Full Story.
The JUQUEEN supercomputer at Jülich in Germany is now the most powerful supercomputer in Europe. With 458,752 compute cores in 28 racks, the 6 Petaflop system is now fully operational.
“JUQUEEN is targeted to tackle comprehensive and complex scientific questions, called Grand Challenges,” explains Prof. Thomas Lippert, Director of JSC. “Projects from various scientific areas can profit from the system’s performance, e.g. in the areas of neuroscience, computational biology, energy, or climate research. Moreover, it enables complicated calculations in quantum physics which were not possible before.”
The JUQUEEN supercomputer is also extremely efficient, with a performance/power ratio of approximately 2 Gigaflops per Watt, ranking it #5 on the Green500 list. Read the Full Story.
Today Asetek announced that the company’s liquid cooling technology has been chosen by the University of Tromsø in Norway for a pilot installation at the university’s HPC facility. Asetek’s technology will reduce energy consumption of the data center and enable waste heat from servers to heat the university campus.
As the northernmost university in the world, UiT’s campus needs constant heating for its buildings year-round. Climate change, the exploitation of resources, and environmental threats were all factors considered when developing the new data center. Asetek’s RackCDU can take advantage of free outdoor ambient air cooling in almost any climate in the world, and this is especially true for Tromsø due to its unique geographical position. No power will be used to actively chill the water, and the heated liquid from the data center servers, which would otherwise be wasted, will be used to heat the university campus.
Asetek’s RackCDU is a hot water, direct-to-chip, data center liquid cooling system that enables cooling energy savings exceeding 50% and density increases of 2.5x when compared to modern air cooled data centers. RackCDU removes heat from server components, memory modules and other hot spots within servers and takes it all the way out of the data center using liquid. Read the Full Story.
NCSA has gifted the Institute for Genomic Biology a highly parallel, shared-memory supercomputer. Named Ember, the SGI system has become part of the IGB biocluster, adding 1,536 cores and eight terabytes of memory spread across four nodes.
“We’ve been using Ember for a while now through the NCSA, mainly in computational genomics,” said Victor Jongeneel, Director of HPCBio. “It can perform a lot of tasks that our existing systems just can’t. Having it under our own management will allow us better access and faster results.”
Ember was installed at the IGB after being decommissioned by NCSA in October. The two-year-old Ember system will be available to anyone on campus for a service fee, which will be placed in a fund to replace the infrastructure as it becomes dated. Read the Full Story.
This week Mellanox announced that India’s Centre for Development of Advanced Computing (C-DAC) is using the company’s end-to-end FDR 56Gb/s InfiniBand solutions for PARAM Yuva – II, the fastest supercomputer in India. As the premier R&D organization of the Department of Electronics and Information Technology, C-DAC chose Mellanox’s robust, high-speed interconnect solution due to its performance, scalability, low power consumption, and high-efficiency data handling.
“C-DAC’s HPC programs are focused on creating an ecosystem to derive full benefits from HPC systems for addressing grand challenge problems and advancing fundamental science, research and industrial competitiveness,” said Dr. Pradeep Sinha, Senior Director, High Performance Computing at C-DAC. “Utilizing Mellanox FDR 56Gb/s InfiniBand interconnect solutions, the new PARAM Yuva – II cluster can provide our users with superior application performance to further research and development.”
The 360 Teraflop Yuva II was built in conjunction with Netweb Technologies using Tyrone-based servers. Read the Full Story.
Technical computing provider SGI has announced that Total has selected it to provide a high-performance computing solution capable of delivering compute power of 2.3 petaflops.
The system will use the latest generation of SGI ICE X servers, and greatly increases the data processing power previously available to Total at its Jean Féger Scientific and Technical Centre (CSTJF) in Pau, southwest France.
The aim of the new system is to help Total identify and develop new oil and gas prospects.
“The need for compute-intensive data processing in the oil and gas industry is constantly increasing,” said SGI interim CEO Ron Verdoorn. “With data files exceeding 10 petabytes, technological innovation for reservoir modelling and simulation relies not only on compute architectures but also on storage architectures. Within this framework, SGI offers a complete integrated solution including compute, storage, and services.”