In this video from the 2013 HPC User Forum, Burak Yenier presents: The HPC Experiment – Paving the way to HPC as a Service.
For the second round of the HPC Experiment, we will apply the cloud computing service model to workloads on remote cluster computing resources in the areas of HPC, Computer Aided Engineering, and the Life Sciences.
In related news, the HPC Experiment site has just added an online exhibit area as a one-stop interactive service directory for cloud users and service providers, with a focus on High Performance Computing, Big Data, Digital Manufacturing, and Computational Life Sciences.
Over at Brendan’s Blog, Brendan Gregg writes that response time – or latency – is crucial to understand in detail, but many of the common presentations of this data hide important details and patterns.
When I/O latency is presented as a visual heat map, some intriguing and beautiful patterns can emerge. These patterns provide insight into how a system is actually performing and what kinds of latency end-user applications experience. Many characteristics seen in these patterns are still not understood, but so far their analysis is revealing systemic behaviors that were previously unknown.
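To make the visualization concrete, here is a minimal Python sketch of the technique, using synthetic, made-up latency distributions rather than Gregg’s actual traces or tooling: samples are binned into a 2D histogram with time on the x-axis and latency on the y-axis, and color encodes how many I/Os landed in each bin.

```python
# Minimal sketch: render I/O latency samples as a time-vs-latency heat map.
# Synthetic data stands in for real trace output; distributions are invented.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Simulate 60 seconds of I/O with a bimodal latency distribution
# (cache hits near 0.2 ms, disk reads near 8 ms) - exactly the kind of
# pattern that averages hide and heat maps reveal.
n = 50_000
t = rng.uniform(0, 60, n)                        # completion time (s)
lat = np.where(rng.random(n) < 0.8,
               rng.normal(0.2, 0.05, n),         # fast path (ms)
               rng.normal(8.0, 1.5, n))          # slow path (ms)
lat = np.clip(lat, 0.01, None)

# Bin into a 2D histogram: x = time, y = latency, color = I/O count per bin.
counts, xedges, yedges = np.histogram2d(t, lat, bins=[120, 100])

plt.pcolormesh(xedges, yedges, counts.T, cmap="hot")
plt.xlabel("time (s)")
plt.ylabel("I/O latency (ms)")
plt.colorbar(label="I/O count")
plt.title("Latency heat map")
plt.show()
```

Plotted this way, the two latency bands show up as distinct horizontal stripes, something a single average-latency line graph would flatten into one meaningless number.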
While we may not get to Exascale by 2020, ground-breaking compute technologies for the SKA telescope are already under development (without involvement of the U.S. Government, by the way). In this video from the 2013 HPC User Forum, Ronald P. Luijten from IBM Research presents: The IBM-DOME Microserver Demonstrator.
“The computational and storage demands for the future Square Kilometre Array (SKA) radio telescope are significant. Building on the experience gained with the collaboration between ASTRON and IBM with the Blue Gene based LOFAR correlator, ASTRON and IBM have now embarked on a public-private exascale computing research project aimed at solving the SKA computing challenges. This project, called DOME, investigates novel approaches to exascale computing, with a focus on energy-efficient streaming data processing, exascale storage, and nano-photonics. DOME will not only benefit the SKA, but will also make the knowledge gained available to interested third parties via a Users Platform. The intention of the DOME project is to evolve into the global center of excellence for transporting, processing, storing and analyzing large amounts of data for minimal energy cost.”
Over at ExtremeTech, Joel Hruska writes that the daunting challenges of achieving exascale compute levels by the end of the decade were brought home recently in a presentation by Horst Simon, Deputy Director of Lawrence Berkeley National Laboratory. In fact, Simon has wagered $2,000 of his own money that we won’t get there by 2020.
But here’s the thing: What if the focus on “exascale” is actually the wrong way to look at the problem?
FLOPS has persisted as a metric in supercomputing even as core counts and system density have risen, but the peak performance of a supercomputer may be a poor measure of its usefulness. The ability to efficiently utilize a subset of the system’s total performance capability is extremely important. In the long term, performing FLOPS is cheap compared to moving data across nodes, so taking advantage of parallelism and locality becomes even more important. Keeping data local is a better way to save power than spreading the workload across nodes, because as node counts rise, moving data between them consumes an increasing percentage of total system power.
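A back-of-envelope comparison makes the point concrete. The per-operation energy figures in this sketch are illustrative assumptions in the range often quoted for hardware of this era, not measurements of any real system:

```python
# Back-of-envelope sketch: why data movement, not FLOPS, dominates power.
# The picojoule figures below are illustrative assumptions, not measurements.
ENERGY_FLOP_PJ         = 50      # one double-precision flop
ENERGY_DRAM_WORD_PJ    = 1_000   # fetch one 64-bit word from local DRAM
ENERGY_NETWORK_WORD_PJ = 5_000   # move one 64-bit word to another node

def energy_uj(flops, dram_words, network_words):
    """Total energy in microjoules for a given mix of work."""
    pj = (flops * ENERGY_FLOP_PJ
          + dram_words * ENERGY_DRAM_WORD_PJ
          + network_words * ENERGY_NETWORK_WORD_PJ)
    return pj / 1e6

# Same one million flops, two placements of the 100,000 words they touch:
local  = energy_uj(flops=1_000_000, dram_words=100_000, network_words=0)
remote = energy_uj(flops=1_000_000, dram_words=0, network_words=100_000)
print(f"data kept local : {local:8.1f} uJ")   # 150.0 uJ
print(f"data off-node   : {remote:8.1f} uJ")  # 550.0 uJ
```

With these assumed figures, shipping the operands across the network costs several times the energy of the arithmetic itself, which is exactly why locality beats raw FLOPS as node counts grow.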
The Services Department Head (Computer Systems Manager II) will have the opportunity to lead an organization with a world-wide reputation for excellence and innovation. The Services Department serves as the primary point of contact for NERSC’s scientific users and is responsible for enhancing their scientific productivity. Key activities include supporting users through the transition to exascale-class architectures; providing services to optimize application performance; providing services to store, analyze, manage, and share data; understanding HPC architecture trends; benchmarking; user communication; user training; and requirements gathering.
Are you paying too much for your job ads? Not only do we offer ads for a fraction of what the other guys charge, our insideHPC Job Board is powered by SimplyHired, the world’s largest job search engine.
As a reminder, we are offering FREE job listings for .EDU and .GOV domains, so email us at: info @ insideHPC.com for a special discount code.
The fundamental unit of quantum computation is the “qubit”, the quantum analogue of the ordinary “bit” in a standard machine. Like ordinary bits, qubits can take the value of 1 or 0. Unlike ordinary bits, their quantum nature also lets them exist in a strange mixture—a “superposition”, in the jargon—of both states at once, much like Erwin Schrödinger’s famous cat. That means that a quantum computer can be in many states simultaneously, which in turn means that it can, in some sense, perform many different calculations at the same time. To be precise, a quantum computer with four qubits could be in 2^4 (ie, 16) different states at a time. As you add qubits, the number of possible states rises exponentially. A 16-qubit quantum machine can be in 2^16, or 65,536, states at once, while a 128-qubit device could occupy 3.4 x 10^38 different configurations, a colossal number which, if written out in longhand, would have 39 digits. Having been put into a delicate quantum state, a quantum computer can thus examine billions of possible answers simultaneously.
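The counting is easy to check for yourself. This short Python sketch prints the number of basis states for each qubit count quoted above, and builds the 16-amplitude state vector for a 4-qubit equal superposition:

```python
# The state space of n qubits has 2**n basis states; a quantum state assigns
# a complex amplitude to each. This sketch shows only the counting argument.
import numpy as np

for n in (4, 16, 128):
    print(f"{n:3d} qubits -> {2**n:.6g} basis states")
# 4 qubits   -> 16
# 16 qubits  -> 65536
# 128 qubits -> 3.40282e+38  (the 39-digit number from the text)

# Classically simulating an equal superposition over 4 qubits takes 16
# complex amplitudes; measurement yields each outcome with probability
# |amplitude|^2 = 1/16.
n = 4
state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)
print(np.abs(state)**2)   # uniform 1/16 across all 16 outcomes
```

The exponential growth of that state vector is also why classical simulation of quantum machines breaks down quickly: each added qubit doubles the storage required.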
In this slidecast, Scott Gnau from Teradata Labs presents: Teradata Intelligent Memory.
“The introduction of Teradata Intelligent Memory allows our customers to exploit the performance of memory within Teradata Platforms, which extends our leadership position as the best performing data warehouse technology at the most competitive price,” said Scott Gnau, president, Teradata Labs. “Teradata Intelligent Memory technology is built into the data warehouse and customers don’t have to buy a separate appliance. Additionally, Teradata enables its customers to buy and configure the exact amount of in-memory capability needed for critical workloads. It is unnecessary and impractical to keep all data in memory, because all data do not have the same value to justify being placed in expensive memory.”
How does Intelligent Memory work? This animated video does a good job of making this advanced technology look simple.
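For readers who prefer code, here is a hedged sketch of the general idea behind temperature-based data placement: track how often each block is touched and keep only the hottest blocks in RAM. This illustrates the concept only; it is not Teradata’s actual algorithm.

```python
# Sketch of "temperature"-based tiering: the hottest N blocks live in
# memory, everything else stays on disk. Illustrative only.
from collections import Counter

class TemperatureTier:
    def __init__(self, memory_slots):
        self.memory_slots = memory_slots   # how many blocks fit in RAM
        self.heat = Counter()              # access count per block id
        self.in_memory = set()

    def access(self, block_id):
        self.heat[block_id] += 1
        self._rebalance()
        tier = "memory" if block_id in self.in_memory else "disk"
        return f"block {block_id} served from {tier}"

    def _rebalance(self):
        # Promote the N most frequently accessed blocks into memory.
        self.in_memory = {b for b, _ in
                          self.heat.most_common(self.memory_slots)}

tier = TemperatureTier(memory_slots=2)
for b in ["a", "a", "a", "b", "b", "c", "a"]:
    print(tier.access(b))
# "a" and "b" quickly become the hot set; "c" keeps hitting disk.
```

The point of the quote above falls out of this model directly: sizing `memory_slots` to the hot working set buys most of the performance without paying to keep cold data in expensive memory.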
SC13, the international conference for high-performance computing, networking, storage and analysis, is accepting nominations for three distinguished awards that will be presented at the conference in November.
The IEEE Seymour Cray Computer Engineering Award, the IEEE Sidney Fernbach Memorial Award and the ACM/IEEE Ken Kennedy Award will be announced at SC13, to be held from 17 to 22 November at the Colorado Convention Center in Denver, Colorado. Nominations should be made via the SC13 website.
Established in 1997, the IEEE Computer Society Seymour Cray Computer Engineering Award recognises innovative contributions to high-performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. Previous winners have been recognised for design, engineering and intellectual leadership in creating innovative and successful HPC systems.
The IEEE Computer Society Sidney Fernbach Award was established in 1992 in honour of Sidney Fernbach, one of the pioneers in the development and application of high-performance computers for solving large computational problems. Nominations that recognise creation of widely-used and innovative software packages, application software and tools are especially solicited. The Fernbach award winner receives a certificate and $2,000.
The ACM/IEEE Ken Kennedy Award, established in 2009, is presented for outstanding contributions to programmability or productivity in computing, together with significant community service or mentoring contributions. The award was established in memory of Ken Kennedy, the founder of Rice University’s nationally ranked computer science program and one of the world’s foremost experts on high-performance computing. Awardees receive a certificate and a $5,000 honorarium.
Over at the Washington Post, Jason Samenow writes that an infusion of funding into the National Weather Service from Hurricane Sandy relief legislation promises to facilitate massive upgrades to key supercomputers, dramatically improving local, national, and global weather forecasts.
“This is a breakthrough moment for the National Weather Service and the entire U.S. weather enterprise in terms of positioning itself with the computing capacity and more sophisticated models we’ve all been waiting for,” said Louis Uccellini, director of the National Weather Service.
The $23.7 million in improvements to NWS’s forecasting systems from the Sandy supplemental will facilitate a more than ten-fold increase in the capacity of the supercomputer running the GFS model, ramping compute capacity from 213 teraflops to 2,600 teraflops by the 2015 fiscal year. Read the Full Story.
The Colorado School of Mines has announced plans to install a new 155 teraflop hybrid IBM supercomputer dubbed “BlueM” to run large simulations in support of energy research. The new machine will be housed at NCAR’s Mesa Lab in Boulder and operate on Mines’ computing network.
The first supercomputer of its kind, BlueM features a dual-architecture system combining the IBM Blue Gene/Q and IBM iDataPlex platforms – the first time this configuration has been installed as a single system.
BlueM’s predecessor, RA, has been hugely successful but Mines has outgrown its 23 teraflops. BlueM will provide a greater number of flops dedicated to Mines faculty and students than are available at most other institutions with high performance machines. Researchers will be able to run higher fidelity simulations than in the past, get more time on the machine and break new ground in terms of algorithm development.
The HLRS High Performance Computing Center Stuttgart has signed up for a 4 petaflop Cray XC30 supercomputer. Scheduled for full deployment in 2014, the Hornet supercomputer will boast 100,000 compute cores, 500 TB of main memory, and about 6 PB of storage.
“The Cray ‘Hermit’ supercomputer has proven to be a highly valuable HPC resource for the broad HLRS user community as well as for scientists and researchers across Europe through the PRACE initiative, and we are excited that the Cray XC30 system will be a powerful successor,” says Dr. Ulla Thiel, Vice President Cray Europe. “The Hornet system will be one of the largest Cray XC30 supercomputers in the world, providing HLRS’ users, including engineers in the automotive and aerospace industries, with our most advanced supercomputing system. We have enjoyed a successful, long-term relationship with HLRS and we are very excited that our joint collaboration will continue.”
As with Hermit, the system expansion at HLRS is funded through the PetaGCS project with support from the Federal Ministry of Education and Research and the Ministry of Higher Education, Research and Arts Baden-Württemberg. Read the Full Story.