The Notur project pursues a clear and sustainable vision for a Norwegian infrastructure for High Performance Computing (HPC) and computational science. The vision of the project is to provide a modern, national HPC infrastructure in an internationally competitive setting, and to stimulate computational science as the third scientific path. The project serves the Norwegian computational science community by providing infrastructure to individuals and groups involved in education and research at Norwegian universities and colleges, as well as in research and engineering at research institutes and industrial companies that contribute to the funding of Notur.
The International Supercomputing Conference (ISC’13) is the largest and most significant conference and networking event in Europe for scientists, researchers, and vendors within the HPC community. Visit www.isc13.org for details.
Over at the Student Cluster Competition Blog, Dan Olds writes that the first Battle of Leipzig in 1813 can’t really compare to what’s coming in June: the Second Battle of Leipzig, aka the ISC’13 Klusterkampf. Now in its second year, the ISC Student Cluster Competition will feature nine teams of university undergrad students in a quest to build the fastest supercomputer on the show floor.
Chemnitz University of Technology is only 51 miles from Leipzig and thus the hometown favorite. The school, founded in 1836, was a fast starter in terms of science and technology. During one pre-1900 period, Chemnitz generated more patent registrations than any other institution in the world. Currently, the university is ranked near the top of German technical schools, a reputation they’ll be putting on the line at the student cluster challenge. Judging by their entry application, Chemnitz (or TUC, which stands for Technische Universität Chemnitz) has a deep HPC curriculum, including courses and research on FPGA and GPU application acceleration. Team TUC has partnered with German hardware vendor MEGWARE GmbH to build “TurboTUC,” the Schlachtkreuzer (battlecruiser) they hope to ride to victory.
Read the Full Story.
IDC will once again sponsor its annual HPC Breakfast Briefing at ISC’13 on June 18 in Hall 4 of the Leipzig Congress Center. The event will feature the latest HPC revenue numbers, market forecasts, and trends; international competition and initiatives; and exascale plans, as well as awards for ROI in high performance computing.
Attendance and full breakfast are complimentary, so be sure to Register Now.
In this video from the 2013 HPC User Forum, Burak Yenier presents: The HPC Experiment – Paving the way to HPC as a Service.
For the second round of the HPC Experiment, we will apply the cloud computing service model to workloads on remote cluster computing resources in the areas of HPC, Computer Aided Engineering, and the Life Sciences.
In related news, the HPC Experiment site has just added an online exhibit area as a one-stop interactive service directory for cloud users and service providers with a focus on High Performance Computing, Big Data, Digital Manufacturing, and Computational Life Sciences.
Over at Brendan’s Blog, Brendan Gregg writes that response time – or latency – is crucial to understand in detail, but many of the common presentations of this data hide important details and patterns.
When I/O latency is presented as a visual heat map, some intriguing and beautiful patterns can emerge. These patterns provide insight into how a system is actually performing and what kinds of latency end-user applications experience. Many characteristics seen in these patterns are still not understood, but so far their analysis is revealing systemic behaviors that were previously unknown.
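To make the idea concrete, here is a minimal sketch (not Gregg’s own tooling) that bins synthetic I/O completion events by time and latency and renders the counts as a heat map; the workload distributions and bucket sizes are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic I/O completions: mostly fast cache hits plus a slower disk tail.
rng = np.random.default_rng(42)
n = 20_000
t = rng.uniform(0, 60, n)                      # completion time (seconds)
lat = np.where(rng.random(n) < 0.9,
               rng.exponential(0.2, n),        # ~0.2 ms cache hits
               5 + rng.exponential(3.0, n))    # 5+ ms disk I/O tail

# Bin events into (time, latency) buckets; the count in each bucket
# becomes the pixel intensity of the heat map.
counts, xe, ye = np.histogram2d(t, lat, bins=[60, 50],
                                range=[[0, 60], [0, 20]])

plt.imshow(counts.T, origin="lower", aspect="auto",
           extent=[xe[0], xe[-1], ye[0], ye[-1]], cmap="hot")
plt.xlabel("time (s)")
plt.ylabel("latency (ms)")
plt.colorbar(label="I/O count per bucket")
plt.show()
```

Patterns such as a bimodal split between cache hits and disk reads, which are invisible in an average-latency line chart, show up immediately as separate bands.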
Read the Full Story.
While we may not get to Exascale by 2020, ground-breaking compute technologies for the SKA telescope are already under development (without involvement of the U.S. Government, by the way). In this video from the 2013 HPC User Forum, Ronald P. Luijten from IBM Research presents: The IBM-DOME Microserver Demonstrator.
The computational and storage demands for the future Square Kilometre Array (SKA) radio telescope are significant. Building on the experience gained in the collaboration between ASTRON and IBM on the Blue Gene-based LOFAR correlator, ASTRON and IBM have now embarked on a public-private exascale computing research project aimed at solving the SKA computing challenges. This project, called DOME, investigates novel approaches to exascale computing, with a focus on energy-efficient streaming data processing, exascale storage, and nano-photonics. DOME will not only benefit the SKA, but will also make the knowledge gained available to interested third parties via a Users Platform. The intention of the DOME project is to evolve into the global center of excellence for transporting, processing, storing, and analyzing large amounts of data for minimal energy cost.
Over at ExtremeTech, Joel Hruska writes that the daunting challenges of achieving exascale compute levels by the end of the decade were brought home recently in a presentation by Horst Simon, the Deputy Director at NERSC. In fact, Simon has wagered $2,000 of his own money that we won’t get there by 2020.
But here’s the thing: What if the focus on “exascale” is actually the wrong way to look at the problem?
FLOPS has persisted as a metric in supercomputing even as core counts and system density have risen, but the peak performance of a supercomputer may be a poor measure of its usefulness. The ability to efficiently utilize a subset of the system’s total performance capability is extremely important. In the long term, performing floating-point operations is cheap compared with moving data across nodes, so taking advantage of parallelism while keeping data local becomes even more important. Keeping data local is a better way to save power than spreading the workload across nodes, because as node counts rise, data movement consumes an increasing percentage of total system power.
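To see why, consider a back-of-the-envelope energy comparison. The per-operation figures below are illustrative order-of-magnitude assumptions in the spirit of circa-2013 exascale projections, not numbers from the article:

```python
# Energy cost of computing vs. moving data (illustrative figures only).
PJ = 1e-12
E_FLOP        = 100 * PJ    # one double-precision FLOP (assumed)
E_LOCAL_WORD  = 500 * PJ    # fetch an 8-byte word from local DRAM (assumed)
E_REMOTE_WORD = 5000 * PJ   # move an 8-byte word between nodes (assumed)

flops_per_word = 10         # assumed arithmetic intensity of the kernel

local  = flops_per_word * E_FLOP + E_LOCAL_WORD
remote = flops_per_word * E_FLOP + E_REMOTE_WORD

print(f"local data:  {local / PJ:.0f} pJ per word processed")
print(f"remote data: {remote / PJ:.0f} pJ per word processed")
print(f"fetching remotely costs {remote / local:.1f}x more energy")
```

Under these assumptions the arithmetic itself is a small fraction of the energy budget; the dominant cost is where the operands live, which is exactly why locality beats spreading work thin across more nodes.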
The Services Department Head (Computer Systems Manager II) will have the opportunity to lead an organization with a world-wide reputation for excellence and innovation. The Services Department serves as the primary point of contact for NERSC’s scientific users and is responsible for enhancing their scientific productivity. Key activities include supporting users through the transition to exascale-class architectures; providing services to optimize application performance; providing services to store, analyze, manage, and share data; understanding HPC architecture trends; benchmarking; user communication; user training; and requirements gathering.
Are you paying too much for your job ads? Not only do we offer ads for a fraction of what the other guys charge, our insideHPC Job Board is powered by SimplyHired, the world’s largest job search engine.
As a reminder, we are offering FREE job listings for .EDU and .GOV domains, so email us at: info @ insideHPC.com for a special discount code.
The fundamental unit of quantum computation is the “qubit”, the quantum analogue of the ordinary “bit” in a standard machine. Like ordinary bits, qubits can take the value of 1 or 0. Unlike ordinary bits, their quantum nature also lets them exist in a strange mixture—a “superposition”, in the jargon—of both states at once, much like Erwin Schrödinger’s famous cat. That means that a quantum computer can be in many states simultaneously, which in turn means that it can, in some sense, perform many different calculations at the same time. To be precise, a quantum computer with four qubits could be in 2^4 (ie, 16) different states at a time. As you add qubits, the number of possible states rises exponentially. A 16-qubit quantum machine can be in 2^16, or 65,536, states at once, while a 128-qubit device could occupy 3.4 x 10^38 different configurations, a colossal number which, if written out in longhand, would have 39 digits. Having been put into a delicate quantum state, a quantum computer can thus examine billions of possible answers simultaneously.
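The arithmetic is easy to check with a few lines of Python:

```python
# Number of basis states an n-qubit register can occupy in superposition.
for n in (4, 16, 128):
    states = 2 ** n
    print(f"{n:>3} qubits: 2**{n} = {states} ({len(str(states))} digits)")
```

Running this confirms the figures above: 16 states, 65,536 states, and for 128 qubits a 39-digit number of roughly 3.4 x 10^38.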
Read the Full Story.
We should probably note that while D-Wave systems are not quantum computers in the conventional gate-model sense, they do use quantum effects. How do they do it? Check out this paper on Quantum Annealing with More than One Hundred Qubits.
In this slidecast, Scott Gnau from Teradata Labs presents: Teradata Intelligent Memory.
“The introduction of Teradata Intelligent Memory allows our customers to exploit the performance of memory within Teradata Platforms, which extends our leadership position as the best performing data warehouse technology at the most competitive price,” said Scott Gnau, president, Teradata Labs. “Teradata Intelligent Memory technology is built into the data warehouse and customers don’t have to buy a separate appliance. Additionally, Teradata enables its customers to buy and configure the exact amount of in-memory capability needed for critical workloads. It is unnecessary and impractical to keep all data in memory, because all data do not have the same value to justify being placed in expensive memory.”
How does Intelligent Memory work? This animation video does a good job of making this advanced technology look simple.
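Teradata has not published the placement algorithm here, but the underlying idea of tracking the “temperature” of data and keeping only the hottest blocks in scarce memory can be sketched in a few lines. Everything below (names, the frequency-only ranking) is a hypothetical illustration, not Teradata’s implementation:

```python
from collections import Counter

class TemperatureTier:
    """Toy data-temperature tracker: the hottest blocks stay in memory,
    the rest remain on cheaper disk or SSD tiers. A hypothetical sketch,
    not Teradata's actual Intelligent Memory algorithm."""

    def __init__(self, memory_slots):
        self.memory_slots = memory_slots   # how many blocks fit in RAM
        self.access_counts = Counter()     # block_id -> access frequency

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def memory_resident(self):
        # The N most frequently touched blocks earn a place in memory.
        hottest = self.access_counts.most_common(self.memory_slots)
        return {block_id for block_id, _ in hottest}

tier = TemperatureTier(memory_slots=2)
for block in ["orders"] * 5 + ["customers"] * 3 + ["archive"]:
    tier.record_access(block)
print(tier.memory_resident())   # {'orders', 'customers'}; 'archive' stays cold
```

A real system would also age the counts over time so yesterday’s hot data can cool off, but the economics are the same: only data accessed often enough justifies a slot in expensive memory.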
SC13, the international conference for high-performance computing, networking, storage and analysis, is accepting nominations for three distinguished awards that will be presented at the conference in November.
The IEEE Seymour Cray Computer Engineering Award, the IEEE Sidney Fernbach Memorial Award and the ACM/IEEE Ken Kennedy Award will be announced at SC13, to be held from 17 to 22 November at the Colorado Convention Center in Denver, US. Nominations should be made via the SC13 website.
Established in 1997, the IEEE Computer Society Seymour Cray Computer Engineering Award recognises innovative contributions to high-performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. Previous winners have been recognised for design, engineering and intellectual leadership in creating innovative and successful HPC systems.
The IEEE Computer Society Sidney Fernbach Award was established in 1992 in honour of Sidney Fernbach, one of the pioneers in the development and application of high-performance computers for solving large computational problems. Nominations that recognise creation of widely-used and innovative software packages, application software and tools are especially solicited. The Fernbach award winner receives a certificate and $2,000.
The ACM/IEEE Ken Kennedy Award, established in 2009, is presented for outstanding contributions to programmability or productivity in computing, together with significant community service or mentoring contributions. The award was established in memory of Ken Kennedy, the founder of Rice University’s nationally ranked computer science program and one of the world’s foremost experts on high-performance computing. Awardees receive a certificate and a $5,000 honorarium.
Over at the Washington Post, Jason Samenow writes that an infusion of funding into the National Weather Service from Hurricane Sandy relief legislation promises to facilitate massive upgrades to key supercomputers, dramatically improving local, national, and global weather forecasts.
“This is a breakthrough moment for the National Weather Service and the entire U.S. weather enterprise in terms of positioning itself with the computing capacity and more sophisticated models we’ve all been waiting for,” said Louis Uccellini, director of the National Weather Service.
The $23.7 million in improvements to NWS’s forecasting systems from the Sandy supplemental will facilitate a more than ten-fold increase in the capacity of the supercomputer running the GFS model, ramping compute capacity from 213 teraflops to 2,600 teraflops by the 2015 fiscal year. Read the Full Story.