In this NPR podcast, Geoff Brumfiel takes a look at the D-Wave quantum computer. Some skeptics say the machine does not actually exploit the quantum effects that D-Wave claims it does.
“It’s not exactly science, what they’re doing,” says Christopher Monroe, a physicist with the Joint Quantum Institute at the University of Maryland. “It’s high-level engineering, and I think it’s high-level salesmanship, too.” Monroe remains skeptical because, he says, the D-Wave team has never demonstrated that entanglement is happening on the chips in its machine. He suspects that D-Wave’s supposedly quantum bits are actually behaving as tiny electromagnets, and that those magnets could be interacting in ways that solve a certain problem very quickly without any quantum mechanics. “There’s no evidence that what they’re doing has anything to do with quantum mechanics,” he says. If he’s right, D-Wave’s machine may be far narrower in its abilities than the company believes.
Today the OpenFabrics Alliance (OFA) announced it will host two training classes at the University of New Hampshire Interoperability Lab. The two-day courses, instructed by Dr. Robert D. Russell and Rupert Dance, will provide attendees with hands-on training in managing an InfiniBand fabric as well as coding applications directly to the verbs API using the OpenFabrics Enterprise Distribution (OFED) and related software tools.
Who Should Attend: System Administrators, System Architects, Planners, and Technologists. A background in IP-based or Ethernet-based management is helpful, and familiarity with the target operating system is assumed. The course does not assume RDMA-specific knowledge, and RDMA programming experience is not a prerequisite.
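To give a flavor of what coding directly to the verbs API involves, here is a minimal, illustrative C sketch (not course material): it enumerates RDMA devices, opens the first one, allocates a protection domain, and registers a buffer with libibverbs. The build flags assume a standard OFED installation.

```c
/* Minimal libibverbs sketch: enumerate HCAs, open the first one,
 * allocate a protection domain, and register a buffer for RDMA.
 * Build (assuming OFED/libibverbs is installed): gcc rdma_hello.c -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }
    printf("Found %d RDMA device(s); using %s\n",
           num_devices, ibv_get_device_name(dev_list[0]));

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    if (!ctx) { perror("ibv_open_device"); return 1; }

    /* A protection domain groups the resources (QPs, MRs) that may interact. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) { perror("ibv_alloc_pd"); return 1; }

    /* Register a buffer so the HCA can DMA into and out of it. */
    size_t len = 4096;
    void *buf = calloc(1, len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("Registered %zu-byte MR: lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* A real application would go on to create completion queues and queue
     * pairs, exchange connection details out of band, and post work requests. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```

Creating queue pairs and posting send, receive, and RDMA work requests is where the hands-on portion of the course would presumably pick up.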
This week IBM announced that the Philippine government has chosen an IBM Blue Gene/Q supercomputer to support R&D projects focused on reducing poverty, improving government processes, and enabling smarter weather management.
“The IBM Blue Gene supercomputer will be most applicable to DOST’s major programs such as NOAH and Smart Agriculture,” said DOST Secretary Mario G. Montejo. “First we will work toward Blue Gene’s integration into Project NOAH to provide more advanced seven-day local weather forecasts. We can also use it to run various weather models and validate the accuracy of the results in near real-time.”
A new whitepaper from Intel looks at True Scale InfiniBand performance for HPC applications.
There are two types of InfiniBand architectures available in the marketplace today. The first is the traditional InfiniBand design, created as a channel interconnect for the data center. The more recent architecture was built with HPC in mind: it is optimized for the interconnect performance factors that matter most to HPC applications, including MPI message rate, end-to-end latency, and collective performance. According to the whitepaper, the Intel True Scale Fabric architecture offers 3x to 17x the MPI (Message Passing Interface) message throughput of the other InfiniBand architecture. For many MPI applications, small-message rate is an important contributor to overall performance and scalability.
Intel tested a number of MPI applications and found that they performed up to 11 percent better on a cluster based on Intel True Scale Fabric QDR-40 (dual-channel) than on the traditional InfiniBand architecture running at FDR (56 Gbps). Download the whitepaper (PDF).
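For readers curious what a small-message rate measurement looks like in practice, below is a minimal MPI sketch loosely in the spirit of the OSU message-rate benchmark. The message size, window size, and iteration count are arbitrary assumptions, and this is not the benchmark Intel used in the whitepaper.

```c
/* Minimal sketch of a small-message MPI message-rate test between two ranks.
 * Build and run: mpicc msgrate.c -o msgrate && mpirun -np 2 ./msgrate
 */
#include <mpi.h>
#include <stdio.h>

#define MSG_SIZE 8       /* tiny messages stress message rate, not bandwidth */
#define WINDOW   64      /* messages kept in flight per iteration */
#define ITERS    10000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    static char sbuf[WINDOW][MSG_SIZE], rbuf[WINDOW][MSG_SIZE];
    MPI_Request reqs[WINDOW];
    char ack = 0;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            /* Sender: keep a window of non-blocking sends in flight. */
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(sbuf[w], MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            MPI_Recv(&ack, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Receiver: pre-post matching receives, then acknowledge. */
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(rbuf[w], MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &reqs[w]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            MPI_Send(&ack, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("small-message rate: %.0f messages/sec\n",
               (double)ITERS * WINDOW / (t1 - t0));

    MPI_Finalize();
    return 0;
}
```

Runs like this are what produce the messages-per-second figures that fabric vendors quote when comparing small-message performance.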
The Notur project pursues a clear and sustainable vision for a Norwegian infrastructure for High Performance Computing (HPC) and computational science. Its vision is to provide a modern, internationally competitive national HPC infrastructure and to stimulate computational science as the third path of science, alongside theory and experiment. The project serves the Norwegian computational science community by providing infrastructure to individuals and groups involved in education and research at Norwegian universities and colleges, and in research and engineering at research institutes and companies that contribute to the funding of Notur.
Over at the Student Cluster Competition Blog, Dan Olds writes that the first Battle of Leipzig in 1813 can’t really compare to what’s coming in June: the Second Battle of Leipzig, aka the ISC’13 Klusterkampf. Now in its second year, the ISC Student Cluster Competition will feature nine teams of university undergraduates in a quest to build the fastest supercomputer on the show floor.
Chemnitz University of Technology is only 51 miles from Leipzig and thus the hometown favorite. The school, founded in 1836, was a fast starter in terms of science and technology. During one pre-1900 period, Chemnitz generated more patent registrations than any other institution in the world. Currently, the university is ranked near the top of German technical schools, a reputation it will be putting on the line at the student cluster challenge. Judging by its entry application, Chemnitz (or TUC, for Technische Universität Chemnitz) has a deep HPC curriculum, including courses and research on FPGA and GPU application acceleration. Team TUC has partnered with German hardware vendor MEGWARE GmbH to build “TurboTUC,” the Schlachtkreuzer (battlecruiser) it hopes to ride to victory.
IDC will once again sponsor its annual HPC Breakfast Briefing at ISC’13 on June 18 in Hall 4 of the Leipzig Congress Center. The event will feature the latest HPC revenue numbers, market forecasts, and trends; international competition and initiatives; and exascale plans, as well as awards for ROI in high performance computing.
Attendance and full breakfast are complimentary, so be sure to Register Now.
In this video from the 2013 HPC User Forum, Burak Yenier presents: The HPC Experiment – Paving the way to HPC as a Service.
For the second round of the HPC Experiment, we will apply the cloud computing service model to workloads on remote cluster computing resources in the areas of HPC, Computer Aided Engineering, and the Life Sciences.
In related news, the HPC Experiment site has just added an online exhibit area as a one-stop interactive service directory for Cloud users and service providers, with a focus on High Performance Computing, Big Data, Digital Manufacturing, and Computational Life Sciences.
Over at Brendan’s Blog, Brendan Gregg writes that response time – or latency – is crucial to understand in detail, but many of the common presentations of this data hide important details and patterns.
When I/O latency is presented as a visual heat map, some intriguing and beautiful patterns can emerge. These patterns provide insight into how a system is actually performing and what kinds of latency end-user applications experience. Many characteristics seen in these patterns are still not understood, but so far their analysis is revealing systemic behaviors that were previously unknown.
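Conceptually, a latency heat map is a 2D histogram: the x axis is wall-clock time, the y axis is a latency bucket, and each cell’s color encodes how many I/Os landed in that bucket. The sketch below, using synthetic samples and arbitrary bucket sizes, shows just that binning step and renders the result as ASCII shades; real tools such as Brendan Gregg’s collect the samples with tracing and draw proper images.

```c
/* Illustrative sketch of the data structure behind a latency heat map:
 * a 2D histogram where the x axis is wall-clock time, the y axis is a
 * latency bucket, and each cell counts I/Os. Samples here are synthetic.
 */
#include <stdio.h>
#include <stdlib.h>

#define TIME_COLS 60          /* one column per second of observation */
#define LAT_ROWS  20          /* number of latency buckets */
#define BUCKET_US 500.0       /* width of one latency bucket, microseconds */

static unsigned heat[LAT_ROWS][TIME_COLS];

static void record(double t_sec, double latency_us)
{
    int col = (int)t_sec;
    int row = (int)(latency_us / BUCKET_US);
    if (col < 0 || col >= TIME_COLS) return;
    if (row >= LAT_ROWS) row = LAT_ROWS - 1;   /* clamp outliers to top row */
    heat[row][col]++;
}

int main(void)
{
    /* Synthetic samples: mostly fast I/Os with rare slow outliers. */
    srand(42);
    for (int i = 0; i < 200000; i++) {
        double t   = (double)(rand() % TIME_COLS);
        double lat = 200.0 + rand() % 800;          /* ~0.2-1.0 ms baseline */
        if (rand() % 100 == 0) lat += 5000.0;       /* occasional ~5 ms outlier */
        record(t, lat);
    }

    /* Render as ASCII shades: a darker character means more I/Os in that cell. */
    const char shade[] = " .:-=+*#%@";
    for (int row = LAT_ROWS - 1; row >= 0; row--) {     /* high latency on top */
        for (int col = 0; col < TIME_COLS; col++) {
            unsigned c = heat[row][col];
            int level = 0;
            while (c > 0 && level < 9) { c /= 4; level++; }   /* rough log scale */
            putchar(shade[level]);
        }
        putchar('\n');
    }
    return 0;
}
```

Because every sample is kept in the histogram rather than collapsed into an average, bimodal behavior, outliers, and the “beautiful patterns” Gregg describes remain visible.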
While we may not get to Exascale by 2020, ground-breaking compute technologies for the SKA telescope are already under development (without involvement of the U.S. Government, by the way). In this video from the 2013 HPC User Forum, Ronald P. Luijten from IBM Research presents: The IBM-DOME Microserver Demonstrator.
The computational and storage demands for the future Square Kilometre Array (SKA) radio telescope are significant. Building on the experience gained in the collaboration between ASTRON and IBM on the Blue Gene-based LOFAR correlator, ASTRON and IBM have now embarked on a public-private exascale computing research project aimed at solving the SKA computing challenges. This project, called DOME, investigates novel approaches to exascale computing, with a focus on energy-efficient streaming data processing, exascale storage, and nano-photonics. DOME will not only benefit the SKA, but will also make the knowledge gained available to interested third parties via a Users Platform. The intention of the DOME project is to evolve into the global center of excellence for transporting, processing, storing, and analyzing large amounts of data at minimal energy cost.
Over at ExtremeTech, Joel Hruska writes that the daunting challenges of achieving exascale compute levels by the end of the decade were brought home recently in a presentation by Horst Simon, Deputy Director of Lawrence Berkeley National Laboratory. In fact, Simon has wagered $2,000 of his own money that we won’t get there by 2020.
But here’s the thing: What if the focus on “exascale” is actually the wrong way to look at the problem?
FLOPS has persisted as the headline metric in supercomputing even as core counts and system density have risen, but the peak performance of a supercomputer may be a poor measure of its usefulness. What matters is the ability to efficiently use a subset of the system’s total performance capability. In the long term, performing FLOPS is cheap compared with moving data between nodes, so taking advantage of parallelism while keeping data local becomes even more important. Keeping data local is a better way to save power than spreading the workload across nodes, because as node counts rise, the communication needed to coordinate them consumes an increasing percentage of total system power.
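A rough back-of-envelope calculation makes the locality argument concrete. The per-operation energy figures below are illustrative assumptions in the range commonly quoted in exascale studies, not measured values, but the ratios show why shipping operands between nodes quickly dominates the cost of the arithmetic performed on them.

```c
/* Back-of-envelope comparison of compute vs. data-movement energy.
 * All per-operation figures are assumed, order-of-magnitude values. */
#include <stdio.h>

int main(void)
{
    double pj_per_flop      = 20.0;     /* assumed: one double-precision FLOP  */
    double pj_per_byte_dram = 150.0;    /* assumed: fetching a byte from DRAM  */
    double pj_per_byte_net  = 1000.0;   /* assumed: moving a byte across nodes */

    double operand = 8.0;               /* bytes in one double-precision value */
    printf("one FLOP:               %6.0f pJ\n", pj_per_flop);
    printf("operand from DRAM:      %6.0f pJ (%.0fx a FLOP)\n",
           operand * pj_per_byte_dram,
           operand * pj_per_byte_dram / pj_per_flop);
    printf("operand across nodes:   %6.0f pJ (%.0fx a FLOP)\n",
           operand * pj_per_byte_net,
           operand * pj_per_byte_net / pj_per_flop);
    return 0;
}
```

Under these assumed figures, fetching a single operand from a remote node costs hundreds of times the energy of the floating-point operation that consumes it, which is the core of the argument for locality over raw FLOPS.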
The Services Department Head (Computer Systems Manager II) will have the opportunity to lead an organization with a world-wide reputation for excellence and innovation. The Services Department serves as the primary point of contact for NERSC’s scientific users and is responsible for enhancing their scientific productivity. Key activities include supporting users through the transition to exascale-class architectures; providing services to optimize application performance; providing services to store, analyze, manage, and share data; understanding HPC architecture trends; benchmarking; user communication; user training; and requirements gathering.
Are you paying too much for your job ads? Not only do we offer ads for a fraction of what the other guys charge, but our insideHPC Job Board is powered by SimplyHired, the world’s largest job search engine.
As a reminder, we are offering FREE job listings for .EDU and .GOV domains, so email us at: info @ insideHPC.com for a special discount code.