The fundamental unit of quantum computation is the “qubit”, the quantum analogue of the ordinary “bit” in a standard machine. Like ordinary bits, qubits can take the value of 1 or 0. Unlike ordinary bits, their quantum nature also lets them exist in a strange mixture—a “superposition”, in the jargon—of both states at once, much like Erwin Schrödinger’s famous cat. That means that a quantum computer can be in many states simultaneously, which in turn means that it can, in some sense, perform many different calculations at the same time. To be precise, a quantum computer with four qubits could be in 2^4 (ie, 16) different states at a time. As you add qubits, the number of possible states rises exponentially. A 16-qubit quantum machine can be in 2^16, or 65,536, states at once, while a 128-qubit device could occupy 3.4 x 10^38 different configurations, a colossal number which, if written out in longhand, would have 39 digits. Having been put into a delicate quantum state, a quantum computer can thus examine billions of possible answers simultaneously.
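The state-count arithmetic is easy to verify. The short Python snippet below is a minimal sketch, included purely for illustration, that prints the number of basis states for each register size mentioned above:

```python
# Count the basis states available to an n-qubit register: 2**n.
# Pure integer arithmetic -- no quantum library required.
for n in (4, 16, 128):
    states = 2 ** n
    print(f"{n} qubits: {states:,} states ({len(str(states))} digits)")
```

Running it confirms the figures quoted above: 16 states for four qubits, 65,536 for sixteen, and a 39-digit count for a 128-qubit register.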
In this slidecast, Scott Gnau from Teradata Labs presents: Teradata Intelligent Memory.
“The introduction of Teradata Intelligent Memory allows our customers to exploit the performance of memory within Teradata Platforms, which extends our leadership position as the best performing data warehouse technology at the most competitive price,” said Scott Gnau, president, Teradata Labs. “Teradata Intelligent Memory technology is built into the data warehouse and customers don’t have to buy a separate appliance. Additionally, Teradata enables its customers to buy and configure the exact amount of in-memory capability needed for critical workloads. It is unnecessary and impractical to keep all data in memory, because not all data have enough value to justify being placed in expensive memory.”
How does Intelligent Memory work? This animated video does a good job of making this advanced technology look simple.
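Teradata has not published the internals, but the core idea (keep the most frequently accessed data in a small, fast tier and leave the rest on disk) can be sketched in a few lines. The Python toy model below illustrates frequency-based data placement in general; it is a sketch under that assumption, not Teradata’s actual algorithm:

```python
from collections import Counter

class TieredStore:
    """Toy model of frequency-based tiering: the hottest blocks live in a
    small in-memory tier, everything else stays on disk. An illustrative
    sketch only -- not Teradata's actual Intelligent Memory algorithm."""

    def __init__(self, memory_slots):
        self.memory_slots = memory_slots  # capacity of the fast tier
        self.access_counts = Counter()    # per-block "temperature"

    def access(self, block_id):
        self.access_counts[block_id] += 1
        # The memory tier always holds the most frequently used blocks.
        hot = {b for b, _ in self.access_counts.most_common(self.memory_slots)}
        return "memory" if block_id in hot else "disk"

store = TieredStore(memory_slots=2)
for block in ["a", "b", "a", "c", "a", "b"]:
    print(block, "->", store.access(block))  # "a" and "b" end up hot
```

The point of the animation is the same: only the small, frequently touched fraction of the warehouse earns a slot in expensive memory, while colder data is served from cheaper storage.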
SC13, the international conference for high-performance computing, networking, storage and analysis, is accepting nominations for three distinguished awards that will be presented at the conference in November.
The IEEE Computer Society Seymour Cray Computer Engineering Award, the IEEE Sidney Fernbach Memorial Award and the ACM-IEEE Ken Kennedy Award will be announced at SC13, to be held from 17 to 22 November at the Colorado Convention Center in Denver, US. Nominations should be made via the SC13 website.
Established in 1997, the IEEE Computer Society Seymour Cray Computer Engineering Award recognises innovative contributions to high-performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. Previous winners have been recognised for design, engineering and intellectual leadership in creating innovative and successful HPC systems.
The IEEE Computer Society Sidney Fernbach Award was established in 1992 in honour of Sidney Fernbach, one of the pioneers in the development and application of high-performance computers for solving large computational problems. Nominations that recognise the creation of widely used and innovative software packages, application software and tools are especially solicited. The Fernbach award winner receives a certificate and $2,000.
The ACM/IEEE Ken Kennedy Award, established in 2009, is presented for outstanding contributions to programmability or productivity in computing, together with significant community service or mentoring contributions. The award was established in memory of Ken Kennedy, the founder of Rice University’s nationally ranked computer science program and one of the world’s foremost experts on high-performance computing. Awardees receive a certificate and a $5,000 honorarium.
Over at the Washington Post, Jason Samenow writes that an infusion of funding into the National Weather Service from Hurricane Sandy relief legislation promises to facilitate massive upgrades to key supercomputers, dramatically improving local, national, and global weather forecasts.
“This is a breakthrough moment for the National Weather Service and the entire U.S. weather enterprise in terms of positioning itself with the computing capacity and more sophisticated models we’ve all been waiting for,” said Louis Uccellini, director of the National Weather Service.
The $23.7 million in improvements to NWS’s forecasting systems from the Sandy supplemental will fund a more than twelve-fold increase in the capacity of the supercomputer running the GFS model, ramping compute capacity from 213 teraflops to 2,600 teraflops by the 2015 fiscal year. Read the Full Story.
The Colorado School of Mines has announced plans to install a new 155 teraflop hybrid IBM supercomputer dubbed “BlueM” to run large simulations in support of energy research. The new machine will be housed at NCAR’s Mesa Lab in Boulder and operate on the Mines computing network.
As the first supercomputer of its kind, BlueM combines the IBM Blue Gene/Q and IBM iDataPlex platforms in a single dual-architecture system – the first time this configuration has been installed together.
BlueM’s predecessor, RA, has been hugely successful, but Mines has outgrown its 23 teraflops. BlueM will provide more flops dedicated to Mines faculty and students than are available at most other institutions with high-performance machines. Researchers will be able to run higher-fidelity simulations than in the past, get more time on the machine and break new ground in algorithm development.
The High Performance Computing Center Stuttgart (HLRS) has signed up for a 4 Petaflop Cray XC30 supercomputer. Scheduled for full deployment in 2014, the Hornet supercomputer will boast 100,000 compute cores, 500 TB of main memory, and about 6 PB of storage.
“The Cray ‘Hermit’ supercomputer has proven to be a highly valuable HPC resource for the broad HLRS user community as well as for scientists and researchers across Europe through the PRACE initiative, and we are excited that the Cray XC30 system will be a powerful successor,” says Dr. Ulla Thiel, Vice President Cray Europe. “The Hornet system will be one of the largest Cray XC30 supercomputers in the world, providing HLRS’ users, including engineers in the automotive and aerospace industries, with our most advanced supercomputing system. We have enjoyed a successful, long-term relationship with HLRS and we are very excited that our joint collaboration will continue.”
As with Hermit, the system expansion at HLRS is funded through project PetaGCS with the support of the Federal Ministry of Education and Research and the Ministry of Higher Education, Research and Arts Baden-Württemberg. Read the Full Story.
Argonne National Lab just wrapped up a two-day event celebrating 30 years of parallel computing. The event hosted many of the visionaries at the lab and at other institutions who initiated and contributed to Argonne’s history of advancing parallel computing and computational science.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future.
The tradition continues as Argonne explores new paths and paves the way toward exascale computing. Read the Full Story.
Today Italian HPC solution provider NICE announced the release of the EnginFrame 2013.0 software. Designed for technical computing users in a broad range of markets, EnginFrame simplifies engineering and scientific workflows, increasing productivity and streamlining data and resource management.
“With EnginFrame 2013.0 we have further strengthened our technology leadership in the HPC Portal market,” said Giuseppe Ugolotti, CEO of NICE. “NICE EnginFrame is a critical component for anyone who wants to create a technical Cloud that can run both HPC and interactive workloads at the same time.”
As an HPC Portal, EnginFrame 2013.0 now offers built-in management of 3D and 2D remote visualization sessions, improved data transfer capabilities and a great number of new features and enhancements addressing both end users’ and system administrators’ needs. Leveraging all the major HPC job schedulers and remote visualization technologies, EnginFrame translates user clicks into the appropriate actions to submit HPC jobs, create remote visualization sessions, and monitor workloads on distributed resources.
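NICE’s own scheduler plug-ins are proprietary, but the general pattern (a portal back-end action that wraps a command-line scheduler call) can be sketched briefly. In the Python sketch below, the `submit_job` helper and the `sbatch` invocation are illustrative assumptions using Slurm as the example scheduler, not EnginFrame’s actual API:

```python
import subprocess

def submit_job(script_path, cores, walltime):
    """Illustrative portal action: translate a form submission into a
    batch-scheduler call. Uses Slurm's sbatch as the example; a real
    portal such as EnginFrame supports several schedulers through its
    own (proprietary) plug-ins -- this is only a sketch of the pattern."""
    cmd = [
        "sbatch",
        f"--ntasks={cores}",
        f"--time={walltime}",
        script_path,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # sbatch prints e.g. "Submitted batch job 12345"; return the job id.
    return result.stdout.strip().split()[-1]

# Example usage (requires a Slurm installation):
# job_id = submit_job("simulate.sh", cores=64, walltime="02:00:00")
```

A production portal layers authentication, input validation and per-scheduler adapters on top of this pattern, which is exactly the plumbing a product like EnginFrame hides behind the click.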
Today Mellanox announced plans to acquire photonics leader Kotura, Inc. for approximately $82 million. The acquisition is expected to expand Mellanox’s ability to deliver cost-effective, high-speed networks with next generation optical connectivity, allowing data center customers to meet the growing demands of high-performance, Web 2.0, cloud, data center, database, financial services and storage applications. Mellanox believes that the Kotura acquisition will enhance its ability to provide leading technologies for high speed, scalable and efficient end-to-end interconnect solutions.
“Operating networks at 100 Gigabit per second rates and higher requires careful integration between all parts of the network. We believe that silicon photonics is an important component in the development of 100 Gigabit InfiniBand and Ethernet solutions, and that owning and controlling the technology will allow us to develop the best, most reliable solution for our customers,” said Eyal Waldman, president, CEO and chairman of Mellanox Technologies. “We expect that the proposed acquisition of Kotura’s technology and the additional development team will better position us to produce 100Gb/s and faster interconnect solutions with higher-density optical connectivity at a lower cost. We welcome the great talent from Kotura and look forward to their contribution to Mellanox’s continued growth.”
Think of digital computers, the Internet, lasers, and genome sequencing, all of which are underpinned by basic science, and all of which received federal funding in their early stages. The silliest part of the proposed legislation is that it mandates that the research be “ground breaking,” an attribute that is impossible to predict. It’s like saying that unless the research will win a Nobel Prize, it’s not worth doing. Such wording reflects a fundamental misunderstanding of how science works.