While we may not get to Exascale by 2020, ground-breaking compute technologies for the SKA telescope are already under development (without involvement of the U.S. Government, by the way). In this video from the 2013 HPC User Forum, Ronald P. Luijten from IBM Research presents: The IBM-DOME Microserver Demonstrator.
The computational and storage demands for the future Square Kilometer Array (SKA) radio telescope are significant. Building on the experience gained in the collaboration between ASTRON and IBM on the Blue Gene-based LOFAR correlator, ASTRON and IBM have now embarked on a public-private exascale computing research project aimed at solving the SKA computing challenges. This project, called DOME, investigates novel approaches to exascale computing, with a focus on energy-efficient, streaming data processing, exascale storage, and nano-photonics. DOME will not only benefit the SKA, but will also make the knowledge gained available to interested third parties via a Users Platform. The intention of the DOME project is to evolve into the global center of excellence for transporting, processing, storing and analyzing large amounts of data for minimal energy cost.
The fundamental unit of quantum computation is the “qubit”, the quantum analogue of the ordinary “bit” in a standard machine. Like ordinary bits, qubits can take the value of 1 or 0. Unlike ordinary bits, their quantum nature also lets them exist in a strange mixture—a “superposition”, in the jargon—of both states at once, much like Erwin Schrödinger’s famous cat. That means that a quantum computer can be in many states simultaneously, which in turn means that it can, in some sense, perform many different calculations at the same time. To be precise, a quantum computer with four qubits could be in 2^4 (ie, 16) different states at a time. As you add qubits, the number of possible states rises exponentially. A 16-qubit quantum machine can be in 2^16, or 65,536, states at once, while a 128-qubit device could occupy 3.4 x 10^38 different configurations, a colossal number which, if written out in longhand, would have 39 digits. Having been put into a delicate quantum state, a quantum computer can thus examine billions of possible answers simultaneously.
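To make the arithmetic concrete, here is a minimal sketch (plain Python written for this article, not code from any quantum-computing library) that reproduces the state counts quoted above:

```python
# Each additional qubit doubles the number of basis states a register can
# occupy in superposition, so n qubits give 2**n possible states.
def state_count(n_qubits: int) -> int:
    return 2 ** n_qubits

for n in (4, 16, 128):
    states = state_count(n)
    # 128 qubits yield a 39-digit number, roughly 3.4 x 10^38.
    print(f"{n:>3} qubits -> {states:,} states ({len(str(states))} digits)")
```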
Argonne National Lab just wrapped up a two-day event celebrating 30 years of parallel computing. The event hosted many of the visionaries at the lab and at other institutions who initiated and contributed to Argonne’s history of advancing parallel computing and computational science.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future.
The tradition continues as Argonne explores new paths and paves the way toward exascale computing. Read the Full Story.
Think of digital computers, the Internet, lasers, and genome sequencing, all of which are underpinned by basic science, and all of which received federal funding in their early stages. The silliest part of the proposed legislation is that it mandates that the research be “ground breaking,” an attribute that is impossible to predict. It’s like saying unless the research will win a Nobel Prize, it’s not worth doing. Such wording reflects a fundamental misunderstanding of how science works.
Over at NICS, Scott Gibson writes that researchers have applied HPC to produce a highly efficient graphics engine that reveals in 3D what’s going on in very complicated astrophysical flows. These simulations also allow researchers to present their results to a wider audience.
McKinney and his research team colleagues convey in a recent Science paper how, through the use of simulations, they discovered that the behavior of black holes that have thick accretion disks differs from longstanding assumptions. The belief has been that accretion disks lie flat along the outer edges of black holes while the relativistic jets shoot out perpendicularly to the disks. However, the simulations showed that the configuration becomes more complex at large distances from the black hole spin axis, with the jets becoming parallel to, but offset from, the accretion disk’s rotational axis; in the process, the disk warps and the jet bends, influencing what one sees at different viewing angles. McKinney explained that the key to making this discovery was being able to reduce the symmetry of the problem in their numerical code. To do that, the researchers used spherical polar coordinates, which describe a position by a radius and two angles. As a result of this approach, they were able to capture the black hole’s asymmetrical shape.
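For readers unfamiliar with the coordinate system mentioned above, the sketch below (an illustration only, not code from the paper) shows how spherical polar coordinates describe a point by a radius and two angles, and how they relate to ordinary Cartesian coordinates:

```python
import math

def spherical_to_cartesian(r: float, theta: float, phi: float) -> tuple[float, float, float]:
    """Map spherical polar coordinates (radius r, polar angle theta,
    azimuthal angle phi) to Cartesian (x, y, z)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# A point one unit from the origin, 45 degrees off the polar (spin) axis:
print(spherical_to_cartesian(1.0, math.pi / 4, 0.0))
```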
Over at HPC for Energy, Carl Bauer writes that High Performance Computing is the key to meeting the daunting energy challenges that face the nation.
U.S. high-performance computing capabilities resident at our national laboratories can turn these challenges into an opportunity for competitive advantage. What was once only available for unique, extremely important and expensive government research projects or the largest corporations is now available to benefit society on a greater scale. Furthermore, the breadth and depth of an educated and talented work force to utilize these tools is expanding. The world-wide competitive advantage this will provide is beginning to be realized across various domestic and international industry sectors. The HPC for Energy initiative is a very important and timely program that can accelerate the realization of the benefits of better-informed deployment of HPC across all aspects of the U.S. energy supply chain.
The 1000 Bull Genomes Project aims to provide a large database of genetic variants for genomic prediction and genome-wide association studies in all cattle breeds for the bovine research community.
Over at the Texas Advanced Computing Center, a feature story reports that researchers from Iowa State University are using TACC supercomputing resources to better understand bovine DNA.
Harnessing information from DNA sequences in buffalo and cattle is an important step in meeting the growing world’s demand for food. As the world’s population approaches nine billion people in 2050, the demand for food will double. Researchers are hoping new DNA variants will be identified for use in breeding programs to increase milk and meat production. Advances in DNA sequencing technologies are generating a stampede of sequence data for both the water buffalo and bovine research communities.
With help from computational experts at TACC, the researchers were able to process sequence data in only 8 to 10 hours, a task that previously required three weeks of computing time. Read the Full Story.
D-Wave Systems, a commercial quantum computing company, has announced the formal launch of its US business.
Industry expert and supercomputing veteran Robert “Bo” Ewald will lead the new business as president and will head up global customer operations as the company’s chief revenue officer. New offices and R&D facilities have opened in Palo Alto, California, and more are expected in the near future.
“Bo Ewald joining us is huge validation of our business,” said Vern Brownell, CEO of D-Wave Systems. “Bo is a legendary figure in the supercomputing industry. His knowledge and influence reach a wide array of sectors, where he has delivered state-of-the-art high performance solutions for research, defence and intelligence, energy, manufacturing, financial services and genomics. Throughout Bo’s career he has been dedicated to helping organisations solve their most difficult challenges, which perfectly matches the mission of D-Wave. Today we launch our formal presence in the US and will start to expand our business globally. It is gratifying to have Bo at the helm.”
Ewald added: “I’ve been in pioneering technology organisations for a long time with companies that did things that had never been done before and that allowed their customers to do the same. The quantum computers being developed by D-Wave and the applications that will be used by our customers will be an even more revolutionary step than I’ve seen in the industry. People will be able to solve problems that they can only dream about today, on systems that are turning science fiction into science fact.”
In a special session at ISC’13, scientists working on the Human Brain Project will discuss their vision and roadmap for computing. Featuring Dr. Henry Markram of EPFL, the June 18 keynote will be entitled Supercomputing & the Human Brain Project – Following Brain Research & ICT on their 10-Year Quest.
The Human Brain Project, recently awarded a 10-year grant by the EU Commission, will pull together all our existing knowledge about the human brain and reconstruct the brain, piece by piece, in supercomputer-based models and simulations. Federating more than 80 European and international research institutions, the Human Brain Project is estimated to cost 1.19 billion euros. It will be coordinated at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, by neuroscientist Henry Markram with co-directors Karlheinz Meier of Heidelberg University, Germany, and Richard Frackowiak of Centre Hospitalier Universitaire Vaudois and the University of Lausanne. The project will also associate some important North American and Japanese partners.
Japan News reports that the country’s science ministry is considering development of an exascale supercomputer that would be 100 times faster than the K computer, currently the nation’s fastest machine. With a goal of completing the machine by about 2020, the Education, Culture, Sports, Science and Technology Ministry is preparing to request funding for conceptual designs and other areas in next fiscal year’s budget, the sources said.
Exascale computer projects are already under way in the United States, Europe and China, all aiming for completion around 2020. The working group decided to enter the fierce international race to develop an exascale supercomputer because “it would aid scientific and technological development, and help improve industrial competitiveness,” the sources said.
Electronics Weekly reports that the Barcelona Supercomputing Center is working with Intel to set up a research lab in Spain to develop technologies needed for future exascale supercomputers with up to 100 million processor cores.
The BSC Exascale Laboratory will research scalable parallel run-time systems that are needed to support these very high levels of parallel computing.
“BSC is one of Europe’s most renowned HPC labs and offers very interesting technology to scale run-time systems, tools and applications up to exascale level,” said Stephen Pawlowski, Intel senior fellow.
Processing the vast quantities of data produced by the SKA will require very high-performance central supercomputers capable of 100 petaflops of processing power. This is about 50 times more powerful than the most powerful supercomputer in 2010 and equivalent to the processing power of about one hundred million PCs.
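A rough back-of-the-envelope check of that comparison, assuming for illustration that a typical PC sustains on the order of one gigaflop:

```python
# 100 petaflops divided across PCs assumed to sustain ~1 gigaflop each.
ska_flops = 100e15   # 100 petaflops
pc_flops = 1e9       # assumed sustained throughput of a typical PC
print(f"Equivalent PCs: {ska_flops / pc_flops:,.0f}")  # ~100,000,000
```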
The challenges for the Grid were three-fold. The main one was to understand how best to manage the LHC data and use the Grid’s heterogeneous environment in such a way that physicists could concentrate on their analysis without needing to know where their data were. Because a distributed system is more complex and demanding to master than the usual batch-processing farms, the physicists also required continuous education on how to use the system. Finally, the Grid needs to be fully operational at all times (24/7, 365 days a year) and should “never sleep”, while important upgrades of the Grid middleware in all data centres must still be rolled out on a regular basis. For the latter, the success can be attributed in part to the excellent quality of the middleware itself (supplied by various common projects, such as WLCG/EGEE/EMI in Europe and OSG in the US, see box) and to the administrators of the computing centres (coordinated by EGI in Europe and OSG in North America), who keep the computing fabric running continuously.